Strengthening the New Protocols & Ideas Domain
Gm DAO and delegates.
Following last month's feedback, we have been iterating on ideas to make the grant program stronger.
The following modifications to the program were shared with several delegates; we gathered their feedback and incorporated it, with a focus on the desired results and on defining more precise KPIs.
Executive Summary
The New Protocols & Ideas domain of the D.A.O. Grant Program has approved only 6 projects out of ~180 applications in the first six months of Season 3, an approval rate of just 3%. This is significantly below both Season 2 (~30 approvals) and other domains (12–22%). Delegates have expressed concern that funds are being under-deployed in this domain.
We believe the main issue lies not in the evaluation process itself, but in the quality and nature of deal flow: the domain’s broad scope attracts many low-fit applications, while builder momentum in Arbitrum has shifted toward larger projects, multichain deployments, and ecosystems with stronger short-term narratives.
To address this, we propose a targeted operational adjustment: Castle Labs, the Domain Allocator, will take on a business development and sourcing mandate to actively bring in promising early-stage teams and mentor “almost-ready” applicants. This initiative will be funded with $24K over 6 months, reallocated from the existing legal budget (no additional cost to the DAO).
KPI: Increase approvals from 6 to 21 by December 2025 (250% growth), with an intermediate checkpoint in October to allow adjustments.
Outcomes:
- Success: ≥21 approvals, proving that targeted BD and a controlled increase in risk threshold can surface quality projects.
- Failure: Continued under-approvals, suggesting structural issues with domain scope or builder flow—informing a rethink for Season 4.
This amendment gives Castle Labs the mandate and resources to improve deal flow, while providing the DAO with clear, measurable outcomes to evaluate effectiveness.
FAQ:
Q: Will the DAO need to allocate more money to the program?
A: No. The additional budget comes from the existing one, specifically from the portion of the OpEx budget allocated to legal. We are neither asking the DAO for more funds nor taking from the funds intended for grantees.
Q: Will this modification require a snapshot vote to ratify it?
A: In our opinion, and in the opinion of the delegates we consulted as well as the Foundation, no, since the funds come from the OpEx portion. At the same time, in line with the transparency that has so far been an important part of this program, we want to share this change publicly with the DAO, especially because we are partially enlarging the team's scope through this active-outreach function.
Q: What is a success scenario?
A: Success would mean achieving the KPIs set out in this draft (at least 21 approvals, plus light but solid research on the builders targeted by this program). At the same time, we want to take this opportunity to "experiment in prod" with an approach that differs from what has been tried so far in most of our DAO's grant programs, introducing elements of business development and incubation. We think this data could be meaningful in case of a renewal, and for our community in general.
Q: Why are the KPIs limited to December?
A: Our program runs until mid-March, or until funds are exhausted. Since this is an experiment, we wanted to define an intermediate milestone so we can gather feedback around it and, if needed, introduce further changes in the last quarter of the program.
Strengthening the New Protocols & Ideas Domain - Action plan
Over the first six months of the D.A.O. Grant Program, we have seen strong results. Applications have already exceeded 600, compared to 500 in the same period of Season 2 and 200 in Season 1. More than $2M has been earmarked for over 70 approved proposals across five different domains.
The initial KPIs from the original Tally proposal focused on the number of total submissions, completion rate, and project sustainability. It is still too early to fully evaluate these metrics, but we believe we are on track to meet and surpass previous seasons, with applications already up 20% year over year.
We also recognize that grant programs should aim for qualitative outcomes, which are harder to measure but equally important. In this respect, the program is also on track, delivering improvements in transparency, reporting, and overall mindshare for the DAO—outcomes not present in prior iterations.
That said, we take seriously the feedback from delegates regarding expectations for the program. These expectations are also shaped by market context and may differ from the discussions and temp-check votes held nearly a year ago.
One of the main concerns so far is that, in Season 3, the New Protocols & Ideas domain has approved only 6 projects out of ~180 applications—an approval rate of 3%. This contrasts with Season 2, which saw roughly 30 approvals, and with the approval rates in other domains, which range between 12% and 22%. Delegates have raised the concern that funds are being under-deployed in this domain. The question is whether this is due to a lack of strong builders in Arbitrum, or an evaluation threshold set too tightly.
Our perspective is as follows: the program is built on transparency and accountability. All RFPs and reviews are public on the Questbook platform, and most discussions with builders take place in public Discord channels. Delegates and builders can review interview transcripts, rubric scoring for each application, and the reasoning behind acceptance or rejection. Castle Labs has sought to be not only a transparent allocator but also a quality allocator, using an internal framework adapted from their investment process, validated alongside the PM who previously ran this domain in Seasons 1 and 2.
In our view, the profile of applicants has shifted compared to earlier seasons. While this post is not a deep dive into the causes, we believe several factors play a role:
- Multichain focus: Builders today are focused on deploying across multiple chains to maximize TAM, which makes it harder to find new applications built natively on Arbitrum.
- Experimentation: Relative to Base and Solana, experimentation on Arbitrum has declined. Arbitrum is perceived as a “serious” chain, excellent for robust projects, but less appealing to smaller teams, especially given the distribution advantages other ecosystems currently enjoy.
- Mindshare: Crypto trends are narrative-driven. Emerging ecosystems like Monad and MegaETH capture significant builder mindshare, even before going live.
These factors are compounded by the nature of the D.A.O. Grant Program itself. With a $50K funding cap, the program often attracts less experienced builders, or those struggling to find product–market fit and seeking funds for continuation or pivoting.
Our perspective is partially supported by other stakeholders who have paused their own grant programs in the Arbitrum ecosystem, citing similar challenges.
Finally, the New Protocols & Ideas domain differs from other domains in scope. Being a generalist category, it absorbs applications that do not neatly fit into Community & Events, Dev Tooling, Gaming, or Orbit, domains that are narrower and more specialized, and therefore tend to attract higher-quality, targeted applications.
While the program initially adopted a largely passive approach to sourcing, we believe there is a clear path to improvement. We propose a targeted operational adjustment: Castle Labs, as Domain Allocator, will take on a stronger business development and sourcing role. Instead of relying mainly on inbound applications, they will actively seek out promising early-stage teams, curate deal flow, and mentor applicants who show potential but are not yet grant-ready. Operations will be supervised by the PM, Jojo. Importantly, this adjustment uses the existing legal budget, meaning there is no additional cost to the DAO.
Budget: $4K/month for 6 months ($24K total), covering 40 additional hours per month (a 50% increase over the 80 hours per month originally planned), reallocated from the $30K legal budget.
Plan:
- September: Research the builder landscape (light research comparing how Arbitrum is positioned for builders versus other ecosystems, plus a study of ETHGlobal teams, smaller protocol teams, and past hackathon participants). Goal: quantify how many teams can be reached from locations and events compatible with the nature of the D.A.O. Grant program, and validate whether perceptions about Arbitrum's appeal to builders hold true.
- A more in-depth document on this deliverable is available in the appendix of this write-up.
- October–February: Active sourcing/BD through Castle’s network, scanning hackathon finalists, and direct outreach.
- Provide mentorship to “almost-ready” applicants to raise them to approval level, adopting an approach closer to incubation than a traditional grant program.
- Flexibility: Increase the risk threshold in a controlled way by funding strong teams building new primitives or infrastructure even without proven product–market fit.
KPI: Increase approvals from 6 to 21 by December 2025 (a 250% increase), achieved through external sourcing, network leverage, and mentorship. The KPI is purposely set two and a half months before the end of the program, and roughly three months after the ratification of this amendment (excluding the September research period), to leave time for proper feedback, evaluation, and adjustments before the program's natural end in mid-March.
Scenarios:
- Success: ≥21 approvals by December, demonstrating that targeted BD and a higher risk threshold can surface quality projects and improve fund deployment.
- Failure: Significantly fewer approvals despite these efforts, suggesting either insufficient builder flow into Arbitrum or a domain scope that is too broad. This would warrant reconsidering the domain structure or reallocating budget in Season 4.
In short, this amendment provides Castle Labs with resources and a clear mandate to improve deal flow, while giving the DAO measurable outcomes to determine whether the issue lies in sourcing or domain design—all without impacting the program’s overall budget. Delegates will be able to monitor progress through regular monthly reporting.
Appendix: Study on small builders’ landscape and sourcing
The following section describes the research that Castle will deliver by the end of September.
The goal is to understand why approval rates in the current grant program remain low and to recommend concrete steps, starting from September, to improve outcomes. At the same time, it will address the broader question: "is Arbitrum attractive to builders?" This will provide context for how the D.A.O. Grant Program is positioned for its target audience of smaller builders.
The focus is on diagnosing approval gaps and identifying quick ways to source and support better applications, with the overarching North Star (which won’t be the only focus of the current research) of understanding how attractive the Arbitrum ecosystem is for smaller, fast builders looking to innovate.
Research will remain light, focusing on a comparative scan of Arbitrum vs. other ecosystems (Base, Solana, for instance), a brief builder survey, and a structured review of hackathon teams.
Alongside this, we will emphasise BD work: building a shortlist of promising teams by tapping into hackathons and our network, refining sourcing funnels, and drafting a simple incubation/guidance plan to raise submission quality.
Deliverables
We will deliver four outputs that can be acted on immediately:
- A dedicated 3-4 page section assessing how Arbitrum attracts builders compared to other ecosystems, combining light quantitative trends with qualitative flagship examples. It will also review the first six months of grant submissions to provide a concise yet clear picture of how ecosystem positioning relates to the early grant pipeline.
- A Hackathon Scoreboard (CSV + 1-pager) covering 25-40 hackathon finalist teams, with a shortlist of the best outreach targets that could fit the D.A.O. program.
- Survey Lite Results: One page of charts summarising builder perceptions, plus an appendix with the exact questions used.
- An October Playbook (2 pages) with sourcing channels, quick-win incubation steps, and refined sourcing funnels for the BD team.
This work will serve as a preliminary guideline for the BD/Incubation efforts that will take place during the remaining months of the D.A.O. Grant program.
Workstreams
- WS1: Ecosystem Positioning & Grant Review: This workstream will show how Arbitrum compares to other ecosystems and what that means for the D.A.O. Grant program. The aim is to provide context through a mix of concise data points and qualitative examples. For instance, we will highlight trends in verified contracts on Arbitrum, Base, and Solana, and surface builder activity indicators (such as GitHub mentions, repository counts, and protocol trackers), while also tracking flagship examples of projects building on these ecosystems and looking for significant differences that can explain the gaps. The focus will be on which type of builders develop on which ecosystem, and what type of builders Arbitrum attracts, differentiating between large teams with a track record and fast-growing small builders. Finally, following these insights, we will analyse the first six months of proposals by tracking volume, approval rates, and top rejection reasons. The output will be a 3-4 page Ecosystem Research, featuring visuals (a grant funnel and one ecosystem trend) that connect ecosystem positioning to the early grant pipeline.
- WS2: Hackathon Scan: Map 25-40 ETHGlobal hackathon finalist teams from the past year and score them on activity, Arbitrum fit, and contactability. The output includes a CSV scoreboard and a two-page memo estimating the share of genuine builders versus bounty hunters, as well as a shortlist for outreach.
- WS3: Builder Survey: Run a short survey to capture builder perceptions of Arbitrum's grant program and chain-choice drivers without a full interview program. The target is 15-30 responses from current Arbitrum builders and non-Arbitrum teams in our network. The output will be a one-page chart summary and an appendix with the full question list.
- WS4: Incubation/Guidance Plan: Recommend actions to increase resubmission quality by identifying the most effective sourcing channels and actively engaging builders through BD, both from Castle's own network and beyond. Provide a structured milestone framework with clear resubmission criteria, while offering light guidance to teams whose applications are close but not yet strong enough. This dual focus ensures we bring new builders into the Arbitrum ecosystem while also helping existing applicants refine their proposals to reach approval.
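As an illustration of how the WS2 scoreboard could work in practice, the sketch below scores teams with a simple weighted rubric and emits a ranked CSV. The criteria names, 0-5 scale, and weights are hypothetical placeholders, not the final rubric:

```python
import csv
import io

# Hypothetical WS2 rubric: each team is scored 0-5 on three criteria,
# combined with illustrative weights into one outreach-priority score.
WEIGHTS = {"activity": 0.4, "arbitrum_fit": 0.4, "contactability": 0.2}

def score(team: dict) -> float:
    """Weighted sum of the rubric criteria, rounded to 2 decimals."""
    return round(sum(team[k] * w for k, w in WEIGHTS.items()), 2)

# Placeholder teams standing in for real hackathon finalists.
teams = [
    {"name": "TeamA", "activity": 5, "arbitrum_fit": 4, "contactability": 3},
    {"name": "TeamB", "activity": 2, "arbitrum_fit": 5, "contactability": 5},
]

# Score, rank, and write the CSV scoreboard described in WS2.
for t in teams:
    t["score"] = score(t)
teams.sort(key=lambda t: t["score"], reverse=True)

out = io.StringIO()
writer = csv.DictWriter(
    out, fieldnames=["name", "activity", "arbitrum_fit", "contactability", "score"]
)
writer.writeheader()
writer.writerows(teams)
print(out.getvalue())
```

In a real run the team rows would come from the ETHGlobal finalist mapping, and the weights would be tuned once the rubric is finalised.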
Timeline for Research (September)
- Week 1: Finalise scope, extract funnel data, finalise survey, and launch distribution.
- Week 2: Collect hackathon team data, fill scoreboard fields, and nudge survey responses.
- Week 3: Analyse survey responses, prepare shortlists, and build approval trend visuals.
- Week 4: Assemble the Ecosystem Research and October Playbook, circulate drafts, and publish final outputs.
The following months, until the end of the program, will focus on implementing the findings from the research period, with emphasis on BD work and incubation plans.
Roles
- NDW & Chilla: Data collection and analysis, drafting of Ecosystem Research and Playbook.
- JoJo: Project Manager, stakeholder alignment, survey distribution.
- Castle Labs: Editorial pass, scope guardrails, sign-off, BD, and media push.
Arbitrum Builder Survey (Lite)
We will run a short survey with 15-30 builders, including both current Arbitrum teams and non-Arbitrum projects in our network. The goal is to capture perceptions of Arbitrum’s grant program and understand the broader drivers behind chain choice and first-deploy preferences. The survey will remain lightweight, focusing on questions such as: Which chain would you deploy a new protocol on first? How clear are the criteria for Arbitrum DAO grants? What factors (e.g., liquidity, technical fit, BD, and grants) most influence your chain selection? The exact design will stay flexible to avoid narrowing the scope prematurely, while ensuring we gather actionable insights on builder motivations and pain points.