Arbitrum D.A.O. Grant Program Season 3 - Official thread

The fifth monthly report is live on our website!

TLDR:

  • reporting period: 16th of July to 17th of August

  • 572 proposals in total, of which 67 were approved and 10 were completed

  • slow period tied to seasonality, with a lower number of new proposals month over month, but we are starting to see several projects coming closer to the finish line or completing their grants

  • the team's focus this month has been on crafting guidelines for grantees' communication on social channels and in the forum


Strengthening the New Protocols & Ideas Domain

Gm DAO and delegates.
Following last month's feedback, we have been working on and iterating over possible ideas to make the grant program stronger.
The following modifications to the program have been shared with several delegates, from whom we gathered feedback and implemented it, based on the desired results and the wish for more precise KPIs.


Executive Summary

The New Protocols & Ideas domain of the D.A.O. Grant Program has approved only 6 projects out of ~180 applications in the first six months of Season 3, an approval rate of just 3%. This is significantly below both Season 2 (~30 approvals) and other domains (12–22%). Delegates have expressed concern that funds are being under-deployed in this domain.

We believe the main issue lies not in the evaluation process itself, but in the quality and nature of deal flow: the domain’s broad scope attracts many low-fit applications, while builder momentum in Arbitrum has shifted toward larger projects, multichain deployments, and ecosystems with stronger short-term narratives.

To address this, we propose a targeted operational adjustment: Castle Labs, the Domain Allocator, will take on a business development and sourcing mandate to actively bring in promising early-stage teams and mentor “almost-ready” applicants. This initiative will be funded with $24K over 6 months, reallocated from the existing legal budget (no additional cost to the DAO).

KPI: Increase approvals from 6 to 21 by December 2025 (250% growth), with an intermediate checkpoint in October to allow adjustments.

Outcomes:

  • Success: ≄21 approvals, proving that targeted BD and a controlled increase in risk threshold can surface quality projects.
  • Failure: Continued under-approvals, suggesting structural issues with domain scope or builder flow—informing a rethink for Season 4.

This amendment gives Castle Labs the mandate and resources to improve deal flow, while providing the DAO with clear, measurable outcomes to evaluate effectiveness.

FAQ:
Q: Will the DAO need to allocate more money to the program?
A: No. The additional budget comes from the existing one, specifically from the portion of the OpEx budget allocated to legal. We are neither asking the DAO for new funds nor taking from the funds intended for grantees.

Q: Will this modification require a snapshot vote to ratify it?
A: In our opinion, and in the opinion of the delegates we consulted as well as the Foundation, no, since the funds come from the OpEx portion. At the same time, and in line with the transparency that has so far been an important part of this program, we do want to share this change publicly with the DAO, especially because we are partially enlarging the scope of the team through this active-outreach function.

Q: What is a success scenario?
A: Success would mean achieving the KPIs mentioned in this draft (at minimum 21 proposals, plus light but solid research on the builders targeted by this program). At the same time we want to take this opportunity to “experiment in prod” with an approach that differs from what has so far been tried in most grant programs of our DAO, with elements of business development and incubation. We think this data can be meaningful in case of a renewal, and for our community in general.

Q: Why are the KPIs limited to December?
A: Our program should last until mid-March, or until funds are exhausted. Since this is an experiment, we wanted to define an intermediate milestone so we can gather feedback around it and potentially introduce further changes in the last quarter of the program.


Strengthening the New Protocols & Ideas Domain - Action plan

Over the first six months of the D.A.O. Grant Program, we have seen strong results. Applications have already exceeded 600, compared to 500 in the same period of Season 2 and 200 in Season 1. More than $2M has been earmarked for over 70 approved proposals across five different domains.

The initial KPIs from the original Tally proposal focused on the number of total submissions, completion rate, and project sustainability. It is still too early to fully evaluate these metrics, but we believe we are on track to meet and surpass previous seasons, with applications already up 20% year over year.

We also recognize that grant programs should aim for qualitative outcomes, which are harder to measure but equally important. In this respect, the program is also on track, delivering improvements in transparency, reporting, and overall mindshare for the DAO—outcomes not present in prior iterations.

That said, we take seriously the feedback from delegates regarding expectations for the program. These expectations are also shaped by market context and may differ from the discussions and temp-check votes held nearly a year ago.

One of the main concerns so far is that, in Season 3, the New Protocols & Ideas domain has approved only 6 projects out of ~180 applications—an approval rate of 3%. This contrasts with Season 2, which saw roughly 30 approvals, and with the approval rates in other domains, which range between 12% and 22%. Delegates have raised the concern that funds are being under-deployed in this domain. The question is whether this is due to a lack of strong builders in Arbitrum, or an evaluation threshold set too tightly.

Our perspective is as follows: the program is built on transparency and accountability. All RFPs and reviews are public on the Questbook platform, and most discussions with builders take place in public Discord channels. Delegates and builders can review interview transcripts, rubric scoring for each application, and the reasoning behind acceptance or rejection. Castle Labs has sought to be not only a transparent allocator but also a quality allocator, using an internal framework adapted from their investment process, validated alongside the PM who previously ran this domain in Seasons 1 and 2.

In our view, the profile of applicants has shifted compared to earlier seasons. While this post is not a deep dive into the causes, we believe several factors play a role:

  • Multichain focus: Builders today are focused on deploying across multiple chains to maximize TAM, which makes it harder to find new applications built natively on Arbitrum.
  • Experimentation: Relative to Base and Solana, experimentation on Arbitrum has declined. Arbitrum is perceived as a “serious” chain, excellent for robust projects, but less appealing to smaller teams, especially given the distribution advantages other ecosystems currently enjoy.
  • Mindshare: Crypto trends are narrative-driven. Emerging ecosystems like Monad and MegaETH capture significant builder mindshare, even before going live.

These factors are compounded by the nature of the D.A.O. Grant Program itself. With a $50K funding cap, the program often attracts less experienced builders, or those struggling to find product–market fit and seeking funds for continuation or pivoting.

Our perspective is partially supported by other stakeholders who have paused their own grant programs in the Arbitrum ecosystem, citing similar challenges.

Finally, the New Protocols & Ideas domain differs from other domains in scope. Being a generalist category, it absorbs applications that do not neatly fit into Community & Events, Dev Tooling, Gaming, or Orbit, domains that are narrower and more specialized, and therefore tend to attract higher-quality, targeted applications.

While the program initially adopted a largely passive approach to sourcing, we believe there is a clear path to improvement. We propose a targeted operational adjustment: Castle Labs, as Domain Allocator, will take on a stronger business development and sourcing role. Instead of relying mainly on inbound applications, they will actively seek out promising early-stage teams, curate deal flow, and mentor applicants who show potential but are not yet grant-ready. Operations will be supervised by the PM, Jojo. Importantly, this adjustment uses the existing legal budget, meaning there is no additional cost to the DAO.

Budget: $4K/month for 6 months (total $24K), equivalent to 40 hours per month, a 50% increase over the 80 hours per month originally planned, reallocated from the legal budget ($30K).
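As a sanity check, the budget figures are internally consistent. The sketch below reproduces the arithmetic; the $100/hour rate is an assumption inferred from the figures discussed elsewhere in this thread, while the other numbers come directly from the proposal.

```python
# Sanity check on the proposed BD budget.
MONTHLY_HOURS = 40   # additional BD hours per month
HOURLY_RATE = 100    # USD per hour (assumed rate)
MONTHS = 6

monthly_cost = MONTHLY_HOURS * HOURLY_RATE   # 4000 USD per month
total_cost = monthly_cost * MONTHS           # 24000 USD in total

# The extra 40 hours represent a 50% increase over the 80 h/month baseline.
increase = MONTHLY_HOURS / 80

print(monthly_cost, total_cost, increase)  # 4000 24000 0.5
```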

Plan:

  • September: Research the builder landscape (light research comparing how Arbitrum is positioned for builders versus other ecosystems, plus a study of ETHGlobal teams, smaller protocol teams, and past hackathon participants). Goal: quantify how many teams can be reached from locations and events compatible with the nature of the D.A.O. Grant Program, and validate whether perceptions about Arbitrum’s appeal to builders hold true.
    • A more in-depth document on the deliverable is available as an appendix to this write-up
  • October–February: Active sourcing/BD through Castle’s network, scanning hackathon finalists, and direct outreach.
  • Provide mentorship to “almost-ready” applicants to raise them to approval level, adopting an approach closer to incubation than a traditional grant program.
  • Flexibility: Increase the risk threshold in a controlled way by funding strong teams building new primitives or infrastructure even without proven product–market fit.

KPI: Increase approvals from 6 to 21 by December 2025 (a 250% increase). This will be achieved through external sourcing, network leverage, and mentorship. The KPI has been purposely set two and a half months before the end of the program, and around three months after the ratification of this amendment (excluding the September research period), to leave time for proper feedback, evaluation, and adjustments if needed before the program’s natural end in mid-March.

Scenarios:

  • Success: ≄21 approvals by December, demonstrating that targeted BD and a higher risk threshold can surface quality projects and improve fund deployment.
  • Failure: Significantly fewer approvals despite these efforts, suggesting either insufficient builder flow into Arbitrum or a domain scope that is too broad. This would warrant reconsidering the domain structure or reallocating budget in Season 4.

In short, this amendment provides Castle Labs with resources and a clear mandate to improve deal flow, while giving the DAO measurable outcomes to determine whether the issue lies in sourcing or domain design—all without impacting the program’s overall budget. Delegates will be able to monitor progress through regular monthly reporting.


Appendix: Study on small builders’ landscape and sourcing

The following section describes the research that Castle will deliver by the end of September.

The goal is to understand why approval rates in the current grant program remain low and to recommend concrete steps, starting in September, to improve outcomes. At the same time, it addresses the broader question: “is Arbitrum attractive to builders?”. This will provide context for how the D.A.O. Grant Program is positioned for its target audience of smaller builders.

The focus is on diagnosing approval gaps and identifying quick ways to source and support better applications, with the overarching North Star (which won’t be the only focus of the current research) of understanding how attractive the Arbitrum ecosystem is for smaller, fast builders looking to innovate.

Research will remain light, focusing on a comparative scan of Arbitrum vs. other ecosystems (Base, Solana, for instance), a brief builder survey, and a structured review of hackathon teams.

Alongside this, we will emphasise BD work: building a shortlist of promising teams by tapping into hackathons and our network, refining sourcing funnels, and drafting a simple incubation/guidance plan to raise submission quality.

Deliverables
We will deliver four outputs that can be acted on immediately:

  1. A dedicated 3-4 page section assessing how Arbitrum attracts builders compared to other ecosystems, combining light quantitative trends with qualitative flagship examples. It will also review the first six months of grant submissions to provide a concise yet clear picture of how ecosystem positioning relates to the early grant pipeline.
  2. A Hackathon Scoreboard (CSV + 1-pager) covering 25-40 hackathon finalist teams and a shortlist of the best potential outreach targets that could fit the D.A.O. program
  3. Survey Lite Results: One page of charts summarising builder perceptions, plus an appendix with the exact questions used.
  4. An October Playbook (2 pages) with sourcing channels, quick-win incubation steps, and refined sourcing funnels for the BD team.

This work will serve as a preliminary guideline for the BD/Incubation efforts that will take place during the remaining months of the D.A.O. Grant program.

Workstreams

  • WS1: Ecosystem Positioning & Grant Review. This workstream will demonstrate how Arbitrum compares to other ecosystems and what that means for the DAO Grant program. The aim is to provide context through a mix of concise data points and qualitative examples.
    For instance, we will highlight trends in verified contracts on Arbitrum, Base, and Solana, and surface builder activity indicators (such as GitHub mentions, repository counts, and protocol trackers), while also tracking flagship examples of projects building on these ecosystems and looking for significant differences that can explain the gaps. Consequently, the focus will be on what type of builders develop their products on which ecosystem, and what type of builders Arbitrum attracts, differentiating between large teams with a track record and fast-growing small builders.
    Finally, following these insights, we will analyse the first six months of proposals by tracking volume, approval rates, and top rejection reasons. The output will be a 3–4 page Ecosystem Research, featuring visuals (a grant funnel and one ecosystem trend) that connect ecosystem positioning to the early grant pipeline.

  • WS2: Hackathon scan: Map 25-40 ETHGlobal hackathon finalist teams from the past year. Score them on activity, Arbitrum-fit, and contactability. The output includes a CSV scoreboard and a two-page memo estimating the share of genuine builders versus bounty hunters, as well as a shortlist for outreach.

  • WS3: Builder Survey: Run a short survey to capture builder perceptions of Arbitrum’s grant program and chain-choice drivers without a full interview program. The target is 15–30 responses from current Arbitrum builders and non-Arbitrum teams in our network. The output will be a one-page chart summary and an appendix with the full question list.

  • WS4: Incubation/Guidance plan: Recommend actions to increase resubmission quality by identifying the most effective sourcing channels and actively engaging builders through BD, both from Castle’s own network and beyond. Provide a structured milestone framework with clear resubmission criteria, while offering lite guidance support to teams whose applications are close but not yet strong enough. This dual focus ensures we bring new builders into the Arbitrum ecosystem and also help existing applicants refine their proposals to reach approval.

Timeline for Research (September)

  • Week 1: Finalise scope, extract funnel data, finalise survey, and launch distribution.
  • Week 2: Collect hackathon team data, fill scoreboard fields, and nudge survey responses.
  • Week 3: Analyse survey responses, prepare shortlists, and build approval trend visuals.
  • Week 4: Assemble the Ecosystem Research and October Playbook, circulate drafts, and publish final outputs.

The following months, until the end of the program, will be focused on implementing the findings from the research period, with a focus on BD work and Incubation plans.

Roles

  • NDW & Chilla: Data collection and analysis, drafting of Ecosystem Research and Playbook.
  • JoJo: Project Manager, stakeholder alignment, survey distribution.
  • Castle Labs: Editorial pass, scope guardrails, sign-off, BD, and media push.

Arbitrum Builder Survey (Lite)

We will run a short survey with 15-30 builders, including both current Arbitrum teams and non-Arbitrum projects in our network. The goal is to capture perceptions of Arbitrum’s grant program and understand the broader drivers behind chain choice and first-deploy preferences. The survey will remain lightweight, focusing on questions such as: Which chain would you deploy a new protocol on first? How clear are the criteria for Arbitrum DAO grants? What factors (e.g., liquidity, technical fit, BD, and grants) most influence your chain selection? The exact design will stay flexible to avoid narrowing the scope prematurely, while ensuring we gather actionable insights on builder motivations and pain points.


Great to hear that! There are also a lot of very talented indie builders on Farcaster; I’m happy to make some intros or share the survey link with them when you have it ready.

this extraordinary payment of $24k should be conditional on achieving the KPI of 21 accepted projects by December 2025.

so that, if the KPI is not achieved, the money should be returned by @CastleCapital to the multisig.

As someone still new to this community, I found this proposal very thoughtful, especially the builder survey part. One idea that comes to mind is that tracking how applications progress from draft to final submission could reveal where friction happens more clearly. That way, the survey’s perception data and the behavioral data could complement each other.


I genuinely understand where this request comes from. In our opinion, though, it is neither fair nor the right approach, for the following reasons:

  • the initial Tally proposal didn’t require any BD effort, research, or active outreach from the DAs. While this has always been (partially) done by the team in a personal capacity, here we are talking about an activity that is far more structured and intense. To put it another way: something extra, outside the initial scope, sized at 40h/month with specific KPIs
  • while there is a fair degree of confidence, both on my side and on Castle’s side, about achieving the proposed KPIs, we can’t be certain of it. To get specific: even just filtering through hackathon winners means separating good teams from bad ones (where “bad” can mean many things, from projects that simply don’t fit our ecosystem to hackathon farmers, to name one), finding a way to get in touch, and finding a way to get them involved in our program. That is a lot of steps. Again, while we do believe the KPI is achievable, there is a degree of risk that the numbers won’t materialize, simply because of the nature of the activity itself. Your request would mean Castle allocating a further 200 hours from now to December, and then another 120 hours until the end of the program, with no certainty of being compensated
  • the proposed mechanism also adds a perverse incentive: to potentially save $24k in case these KPIs are not achieved, we might push Castle to approve very low-quality applications just to reach the quota. The end result would be the DAO still spending the $24k while also allocating grant money to applications of subpar quality. This is the worst possible outcome.

Thanks for your suggestion!
As a matter of fact, we are currently finalizing the survey for applicants; one of the things we want to understand is where we can improve and what the pain points were for grantees.

This can definitely be valuable, and we welcome any introduction of any team to us, to check whether they want to apply and to guide them. We will also discuss with Castle their vision and opinion on Farcaster :slight_smile:


They are already being compensated for 80h a month. At $100 USD per hour.

The additional $24k proposed here is for 40h a month, aka, half of that time.

The logic in my request is that, if they have been approving way less applications than the other domains, they have also, naturally, been spending less time than the other domains, aka, less than the 80h a month, since only the 6 approved applications had to go through a more fine-grained level of due diligence of course.

Therefore, I suspect they are not using the full 80 hours a month that they are already being paid for, and they could dedicate the remaining time to do this BD work proposed here.

Then in December, and only then, if they achieve the KPI of 21 accepted projects, they should get the $24k bonus proposed here.

Also, in general, we should not take lightly this kind of reallocation of budget without a DAO vote. Not at all. It sets a very bad precedent.

We’d like to thank the @CastleCapital team for surfacing this important issue and for putting forward a thoughtful solution. We particularly appreciate that instead of distributing funds to projects of questionable value, you engaged the DAO and recalibrated the program. This shows real commitment and seriousness in contributing positively to the sustainability of the grants program.

Overall, the plan looks well-structured and seems like a practical way forward. That said, we have one clarification: why 21 specifically? Over the last six months the program only identified six good matches in this domain. Setting a goal of 21 represents a ~250% increase, which may unintentionally create pressure to approve candidates that are not a strong fit, just to reach the target.

On a related note, we take a different view from @paulofonseca regarding Castle Capital’s compensation. We do not believe the $24k fee should be tied to achieving 21 approved applications. While we understand the reasoning, the greater risk in our view is incentivizing allocations that don’t bring value to the ecosystem. That could result in wasting hundreds of thousands of dollars from the $1.5M pool, simply to avoid paying a relatively small fixed fee.

Castle Capital and @JoJo have demonstrated integrity by flagging this issue early rather than pushing funds out the door. That alone should earn them the DAO’s trust to continue this experiment without undue pressure to meet a numerical quota.


Gm, thank you all for the feedback so far.

The logic of your request assumes that Castle currently works less than 80 hours per month, due to the lower number of applications approved.

This assumption is wrong; while there are some differences in the modus operandi of the various DAs, there is thorough due diligence on all proposals.

For Castle specifically, this translates into a first review and open discussion (most of the time through Questbook or Discord, sometimes on Telegram), then a review of the new information provided by the team, a second, more tailored interview, and eventually a call with the team. Alongside all of this, they obviously analyse the proposal, the background of the product and team, and the claims about the specific KPIs, PMF, or other data mentioned. This translates into notes and data that are then published on Questbook and in our Notion alongside the evaluation.

In parallel, like the other DAs, they are involved as second reviewers and in general give opinions on proposals from other domains as well.

As of today, they have received 202 proposals in almost 6 months. In this count I am not even including the 40 that got resubmitted: in some cases these are almost entirely new proposals due to the initial feedback, in other cases they have only light modifications, but I mention this data point to give a full picture. Against the 80 hours/month, napkin math suggests you could spend only a bit more than 2 hours per proposal to get through that volume.
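The napkin math above can be reproduced in a few lines (a rough sketch that, as in the post, excludes the 40 resubmissions and any milestone-review time):

```python
# Rough per-proposal time budget implied by the figures above.
HOURS_PER_MONTH = 80
MONTHS = 6            # "almost 6 months"
PROPOSALS = 202       # resubmissions excluded, as in the post

total_hours = HOURS_PER_MONTH * MONTHS            # 480 hours available
hours_per_proposal = total_hours / PROPOSALS      # ~2.38 hours each

print(round(hours_per_proposal, 2))  # 2.38
```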

Now, this is obviously not what happens; there is a lot of variance. To some degree you are right in saying that fewer approved proposals means a bit less workload, since there are fewer milestones to review (which adds further time). But, and this is something I don’t want to go deeper into here because we want to first finish this iteration, there are domains, such as the events one or the New Protocols and Ideas one, that are well over the 80 hours per month quota. We are seeing how the mechanism implemented from the beginning is starting to reach its limits in terms of scalability of the team’s work, and this is worth addressing, if at all, in a new iteration of the program. Again, not something worth discussing for now.
To better understand, a strong indication comes intuitively from the fact that New Protocols and Ideas received around double the applications of Season 2 in the same amount of time.

Finally, I would like to correct a bit what you just posted about “reallocation of budget”.

We are not using capital from the portion destined for grants. As a matter of fact, the portion we are allocating here still comes from the OpEx budget, which, as the name suggests, covers operating expenses. We are also adding this BD function, which was outside the original scope of the proposal, following the requests and feedback of several delegates, yourself among them.

Thanks a lot for the support @karpatkey. To answer your question: the number came from a mix of “what target makes sense in terms of creating a strong trend projected over the final 6 months of the program” and the initial research and scouting Castle did to understand the feasibility of that very target.
After this internal research, we do believe that 21 proposals are achievable without compromising the quality of evaluations. At the same time, while confident, we obviously can’t guarantee this number will be reached; this is also the reason it was set as a KPI not for the end of the program, in March, but for December, so that we can effectively evaluate whether what was planned on paper works in prod, and adjust if needed.


On a side note: we have faced this critique about the low approval rate for several months. If the plan were to address it in a quick-and-dirty way, the team would have already lowered the bar and started approving applications that didn’t meet the minimum level of quality we established internally. This hasn’t happened so far, and we don’t see it happening in the next quarter either.

We take pride in the transparency we are promoting in this program, whether in the decision making process, in what proposals we approve, or in how we approve them, as well as in how we report information to the community. We want to continue pushing for the ethical values we believe are right for this program and for the DAO, as well as for the growth of the ecosystem, and we want to stick to these values also in this side initiative we are plugging in.


no @JoJo, it comes from the Legal budget. there’s no OpEx budget in the onchain proposal.

the Legal budget was $30k USD in the onchain proposal, and you guys either didn’t spend it or don’t plan to spend it before the end of the program. let’s hope there’s no future request for more money because we ran out of money for legal expenses.

Both screenshots are from the Tally proposal. As mentioned to you directly a few minutes ago, live on the GCR call, before you posted this very reply in the forum: that $30K is part of the program’s OpEx budget.

That said, I do think you stated your opinion strongly: I have taken into account your point of view, as well as the points of view of several other delegates who reached out to me or whom I reached out to in private, to see if any adjustment is needed.

I can say that, so far, nobody has expressed concern about the financial arrangement posted above; some delegates, like @karpatkey above, posted questions/concerns about the specific number, to which we tried to provide an answer to the best of our knowledge based on the last 3 weeks of study we did in private. In general, there has been good acceptance of the modification, especially with a view to understanding the current state of Arbitrum, and whether a grant program structured differently from what was historically done in our ecosystem is viable and useful.


it’s part of the Legal budget, inside the OpEx budget.

you make it sound as if it is backup money that was somehow approved for that usage, but it isn’t. it was approved onchain to be used for legal expenses.

So as I said previously:

and it would be more reasonable to do it the way you’re proposing if it were a bonus payment to @CastleCapital for achieving the KPI by December.

The sixth monthly report is live on our website!

This is the mid-term month, and we are releasing:

  1. the normal monthly report with all the data
  2. a more comprehensive, general report with considerations from each member of the team
  3. we have set up a call for tomorrow, at 4PM UTC, to discuss the mid-term results and what we have been experiencing so far; we will also discuss the qualitative research from Castle Labs related to small builders, which will be published here shortly.

We invite all delegates to the call tomorrow to give us feedback and learn more about the program!

TLDR:

  • reporting period: 18th of August to 19th of September
  • 663 proposals in total, of which 89 were approved and 14 were completed
  • Similar to last month, a slow period tied to seasonality, with a lower number of new proposals month over month; at the same time, we have the highest number of completed and delivered milestones, and we are starting to see several projects coming closer to the finish line or completing their grants
  • the team's focus this month has been on creating the BD function for the “New Protocols and Ideas” domain to address concerns from the DAO


D.A.O. Grant Program — Mid‑Term Forum Summary

The following contains a brief summary of a longer form attached to the end of this post.

New Protocols & Ideas — Castle Labs
This track surfaces the clearest ecosystem tension: Arbitrum is rich in flagship protocols but thin on the small, fast teams that turn fresh ideas into valuable products. Many draft proposals, while good on paper, lack technical ownership or a clear Arbitrum‑first rationale; others are feature bundles or cross‑chain repackages that could live anywhere. Where momentum did appear, it came from founders who could explain trade‑offs, outline a minimal impact model, and anchor integrations on our stack. The path ahead could be to make it easier for very small teams to run lightweight experiments on Arbitrum and to reserve deeper, hands‑on support for the few that prove real differentiation. A simple, shared, cross-domain proof packet (what is new, how it ships, how value accrues here, and who is technically accountable) creates that clarity for builders and reviewers alike.

Education, Community Growth and Events — SeedGov
Education remains the ecosystem’s front door. When paired with credible partners and practical curricula, it builds trust and gives newcomers a map. The recurring weakness is continuity: too many efforts stop at visibility without guiding participants into repositories, capstones, or follow‑on support. The shift that is needed is cultural rather than cosmetic: treat education as a pipeline with proper follow‑up. That means fewer, better programs with enforceable outputs and light bridges into what should be a builder funnel, so first contact reliably matures into sustained contributions on Arbitrum.

Gaming — Flook
Reframing gaming season over season as a user‑acquisition lab has helped the domain focus on what matters and is realistically achievable within the program’s cap: whether creator campaigns, streams, and showcases actually bring players into Arbitrum games. This lens favors experimentation and iteration over vanity reach or products that can’t be built with the current capital. The headwinds are somewhat familiar to people from this industry: mainstream creators still weigh brand risk in tapping into Web3, and audiences remain split between web‑native habits and crypto‑native incentives. As a scouting ground, this domain now informs, with numbers to back it, where larger ecosystem bets deserve to follow, giving Arbitrum a practical way to find where players are and how they can be reached effectively.

Developer Tooling — Juandi & Erezedor
The strongest pull is around Stylus and tools that shrink the distance from idea to deployed code. AI‑assisted onboarding, credible benchmarks, and bridges to widely used developer standards make Arbitrum feel accessible. The limiting pattern is a wave of mirror projects that add little defensibility. The opportunity is to complement ease‑of‑use with genuine advantage: payments and account‑abstraction rails, analytics, and decentralized AI building blocks that let applications do something here they cannot easily do elsewhere. Tooling should remain, and strengthen its role as, an engine in a more complex machine, pushing to reduce friction while compounding what is unique to the Arbitrum tech stack.

Orbit — MaxLomu
Orbit was launched to grow ecosystems around existing chains, not to create new ones, and to explore the moment when an app should graduate to its own environment or expand into existing ones. Early partnerships and targeted bets showed promise, but demand has been low and some proposals felt grant‑first rather than user‑first. Interoperability delays have also cooled momentum in our opinion. The clearer story should be business‑driven: validate on the main Arbitrum chain, then expand to Orbit only when it truly serves the product and its market. This track is also a natural place to trial ideas like geographically grounded “crypto cities,” built step by step through composable services that reflect Arbitrum’s strengths.

Program Manager — Jojo
Across domains, the program has evolved over the seasons from simply making grants available on the chain to a more robust path that compounds through clearer ownership, reporting, and Arbitrum‑tied impact. The work ahead is to keep a balance between small, fast tests, what the market wants, and what builders are able to provide; to evolve by adding hands‑on support for the teams that need it; and to strengthen evidence standards so that decisions feel consistent across the DAO, between programs and the overarching strategy. To this end, strengthening alignment with the Foundation, Offchain Labs and the DAO (all of which still need to come to terms with this overarching strategy, at least at the level of inter‑team communication) is what is needed to graduate this program into an important stepping stone of our ecosystem.

Executive summary

Based on a multi‑dimensional analysis of developer activity, user metrics and builder sentiment, the report concludes that Arbitrum is losing momentum to Base and Solana in attracting small, experimental teams while remaining a strong venue for institutional DeFi projects. We compared chains using Electric Capital data on open‑source repositories and active developers, verified‑contract counts, daily active users (DAU/MAU), throughput and capital flows, complemented by a survey of 22 builders and a post‑mortem on a hackathon continuation program.

The metrics consistently favoured Base and Solana. Electric Capital’s taxonomy shows Base and Solana still growing linearly in repository count while Arbitrum and Optimism exhibit an S‑curve plateau; Base has already surpassed Arbitrum’s repository count despite being live for a shorter time. Full‑time and established developer counts have risen 40% on Base but fallen 34% on Arbitrum. Verified contract deployments, though noisy, further underscore the divergence: Base routinely outpaces Arbitrum and Optimism due to programmes like Onchain Summer and structural support from Coinbase, the leading centralized exchange, whereas Arbitrum’s daily verified contracts have been trending down since 2024.

On the user side, Solana averages about 1.5 m daily active addresses, double Base’s and roughly five times Arbitrum’s ≈300 k. Base also leads the EVM L2s in transaction counts and throughput growth, aided by deliberate engineering choices to scale to 250 Mgas/s by late 2025, whereas Arbitrum and Optimism lag on throughput. Arbitrum still records the largest capital flows because of its DeFi focus, but much of the inflow is attributable to bridging to a single venue, Hyperliquid. In account‑abstracted wallet adoption, Base again dominates, reflecting Coinbase’s consumer‑oriented tooling.

A comparative look at developer‑relations efforts shows why momentum diverges. Base leverages Coinbase’s reach through reward‑driven campaigns, identity services and mobile‑oriented SDKs. Solana combines high‑performance tooling with extensive education programs and AI‑assisted support. Arbitrum’s program still centres on grants and an incubator, with a growing focus on Rust/C/C++ support (Stylus), and Optimism emphasises retroactive public‑goods funding. Survey responses highlight that liquidity and business‑development support drive chain selection, while grant availability ranked last among launch drivers. Respondents valued co‑marketing and portal placement over small grants and often perceived Arbitrum chiefly as a DeFi chain or were unaware of its grant programme. Interest in structured incubation scored positively, suggesting that better advisory services could help projects meet feasibility and sustainability criteria. A separate hackathon pilot underscored that low marketing budgets and a hackathon‑first sourcing model attract low‑commitment teams; the report advocates larger initial investments ($100–150k), research‑driven RFPs and better community building.

In sum, Base and Solana currently offer the most vibrant environments for small-team experimentation across both developer and user metrics, while Arbitrum remains a DeFi-centric hub with less traction among new builders. To reverse this trend, we recommend that Arbitrum invest in user-experience improvements, expand developer outreach beyond DeFi, increase or improve the grant program and marketing spend, and provide structured incubation and business-development support to help small teams convert ideas into sustainable deployments.

Builder Momentum: Comparable Proxies

To understand builder momentum on different chains, we used several proxies to gauge general activity over the last two years. The chains we decided to compare against were Base, Optimism, and, to a lesser extent, Solana (due to a lack of verifiable data).

For the comparison we looked at the number of open-source protocols deployed on a chain over time (Crypto Ecosystems), active developers (Developer Report), and verified contracts (from their respective scan sites).

We want to caveat that verified contracts can be a noisy metric: protocols like Manifold, Zora, Farcaster, Clanker, etc. add a lot of verified contracts to Base. But all the different metrics combined paint a picture of where builders are generally gravitating.

Protocols deployed over time

This metric is maintained by Electric Capital. It is a taxonomy of open-source repositories from blockchain, cryptography, and decentralized ecosystems. The charts show the growth of ecosystems and repositories over time. Arbitrum and Optimism are showing an S-curve in growth, whereas Solana and Base are still growing somewhat linearly. Remarkably, Base has outpaced Arbitrum’s current repo count even after being live for a shorter amount of time; at the same time, we could argue that Arbitrum’s growth chart is more organic, since Base has a central entity behind it, Coinbase, that can push for specific spikes in the metric.

NB: The dip and recovery in repos are due to a cleanup of old/multichain repos done by Electric Capital.

Ecosystems

High-level categories like Bitcoin or Ethereum that describe communities or projects. They form the structure of the taxonomy rather than pointing to specific codebases.

Repositories

GitHub projects such as OffchainLabs/arbitrum that are attached to exactly one ecosystem. They supply the actual code content that fills the ecosystem structure.

Active developers

Over the last two years the total number of developers, across all chains and in crypto in general, has decreased by 4%. Interestingly, the full-time developer count has increased by 9%, but the part-time developer count has sharply decreased. Looking at each ecosystem, we see that Base has grown significantly, up 40%, even overtaking Arbitrum in both full-time and established developers. Both Arbitrum and Optimism saw double-digit percentage decreases, with Arbitrum losing a staggering 34% and Optimism 18%. Solana, on the other hand, is the big winner, with the highest number of full-time devs and an increase of 62%.

Electric Capital Methodology

NB: Full time devs are developers who commit code 10+ days out of a month.

Verified contracts

What we see with this metric is that even though Arbitrum and Optimism are much older (2021), Base outpaces both in total contracts deployed, total contracts verified, and 24h contracts deployed (and verified). There could be multiple reasons why this is happening; we already named a few: Clanker 4.0 on Base, for example, has deployed over 107K verified contracts in 103 days (V3 did 140K, V2 44K, V1 11K).

There are multiple spikes in daily verified contracts on Base (July 14th and September 2nd), which could be attributed to the start and end of “Base Onchain Summer”, which awarded $250,000 in total prizes for top apps, breakout new apps, and other categories.

Both Arbitrum and Optimism are showing a downward trend in daily verified contracts since 2024. As mentioned above, verified contracts in isolation don’t paint a full picture, as the metric can be noisy, but they do signal activity on the chain, which could persuade developers to launch there.

Source: Arbiscan.com

Source: Basescan.com

Source: optimistic.etherscan.io

As briefly mentioned above, verified contracts are not a reliable metric that can be taken in isolation to draw strong conclusions; despite this, they are an objective data point. Even after filtering out specific events that certainly push for outlier activity, like Onchain Summer or the announcement of a possible token, the picture is that Arbitrum’s verified contracts are flat or declining year over year, while Base still sees events that, even if framed as “inorganic”, spur developer activity.

Snapshot most recent verified contracts

The snapshot is not indicative of what kind of contracts are being verified or deployed, but along with other signals it does align with the idea that most of the contracts being deployed on Base are tokens (nearly 50% of the page), whereas Arbitrum’s verified deployments have a more generalized DeFi tone.

Arbitrum

Base

Optimism

Conclusion for Builders

Small-team momentum and contract deployment velocity appear strongest on Base right now. While Arbitrum’s headline protocols remain strong, its small-builder on-ramp looks thinner, with fewer active devs than a year ago. While these metrics aren’t conclusive proof of one ecosystem being better than another for smaller builders, they do show that builders are gravitating more towards Base. Information on verified contract deployments, or programs, on Solana isn’t aggregated, but the Crypto Ecosystems repos show that it is the biggest ecosystem for builders when counting repos.

The takeaway is that Base currently has more momentum, but Solana still offers a larger ecosystem; Arbitrum is strong for institutional DeFi, but doesn’t carry the same strength for mid to small projects and builders. However, deployments are meaningful only when matched by active users and throughput, which we will explore in the following section.

User Metrics

On-chain metrics are important for developers, as they can tell them whether it makes sense to deploy on a certain chain. The metrics that matter are the number of users on a chain, whether the chain is attracting/transacting value (flows/TVL), how much throughput (gas/s) a chain has, and what native support developers can expect.

DAU/MAU

Active addresses are commonly used to gauge engagement on a blockchain, though the metric isn’t perfect since one person can have multiple addresses. While it gives an overall sense of network activity, it can be inflated through Sybil attacks, where many fake addresses are created to skew the data, especially by users trying to farm certain networks in hopes of an airdrop.

The data suggests that Solana is still the most active chain for users at 1.5M daily active addresses, double Base’s; Arbitrum has 20% of Solana’s users at 300K, and Optimism trails far behind at less than 100K.

Source: Growthepie.com

Source: tokenterminal.com/

TX counts/fees/gas usage

TX counts

The total number of transactions on a blockchain is a useful measure of activity, but it doesn’t tell the whole story. A network with fewer transactions might still be moving far more value because it’s heavily used for high-value DeFi trades, while a chain supporting gaming or other low-value use cases could see a high transaction count with relatively small amounts of capital involved. Therefore, transaction volume should be considered alongside the types and value of transactions to understand actual usage. Setting Solana aside, Base is leading in user activity here.

Source: Growthepie.com

Source: tokenterminal.com

NB: Source on voting/raw transactions solanacompass.com

While provided here for the sake of completeness, we don’t think Solana’s transaction count is a useful metric: it is much harder to gauge, as it is inflated by validators reaching consensus by voting on chain (80%–90% of transactions are usually voting), and the chain being cheaper makes it prone to spam. Even with those caveats, though, the transaction count is far higher than Base’s, at over 100 million daily transactions.

Gas usage

Building on transaction count, we can look at gas usage, or throughput, which is a more precise indicator of scalability because it measures the network’s overall compute capacity rather than simply counting transactions, which can vary greatly in complexity: from roughly 21,000 gas for a basic ETH transfer to around 280,000 gas for even a simple Uniswap swap. Throughput directly reflects how much work a blockchain can process and how close it is to its performance limits. For app developers and Layer 2 teams, this metric is indispensable for gauging growth potential, estimating costs, and understanding where bottlenecks might emerge.
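To make the relationship concrete, here is a minimal sketch of the arithmetic, assuming the rough gas figures quoted above (21,000 gas for a transfer, ~280,000 for a simple swap). It shows why throughput caps transaction capacity very differently depending on transaction mix:

```python
# Illustrative only: approximate tx/s from throughput and per-tx gas cost.
# The gas figures are the rough estimates quoted in the text, not measured values.

ETH_TRANSFER_GAS = 21_000   # basic ETH transfer
UNISWAP_SWAP_GAS = 280_000  # rough estimate for a simple Uniswap swap

def max_tps(throughput_gas_per_s: float, gas_per_tx: int) -> float:
    """Upper bound on transactions per second at a given sustained throughput."""
    return throughput_gas_per_s / gas_per_tx

# e.g. at a 25 Mgas/s sustained throughput:
print(round(max_tps(25e6, ETH_TRANSFER_GAS)))   # ~1190 simple transfers/s
print(round(max_tps(25e6, UNISWAP_SWAP_GAS)))   # ~89 swaps/s
```

These are upper bounds for a uniform workload; real blocks mix transaction types, so actual tx/s at a given gas/s sits somewhere between the two figures.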

It is clear that Base’s throughput is ever increasing; the rise started in February, when Base focused on scaling the chain with a north star of reaching 1 Ggas/s, and it is poised to achieve 250 Mgas/s by the end of 2025. Both Arbitrum and Optimism have been lagging behind in throughput, which could be a reason for developers to deploy on Base rather than Arbitrum or Optimism.

Source: Growthepie.com

It is interesting to note how Base’s engineers realised that running with a 35 Mgas/s target was creating problems: actual demand was lower, so fees stayed too cheap, the network attracted spam, and fee spikes were poorly managed, forcing users to pay extra priority fees. They also found that at this level the rollup was brushing up against Geth performance limits and Ethereum’s data‑availability capacity. To fix this, they deliberately lowered the gas target to 25 Mgas/s and raised the gas limit to 75 Mgas/s, increasing the elasticity ratio; this change makes fees more responsive during congestion, discourages spam and priority‑fee bidding, and keeps node operators and L1 resources within safe bounds. Read more here.
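The fee-responsiveness effect can be sketched with the standard EIP-1559-style base-fee update rule. This is an illustrative simplification, not Base’s actual implementation: the 25 Mgas/s target and 75 Mgas/s limit are the figures stated above, while the change denominator and the per-block framing are Ethereum’s defaults assumed here for the sketch:

```python
# Simplified EIP-1559-style base-fee update. Numbers mirror the stated
# 25 Mgas/s target and 75 Mgas/s limit; the denominator and per-block
# framing are standard Ethereum parameters, assumed for illustration.

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # dampens the per-block fee change

def next_base_fee(base_fee: float, gas_used: float, gas_target: float) -> float:
    """Raise the fee when usage is above target, lower it when below."""
    delta = (gas_used - gas_target) / gas_target
    return base_fee * (1 + delta / BASE_FEE_MAX_CHANGE_DENOMINATOR)

gas_target, gas_limit = 25e6, 75e6  # elasticity ratio of 3x
fee = 1.0  # arbitrary starting base fee (in gwei)

# A fully congested block (gas_used == limit) pushes the fee up 25% in one
# step under a 3x elasticity ratio, versus 12.5% under Ethereum's 2x ratio,
# so sustained spam becomes expensive much faster.
print(next_base_fee(fee, gas_limit, gas_target))  # 1.25
```

The wider the gap between target and limit, the larger the maximum per-step fee increase, which is why the change both tolerates bursts and punishes sustained congestion.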

Flows (bridge net flows)

As for value flows, net flows on all chains besides Solana are actually down. Interestingly, Arbitrum has the biggest flows, which is reasonable for the chain known as the DeFi chain, but what is glaring is that more than half of the inflow is due to users bridging to Hyperliquid (net flow of $-5.9B, with inflows of $42B and outflows of $48B). For Base, the main net flow is -$4.7B to ETH. For Optimism, the net flows are negligible compared to the other two. What is interesting for Optimism, however, is that the biggest inflows were from now-considered ‘dead chains’: zkSync ($128M), Blast ($101M), Scroll ($57M), Mode ($32M). Finally, the biggest net outflow for Solana is to Arbitrum ($214M).

Source: Artemisanalytics.com

Account Abstracted wallets (ERC-4337 // EIP-7702 )

A different interesting metric to look at is the number of smart accounts on each of the chains. This metric can signal what kind of user is on the chain: a high number of smart accounts generally indicates a better UX, as a lot of complex activity is abstracted away from the user, and this could influence a developer’s decision to deploy on certain chains. Across all the metrics, Base, again, is winning. It has the most monthly active smart accounts, the highest number of successful UserOps (a proxy for transactions), and earns the most fees. The numbers can be explained by Coinbase’s Base Account (previously Smart Wallet), which significantly simplifies the developer experience of integrating wallets into dApps.

As this is EVM related, Solana is ignored.

Source: dune.com/niftytable

Additional information:

https://www.bundlebear.com/eip7702-overview/all

Summary user surface

Across all user-side indicators (active addresses, transaction throughput and smart-account adoption), Base and Solana clearly outshine Arbitrum and Optimism. Solana has roughly double the active users of Base (about 1.5 million daily addresses), and Base itself surpasses Arbitrum (≈300K) and Optimism (<100K). Base also leads the EVM L2s in transaction counts and throughput growth, while Arbitrum and Optimism have lagged, which may deter developers seeking high-capacity environments.

In flows, Arbitrum still attracts substantial capital because of its DeFi focus, but much of the inflow is driven by bridging to a single venue (Hyperliquid).

Finally, Base has the largest number of active smart‑account wallets and user operations, a reflection of Coinbase’s consumer‑oriented tooling. Taken together, these metrics paint a consistent picture: Base (and Solana) offer the most vibrant, scalable user bases for new deployments, whereas Arbitrum remains a strong DeFi hub but a thinner destination for experimental teams. That imbalance likely influences where early‑stage builders choose to launch and underscores why better user‑onboarding and throughput on Arbitrum could be crucial to attract small builders.

Devrel

Developer relations (DevRel) is the practice of building a two‑way relationship between a technology provider and the developers who use its products. Effective DevRel teams make it easier for developers to get started by providing clear documentation, helpful SDKs and sample projects. They also run hackathons, office hours and support channels where developers can ask questions and get quick answers. Good DevRel shortens the learning curve, encourages experimentation and fosters a sense of community around a platform. These efforts lead to wider adoption, more polished applications and stronger feedback loops between builders and product teams, ensuring that the platform evolves in ways that meet real‑world needs. Without active engagement, even technically sound protocols may struggle to attract and retain developers.

Developer relations are central to the success of blockchain networks. Base, Arbitrum, Optimism and Solana all recognise that attracting and supporting builders drives network adoption. Base leverages Coinbase’s reach and structured reward programmes to onboard developers; Arbitrum expands its tooling with Stylus and pairs grants with hands‑on incubator support; Optimism’s OP Stack and retroactive funding model incentivise both innovation and public‑goods projects; and Solana couples its high‑performance architecture with improved tooling, AI‑enhanced support and education programmes such as Solana U. While each network has a different philosophy and technical stack, they all invest heavily in documentation, SDKs, community engagement and financial incentives.

Builder Survey

The survey provides a quick empirical snapshot of how developers view Arbitrum and other ecosystems. To collect this pulse check, we circulated a 13‑question survey across our network and received 22 responses. The questions asked participants to identify their protocol’s stage (emerging, growing or established), indicate whether they currently build on Arbitrum, and choose which chain they would deploy a new project on if starting today. We also asked respondents to rank the importance of factors such as liquidity, business development support, technical fit, grants and community when deciding where to launch. This limited sample is not meant to be exhaustive but offers a directional sense of builder sentiment and priorities. It helps us gauge how developers weigh ecosystem features and what might incentivize them to choose Arbitrum over competing chains.

After analyzing the responses we came to the following conclusions:

  1. Launch decisions are liquidity/BD-led. The survey mirrors our 6-month funnel: applicants who fail often do so on Feasibility/Milestones and Alignment, not because they don’t want funding. Builders go first where users, liquidity and BD surfaces are easiest to access (Base by a wide margin in this pulse).

  2. Distribution beats grants. “Grant availability” ranked last of the five launch drivers. Respondents are more sensitive to distribution (co-marketing, portal placement) and co-incentives, exactly the levers they say would change their mind to build on Arbitrum.

  3. Incubation helps at the margin. Interest in structured incubation is solid (mean 3.68/5). That aligns with our rejection stack: the top failures (Milestones/Feasibility, Sustainability) are exactly what a micro-experiments + light advisory lane can address quickly.

  4. Awareness and fit gaps remain. A non-trivial slice hadn’t heard of the program, perceived Arbitrum primarily as a DeFi chain, or found the category fit unclear. This is consistent with the category skew we observe (DeFi/AI heavy; fewer “mindshare” verticals like payments or consumer social).

Lessons from the Hackathon Continuation Program

Additional research done by RNDAO revealed important insights regarding Arbitrum’s Hackathon Continuation Program, a pilot that aimed to nurture hackathon winners into viable ventures. Several limitations surfaced:

  • Talent attraction and community: The organisers reported that a very small marketing budget and the hackathon format attracted builders with low commitment. They also noted that Arbitrum is perceived to have less entrepreneurial “community” than Base or Solana.

  • Hackathons vs. venture building: Hackathon teams often lacked defined problems or validated markets; mentors found that the hackathon mindset is misaligned with customer‑centred venture development. The report recommended abandoning hackathon‑first sourcing and replacing it with research‑driven RFPs and opportunity briefs.

  • Funding and talent acquisition: To attract higher‑calibre founders, the programme called for larger initial investments ($100–150k) and follow‑on funding, plus more marketing and community‑building activities. It also advised shifting the focus from product hacks to business validation.

These lessons dovetail with our grant‑funnel diagnostics: most rejected NPAI applications fail on feasibility, sustainability or team validation rather than ideas.

Conclusion: Where the small-team experimentation is (and isn’t)

Our analysis reveals a consistent pattern: metrics of developer momentum (repos, full-time devs, verified contracts) and on-chain usage (DAU/MAU, throughput, smart accounts) show Base and Solana outpacing Arbitrum in attracting new builders and users. DevRel initiatives support this narrative: Coinbase-backed marketing and reward programmes have created strong builder mindshare on Base, while Solana’s IBRL (increase bandwidth, reduce latency) narrative has attracted and built a large developer community. In contrast, Arbitrum’s current DevRel caters more to established DeFi/RWA players than to small, fast-iterating teams, with even anecdotal stories of builders unable to find support in official channels.

The survey results and hackathon report reinforce that builders prioritise liquidity, distribution and business-development support over grant size, view Arbitrum primarily as a DeFi chain, and desire structured incubation to help them meet feasibility and sustainability requirements. Most rejected NPAI applications fail on these exact points: unclear milestones, weak feasibility (140 rejections), sustainability gaps (138), lack of Arbitrum alignment (125) and lack of novel ideas (119).

In practice, alignment and novelty frequently surface inside feasibility critiques; the teams applying usually don’t have a concrete delta or an Arbitrum-specific plan. This pattern matches a pipeline heavy on AI and DeFi and light on payments, decentralized social, and other “mindshare” categories, which are exactly the tracks where small teams on hot chains tend to run fast experiments. Additionally, the RNDAO pilot observed that limited marketing budgets and a hackathon‑first approach attracted low‑commitment teams and that Arbitrum is perceived to have a thinner entrepreneurial community than Base or Solana.

To conclude, it seems that Base’s metrics are growing and outpacing Arbitrum in nearly every facet, with visible spikes around programmatic initiatives (e.g., Onchain Summer). Qualitatively, recent “mindshare” initiatives (e.g., external ecosystem grants like Kalshi’s) have targeted Base and Solana rather than Arbitrum, reinforcing a perception that quick, low-friction experiments have better distribution elsewhere. Meanwhile, Arbitrum continues to land institutional-grade partners (e.g., major DeFi/RWA integrations), which strengthens the top of the stack but does not by itself generate a small-builder on-ramp.


Appendix Survey

(Base (10) > Ethereum (2) ≈ Solana (2) ≈ HyperEVM (2) > Arbitrum (1).)

Why would you choose this chain?

This refers to the option they choose above.

  • Liquidity and Distribution: High liquidity, better distribution, and a large number of DeFi users are key factors.

  • Ecosystem and Momentum: A vibrant ecosystem, momentum behind the chain, and a focus on DeFi are appealing.

  • Growth Opportunity: Chains with realistic growth opportunities, strong support, and emerging ecosystems with missing solutions are preferred.

  • Specific Chain Preferences: Mentions of Ethereum for serious DeFi apps, Base for consumer-focused apps due to its affiliation with Coinbase and institutional comfort with yield farming, and Solana for its active community despite concerns about Rust.

  • Other Factors: Exposure, capital, cheap fees, and being an EVM compatible chain are also considerations.

Have you considered applying to the Arbitrum DAO grant? If not, why?

  • Awareness and Information: Several respondents were unaware of the Arbitrum DAO grant program, suggesting a need for better communication and outreach.

  • Grant Size and Relevance: Many felt the grant size (e.g., $50k) was too small or not relevant for their projects, especially for growth-stage teams or those with higher cost opportunities.

  • Application Difficulty and Milestones: Some found the milestones or application process too challenging or not worth the effort for the grant amount.

  • Project Fit and Scope: Some believed the grant categories didn’t fit their dApp profile, or that the grant was better suited for smaller projects. There was a perception that Arbitrum is primarily a DeFi chain.

  • Past Interactions and Alternative Support: Some had received past support from the Arbitrum foundation or benefited from other initiatives like audit subsidies and co-marketing, while others considered alternative programs.

If other reasons are checked, please provide below.

  • Retail interest and ecosystem activation: One response mentioned the importance of amassing retail interest and ecosystem-wide activation.

  • Fairness and equal opportunity: A fair and equal playing field for all applying projects was highlighted as important.

  • Dedicated support and compensation: One respondent emphasized the need for dedicated liquidity and support from the network, and adequate compensation for teams that might need to put other projects on hold.

  • Exclusivity requirements: Hard requirements like exclusivity were noted as detrimental due to clashes with fiduciary responsibility.

What single change would most increase your likelihood of applying for an Arbitrum grant?

  • Increased Grant Size and Support: Many respondents indicated that a larger grant size (e.g., 150-200k) and direct participation from the DAO and Arbitrum protocols/partners in pushing the resulting projects would significantly increase their likelihood of applying. This includes providing marketing, VC introductions, and helping projects gain momentum.

  • Improved Process and Communication: Several responses highlighted the need for a single point of contact and easier milestone requirements. Streamlining the KYB/KYC process, which has faced technical issues and data breaches, was also a significant concern, with a desire for a one-time submission.

  • Alignment and Ecosystem Growth: Alignment on multi-chain deployment strategies was a key factor for some builders. There’s also a desire for a continuously running grant system based on performance, a growing DeFi ecosystem, and increased focus on specific DeFi verticals like options.

  • Commitment and Visibility: Respondents expressed a need for Arbitrum to demonstrate a stronger commitment to its grant program, ensuring it’s not perceived as a “side quest” and is better integrated with exchanges and retail distribution channels to put Arbitrum back in the spotlight.

PDF link:

1 Like

I also recommend checking the conclusions from the Arbitrum Ecosystem Pitch day which add further meat to this narrative.

Following up on the call, here is the recording of the call of the 3rd of October and the slide deck that was presented.

The seventh monthly report is live in our website!

TLDR:

  • referencing period: 20th of September to 15th of October
  • 727 proposals in total, of which 99 were approved and 20 were completed
  • for the second month in a row we are seeing several milestones being completed: 25% of the whole program in the last 30 days, signalling we are moving, as expected, into the more mature phase of the grant
  • the team's focus this month has been managing the mid-term report, the call with the DAO, and the deliverables both from the whole team and from Castle for the research
  • focus for the next month is preparing the post-grant interviews for completed projects

2 Likes

The eighth monthly report is live in our website!

TLDR:

  • referencing period: 15th of October to 17th of November
  • 817 proposals in total, of which 112 were approved and 25 were completed
  • for the third month in a row we are seeing several milestones being completed: 20% of the whole program in the last 30 days
  • several members of the team were at DevConnect, to participate in Arbitrum events as well as talk about grants and governance
  • focus of the month has been circulating the post-grant interview to completed projects, to start gathering feedback from the teams in the program.

2 Likes

The December report for the D.A.O. Grant Program will be delayed until early January at OpCo’s request. The goal is to begin aligning the delivery of all DAO reporting in a way that’s compatible with the GCR call, so we’ll take a couple of extra weeks and cover 45 days instead of 30. This will help us comply and enable a smoother flow of information with less friction.

That said, I’d like to provide a brief informal update:

  • We have received a total of 911 proposals across the five domains.
  • We have accepted 118 proposals in total, and 31 have been completed.
  • We have allocated $3.2M and distributed $1.5M.

Finally, for the “New Protocols and Ideas” domain, we currently have 19 accepted proposals, versus the internal KPI, set in September, of 21 proposals by the end of December. We expect to reach the established target for the BD sprint in this vertical.

1 Like

The ninth monthly report is live on our website!

TLDR:

  • referencing period: 18th November 2025 - 1st January 2026
  • 954 proposals in total, of which 128 were approved and 32 were completed
  • During this period we received over 140 new applications, more than 14% of our total submissions, indicating increased grantee activity.
  • This report marks the third quarter of the program and is accompanied by a public DAO call to review results so far.
  • Specific updates about the activity during DevConnect can be found in the “Community, Education and Event” Domain summary



Gaming Domain Summary

Over the past three months, the Gaming Domain has made good progress in funding both small and large-scale user acquisition campaigns for Arbitrum games. Seven applications were funded, with a total allocation of $133,000.
Across the seven applications there was a focus on live event content at the YGG Play summit, as well as gaming guild support for content creation and tournaments through AGDAO, P2ECREW, and Wayfinders. These campaigns are still in progress but are on track to generate millions of views and tens of thousands of clicks to Steam wishlists or game downloads, with a specific focus on Wildcard, a standout hit on Arbitrum with funding from AGV.
Additionally, we funded a social media campaign for Forkast, a prediction market with a focus on Esports and Gaming that is making waves in the space. This campaign, for a spend of $25,000, resulted in 1,028 new wallets created and signed up on the Forkast platform. This shows that there is space for this domain to fund marketing budgets for other consumer applications that may not be strictly videogames, and this should be considered a viable opportunity for future seasons of the D.A.O. program.
Heading into the final two months of the program, we are aiming to set up solid, long-term campaigns with marketing agencies, creators, and gaming guilds to ensure a baseline level of user acquisition support for Arbitrum games in the interim between Season 3 and an eventual Season 4.

Dev Tooling Summary

Within the Arbitrum DAO Developer Tooling Program, we’ve now committed a little over a third of the total budget. Over the last quarter, most of the work has been less about spraying grants and more about tightening the funnel, funding tooling that actually gets used, and pushing proposals that sit at the intersection of AI, payments, and real developer demand. A lot of time has gone into shaping proposals so they’re not just technically interesting, but meaningfully composable with the rest of the Arbitrum stack.

A big inflection point for this thinking came during Devconnect Argentina. We spent a lot of time on the ground talking to accepted grantees, teams currently in review, and folks from both the Arbitrum Foundation and the OCL. Across all of those conversations, a very consistent signal emerged: the ecosystem’s near-term focus is converging around two things, AI and Stylus. That alignment didn’t feel forced or top-down, it came up organically from builders and reviewers alike.

This makes a lot of sense when you look at where Stylus is today. The framework is no longer in a purely foundational phase. The core architecture is there, and what it needs now is refinement, better developer experience, and tooling that helps teams ship faster without friction. In that context, AI-driven tooling feels like a natural next step rather than a distraction. Whether it’s around auditing, code generation, developer assistance, or analysis, AI fits cleanly into the current maturity stage of Stylus as an enhancement layer, not a speculative bet.

On the tooling side, a large chunk of what we’re currently pushing forward is Stylus-focused. This includes proposals around privacy primitives, CLI tooling, and lower-level developer kits, as well as more practical integrations like bringing Stylus into existing workflows via Hardhat plugins. The goal here is straightforward: reduce the cost of building on Arbitrum. Less friction, fewer sharp edges, and clearer paths from idea to production for teams that already want to be here.

Looking ahead, one of the next concrete steps for the program is kicking off the AI Auditor Pilot. The plan is to deploy somewhere between $100K and $125K to run a demand-driven experiment, informed directly by delegate feedback, to see which AI auditing teams actually have a product that works and a commercial model that makes sense. This is not a blanket bet on “AI auditors” as a category. These teams often operate in different verticals, target different layers of the stack, and in practice don’t meaningfully overlap. Funding will reflect that reality rather than being evenly split just to check a box.

In parallel, we want to put a much stronger emphasis on payments. Payments were a huge topic at Devcon and continue to show up as a real priority across the ecosystem, with teams like Peanut explicitly choosing Arbitrum as a core rail. Payments matter because they’re sticky, they force abstraction, and they stress infrastructure in ways that DeFi alone often doesn’t. If we fund the right tooling for B2B payments and focus on enabling consistent month-over-month growth, we think Arbitrum can get faster feedback loops and more tangible outcomes.

Zooming out, this also ties directly into the broader stablecoin and payments race playing out across ecosystems like Polygon, Base, and Celo, all of which have built strong distribution in markets where crypto isn’t just speculative, but operational. These are regions where crypto acts as a lifeline or a settlement layer, not a toy. Our view is that by leaning into payments and the tooling that supports them, while doubling down on AI and Stylus at the right moment, Arbitrum can position itself as infrastructure for real economic activity, not just another scaling solution.

Orbit Domain Summary

Orbit domain highlights include Dual Mint, which has reached its first milestone with vaults launched on Arbitrum and $50K in TVL, against a final target of $15M. Locale.Network has reached its fifth milestone and is now able to onboard financial and IoT data in a privacy-preserving way, with an SDK usable by third parties and a pilot currently being finalized with a US municipality.

A key learning from these efforts is that milestones should both validate product market fit early and compose with other protocols.

Looking ahead, approval is upcoming for North Investments, which is launching a permissioned Orbit chain to tokenize yield-generating assets with a target of $3M TVL.
In parallel, the Blaze Liquidity Program was proposed to the DAO in response to recurring liquidity needs from RWA builders, with the goal of generating diversified yield for the DAO while supporting early-stage projects.

Community, Education and Event Domain Summary

During this period, the Education Domain focused heavily on supporting high-impact activations during DevConnect in Buenos Aires. The strategy was to ensure Arbitrum had a presence in diverse verticals: from institutional onboarding to developer education and community networking.

Participating in DevConnect allowed us to shift focus from proposal review to direct engagement with our ecosystem. We met with diverse grantees, spanning beyond just the DevConnect-specific activations, to discuss how the DAO can further support their growth and to gather feedback on the program. It was particularly encouraging to see firsthand the successful onboarding of new users in Latin America and Africa driven directly by these grants.
A standout success for the domain was the ETH House. Funding this initiative provided a critical physical hub for builders, the community, and the DAO to converge, proving that having a dedicated ‘home base’ at major conferences significantly amplifies collaboration. These in-person interactions confirmed that our funding strategies are effectively translating into tangible community growth.

The following is a comprehensive list of events funded during DevConnect.

  • Arbitrum Bridge Latam
    • Grant: $40,483
    • Description: An institutional activation designed to connect the Arbitrum ecosystem (Foundation, Offchain Labs) with traditional financial entities (banks, fintechs) and policymakers in Latin America. The event focused on discussing the integration of blockchain solutions into the region’s financial infrastructure.
  • Urbe Campus: DevConnect Edition
    • Grant: $14,600
    • Description: A 4-day educational hacker house (Nov 12-15) that hosted 50 selected scholars. The curriculum focused on Stylus, Orbit chains, and governance. The initiative resulted in 11 final project pitches, with scholars forming teams that went on to win bounties in the main DevConnect hackathon.
  • Arbitrum Stylus Awakening: Builder Lunch
    • Grant: $14,000
    • Description: A targeted “Builder Lunch” designed to engage technical talent specifically around Arbitrum Stylus. The event provided a space for developers to discuss writing smart contracts in languages like Rust and C++, fostering a deeper technical understanding of Arbitrum’s capabilities.
  • Official Sponsor of “ETH HOUSE”
    • Grant: $15,000
    • Description: The DAO secured official sponsorship of the ETH House. This included a dedicated “Arbitrum Day” (Nov 15) focused on educational workshops for Stylus and network use cases, ensuring high visibility among the core Ethereum developer community residing in the house.
  • Dev3pack DeFi Builder’s Club
    • Grant: $12,500
    • Description: A networking event tailored specifically for DeFi builders. The goal was to create a dedicated environment for DeFi protocols and developers to connect, share architectural ideas, and foster collaboration within the Arbitrum DeFi ecosystem.
New Protocols and Ideas Domain Summary

From October 1st to December 31st, the distribution of applicant interest across verticals followed a pattern similar to previous months. One interesting shift compared to the overall period was that the broader “Others” category (11.9%) slightly surpassed AI (10.4%). Meanwhile, DeFi (35.1%) and Consumer Apps (22.4%) continued to be the most popular verticals.

During this timeframe, we reviewed 133 proposals, a higher-than-average volume. Of these, 10 were approved, with 6 originating from Castle Labs’ BD efforts. In addition, 28 proposals remain pending, awaiting resubmission or additional information. Of the remaining 105 rejections, 20 were declined due to a lack of response to follow-up questions or because applicants were asked to resubmit under a different domain.

Finally, it is worth noting that when including submissions under review prior to October 1st, there were 13 approvals during this period, bringing the total number of NPAI domain approvals to 23 as of December 31st.

Addendum - Report on the BD activity of Castle for the Quarter

TL;DR

  • Outreach → approvals: Contacted 26 teams; 11 submitted (42%), 6 approved (23% of approached), 5 pending (19%).
  • ETHGlobal scan: ~200 finalists screened (AI-assisted). Yield to qualified submissions was low without warm paths.
  • Hackathon reality: Many hacks are bounty-mode; repos go cold post-event and contacts are hard to source from showcase pages.
  • BD moved the needle: Warm intros + template-guided milestone rewrites produced the highest submit and approval rates.
  • Effort vs. payoff: Focused BD increased hours per opportunity, but cut diligence time (known teams, social proof) and raised approval density.
  • Target profile: Slightly established teams with shipped product/PMF expanding to Arbitrum, plus proven builders with credible new lines.

Recommendation

Network leverage: BD effectiveness correlates with network strength. One potential idea could be to have a DAO “warm-intro lane” in the delegates’ Telegram to route vouched teams directly to Domain Allocators.

Activity

Submission and Approval rate

Of the 26 teams we engaged with more closely, 11 (42%) went on to submit a grant application, while 15 were filtered out during the screening stage. These early rejections were typically due to misalignment with Arbitrum’s goals, submission under an incorrect domain, or not meeting the required criteria.

Six teams have been approved so far (23%): Exchequer, Liqvid, Paros, Bond.Credit, Emerge, and Nuvolari. Moreover, there are still 5 pending secondary review (19%), with a high likelihood of approval.
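The funnel percentages reported here (42% submitted, 23% approved, 19% pending, all out of 26 teams contacted) can be sanity-checked with a quick calculation. A minimal sketch, using only the figures stated in this report:

```python
# Funnel figures as stated in the Castle Labs BD report for the quarter.
contacted = 26
submitted = 11
approved = 6
pending = 5

def pct(part: int, whole: int) -> int:
    """Share of `whole`, rounded to the nearest whole percent."""
    return round(100 * part / whole)

print(f"submitted: {pct(submitted, contacted)}%")  # 42%
print(f"approved:  {pct(approved, contacted)}%")   # 23%
print(f"pending:   {pct(pending, contacted)}%")    # 19%
```

Each rate is taken against the 26 approached teams, matching the "23% of approached" framing in the TL;DR.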

Resurfaced applications

We also revisited past applications that demonstrated potential but were previously rejected due to factors such as timing, scope, or readiness. Two stood out: Precog and Nebula.

Nebula paused development due to a lack of demand and funding. Precog is still building and is finding some initial traction, which is why we reached out again to see whether the application could be optimized and where we could help improve both the application and the product; we are currently in the later stages of assessment with the team.

What worked

We believe the higher approval rate is largely due to sourcing projects from teams we knew through our network, which gave them additional credibility. These teams also had strong expertise in their focus area or previous experience shipping products with users. Warm introductions, avoiding AI-generated anonymous submissions, and existing social proof significantly reduced the time needed for due diligence.

Another factor contributing to the stronger submissions is the guidance these teams received compared to blind applications. Before submitting their application, we go over their idea, discuss potential issues for their protocol and ways to address them, and provide guidance on how to properly fill in the application template.

What didn’t work

We were initially very interested in hackathons as a way to discover innovative projects. However, while hackathons have numerous teams working on proof of concepts, only a very small fraction actually follow through on their ideas.

We noticed that most hackathon teams are there for the event itself or treat it as a small side project, with little intention of pursuing their hacks further. This was evident from examining team code repositories, where activity typically stopped once the hackathon ended.

Additionally, the ethglobal.com showcase page made it extremely difficult to reach out to the teams, as it didn’t provide any contact information. To connect with teams, we had to dig into GitHub, visit developer pages, and hope to find some sort of way to get in touch.

Builder sentiment

From a builder’s perspective, the teams really valued having a dedicated contact instead of just submitting a blind application. It helped them iterate faster and get clear feedback on milestones, KPIs, and what looked good versus what didn’t, which saved time for both sides.

Interestingly, although not in line with this domain, one clear pain point for many builders was the lack of co-marketing support from Arbitrum’s own channels.

Conclusions

From a subjective DA perspective, the BD effort clearly helped improve the overall quality of proposals. In particular, it led to better clarity around team composition and experience, which increased trust through network effects. This reduced the time needed for formal due diligence and resulted in stronger applications overall: no AI-generated content, founders with greater crypto experience on average, and more practical ideas backed by structured plans, rather than the loosely explained theoretical concepts we often see in blind proposals.

Moreover, many teams came in with some prior guidance and context, thanks to a few direct questions about what we were looking for, which made the review process smoother. This allowed us to focus on more practical issues, such as setting milestones and KPIs that could be truly impactful for Arbitrum, rather than spending time on other equally important but slightly less pragmatic rubrics for the development of the ecosystem.

In practice, we engaged on average with founders who were better prepared, more experienced, and supported by clearer and more solid plans, leading to higher submission quality. Being able to evaluate more pragmatic proposals, where the program’s objectives were easier to visualize, made a clear difference compared to the slower process of assessing ideas that were often overly theoretical and lacked a practical execution path.

For these reasons, we believe the program worked well. That said, it is still early to draw firm conclusions, since each project’s KPIs will only be assessed over the next few months. The same applies to assessing the program’s broader impact on Arbitrum.

It is also worth noting some potential downsides. The BD effort may not work indefinitely, as the number of projects actively looking to build on Arbitrum is not infinite, even if this is hard to measure in practice. We simply cannot know how much our network will continue to grow, or whether builders’ willingness to start new projects will fluctuate. Therefore, despite this good start, we want to remain conservative, as we don’t have long-term data to base our analysis on.

Looking a bit deeper, scalability is an important question. The BD program could potentially scale, but it would likely need some refinement and additional sourcing methods. Castle Labs’ network helped surface a solid number of projects with clear plans to contribute to Arbitrum, but relying on a single sourcing channel over a long period of time may not deliver the same results at larger scale. To complement this, other initiatives could be introduced and managed by the DAs, such as a shared channel where delegates can suggest founders or projects to reach out to, or an incubation program where the focus is shifted to quality rather than quantity.

This does not mean the BD effort cannot scale, or that the current program is not focused on quality, only that it would benefit from being combined with other approaches to make it more resilient during slower market periods and more sustainable over time.


List of approved projects in the sprint for the NPAI Domain

Alloc8

Alloc8 is a wallet-native autopilot that turns idle tokens into always-earning capital, bringing self-driving intelligence for DeFi yield.

The team has shown the ability to apply AI in innovative ways, while also showing strong alignment with Arbitrum’s strategic goals and maintaining close ties with the Foundation.


Nuvolari

Sourced through BD work, Nuvolari is a multi-agent recommendation engine that learns from 80+ on-chain behavioral features to surface personalized one-click insights for portfolio rebalancing, yield farming, and LP management.

The project fits well within Arbitrum’s DeFi and AI ecosystem goals. Its focus positions it as a meaningful layer for user-centric automation. The product is designed to be multichain but maintains a clear commitment to building and optimizing for Arbitrum first.

Arbitrum Accounts

Arbitrum Accounts is a seamless portal that transforms web3 into an experience as easy and intuitive as the best of web2.

We approved Arbitrum Accounts as a focused pilot to test whether a curated, AA-based onboarding portal can drive verifiable, first-time user activity and safer discovery across Arbitrum.

Emerge

Reviewed through BD work, Emerge democratizes AI content creation by enabling users to generate, monetize, and share AI-powered content (images, videos, memes) directly within social feeds through modular workflows and on-chain microtransactions.

The project’s strength is execution and distribution rather than pure protocol novelty. We approved Emerge as a pilot to test whether in-feed AI content generation and NFT deployment can drive verifiable, creator-led on-chain activity on Arbitrum One.

Otmo

Otmo is a decentralised prop-trading protocol fully built on Arbitrum that allocates capital to proven traders through fully automated smart contracts, by integrating GMX and Ostium as core execution venues.
The project presents a business model that has recently gained strong interest within crypto. What differentiates this team is the emphasis on transparency, an approach that stands out compared to competitors.

Nodes Garden

Nodes Garden is a subscription-based infrastructure platform that simplifies node deployment and management. Their next step is to go fully on-chain on Arbitrum, where each node subscription will be represented by a tradable NFT. This grant is funding that.

This was decided primarily due to the team’s experience, the traction already demonstrated, and the clear goals set for future expansion.

Stimpak

Stimpak Duels is a Player-versus-Player (PvP) trading event app which currently offers two game modes, with more in the works: Head-to-Head and Deathmatch. It builds on top of existing Arbitrum-native infrastructure (GMX), and is showing initial traction with recurring users.

After initially rejecting the proposal a few months ago, we observed that the team maintained consistent progress, continued developing the platform, and secured new traction.

Paros

Sourced through BD work, Paros is an automated and bespoke treasury layer that connects directly to self-custodied wallets and executes on-chain yield and staking strategies 24/7. Conceptually, the project is innovative in bringing together self-custody, automation, and cross-protocol coordination in a way that is not yet widely implemented across DeFi.

Given the combination of strong team competence, well-defined KPIs, and ecosystem alignment, we approved this proposal.

Committed Yield RWAs on Arbitrum (Liqvid)

Sourced through BD work, Liqvid is building institutional-grade infrastructure on Arbitrum for compliant tokenization, capital formation, and secondary liquidity for tokenized private market assets.

Their goal is to onboard the first $8-12M from institutional investors into initial tokenized deals, which expands Arbitrum’s RWA ecosystem beyond the more common asset classes.

Kandle Finance

Kandle Finance introduces a composable DeFi yield and liquidity infrastructure, built natively on Arbitrum using the ERC-4626 vault standard.

Their decision to prioritize fewer deliverables with higher expected impact, particularly from a TVL growth and partnerships perspective, shows a pragmatic and ecosystem-oriented approach.

Protected Growth Tokens on Arbitrum (Exchequer)

Sourced through BD work, Exchequer is the issuer standard for a new on-chain primitive called Exchequer Notes. Any ERC-20 project can issue a Protected Growth Token (PGT) with its native token to give buyers a defined outcome over a fixed term.

The protocol deepens project-owned liquidity in ways that are directionally aligned with Arbitrum’s DeFi ambitions.

ImportaPay

The project’s strength lies in execution rather than novelty: a functioning business model, active partnership with 9 Payment Service Bank (9PSB), and a growing network of over 500 pre-onboarded retailers who already perform 10–50 million NGN in monthly settlements.

We approved ImportaPay as a pragmatic pilot to test whether Arbitrum can support compliant, high-frequency cross-border settlements in African trade markets.

Bond.Credit

Sourced through BD work, Bond.Credit is building the first verifiable, confidential credit layer for autonomous agents on Arbitrum.

They score agents using ACE (Agentic Credit Engine), generate TEE-verified trust proofs, and issue the first agent-native credit lines onchain, while making Arbitrum the settlement layer.

3 Likes