Recommendations for Improving the DAO’s Grant Programs

As the Research Member of the ARDC V2, Castle Labs (@CastleCapital) and @DefiLlama_Research were tasked with evaluating and providing recommendations for improving the DAO’s grant programs.

These recommendations should be informed by thorough research into the current operational process, successes, and failures of Arbitrum’s existing grants programs. Additionally, as part of our due diligence, we were asked to review grant programs from other crypto ecosystems to guide our recommendations. The ideal output would be an as-is overview of the current state, a best practice overview, and recommendations on how to get there.

For our internal assessment, we analyzed the existing Questbook grant programs, identified key areas for improvement, and proposed actionable recommendations to enhance program efficiency and impact in light of the Season 3 approval.

Externally, we conducted a comparative review of successful grant programs across leading ecosystems such as Solana, Ethereum, and Optimism. By examining their governance structures, funding workflows, and accountability mechanisms, we identified best practices that can inform and elevate Arbitrum’s grants operations.


We have documented our entire research in this Google Doc. Navigate through the document using the Tabs on the left-hand side to jump to specific sections.

We have included the Mini Reports and Recommendations below as Collapsible Sections.

Mini Report [INTERNAL]

Mini-Report: In-Depth Analysis of the Arbitrum DDA Program

Executive Summary

This research provides an in-depth internal analysis of the Arbitrum Delegated Domain Allocation (DDA) program, with a specific focus on Seasons 1 and 2. The DDA program is a cornerstone of Arbitrum’s strategy to foster innovation and growth within its ecosystem through grant funding. This analysis examines the program’s objectives, success metrics, and overall effectiveness. It also evaluates the performance of Domain Allocators (DAs) and the efficiency of the Questbook platform in managing the grant process.

Program Overview and Objectives

The DDA program is designed to bridge the funding gap between small-scale community initiatives and large foundation-level grants. It operates across four critical domains:

  • New Protocols and Ideas: Supporting the development of novel protocols, infrastructure, and governance tools.
  • Education, Community Growth, and Events: Funding initiatives that enhance Arbitrum’s community, educational resources, and event presence.
  • Developer Tooling: Investing in tools and resources that improve the developer experience on Arbitrum.
  • Gaming: Supporting Web3 gaming projects and infrastructure to position Arbitrum as a leader in blockchain gaming.

The Questbook platform manages the grant application, evaluation, and disbursement process.

Key Objectives of the DDA Program

The program’s core objectives are:

  1. Aligned Allocation of Funds: Ensure that funds are directed towards projects that align with Arbitrum’s ecosystem goals.
  2. Increased Transparency: Provide a clear and transparent grant allocation process.
  3. Increased Accountability: Establish a dedicated group of stakeholders (Domain Allocators) responsible for fund allocation.
  4. Lower Turnaround Time (TAT): Reduce the time needed for proposal approval and funding disbursement.
  5. Strong Brand Recognition: Boost developer engagement with Arbitrum and form strategic partnerships.

Key Performance Indicators (KPIs) and Program Success

Success metrics varied across Seasons 1-3. Season 1 focused on ecosystem growth and engagement, while Season 2 introduced more structured and measurable outcomes. Season 3 further refined these metrics and added a sustainability review for grantees.

Season 1 KPIs:

  • Growth in Ecosystem Activity (builders, proposals, funded projects).
  • Community Oversight & Participation (active grant reviews, diverse funding distribution).
  • Increased Engagement in Builder Communities (participation across various platforms).
  • Brand Awareness & Ecosystem Reputation (positive sentiment among developers).

Season 2 KPIs:

  • Program Success (increased contributors, lower turnaround times, higher completion rates).
  • Enhanced Community Involvement (increased engagement and participation).
  • Brand Awareness (sentiment surveys and social media analytics).
  • New Success Metrics for Funded Projects (user onboarding, TVL, new functionalities, follow-on funding).

Season 3 KPIs:

  • Grant Applications (maintaining or exceeding Season 2 proposal volume).
  • Grant Completion Rate (DA judgment, oversight for early-stage teams).
  • Long-Term Sustainability (tracking teams building on Arbitrum post-grant).

Domain Vision

Each domain’s vision and criteria were designed to align with its unique goals and objectives.

  • Gaming: Initially funded raw development, then shifted to marketing, content creation, and events to boost Arbitrum Gaming’s visibility and user adoption.
  • Education, Community Growth, and Events: Supported hackathons, community growth initiatives, educational content, and IRL events; excluded speculative content.
  • Developer Tooling: Funded essential infrastructure projects like RPC systems, asset visualization tools, and documentation translation, as well as innovative toolkits and SDKs.
  • New Protocols and Ideas: Focused on developing or migrating protocols and infrastructure, with an emphasis on functional products.

Operational Review

Governance Model and Transparency

  • The Questbook platform and public dashboards contributed to the strong transparency of the program.
  • Communication between DAs and applicants was fragmented across Discord and Questbook, making it difficult to track conversations.
  • Reporting methodologies were not standardized, with different DAs using different data points and KPIs.
  • DAs occasionally experienced delays in releasing funds and a few cases of disbursement errors.
  • Decision-making processes were transparent during the application and project milestone stages, but reporting after grant completion was lacking.

Domain Allocators (DA) Performance

The increased volume of applications has created challenges for Domain Allocators (DAs), specifically in their review process and communication with applicants. This challenge is evident in the stale rates, particularly in the Gaming and Education categories, where DAs faced significantly higher workloads compared to other domains.

Despite the overall trend, some domains, such as New Protocols & Ideas, managed to maintain a 0% stale rate.

The data reveals an uneven distribution of workload among DAs and emphasizes the necessity for DAs to adapt their evaluation methods due to the high volume of quality proposals and existing economic constraints.

Funding & Budgeting

To view the entire public dataset we created, please visit this link.

Allocation vs. Utilization

| Season | Total Asking ($) | Total Disbursed ($) | % Disbursed | Leftover ($) | Milestones Completed (%) | Projects Completed (%) |
| --- | --- | --- | --- | --- | --- | --- |
| S1 | 997,793 | 871,281 | 87.3% | 69,012 | 82.4% | 70.1% |
| S2 (Ongoing) | 2,949,156 | 2,063,662 | 70.0% | 609,314 | 66.3% | 49.3% |

S1 Key Observations:

  • High fund utilization (87.3%) suggests that most approved projects reached milestones efficiently.
  • Only 6.9% of funds were left unused, indicating good allocation accuracy.
  • Project completion rates were relatively high (70.1%), but many projects still failed to complete milestones.
  • The high milestone completion rate (82.4%) suggests a well-paced funding structure, though some domains performed better than others.

S2 Key Observations:

  • The amount of total funding requested almost tripled from Season 1 due to the larger budget.
  • Lower utilization (70.0%) is expected since projects are still completing milestones.
  • A significant portion ($609,314, or 20.7%) remains unspent, reflecting the ongoing nature of fund disbursement for milestone completion.
  • Project completion rates are currently at 49.3%, expected to increase as Season 2 progresses.
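
To make these figures easy to audit, the headline percentages can be recomputed directly from the raw amounts in the table above. Below is a minimal Python sketch; note that the leftover amounts are taken from our public dataset as reported (not derived as asking minus disbursed, since milestone-based disbursement is still ongoing):

```python
# Recompute the utilization percentages from the raw figures in the table above.
seasons = {
    "S1": {"asking": 997_793, "disbursed": 871_281, "leftover": 69_012},
    "S2": {"asking": 2_949_156, "disbursed": 2_063_662, "leftover": 609_314},
}

for name, s in seasons.items():
    pct_disbursed = s["disbursed"] / s["asking"] * 100  # S1: 87.3, S2: 70.0
    pct_leftover = s["leftover"] / s["asking"] * 100    # S1: 6.9,  S2: 20.7
    print(f"{name}: {pct_disbursed:.1f}% disbursed, {pct_leftover:.1f}% of asking left unspent")
```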

Disbursement Efficiency

Efficient disbursement of funds is critical to ensuring that approved projects can execute their plans without unnecessary delays. This section examines (1) the time to approval and (2) the time from approval to first disbursement separately to highlight bottlenecks in the process and improvements from Season 1 to Season 2.

Key Observations (Time to Approval):

  • Approval times increased for Dev Tooling (+118%) and Education (+58%).
  • Gaming saw minor improvement (-2 days).
  • New Protocols & Ideas saw significant speedup (-11 days, 27% faster).

Key Observations (Approval to First Disbursement):

  • Dev Tooling was slower to disburse funds (+7 days, 39% increase).
  • Education & Community disbursement was slightly faster (-3 days).
  • Gaming disbursement was much faster (-16 days, 36%).
  • New Protocols & Ideas disbursement was the fastest (-37 days, 50%).

Now, we can view the overall approval & disbursement trends across seasons:

| Season | Avg. Days to Approval (All Domains) | Avg. Days from Approval to Disbursement (All Domains) | Total Avg. Days to First Disbursement |
| --- | --- | --- | --- |
| S1 | 30.2 days | 40.8 days | 71 days |
| S2 | 37.2 days | 28.5 days | 65.7 days |

Key Takeaways:

  1. While approval times increased by 7 days on average, disbursement speed improved by 12.3 days on average. The net effect was a 7.5% faster total disbursement process in Season 2 (see the sketch after this list).
  2. Approval remains a growing bottleneck, especially in Dev Tooling and Education. This suggests the need for additional support for DAs in the most overwhelmed domains.
  3. Gaming and New Protocols & Ideas saw dramatic improvements in approval-to-disbursement speed.
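
To make takeaway 1 reproducible, here is a minimal sketch recomputing the net effect from the season averages in the table above (assuming simple additive averages, as the table itself does):

```python
# Net effect of slower approvals but faster disbursements (Season 1 -> Season 2).
s1_approval, s1_disburse = 30.2, 40.8   # avg. days, from the table above
s2_approval, s2_disburse = 37.2, 28.5

s1_total = s1_approval + s1_disburse    # 71.0 days
s2_total = s2_approval + s2_disburse    # 65.7 days

print(f"Approval change:     {s2_approval - s1_approval:+.1f} days")   # +7.0
print(f"Disbursement change: {s2_disburse - s1_disburse:+.1f} days")   # -12.3
print(f"Net total speedup:   {(s1_total - s2_total) / s1_total:.1%}")  # 7.5%
```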

Impact Review

Value Creation & Ecosystem Growth

The DDA program has indeed funded several impactful projects. These projects have positively contributed to various metrics across the ecosystem, such as:

  • Pear Protocol: Generated over $280m in volume and $390k in fees since launch.
  • Mountain Protocol: From the $25k received, they were able to accrue a TVL of $9m+, reaching over 1800 holders and 890 weekly users.
  • Vyper: Used the $50k grant to fund research improving the security of its compiler, increasing the security of its $100m+ in TVL on Arbitrum.
  • Dexpal: Leveraged the funds to develop integrations with Vela and enter into discussions with Gains, GMX, and other Arbitrum protocols.
  • Modular Crypto - Education, Events & University Study group: With the $18.5k received, they organized two major events with over 350 attendees and developed an Arbitrum-focused academic program at several major Brazilian universities.
  • Arbitrum Dapps over Apps: Used $15.1k to host workshops and hackathons in Nigeria and create Arbitrum university clubs. 300+ students were trained, 89 students participated in the hackathon and submitted 22 projects, 30 dapps were launched, and social impressions exceeded 200k.

Despite these selected success stories, a significant challenge exists in performing a more comprehensive analysis of the overall impact. This is primarily due to a lack of a centralized system for tracking impact KPIs. Individual projects have their success metrics, but they are not consistently gathered or reported.

Success Metrics & Monitoring

The DDA program and individual projects define success metrics, but there are gaps in monitoring and analysis. Responsibility for key metrics was unclear, leading to incomplete data and difficulty in evaluating project success.

Projects had success metrics and impact KPIs, but these were often not part of project milestones or integrated into the Questbook platform. This led to non-standardized reporting and fragmented data.

The table below shows a quick qualitative analysis of the number of milestones that included some sort of numerical KPI within their milestone field.

It is recommended that future programs emphasize impact KPIs and record them in program-owned, standardized data tables. Season 3 introduced post-funding monitoring, but a central repository of impact KPIs is recommended. Additionally, project-based impact KPIs should be linked to higher-level strategic objectives.

Operational oversights concerning tracking and reporting include:

  1. Lack of measurement of stated success metrics
  2. Lack of a framework for collecting impact KPI data
  3. Lack of a clear plan to monitor and assess projects post-grant

These can be overcome by:

  1. Defining a collection methodology and assigning responsibility for all success metrics
  2. Defining and collecting impact KPIs for each project at each milestone completion
  3. Creating a defined framework, process, and budget for post-grant project tracking (note: this is somewhat proposed for S3)

Post-Funding Tracking & Verification

Grantees reported that the grants enabled them to achieve significant outcomes, including team building, pipeline development, hiring dedicated resources, security improvements, user base expansion, and MVP development.

However, a missed opportunity was identified in the lack of post-funding support. Grantees expressed a desire for increased collaboration and networking opportunities within the Arbitrum ecosystem.

“As mentioned, I think giving pathways for projects to get in touch and collaborate with people within Arbitrum would be great.”

“I think we would have appreciated more connections / networking opportunities with the arbitrum foundation, arbitrum dev rel team or just other builders on arbitrum or other fellow grantees.”

Therefore, in addition to the existing S3 framework recommendations, it is recommended that additional post-grant support be provided to maximize the impact of funded projects. This support could include networking opportunities, introductions to established Arbitrum protocols, co-marketing support, and introductions to post-grant funding opportunities.

Recommendations

Below is a compilation of recommendations for the DDA program to take into consideration for the post-Season 2 analysis, Season 3, and beyond.

Governance & Operations

  1. Ensure all decisions from DAs are communicated on Questbook as the single source of truth to prevent miscommunication and fragmented discussions between Discord and Questbook.
  2. Standardize templates for different reporting formats (PM, DAs, grantees), ensuring they align with initially outlined KPIs and success metrics to track performance consistently across different projects.
  3. Assign a DAO-experienced Program Manager (PM) with greater involvement and oversight of DA processes to ensure consistency and accountability.
  4. Hire DAs from a concentrated DAO-centric talent pool of expert service providers to improve the quality of funding decisions.
  5. Define clear strategic goals for the program that align with that of the DAO and link explicitly with measurable impact KPIs on a per-project basis.

Funding & Budgeting

  1. Investigate and address the root causes of disbursement errors, specifically cases of duplicate fund distributions to grantees, which can lead to financial mismanagement.
  2. Implement safeguards to prevent manual errors in fund disbursement, as manually sending funds increases the risk of costly mistakes.
  3. Establish clear rules regarding changes to funding milestones and requested amounts post-approval to prevent shifting budgets and unexpected funding gaps.

Impact & Accountability

  1. Define clear success metric collection methodologies and assign responsibility for data gathering to ensure that relevant data is collected consistently.
  2. Require impact KPIs to be submitted and recorded at each milestone completion before payouts to ensure projects deliver measurable results.
  3. Develop a structured framework, process, and budget for continued tracking of projects post-grant, including extended impact KPI tracking and well-crafted surveys, to measure sustained impact.
  4. Increase post-grant support for funded projects. Grants are often one-time payments, but additional support, including networking and co-marketing opportunities and guidance toward post-grant funding, can help projects maximize their impact.

Data Analysis [INTERNAL]

Questbook DDA Data Analysis [20/02/2025]

Please find the Google Sheet with the full analysis below.

ARDC [Research]: Questbook DDA Data Analysis [Castle Labs]

Mini Report [EXTERNAL]

Mini-Report: In-Depth Analysis of External Grant Programs

2. Program Overviews & Objectives

This report analyzes external grant programs by considering other Layer 1 and Layer 2 ecosystems and protocols such as Optimism, Scroll, Ethereum, Solana, Uniswap, and Lido. Below is a brief overview of each program’s core mission, governance structure, and primary objectives.

2.1.1 Optimism – Retroactive Public Goods Funding (RetroPGF)

Program Overview
Optimism uses “Retroactive Public Goods Funding” (RetroPGF) to reward builders, researchers, and community contributors who have demonstrated valuable impact on Optimism or the broader Ethereum ecosystem. Six “rounds” have been conducted, each experimenting with different voter selection mechanisms (Badgeholders, expert voters, community-led budgets).

Objectives

  • Support Public Goods: Retroactively fund completed work that benefits Optimism and Ethereum.
  • Experiment in Governance: Continually refine voting processes (expert voters, sub-group allocations, onchain vs. offchain).
  • Promote Ecosystem Growth: Broaden the scope of recognized contributions, from developer tooling to social impact.

2.1.2 Scroll – Level Up Grants & Hackathons

Program Overview
Scroll, a zero-knowledge rollup (zkRollup) for Ethereum, issues grants via the “Level Up with Scroll” program and periodic online hackathons. The “Level Up” track offers two funding tiers: up to $10k for “Starter” and up to $100k for “Launch”. Starter grants are geared toward smaller experimental ideas or early-stage teams, while Launch grants are designed for bigger, more ambitious or established projects. Additionally, Scroll has hosted a number of hackathons in-house and through external partners to distribute prizes to teams building on Scroll.

Objectives

  • Onboard Developers: Provide resources and tutorials (e.g., “Level Up” educational materials).
  • Fuel Early-Stage Innovation: Give smaller or mid-sized grants to experimental teams.
  • Encourage Ecosystem Tools: Sponsor hackathons that produce libraries, dApps, or infrastructure relevant to zkRollups.

2.1.3 Ethereum – EF Ecosystem Support Program (ESP)

Program Overview
The Ethereum Foundation (EF) Ecosystem Support Program (ESP) is a centralized initiative funding open-source, non-commercial projects that strengthen Ethereum. Historically, it operated under various “waves”, with blog posts detailing awardees and selection rationale. After 2020, these blog posts were condensed to highlight grantees by category, project, recipient, and description. This new style of reporting does not include specific amounts given per project or selection rationale.

Objectives

  • Focus on Public Goods: Primarily invests in core infrastructure, client diversity, zero-knowledge research, and community education.
  • Minimize Community Overlap: The ESP focuses on covering foundational or protocol-level R&D rather than funding large-scale dapps.
  • Drive Protocol Research: Provide academic grants for cryptographic primitives, scaling solutions, and security research.

2.1.4 Solana – Foundation Grants, Hackathons, and Accelerator

Program Overview
The Solana Foundation operates a milestone-based grants program for public goods. Projects range from developer tooling to security audits. Additionally, hackathons (often run by partner entity Colosseum) funnel top teams into an accelerator offering $250k pre-seed funding.

Objectives

  • Grow Developer Adoption: Fund essential libraries, community education, and user-facing apps highlighting Solana’s high-throughput features.
  • Incentivize Innovation: Provide convertible grants to commercially oriented teams and pure grants for open-source contributions.
  • Encourage Hackathon-to-Startup Pipeline: Move hackathon winners into accelerator programs, bridging the gap between prototypes and seed-stage ventures.

2.1.5 Uniswap – Uniswap Foundation Grants

Program Overview
The Uniswap Foundation (UF) supports large-scale, long-term projects aligned with the Uniswap protocol’s growth and governance. Grant categories include Developers, Researchers, Governance, Innovation, Security, and Protocol. While the Uniswap DAO is responsible for allocating the Foundation’s budget, the Foundation operates with significant autonomy.

Objectives

  • Support Core Protocol Growth: Fund strategic research (AMM design, layer-2 hooks, advanced security).
  • Strengthen Governance: Build out governance tooling, delegate engagement, and encourage robust community participation.
  • Drive DeFi Innovation: Grant capital for next-generation DeFi concepts on Uniswap (v4 Hooks, advanced liquidity management, cross-chain expansions).

2.1.6 Lido – LEGO (Lido Ecosystem Grants Organization)

Program Overview
LEGO provides tiered grants (“Sandgrain,” “Pebble,” “Boulder,” “Mountain”) for diverse initiatives such as code audits, node operator tooling, philanthropic integrations, and marketing. Funding thresholds map to the required approval level (Council vs. entire DAO): smaller grants of less than 10,000 USD may only require 2 LEGO member votes, while grants over 100,000 USD require approval by the full council or a Lido DAO vote.

Objectives

  • Enhance Liquid Staking: Fuel expansions to multiple chains (Ethereum, Solana, Polygon), with security audits as a primary focus.
  • Boost Decentralization: Encourage distributed validator technology, community-led dashboards, and philanthropic “impact staking.”
  • Maintain Secure Infrastructure: Prioritize recurring audits, bug bounties, and research on new staking modules.

2.2 Governance & Operations

This pillar evaluates how each grant program is structured, who makes decisions, and how transparent they are to the public. We focused on three questions:

  1. Governance Model & Transparency – Are decision processes clear and well-documented? Are allocations public?
  2. Performance & Accountability – Are teams or councils effective at reviewing grants fairly and on time?
  3. Program Efficiency & Accessibility – Do applicants understand the process and experience consistent communication?

2.2.1 Governance Model & Transparency

  • Optimism: RetroPGF uses a “Badgeholders” model, mixing community-elected and foundation-appointed experts. Rounds have tested new voting experiments, such as expert segments or guest voters. Results, project lists, and OP distribution amounts are public, though the selection of specialized voters can be opaque.
  • Scroll: Grants are given out by Scroll team members, with no DAO-based onchain voting. Hackathon winners are determined by partner judges. Application details are public, but final decisions remain relatively opaque.
  • Ethereum ESP: Entirely foundation-driven. There is no community voting and minimal published detail beyond broad project categories. Quarterly updates list project names but lack details on the funding amounts or milestone activity. Decision-making processes remain largely internal, though the EF has at times delegated operational responsibility to local/regional teams for more targeted grants.
  • Solana Foundation: Operates a top-down approach with grants managed entirely offchain by an internal “Capital Team.” Hackathons, run by Colosseum, are sponsored and closely monitored by the Solana Foundation, ensuring alignment with the ecosystem’s priorities. While judging is transparent, final investment and accelerator selection decisions are made by Colosseum’s leadership, with input from select Solana Foundation representatives. Colosseum, founded by ex-Solana Foundation and Slow Ventures members, operates independently but remains tightly integrated with Solana’s growth strategy.
  • Uniswap Foundation: Receives block funding from UNI governance, then internally manages distribution with a specialized team. Maintains a “Dashboard of Success” for certain KPIs and public calls announcing grant waves. Large budgets or new categories can require an onchain governance proposal.
  • Lido (LEGO): Tiers grants by size. Minor grants are approved by individual council members or small teams, while large “Boulder/Mountain” grants require either full council or Lido DAO votes. Quarterly transparency reports are published including information on grant recipients, though the process for smaller grants is more informal (each LEGO Council member can facilitate sandgrain and pebble grants within a quarterly budget).

Key Observations

  • Programs like Optimism and Uniswap run more comprehensive processes involving community participation and onchain governance (community-driven budgets, specialized reviewers), while Ethereum and Solana remain foundation-led with minimal direct community input.
  • Lido stands out for its tiered approach to approvals, reducing friction for small grants while preserving broader DAO oversight for major funding.

2.2.2 Performance & Accountability of Allocators

  • Optimism: Badgeholders have sometimes struggled with very high application volumes in big rounds (200+ projects). They respond with metric-based voting or narrowing eligibility. Results have improved over time, but complexity persists.
  • Scroll: A smaller ecosystem, so fewer grants. The main friction is clarity on milestone expectations. Scroll’s team typically responds quickly to proposals, but the final authority is quite opaque outside of the online hackathon.
  • Ethereum ESP: A dedicated internal team plus subject-matter experts handle reviews. Turnaround times can be unpredictable; some applicants wait months, others get quick decisions (this is especially true for smaller grants, which require less oversight). Accountability rests solely with EF staff.
  • Solana Foundation: The Foundation typically takes around three weeks for an in-depth review before either finalizing support or directing teams to the Colosseum accelerator. While the hackathon pipeline is well-structured, the level of post-hackathon support varies, with some teams receiving further funding or mentorship while others do not.
  • Uniswap Foundation: Large categorization (Developers, Security, Protocol, etc.) ensures specialized reviewers. Quarterly or wave-based updates track what is approved. Some smaller “subcommittees” emerged historically, but the new approach emphasizes fewer, larger grants.
  • Lido (LEGO): Council or personal allocations for smaller grants generally ensure quick approvals. Larger grants require consensus. Quarterly reports highlight which council members sponsored each project, improving accountability.

Key Observations

  • Most programs rely on a specialized “core team” or “council” to vet proposals; community members seldom have binding votes, except in some Optimism or Uniswap governance steps.
  • Accountability is strongest when multiple reviewers must sign off or when structured reporting is mandated.

2.2.3 Program Efficiency & Accessibility

  • Optimism: Initially manual and prone to “Badgeholder fatigue,” early rounds relied on subjective evaluations. Rounds 4–6 introduced partially automated impact metrics, using predefined criteria like user onboarding, gas fees saved, and ecosystem contributions to streamline evaluations. While many applicants appreciate the platform, some find the eligibility verification process complex.
  • Scroll: Application forms are straightforward, especially for hackathons, where judging criteria and awards are clearly outlined. The “Level Up” funnel is more curated, involving a structured six-week developer program before final funding decisions, which are made internally by the Scroll team. However, the lack of public documentation on how final funding decisions are made creates some opacity.
  • Ethereum ESP: A well-known program with open intake forms, but there is some confusion about how proposals are evaluated, and timelines can be slow. The shift to “small grants vs. project grants” has improved clarity slightly.
  • Solana Foundation: Clear milestone-based approach. Hackathons are highly publicized, accelerating brand recognition. However, final approval from the Foundation can be slower for bigger requests.
  • Uniswap Foundation: Evolved from frequent “waves” to fewer, larger grants with a structured funnel. Applicants in 2021–2022 sometimes complained about unclear feedback or over-subscription. The new approach (from 2024 onward) aims to reduce churn by focusing on bigger, more curated projects.
  • Lido (LEGO): Offers flexible, tier-based approvals. Micro-grants can be approved within days. More extensive proposals might require a formal Snapshot or DAO vote. This multi-level approach fosters accessibility, though it can be challenging to track smaller grants’ statuses across multiple Discord/forums.

Key Observations

  • All programs have improved accessibility over time, refining forms, milestones, or hackathon flows.
  • The largest friction points generally stem from high volumes of submissions (e.g., Optimism, Uniswap), or from the centralized gating of decisions (e.g., Scroll, Solana).

2.3 Funding & Budgeting

This section reviews how each program allocates funds, manages potential volatility (e.g., token prices), and handles unspent budgets.

2.3.1 Allocation vs. Utilization

  • Optimism: RetroPGF rounds range from 3.5M OP tokens to over 30M OP in earlier rounds, with utilization rates typically exceeding 90%, meaning nearly all allocated OP for each round is distributed to selected projects. This high utilization occurs because badgeholders vote on the full OP allotment, ensuring funds are fully deployed rather than left unallocated.
  • Scroll: Budget details are not fully transparent. Starter grants are up to $10k, and Launch grants are up to $100k, plus hackathon prize pools. Many of the grants allocated are trackable on Devpost or sponsor websites.
  • Ethereum ESP: No formal cap per quarter; the EF taps its treasury as needed. Some specialized rounds (Academic, Merge Data Challenge) have explicit budgets (e.g., $2M). Utilization is rarely published at a granular level.
  • Solana Foundation: The foundation does not publicly detail a total annual budget. Grants are typically milestone-based, leading to partial disbursements over time. The hackathon budgets are more explicit (e.g., $500k–$1.5M prize pools).
  • Uniswap Foundation: After an initial multi-million-dollar runway from the Uniswap DAO, the foundation has set budget categories (Developers, Research, Governance, Innovation, Security, Protocol). Budget utilization is tracked in periodic “Community Impact Reports” and “Financial Summaries.”
  • Lido (LEGO): A typical quarterly target is around $500k–$1M in stables plus some LDO. Unspent funds can roll over. Over time, Lido reallocated significant security costs to a separate committee, letting LEGO channel funds more directly into ecosystem growth.

2.3.2 Disbursement Efficiency

  • Optimism: Large lump sums are distributed after each RetroPGF round. Some recipients mention disbursement within days of final tallies.
  • Scroll: Grants are paid out once or in multiple tranches, depending on the project track. Hackathon prizes are lump sum.
  • Ethereum ESP: Often milestone-based for bigger grants, but timescales vary widely.
  • Solana Foundation: Typically milestone-based, with funds disbursed within 2-4 weeks from milestone verification.
  • Uniswap Foundation: Movement toward bigger, milestone-based grants in 2024. Past waves sometimes had partial pre-pay plus mid-project milestones. The new approach typically ties multiple tranches to deliverables.
  • Lido (LEGO): Tiers define the sign-off process. Typically, smaller grants get one-time payouts. Large “boulder/mountain” grants are milestone-based. Security or integration expansions can require multiple checks.

2.3.3 Financial Sustainability & Token Volatility

  • Optimism: OP token price fluctuations matter less for RetroPGF because distribution is retroactive. Recipients face volatility risk post-funding.
  • Scroll: Funded from the Scroll treasury, presumably using stablecoins or a mix of assets, though there is no public documentation confirming the breakdown.
  • Ethereum ESP: Historically, large fluctuations in ETH price have not been fully hedged. The EF has a substantial treasury but does not publicly detail stablecoin usage.
  • Solana Foundation: Typically holds SOL plus stable reserves. Grants are often denominated in stablecoins to mitigate volatility; convertible grants can take an equity or token position.
  • Uniswap Foundation: Receives large UNI allocations from the DAO, and typically sells or swaps to stables to pay out grants. Reports mention partial stable conversions to reduce volatility.
  • Lido (LEGO): Shifted from using exclusively LDO for disbursing funds to roughly 20% LDO/80% stable disbursements. This approach helped audit firms and dev teams avoid unexpected shortfalls and maintain partial alignment with LDO’s success.

Key Observations

  • Most ecosystems (Uniswap, Lido, Solana) increasingly rely on stablecoins for main payouts, ensuring fund predictability for grant recipients.
  • Programs with large token-based budgets (Optimism, Ethereum) do not systematically hedge or convert, leaving recipients to handle volatility.

2.4 Proposal Evaluation & Domain Strategy

This section focuses on whether each program defines clear rubrics, domain focuses, or flexible guidelines for allocating funds and how programs adapt strategies over time.

2.4.1 Evaluation Rubrics & Domain Fit

  • Optimism: Evolved from pure “subjective voting” to partially automated scoring (Round 4’s “impact metrics,” Round 5’s “expert segments”). Projects that yield broad ecosystem benefits or represent “public goods” are preferred.
  • Scroll: Grants revolve around building out zkRollup solutions or developer tools. There is no formal scoring rubric publicly detailed. They do, however, ask projects to justify why they want to build on Scroll and how they plan to leverage zero-knowledge proofs.
  • Ethereum ESP: Looks primarily for open-source, protocol-aligned outcomes. There is no official numeric scoring, but each domain expert reviews proposals for alignment with Ethereum’s roadmap (e.g., scalability, dev tooling, cryptographic research).
  • Solana Foundation: Distinguishes between public goods and commercial “convertible” grants. Hackathon judges emphasize real product potential, user experience, and synergy with Solana’s throughput or features.
  • Uniswap Foundation: Divides proposals by categories (Developers, Researchers, Governance, Innovation, Security, Protocol). Each has distinct success criteria (e.g., for Developer grants, the ratio of new dev tools or v4 Hooks).
  • Lido (LEGO): Ties the domain to expansions (e.g., multi-chain staking), security (e.g., audits, bug bounties), decentralization (node operator tooling, philanthropic staking), or community (e.g., meetups, translations). The “tiered” system primarily addresses budget/approval rather than the domain strategy itself.

2.4.2 Flexibility & Adaptability

  • Optimism: RetroPGF has evolved significantly with each round, incorporating learnings from previous iterations. Early “collections” were replaced with metrics-based voting to improve evaluation consistency, while governance experiments, such as expert voting and community-led budget setting, are regularly tested to refine the process.
  • Scroll: The “Level Up” program is somewhat stable, though hackathon tracks shift depending on core Scroll updates (e.g., new EVM features, bridging).
  • Ethereum ESP: Ethereum ESP has refined its grant categories over the years, transitioning from broader “waves” to more structured academic grants and challenge-based funding (e.g., the Merge Data Challenge). While the Ethereum Foundation (EF) occasionally runs localized grant rounds (e.g., Japan Local Grants) and publishes quarterly funding updates, grant decisions remain entirely within the EF’s discretion, with no community governance or participatory budget voting.
  • Solana Foundation: Partners with external teams (Colosseum, Superteam, BuildWithMonkeDAO, etc.) to run specialized sub-grants or microgrants in emerging markets. This “ecosystem partner” approach fosters adaptation to different regions or verticals.
  • Uniswap Foundation: Moved from frequent smaller “waves” to fewer, bigger, milestone-based grants in 2024. Also introduced categories for security audits, protocol improvements, or specialized “hooks.”
  • Lido (LEGO): As Lido expanded to multiple chains, LEGO quickly adapted to fund relevant audits, philanthropic integrations, or specialized node-operator tooling. They frequently reorganize large security costs under a separate committee to keep LEGO’s domain flexible.

2.4.3 Strategic Alignment of Funding

  • Optimism: True to “retro funding,” the goal is to reward ecosystem contributions after the fact, focusing on intangible “public goods.”
  • Scroll: Mostly invests in immediate developer growth, building a pipeline of devs proficient in zkRollups.
  • Ethereum ESP: Strictly funds public goods and protocol R&D, often bypassing pure dApp use cases. This top-down approach ensures funds are directed toward Ethereum’s fundamental infrastructure, security, and long-term scalability, rather than individual application-layer projects.
  • Solana Foundation: Prioritizes big-picture adoption (e.g., DeFi, gaming, consumer) that exemplifies Solana’s speed. Many subprograms exist, bridging from microgrants (Superteam) to major convertible grants for more commercial projects.
  • Uniswap Foundation: Heavily invests in continuing DeFi innovation on Uniswap, with specialized categories and advancing research (e.g., AMM design, cross-chain expansions, v4 hooks).
  • Lido (LEGO): Laser-focused on maintaining Lido’s multi-chain presence and security. Decentralization is the secondary major theme (e.g., DVT, philanthropic expansions, community operator onboarding).

Key Observations

  • Each program is shaped by its ecosystem’s strategic goals: Ethereum invests in base-layer R&D, Solana aims at multi-segment adoption, Lido emphasizes liquid staking expansions, Uniswap invests in next-generation DEX research and governance.
  • Adaptability is particularly evident in Optimism (constant iteration on voting) and Lido (repeated expansions plus philanthropic angles).

2.5 Impact & Accountability

This pillar examines how each program tracks grantees post-funding, measures long-term success, and enforces accountability if deliverables are not met.

2.5.1 Post-Funding Tracking & Verification

  • Optimism: RetroPGF is retrospective, so measuring “impact” is partly done via the original nomination process, plus short project descriptions. There is little direct post-funding oversight, though the next RetroPGF round often references whether the grantee delivered ongoing results.
  • Scroll: Hackathon winners show code repos and demos. Grants (Starter/Launch) often require milestone check-ins, but no universal public portal tracks progress.
  • Ethereum ESP: Larger grants typically come with milestone-based updates, but only high-level outcomes are shared in quarterly blog posts. There is no robust universal tracking system or “one-year-later” accountability standard.
  • Solana Foundation: Usually milestone-based, with recipients providing proof (demo, onchain usage) to trigger subsequent payouts. Hackathon follow-ups exist if teams enter the accelerator.
  • Uniswap Foundation: Moved from forum-based progress updates to structured monthly milestone check-ins and quarterly KPI reviews for large grants.
  • Lido (LEGO): Most small grants get a simple “deliver or not” check. Large ones require multiple milestone validations. Quarterly LEGO reports explicitly list newly completed vs. ongoing vs. abandoned projects.

2.5.2 Innovation & Value Creation

  • Optimism: Many well-known Ethereum public goods have benefited (e.g., top dev tools, research teams). The “public goods growth” measure is intangible but widely recognized by the Ethereum community.
  • Scroll: Has spurred an emerging zkRollup developer base. Hackathon demos sometimes progress into broader ecosystem projects. Actual adoption outside niche dev communities is still relatively small, given Scroll’s early stage.
  • Ethereum ESP: Deeply influential in cryptographic breakthroughs, Eth2 clients, zero-knowledge R&D. While application-layer progress can be slow, the protocol’s security and scaling have advanced significantly.
  • Solana Foundation: Some hackathon winners become major ecosystem players (e.g., STEPN, Drift). The hackathon-to-accelerator model fosters real startups.
  • Uniswap Foundation: The “v4 Hooks” ecosystem blossomed with specialized grants in 2023–2024. Many developer tools and governance improvements reflect stable progress; however, pivoting away from many small grants might reduce the funnel for smaller experiments.
  • Lido (LEGO): Real expansions across multiple chains (e.g., Polygon, Solana, etc.), distributed validator tech, philanthropic partnerships (e.g., Impact Staking). Has solidified Lido’s brand as a multi-chain, community-supported liquid staking solution.

2.5.3 Enforcement & Non-Performance

  • Optimism: Typically, there are no “clawbacks” because RetroPGF is retrospective.
  • Scroll: Grants can stop paying subsequent milestones if a project fails to deliver.
  • Ethereum ESP: In theory, the EF can withhold future tranches; in practice, few public “clawback” examples exist.
  • Solana Foundation: If a milestone fails, that portion is withheld. No explicit public mention of how or if partial refunds occur.
  • Uniswap Foundation: Early waves used partial pre-payments. The 2024 milestone approach can cancel or reduce future tranches if deliverables are missed.
  • Lido (LEGO): A large “Boulder” or “Mountain” grant can be paused mid-way. Minor “sandgrain” or “pebble” are typically one-offs, so enforcement is minimal beyond reputational risk.

Key Observations

  • Most programs rely on milestone-based payouts as the primary enforcement. Clawbacks or formal dispute resolution are rarely used.
  • For retrospective models (Optimism), underperformance is less relevant. For prospective models (Scroll, Solana, Uniswap, Lido), partial payouts align with milestone verification.

2.6 High-Level Observations & Potential Best Practices

2.6.1 Observations and Best Practices to Adopt

Synthesizing these six programs reveals common patterns that can inform future improvements for Arbitrum’s ecosystem grants:

  1. Multiple Governance Layers Can Improve Efficiency
  • Lido’s tiered approach (Sandgrain vs. Pebble vs. Boulder vs. Mountain) allows a quick turnaround for smaller grants while preserving DAO-level oversight for large expenditures.
  • Uniswap introduced subcommittees (e.g., The Stable, specialized domain leads) to further handle specific categories with expert groups.
  2. Stablecoin-Centric Payouts Offer Predictability
  • Lido’s 80/20 model, Uniswap Foundation’s partial stable conversions, and the Solana Foundation’s stable-based milestone releases reduce price risk for grantees and streamline budgets.
  3. Mix Retrospective & Prospective Approaches
  • Optimism’s RetroPGF effectively rewards existing accomplishments, but prospective grants let other ecosystems steer strategic growth.
  • A balanced approach might combine “retro booster bonuses” with milestone-based prospective funding.
  4. Use Sub-Committees for Domain Expertise
  • Solana hackathons run by partner entities (Colosseum, Superteam) or Uniswap’s domain categories help scale specialized evaluations.
  • Lido fosters the Community Lifeguards Initiative to handle smaller community operators, freeing the main council from this workload.
  5. Publish Clear “Success Metrics” & Post-Funding Tracking
  • Uniswap’s 2024 pivot to bigger grants includes monthly milestone check-ins and 2–3 objective KPIs.
  • Tying partial disbursement to measurable deliverables encourages accountability without over-bureaucratizing.
  6. Governance Experimentation Drives Long-Term Improvements
  • Optimism repeatedly iterates on voter selection and metric-based scoring.
  • Ongoing refinements let the community see tangible progress in fairness and strategic alignment.

2.6.2 Pitfalls to Avoid

  1. Relying Solely on Retroactive Funding
  • RetroPGF (Optimism) effectively rewards completed work but struggles to drive strategic direction. Grants that come only after the fact can miss critical ecosystem gaps and hamper proactive initiatives.
  • Recommended Action: Combine retrospective funding (to reward unplanned but high-impact contributions) with prospective grants that align with the DAO’s roadmap.

  2. Being Overly Centralized or Opaque in Decision-Making
  • Programs with top-down approvals often draw criticism for unclear or private processes. This risks alienating the community and stifling broader involvement.
  • Recommended Action: Embed transparency, including published rubrics, domain-based committees, or open calls for feedback. Ensure consistent public reporting of Domain Allocator decisions and increased ownership of the grant life cycle.

  3. Failing to Convert Large Token Allocations into Stablecoins
  • Volatility poses a serious financial risk to grantees, especially if the token price drops before they can liquidate. This undermines the predictability of the grant.
  • Recommended Action: Follow the approach used by Uniswap or Lido by adopting a stable-oriented funding structure (e.g., 80% stables / 20% native token). This offers stable operating budgets plus some alignment to Arbitrum’s ecosystem growth.

  4. Over-Bureaucratizing Small Grants
  • Requiring multi-step governance or high-level signoffs for micro-grants introduces delays and drains Domain Allocators’ bandwidth.
  • Recommended Action: Employ tiered approvals (as Lido does with “Pebble” vs. “Boulder” grants, or as Arbitrum’s DDA concept might) so that small grants can be fast-tracked by a single allocator.

  5. Neglecting Post-Funding Verification
  • Without structured milestone checks or transparent dashboards, many funded projects can stagnate or fail to deliver. External programs often struggle with incomplete tracking.
  • Recommended Action: Enforce milestone-based payouts with a unified tracking page (e.g., Lido, dYdX), ensuring that each grantee updates progress. This clarifies when (or if) the next tranche will be disbursed.

  6. Under-Communicating Delays or Rejections
  • Applicants left “in limbo” (ghosted) can grow frustrated, harming community trust. High-volume ecosystems often see this problem.
  • Recommended Action: If review times spike, create quick “waitlisted” or “pending” notices. Provide concise feedback even for unsuccessful applications, allowing teams to refine and possibly reapply later.

Recommendations

3. Recommendations for Arbitrum Grants

These recommendations draw on our internal assessment of Arbitrum’s Delegated Domain Allocation (DDA) program and external research into six leading ecosystem grant models. Since Arbitrum’s latest round of grants is already underway, we’ve separated our proposals into immediate adjustments that can be adopted mid-cycle with minimal disruption, and longer-term reforms that demand more significant planning or onchain governance steps. This approach ensures the DAO can implement quick, high-impact optimizations while laying the groundwork for deeper structural enhancements in future cycles.

3.1 Immediate DDA Feedback and Recommendations

These recommendations address small but crucial refinements we can make to the DDA program right now, without major governance changes or new budget allocations. By improving operational efficiencies, funding workflows, and post-grant accountability, we can significantly strengthen Season 3’s impact and lay a solid foundation for future cycles with minimal disruption to the current program.

3.1.1 Governance & Operations

  1. Ensure all decisions from DAs are clearly communicated on Questbook as the single source of truth
  • Prevents miscommunication and fragmented discussions between Discord and Questbook, which often separate DA questions from applicant responses.
  2. Standardize templates for different reporting formats (PM, DAs, grantees), ensuring they align with the KPIs and success metrics initially outlined in the program
  • Currently, reporting methodologies vary, making it difficult to track performance consistently across different projects.
  3. Assign a DAO-experienced Program Manager (PM) with greater involvement and oversight of DA processes (note: this has already been achieved in Season 3)
  • Ensures consistency and accountability, especially in areas where oversight has been weak.
  4. Hire DAs from a concentrated DAO-centric talent pool of expert service providers (note: Season 3 is already approved and DAs have been elected)
  • Improves the quality of funding decisions by involving domain experts who understand the needs of grantees and the broader ecosystem.
  5. Define clear strategic goals for the program that align with that of the DAO and link explicitly with measurable impact KPIs on a per-project basis
  • Aligns funding with ecosystem priorities and makes it easier to evaluate project success.

3.1.2 Funding & Budgeting

  1. Investigate and address the root causes of disbursement errors, specifically cases of duplicate fund distributions to grantees
  • Errors in fund disbursement can lead to financial mismanagement, even if funds are later recovered.
  2. Implement safeguards to prevent manual errors in fund disbursement, such as pre-committed funding allocations during proposal approval
  • Manually sending funds increases the risk of mistakes that could be costly or irreversible.
  3. Establish clear rules regarding changes to funding milestones and requested amounts post-approval. To prevent shifting budgets and unexpected funding gaps that disrupt project execution, either:
  • Forbid changes in the requested amount once a grant is approved – Avoids grantees arbitrarily increasing funding needs.
  • Allocate an emergency budget per proposal – Provides a buffer for unexpected costs while maintaining financial discipline.
  • Disallow changes in milestone values after reaching a set number of milestones – Prevents sunk-cost fallacy from forcing continued funding into struggling projects.
  • Define and document some other structure for DAs to follow.

3.1.3 Impact & Accountability

  1. Define clear success metric collection methodologies and assign responsibility for data gathering
  • Avoids ambiguity about who tracks what, ensuring that relevant data is collected consistently.
  2. Require impact KPIs to be submitted and recorded at each milestone completion before payouts (a minimal gating sketch follows this list)
  • Ensures projects deliver measurable results, recorded in a central repository, before receiving funding.
  3. Develop a structured framework, process, and budget for continued tracking of projects post-grant. Many projects receive initial funding but lack follow-up assessments to measure their sustained impact (note: this has partly been addressed in Season 3)
  • This framework should include:

  • Extended impact KPI tracking – Captures long-term effects beyond initial milestones.

  • Well-crafted surveys for long-term impact assessment – Provides qualitative and quantitative insights into project effectiveness.

  4. Increase post-grant support for funded projects. Grants are often one-time payments, but additional support can help projects maximize their impact. Suggested improvements include:
  • Networking opportunities across the ecosystem, DAO, Foundation, and OCL to help grantees connect with key stakeholders.
  • Introductions to established Arbitrum protocols for potential integrations and collaborations, encouraging ecosystem growth.
  • Co-marketing support for top performers via Arbitrum’s official channels to increase visibility for high-impact projects.
  • Guidance toward post-grant funding opportunities, helping successful projects secure continued support and integrating them deeper into Arbitrum.
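
To illustrate recommendation 2 above, here is a minimal sketch of how a payout could be gated on recorded impact KPIs. This is a conceptual sketch only: the `MilestoneReport` structure, its field names, and the required-KPI sets are our own assumptions, not an existing Questbook API.

```python
from dataclasses import dataclass, field

@dataclass
class MilestoneReport:
    """Hypothetical milestone submission; not an existing Questbook object."""
    milestone_id: str
    deliverables_verified: bool               # verified by the Domain Allocator
    impact_kpis: dict[str, float] = field(default_factory=dict)

def can_release_payout(report: MilestoneReport, required_kpis: set[str]) -> bool:
    """Gate the payout: funds release only if deliverables are verified AND
    every required impact KPI has been submitted for the central repository."""
    missing = required_kpis - report.impact_kpis.keys()
    return report.deliverables_verified and not missing

# Example: a gaming grant must report DAU and 30-day retention each milestone.
report = MilestoneReport("m1", True, {"daily_active_users": 1250.0})
print(can_release_payout(report, {"daily_active_users", "retention_30d"}))  # False
```

Wherever milestone reviews happen today, the same default applies: a missing KPI blocks the payout until it is reported.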

3.2 DDA-Adjacent Recommendations

Beyond immediate quick fixes, these suggestions call for moderate structural tweaks and strategic enhancements to the DDA model. While they may require votes or structural changes, they primarily build upon the current DDA framework to deliver a more adaptive, transparent, and data-driven grants program.

3.2.1 KPI Standardization and Impact Tracking

Rationale

  1. Aligning with the DAO’s Strategic Objectives
  • The Arbitrum DAO’s overarching goals are highly debated, but most would agree they include driving ecosystem adoption, cultivating developer innovation, and expanding practical use cases across multiple domains (e.g., Gaming, Dev Tooling, Education, Orbit). We look forward to investigating the submissions to the current Strategic Objective Setting (SOS) initiative and recommend that the consolidated outcomes form the basis of Arbitrum’s grant program strategy, both for DDA Season 3 and other potential programs.
  • Implementing standardized KPIs tied to these high-level ambitions would help the DAO assess the impact of grants in terms of growth in user adoption, developer tooling maturity, real-world adoption metrics for recipients, or other specific DAO objectives that become clear in the coming months.
  2. Maximizing Grant Effectiveness
  • Without clear, measurable outcomes, it’s difficult to pinpoint which grants deliver genuine impact and which fall short.
  • The introduction of data-driven insights based on standardized KPIs would allow Domain Allocators (DAs) and the broader DAO to make evidence-based decisions on funding expansions, milestone adjustments, or overall domain strategies as well as leverage this data for future programs.
  3. Ensuring Long-Term Ecosystem Sustainability
  • Arbitrum is rapidly evolving. Solutions such as Orbit chains and advanced features like Stylus are continually shifting the ecosystem’s priorities.
  • Consistent KPI measurement provides a dynamic feedback loop that keeps the grant program responsive and ensures that evolving objectives (e.g., a new push for real-world DeFi or more dev tooling for Telegram bots) are reflected in how success is defined and impact is measured.
  4. Facilitating Accountability & Transparency
  • A standard KPI framework reassures token holders, potential partners, and the broader Ethereum community that Arbitrum’s grants have a measurable impact that can then be leveraged to assess the program’s overall success. It also sets clearer expectations for applicants who wish to engage in the DAO’s grant programs.
  • Frequent KPI updates foster a culture of transparency, where the community can track progress and weigh in on potential improvements.

Suggested Implementation Model

  1. Domain-Specific KPI Templates
  • Before teams submit a grant proposal, they should understand that the DAO requires each application to include KPI targets aligned with our broader strategic objectives.
  • Each domain (Gaming, Dev Tooling, Education, New Protocols & Ideas, Orbit) defines 2–3 core metrics reflecting the DAO’s strategic targets. For instance:
    • Gaming: Daily active users, user retention (14-day, 30-day, 60-day), number of in-game transactions.
    • Dev Tooling (One & Stylus): GitHub forks/stars, monthly active devs using the library, # of integrated dApps.
    • Education & Community: Event attendance, post-event dev signups, workshop completion rates.
    • New Protocols & Ideas: TVL if DeFi-based, daily user-interactions, cross-protocol collaborations.
    • Orbit (if applicable): # of dApps on the new chain, cross-chain activity, bridging metrics.
  • Projects add 1–2 custom KPIs if their use case is unique.
  2. KPI Integration into Grant Lifecycle
  • Application Stage:
    • Grantees must map their project plan to the relevant domain metrics. This ensures they are fully aligned with ArbitrumDAO’s strategic objectives from day one.
  • Milestone Submissions:
    • Each time a milestone is completed, the grantee reports the status of their core and custom KPIs. The DA verifies these figures before approving payouts.
  • Post-Completion Follow-Up:
    • A 6-month and 12-month check-in gauges whether progress (e.g., user retention, dev adoption) sustains beyond the initial milestone phase to measure post-grant project liveliness and success.
  3. Centralized Data Management
  • The reporting and verification of standardized KPIs should happen within a single repository platform to avoid information fragmentation. These fields could be embedded directly into Questbook’s proposal and milestone reporting forms, and the resulting metrics compiled into a shared dashboard, letting the DAO compare projects across domains and see trends over time.
  4. Role of a Dedicated Grants Team
  • If the DAO opts to create a Grants Team (composed of DAs, PM, and possibly data/analytics specialists), it would:
    • Ensure every domain’s KPI requirements remain aligned with ArbitrumDAO’s shifting priorities.
    • Actively check self-reported data, propose onchain or third-party verifications where appropriate, and investigate anomalies.
    • Compile monthly or quarterly cross-domain reports showing which metrics are surpassing or missing targets.
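
To make the template idea concrete, below is a minimal, purely illustrative sketch of how domain KPI templates and milestone verification could be modeled. Every metric name, target, and threshold here is a placeholder assumption, not a proposed DAO standard.

```python
from dataclasses import dataclass

# Illustrative only: metric names, targets, and sources are placeholder
# assumptions, not a ratified DAO schema.
@dataclass
class KPI:
    name: str
    target: float
    source: str  # "self-reported" or "onchain"

# 2-3 core metrics per domain, mirroring the examples above
DOMAIN_TEMPLATES = {
    "Gaming": [
        KPI("daily_active_users", 1_000, "onchain"),
        KPI("retention_30d_pct", 20, "self-reported"),
    ],
    "Dev Tooling": [
        KPI("monthly_active_devs", 50, "self-reported"),
        KPI("integrated_dapps", 5, "onchain"),
    ],
}

def review_milestone(domain: str, reported: dict[str, float]) -> dict[str, bool]:
    """Flag each core KPI as met or missed so a DA can verify before payout."""
    return {
        kpi.name: reported.get(kpi.name, 0.0) >= kpi.target
        for kpi in DOMAIN_TEMPLATES.get(domain, [])
    }

# Example: a Gaming grantee reporting at a milestone
print(review_milestone("Gaming", {"daily_active_users": 1_450, "retention_30d_pct": 12}))
# -> {'daily_active_users': True, 'retention_30d_pct': False}
```

Whether this logic lives inside Questbook, a Notion database, or a standalone dashboard matters less than every domain sharing the same structure, which is what makes cross-domain comparison possible.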

Desired Outcomes

  1. Direct Line of Sight to DAO-Wide Goals
  • By implementing not only project-specific KPIs (e.g. measuring user adoption, developer engagement, or transaction volume) but also ecosystem-wide KPIs, the DDA program would strengthen its link to fulfilling broader second-order goals for Arbitrum.
  2. Make the Job of DAs Easier
  • DAs would be able to quickly assess reported milestones against the standardized KPIs, allowing them to spot struggling or succeeding projects much more quickly.
  • In turn, this would allow for faster iteration on rubrics and funding strategies, ensuring responsiveness to changing market conditions.
  3. Data-Driven Innovation
  • Leveraging easily accessible data and standardized KPIs, programs can be improved, extended, or modified according to hard evidence.
  4. Enhanced External Reputation & Partnerships
  • Transparent and measurable success stories will position Arbitrum grants positively within the broader ecosystem, attracting more applicants.

How to Get There

  1. Phase 1: Define Key Metrics per Domain
  • DAs to propose 2-3 standardized KPIs, including ecosystem-wide KPIs tied to Arbitrum’s strategic targets.
  2. Phase 2: Update Grant Application & Milestone Forms
  • Implement mandatory and standardized KPI fields by creating a milestones reporting form in Questbook (or whichever grant platform is used).
  • Metrics should be clearly defined, especially in terms of how they are measured (e.g., self-reported data vs. onchain data).
  3. Phase 3: Roll Out to Existing & New Grants
  • The new milestone reporting form could already be introduced and tested in Season 3; however, if this is deemed too disruptive to the current flow, it could be implemented in future seasons.
  • Alignment with KPIs should be non-negotiable. Failure to reach KPI targets or to provide KPI data should result in the milestone being deemed incomplete and thus not reached.
  4. Phase 4: Dashboard & First KPI Review
  • Launch a shared KPI dashboard, updated monthly (a minimal aggregation sketch follows this list).
  • The dedicated Grants Team (or the PM plus domain allocators) will include a KPI overview within its regular program reporting.
  5. Phase 5: Refine & Adapt
  • The KPIs should be collectively evaluated and revised every season to ensure they align with Arbitrum’s evolving priorities.
  • If certain metrics prove irrelevant or a new chain feature emerges (e.g., a big shift in Orbit usage), the KPI sets get updated accordingly.
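
To picture the Phase 4 dashboard, here is a hedged sketch of a monthly cross-domain roll-up, assuming milestone KPI reports can be exported from Questbook (or whichever platform is used) as a flat file. The file name and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical export: one row per KPI reported at a milestone.
# Columns (invented for this sketch): month, domain, project, kpi, target, actual
reports = pd.read_csv("kpi_reports.csv")

reports["met"] = reports["actual"] >= reports["target"]

# Monthly cross-domain view for the shared dashboard
dashboard = (
    reports.groupby(["month", "domain"])
    .agg(kpis_reported=("kpi", "count"), kpis_met=("met", "sum"))
    .assign(hit_rate=lambda df: df["kpis_met"] / df["kpis_reported"])
)
print(dashboard)
```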

3.2.2 Program “Wishlist”

Rationale

  1. Proactive Ecosystem Building
  • Rather than waiting passively for applicants to propose ideas, the DAO publishes a Wish List, an evolving set of desired tools, protocols, integrations, and solutions across the different domains.
  • Community members and domain allocators can regularly update this Wish List, ensuring that the DAO’s most pressing needs or most interesting ideas receive targeted focus.
  2. Real-Time Collaboration & Feedback
  • Innovation Calls serve as an interactive environment where developers pitch solutions, either from the Wish List or from their own experience, and receive immediate feedback.
  • This mirrors how Ethereum’s grants program occasionally posts explicit “pinboard” items or bounties that spark quick, targeted responses.
  3. Better Capital Allocation and Lower Barriers to Entry
  • Applicants have transparent access to the Wish List items and can leverage the Innovation Calls to build products. This ensures funds are assigned to the most pressing needs of the ecosystem.
  • The open discussion helps applicants refine their concepts and integrate domain-specific best practices, bridging the gap between a raw idea and a polished prospective grant.
  4. Medium-Term Pilot Feasibility
  • Implementing monthly or bimonthly calls and a public Wish List does not require a significant overhaul of the program and can be integrated into the existing structure in a relatively short period of time.
  • Given these are relatively low lifts, the calls can become a long-term fixture if they prove successful and can be scrapped if they do not.

Suggested Implementation

  1. DAO Wish List
  • Regularly Updated Items:
    • Each domain allocator (Gaming, Dev Tooling, Education/Community, New Protocols & Ideas, Orbit) compiles and posts a short list of high-priority ecosystem needs, akin to RFPs.
    • Example items could include “We want a Telegram-based user onboarding flow for Dev Tooling” or “We need a better NFT marketplace aggregator for Gaming NFTs.”
  • Publicly Accessible:
    • Post this Wish List in publicly accessible locations.
    • Update it as applicants pick from the list to remove fulfilled items and add new requests based on evolving DAO goals.
  2. Innovation Calls
  • Frequency & Format:
    • Host calls once every 4–8 weeks (depending on how fast new Wish List items emerge).
    • Maintain a 60–90 minute structure, allowing 5–10 minutes per pitch.
  • Agenda:
    • Part 1: Wish List Review: A quick rundown of current high-priority items. The domain allocators explain why these are important.
    • Part 2: Developer Pitches: Devs and community members present ideas, some directly addressing the Wish List, others entirely new.
    • Part 3: Q&A & Next Steps: Allocators and community experts share feedback, possibly directing the pitched idea to a relevant domain for prospective funding.
  3. Documentation & Follow-Up
  • If a pitch is strong, domain allocators can invite the team to begin a prospective grant application in Questbook immediately.
  • All calls should be recorded and summarized, made publicly available, and posted on the forum, tagging relevant domain leads so they can continue the conversation or clarify details asynchronously.

Desired Outcomes

  1. Targeted Solutions to Ecosystem Gaps
  • By publishing a Wish List, the DAO ensures that applicants focus on high-impact or under-served areas, whether that’s specialized analytics, developer tooling, or new DeFi primitives.
  • Innovation Calls keep these needs front and center, attracting devs who can fulfill them quickly.
  2. Rapid Validation & Iteration
  • A developer might propose a partial solution, get feedback from domain experts and the community, and then iterate before formally applying for a grant.
  • This iterative cycle can cut weeks of uncertainty from the typical application process.
  3. Enhanced Community Engagement
  • The calls create a transparent, inclusive forum where “wishlist owners” (domain allocators or key DAO members) meet builders face-to-face.
  • Newcomers see the DAO’s openness and clear guidance, which is more welcoming than sifting through a static webpage or oversaturated Discord channels.
  4. Efficient Grant Allocation
  • The prospective funding route is more straightforward as the pitched solutions are already partially validated against real ecosystem needs.
  • Fewer “off-target” proposals and less time wasted on extensive back-and-forth for domain alignment.

How to Get There (Medium-Term Roadmap)

  1. Weeks 1-3: Wish List Draft & Call Setup
  • Each domain allocator composes a preliminary Wish List (3–5 items each).
  • Publish the combined Wish List in a pinned governance forum thread and include a short rationale for each item.
  • Schedule the first Innovation Call for the following month, creating a sign-up form for devs to reserve pitch slots.
  2. Weeks 4-5: First Innovation Call
  • Opening Agenda:
    • Present the top Wish List items, clarifying scope and potential synergy with existing Arbitrum dApps.
    • Invite devs or community members who have ideas related to the Wish List to pitch — plus any non-Wish-List ideas as time allows.
  • Recap & Public Post: Summarize the call in the forum, highlight interesting pitches, and nudge domain allocators to follow up with relevant teams.
  3. Weeks 6-8: Refinement & Potential Funding
  • If a solution directly addresses a Wish List item, domain allocators encourage or assist the team in drafting a prospective DDA grant application.
  • Update the Wish List: remove items that are now in progress and add new priorities the DAO identifies.
  4. Week 8+: Institutionalize the Process
  • If the first calls produce good momentum, adopt them as a monthly or bimonthly staple.
  • Track which Wish List items got solved, which pitch teams earned grants, and how quickly solutions progressed.

3.3 Additional Grant Structures

Finally, these are larger-scale proposals that would further diversify and future-proof Arbitrum’s grants ecosystem. In an ideal scenario, with sufficient bandwidth, resources, and governance consensus, introducing new structures such as dedicated hackathon funnels or expanded team-based oversight could greatly improve the effectiveness of Arbitrum’s grant programs.

3.3.1 Introduce a Hackathon to DDA Funnel

Rationale

  1. Injecting New Builder Energy:
  • Many talented developers are accustomed to hackathon culture, discovering ecosystems through short, focused bursts of creation.
  • Case Study (Solana): Multiple top protocols like STEPN, Solend, and Mango Markets originated from hackathon teams. This success is derived from a well-structured process: (a) open invites, (b) track-specific challenges, and (c) immediate follow-up funding for promising MVPs.
  • For Arbitrum, adopting a similar approach encourages devs from other L2s or entirely different chains to test the waters in a time-bound, high-energy environment.
  2. Accelerating Proof of Concept (PoC) to MVP:
  • Traditional grants can be time-consuming for unproven teams. A hackathon shortens that cycle: teams can produce something tangible in 4–8 weeks.
  • Once complete, the DDA can easily funnel additional support to the best projects. This “MVP first, grants second” pipeline ensures that funds go to teams who have demonstrated actual potential.
  3. Community & Marketing Boost:
  • Hackathons create public excitement around building on Arbitrum. They’re inherently social and open.
  • Solana’s frequent hackathons (like the “Riptide Hackathon” or “Ignition”) regularly drew thousands of participants.
  4. Ecosystem Alignment:
  • Each Season 3 domain (Gaming, Dev Tooling, Education, etc.) can feed its most pressing challenges into hackathon “tracks,” guiding new devs to fill ecosystem gaps, or hackathons can be used to fill needs not covered through the DDA.

Suggested Implementation

  1. Event Duration & Scope:
  • 4–8 weeks long: enough time to encourage real prototypes but short enough to maintain urgency.
  • Each domain gets a distinct challenge (e.g., “Build a browser-based Arbitrum mini-game,” “Create a Telegram trading bot with Stylus,” etc.).
  • Keep it open to all devs, but request a short project idea submission at the start to filter out obvious spam.
  2. Prizes & Incentives:
  • A modest but meaningful prize pool, for instance, $100k total, split among top teams in each domain. This could pull from a portion of the already approved funds for Season 3.
  • Outline that any project placing in the top 3 of a domain track is eligible for a “fast-lane” prospective grant review.
  • Potential extra sponsor prizes from ecosystem partners or tooling providers.
  3. Mentorship & Workshops:
  • Weekly office hours could be hosted by domain allocators or recognized experts, covering ecosystem how-tos (e.g., bridging, dev environment setup, performance tips).
  • Encourage well-known devs in the Arbitrum community to volunteer an hour or two each week to advise participants.
  4. Judging & Criteria:
  • Each track has a small panel with domain expertise.
  • Common scoring pillars: (a) Technical Execution (Is it well-coded? Any novel architecture?), (b) Originality (Does it solve a new or under-addressed problem?), (c) Alignment with Arbitrum (user adoption potential, synergy with the existing ecosystem), and (d) Feasibility for further development. A weighted-scoring sketch follows this list.
  5. Concluding Demo Day:
  • Teams present final prototypes (virtually or in-person).
  • Judges announce winners; top teams get official invites to the DDA program’s next milestone-based funding stage.
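
To illustrate how the four pillars could be combined into a single ranking, here is a minimal sketch. The weights and the 1–10 scale are assumptions chosen for illustration, not a proposed standard; judging panels would calibrate their own.

```python
# Illustrative weighted rubric over the four pillars named above.
# Weights and the 1-10 scale are placeholder assumptions.
WEIGHTS = {"technical": 0.30, "originality": 0.25, "alignment": 0.25, "feasibility": 0.20}

def score_submission(panel_scores: dict[str, float]) -> float:
    """Combine averaged 1-10 panel scores per pillar into a weighted total."""
    return sum(WEIGHTS[pillar] * panel_scores[pillar] for pillar in WEIGHTS)

# Example: one team's averaged panel scores
print(score_submission({"technical": 8, "originality": 6, "alignment": 9, "feasibility": 7}))
# -> 7.55
```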

Desired Outcomes

  1. Ongoing Project Pipeline:
  • Within a month of the event, the DAO can identify multiple MVPs that fill important gaps, for example, a new GameFi MVP, a novel analytics tool, or an interactive dev simulator.
  • By connecting hackathon winners to prospective grants, the ecosystem steadily gains new dApps, dev tools, and community initiatives.
  2. Cross-Pollination & Community Growth:
  • Hackathons often see cross-domain brainstorming, with participants from different backgrounds teaming up (e.g., a dev from a gaming background plus one from DeFi).
  • Active Slack/Discord channels continue beyond the event, evolving into ongoing dev communities, similar to how certain Solana hackathon communities became permanent dev hubs.
  3. Enhanced DAO Visibility & Reputation:
  • A well-publicized hackathon draws headlines in crypto media, especially if participation is robust.
  • Demonstrates that Arbitrum actively invests in fostering grassroots innovation, a point that can attract additional capital or partnerships from outside the ecosystem.
  4. Efficient Resource Allocation:
  • Instead of awarding large sums blindly, the DAO first sees MVP results. Only top teams move to the DDA stage, ensuring higher accountability and success rates.
  • This approach mitigates the risk of awarding grants to teams that never deliver, as the hackathon has already tested their execution capacity under time pressure.

How to Get There

  1. Phase 1: Planning & Budget (Month 1)
  • The DAO votes on the prize pool and domain tracks (e.g., $100k total, split across 4–5 tracks).
  • Each domain allocator identifies a specific question or problem statement to guide hackathon participants.
  • Create a basic event webpage or portal for sign-ups.
  2. Phase 2: Announcement & Promotion (Month 2)
  • Share official details across Arbitrum’s forum, Discord, Twitter, and relevant dev communities.
  • Domain allocators or external experts confirm workshop/AMA timeslots.
  • Teams start forming; some might post initial ideas for feedback.
  3. Phase 3: Hackathon Execution (Month 3)
  • Optional opening ceremony or livestream explaining the domains, key resources, and the DDA follow-up path.
  • Weekly sessions or “AMA hours” where participants can get code help or pitch feedback.
  • Teams submit final repos/demos. Judges review and pick winners, awarding prizes.
  4. Phase 4: Follow-Up (Month 4+)
  • The top 1–3 teams in each domain automatically get an expedited prospective grant review.
  • Publish a post-hackathon highlights article or forum post spotlighting winners and linking to each MVP’s GitHub.
  • The hackathon Discord remains open for teams to continue refining or collaborating. The next hackathon can build on lessons from this event.

3.3.2 Establish a Dedicated ‘DAO Grants Team’

Rationale

  1. Unified Strategy & Oversight
  • While DAs focus on their assigned categories (Gaming, Dev Tooling, etc.), a central team can maintain a broad strategic vision, ensuring Arbitrum’s grant programs evolve in sync with the DAO’s goals.
  • This group continuously reviews performance across domains and coordinates any needed shifts, from adjusting rubrics to testing new funding models or identifying new domain tracks.
  • The DAO Grants Team will also ensure timely communication, acting as an intermediary between the DAO and the program.
  2. Fluidity & Experimentation
  • As new ideas arise, the DAO can benefit from a single coordinating body to pilot these concepts, gather feedback, and refine them.
  • A Grants Team can swiftly test, adopt, or discontinue experimental programs, ensuring the DDA remains responsive to ecosystem changes or new findings that emerge from the program.
  3. Streamlined & Cost-Efficient
  • Rather than creating a new, fully funded entity, the DAO could unify its existing Domain Allocators with the current Season 3 Program Manager to form a cohesive Grants Team. In this model, the PM continues day-to-day oversight of the DDA program, while additional team support (e.g., part-time roles for admin, communications, or data analysis) can assist with running hackathons, designing retroactive grants, or coordinating with external accelerator programs.
  • This approach ensures the PM isn’t forced to shoulder every operational task alone, while existing DAs and their budgets remain central to domain-specific decisions.
  • By consolidating rather than duplicating responsibilities, the DAO can roll out new initiatives without significantly increasing overall costs.
  4. Consistent Communication Experience
  • The DAO Grants Team will act as a unified group handling planning, strategy, and domain reviews to ensure consistent guidelines, timelines, and reporting.
  • Applicants will benefit from a clear and standardized structure, helping them navigate from pitch to prospective grant.
  • With standardized updates, clear reporting timelines, and consistent guidelines, DAO members and delegates have real-time visibility into ongoing grants, performance metrics, and any new initiatives.

Implementation Model

  1. Team Composition
  • Core Members (3–5): Existing Domain Allocators (DAs), the Program Manager (PM), plus potentially 1–2 additional members elected by the DAO (e.g., analytics or comms specialists).
  • No Major Budget Increase:
    • The compensation that DAs already receive can be repurposed to cover their expanded responsibilities, or increased if necessary.
    • If new tasks like hackathon planning require extra hours, the DAO can set a small, performance-based top-up, but generally keep overall costs near current levels.
  2. Primary Responsibilities
  • Continuously assess how domain grants are performing, review KPI data, and recommend changes to keep grants aligned with evolving DAO goals.
  • In effect, the DAs themselves become a coordinated unit, so any domain facing a backlog or unique challenge can draw help from the entire team.
  • Whether it’s a hackathon track, a Wish List approach, or another experiment, the team organizes the logistics, sets success metrics, and reports outcomes to the DAO.
  • Oversee domain budgets holistically, ensuring leftover funds or unallocated resources can be re-channeled to pressing needs without spiking overall program costs.
  3. Improving Coordination between DAs and PM
  • Because DAs serve on this Grants Team, operational tasks and domain evaluations become more cohesive.
  • The team can also appoint temporary sub-allocators if certain domains face spikes in proposals, thus preventing DAs from being overloaded.
  4. Checks & Balances
  • The DAO still retains ultimate governance oversight, as any large budget expansions or domain reconfigurations would go to a Snapshot or onchain vote.
  • The team’s monthly/quarterly reports ensure accountability and transparency, showing how well new experiments (like hackathons) are working.

Desired Outcomes

  1. Cohesive & Adaptive Grant Ecosystem
  • With DAs and the PM unified in a single team, the program can nimbly adjust to ecosystem changes; for example, if a novel technology emerges, the team can quickly set up a new track or adapt rubrics.
  2. Maintained or Lowered Overhead
  • By incorporating existing paid roles (DAs) into the Grants Team, the DAO avoids duplicating salaries while expanding its capacity to run new initiatives.
  • Administrative overhead can decrease if the entire team shares a single repository for data, rubrics, and best practices.
  3. Frequent & Transparent Reporting
  • The Grants Team, meeting regularly, can provide consolidated monthly updates:
    • Track successes and failures in each domain,
    • Highlight cross-domain insights (e.g., shared developer tooling requests),
    • Offer recommended next steps to the DAO.
  4. Strategic Roadmap for Innovations
  • New experiments can be scheduled, tested, and scaled under a consistent framework.
  • This ensures the entire ecosystem sees “one face” for proposals, pilot results, and follow-up grants.

How to Get There

  1. Phase 1: Charter & Alignment
  • Propose forming a single Grants Team composed of Domain Allocators and the PM, clarifying that no separate salaries will be introduced.
  • Define how each DA’s responsibilities expand to include overall program strategy, pilot management, and monthly reporting.
  2. Phase 2: Unified Workflows
  • Move all domain communications and data tracking (KPI reporting, forum updates) into one shared system, such as a combined Notion or Questbook instance, or a dedicated website like Scroll’s Level Up program.
  • The newly formed Grants Team can immediately test the viability of a Wish List or monthly Innovation Calls, collecting feedback on how they integrate with existing domain reviews.
  3. Phase 3: Steady State Operations
  • The team meets bi-weekly or monthly to discuss cross-domain trends, budget usage, success stories, or stumbling blocks.
  • Release monthly/quarterly summaries showing domain highlights, KPI insights, and any proposed expansions.
  4. Phase 4: Evaluate & Refine
  • Are hackathons bringing new teams into the ecosystem? Are Wish List items being tackled effectively?
  • If certain DAs or PMs transition out, hold a short candidacy period to fill positions, maintaining continuous operation without significant knowledge gaps.

3.4 Arbitrum Endgame: A Comprehensive Funnel

To establish Arbitrum as the prime destination for innovative builders, we propose creating a single, end-to-end pipeline that covers every stage of a project’s lifecycle — from idea to post-launch incentives. Below is a deeper look at how each stage would be structured, along with the rationale and expected outcomes.

Overview of the Funnel

  1. Hackathons → short, focused events to discover new ideas and talent, incorporating a wishlist to target high-value ideas.
  2. Grants → seed funding to reach MVP and validate core assumptions.
  3. Incubator/Accelerator → in-depth mentorship, dev resources, and networking to prepare projects for scale.
  4. DAO Investments → potential equity or token positions in top-tier projects, giving Arbitrum direct upside and a seat at the table.
  5. Protocol Incentives → post-launch user or liquidity incentives, boosting adoption on Arbitrum.
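
To make the gated nature of this funnel explicit, below is a minimal illustrative model. The stage names mirror the list above, while the single-step advancement logic is an assumption about how stage-exit criteria might be enforced.

```python
from enum import Enum

# Stage names mirror the funnel above; the gating logic is illustrative only.
class Stage(Enum):
    HACKATHON = 1
    GRANTS = 2
    INCUBATOR = 3
    DAO_INVESTMENT = 4
    PROTOCOL_INCENTIVES = 5

PIPELINE = list(Stage)

def advance(current: Stage, exit_criteria_met: bool) -> Stage:
    """A project moves exactly one stage forward, and only when its
    stage-exit criteria (winning a track, hitting KPIs, etc.) are met."""
    index = PIPELINE.index(current)
    if exit_criteria_met and index + 1 < len(PIPELINE):
        return PIPELINE[index + 1]
    return current

# Example: a hackathon winner enters the grants stage
print(advance(Stage.HACKATHON, exit_criteria_met=True))  # Stage.GRANTS
```

The key design point is that a project never skips a stage: each transition is gated by the exit criteria of the stage it is leaving.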

Detailed Stages

  1. Hackathons
  • Purpose: Attract new devs, ensure a quick MVP cycle (4–8 weeks).
  • Structure:
    • Track-Based Challenges: Each track reflects Arbitrum’s domains (e.g., Gaming, Dev Tooling, RWAs, etc.).
    • Mentorship: Office hours with domain experts and community leaders.
    • Follow-On Path: Winners gain immediate access to the grants program (fast-lane or “auto-qualified” route).
  • Value:
    • Creates immediate community interest and recognition for Arbitrum as an ecosystem that provides significant support to new builders.
    • Quickly filters for promising teams with real deliverables.
  2. Grants
  • Purpose: Fund the transition from MVP to a fully tested alpha product.
  • Milestone-Based: Clear deliverables (e.g., user pilot, core dev progress) must be hit before the next payout.
  • Oversight: Grants Team verifies that each milestone meets set KPIs (user traction, code quality, etc.).
  • Value:
    • Provides financial runway for budding projects.
    • Encourages accountability through transparent deliverables.
  3. Incubator/Accelerator
  • Purpose: Give advanced teams (post-grant) a structured environment for scaling product, user base, or business model.
  • Key Features:
    • Mentorship & Workshops: Legal, tokenomics, marketing, and multi-chain strategy.
    • Demo Days: Show off progress to the DAO and external investors, forging valuable partnerships.
    • Arbitrum-Centric Resources: Access to Stylus integrations, bridging solutions, developer frameworks, and more.
  • Value:
    • De-risks the “jump to mainnet” by providing robust support and feedback loops.
    • Increases the chance of building sustainable, high-value dApps on Arbitrum.
  4. DAO Investments
  • Purpose: Capture long-term upside and deepen alignment with the ecosystem’s top projects.
  • Implementation:
    • Equity or Token Holdings: The DAO can negotiate small stakes in exchange for capital.
    • Co-Investment Opportunities: Collaboration with trusted VCs or other strategic partners, potentially even including community members (the entire ecosystem wins together with interests aligned).
  • Value:
    • Generates new revenue streams for the DAO treasury if the project thrives.
    • Aligns incentives: the DAO benefits financially by supporting projects that successfully grow on Arbitrum.
  5. Protocol Incentives
  • Purpose: Support go-live user growth, liquidity, or bridging.
  • Approach:
    • Liquidity Mining / Staking Rewards: Encourage adoption of newly launched protocols.
    • User Onboarding Incentives: Target user growth for consumer or gaming dApps.
  • Value:
    • Ensures traction once projects launch on Arbitrum, reducing the risk of “ghost” dApps.
    • Solidifies Arbitrum’s brand as a high-traffic network where devs see real user uptake and support through every step of the process.

Role of the Grants Team

  • One Steering Body for the Whole Pipeline: The same select group (Domain Allocators + Program Manager + any additional investment liaison) would:
    • Oversee hackathon planning, define track categories.
    • Approve grants and track project KPIs.
    • Manage incubator operations or collaborate with external partners.
    • Coordinate investment decisions with DAO governance.
    • Recommend protocol-level incentive frameworks for successful projects.
  • Provide regular, holistic updates across the entire funnel, from hackathon participants to post-launch success metrics.
  • By unifying all programs under one body, funds can be re-channeled quickly to high-impact areas or projects.

Phase 1: Pipeline Design & Governance Alignment

  1. Define the Vision & Scope
  • Draft a concise overview of the Hackathon → Grant → Incubator → Investment → Incentive pipeline.
  • Outline how each stage complements existing domain allocations.
  2. Preliminary Budget & Role Mapping
  • Estimate initial budgets for hackathons, grants, incubator operations, and any investment capital.
  • Identify or recruit individuals for key roles (e.g., Incubator Lead, Investment Liaison).
  • Clarify how these roles integrate with the existing “Grants Team.”
  3. Governance Proposal Draft
  • Prepare a short or multi-part proposal explaining the funnel structure.
  • Address legal or compliance concerns around DAO investments (equity/tokens).
  • Outline how KPI tracking (from previous recommendations) will unify at each stage.
  4. DAO Feedback & Voting
  • Present the pipeline proposal to the community via forum or Discord for initial feedback.
  • Incorporate changes, finalize the governance proposal, and schedule a Snapshot or onchain vote.
  • Ultimately, the goal is a ratified plan endorsed by the DAO with high-level budgets, roles, and timelines for each stage of the pipeline.

Phase 2: Hackathon Launch & Marketing Prep

  1. Hackathon Branding & Tracks
  • Decide on a name, branding assets, and exact theme (e.g., “Arbitrum Dev Sprint”).
  • Each domain allocator proposes 1–2 core challenges for participants to tackle, ideally taking some inspiration from the Wish List.
  2. Logistics & Partnerships
  • Finalize event dates (4–8 weeks total, with a possible extension for final judging).
  • Secure any co-sponsor or partner relationships (e.g., tooling providers, infrastructure sponsors).
  • Prepare a hackathon website or platform, sign-up forms, and marketing materials.
  3. Mentorship & Workshops
  • Identify domain experts and volunteers who’ll host “office hours” or tutorials on a regular cadence.
  • Set up a dedicated Discord channel or website for participant Q&A.
  4. Marketing Push
  • Launch forum and social media announcements, targeted newsletters, collaborations with dev communities.
  • Highlight the end-to-end funnel and post-hackathon advantage (fast-track to grants/incubator).

Phase 3: Update & Integrate Grant + Incubator Framework

  1. Refine Milestone-Based Grants
  • Incorporate standardized KPIs for all new projects, including but not limited to hackathon winners.
  • Streamline application flows in Questbook (or chosen platform) to reflect the pipeline’s structure.
  2. Design Incubator Curriculum & Milestones
  • Outline a 6–12 week incubator schedule covering mentorship, tokenomics, bridging to other L2s, and marketing.
  • Pre-define “checkpoints” for teams (e.g., alpha build completion, legal readiness, testnet launch).
  3. Hire/Assign Incubator Lead
  • If using existing DAs or PM, clarify how incubator oversight merges with normal domain duties.
  • If new staff is needed (e.g., part-time mentorship coordinator), finalize contracts and budgets.
  4. Eligibility & Onboarding
  • Define clear criteria for which grant recipients get invited to the incubator (for example, MVP traction, a strong dev team, etc.).
  • Publish the selection process on the forum or in an FAQ so teams know how to qualify.

Phase 4: Formalize DAO Investment Guidelines

  1. Legal & Compliance Exploration
  • Assess feasibility of the DAO taking equity/token positions:
    • Clarify any relevant jurisdictional concerns or disclaimers for the DAO.
    • Possibly engage external counsel or a specialized working group.
  2. Investment Subcommittee Creation
  • If the DAO is comfortable with direct investments, form a smaller subcommittee under the Grants Team.
  • Outline how they evaluate deals, structure SAFE notes, negotiate token warrants, etc.
  3. Collaborative Investment Model
  • Seek co-investment partnerships with known crypto VCs or institutional funds.
  • Create templates for term sheets, ensuring fairness and alignment with the DAO’s mission.
  4. Governance Approval
  • Present the finalized investment approach (equity vs. tokens, typical round size, etc.) for another DAO vote if needed.
  • Integrate with the existing milestone-based model: teams that graduate from the incubator can propose an investment round to the subcommittee.

Phase 5: Launch Protocol Incentives Framework

  1. Define Incentive Goals & Budget
  • Are these incentives for liquidity mining, user onboarding, bridging volume, or all of the above?
  • Decide how much of the DAO’s treasury can be allocated to “Performance Incentives.”
  2. Design Performance Triggers
  • Outline the key metrics that qualify a project for incentives (for example, daily active users, DeFi TVL, cross-chain usage).
  • Possibly tie in “Tiered Incentives” (a small boost at threshold #1, a bigger one past threshold #2); a sketch of such triggers, fed by onchain data, follows this list.
  3. Implement Technical Infrastructure
  • Smart contract frameworks for distributing rewards.
  • Potential onchain checks for verifying user counts, transaction volumes, or bridging stats.
  4. Participant Onboarding
  • Incubator graduates can apply automatically.
  • Existing or external projects on Arbitrum can also qualify by proving they meet the required metrics.
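
As a hedged sketch of how the tiered triggers and onchain checks could fit together, the snippet below counts unique interacting addresses via Arbitrum One’s public RPC endpoint and maps the result to a placeholder incentive tier. The contract address, block range, thresholds, and budgets are all invented for illustration.

```python
from web3 import Web3

# Public Arbitrum One RPC endpoint; contract address and block range are placeholders.
w3 = Web3(Web3.HTTPProvider("https://arb1.arbitrum.io/rpc"))
CONTRACT = "0x0000000000000000000000000000000000000000"  # grantee's contract (placeholder)

logs = w3.eth.get_logs({
    "address": Web3.to_checksum_address(CONTRACT),
    "fromBlock": 250_000_000,  # placeholder measurement window
    "toBlock": 250_010_000,
})

# Unique transaction senders as a rough onchain proxy for active users
active_users = len({w3.eth.get_transaction(log["transactionHash"])["from"] for log in logs})

# Placeholder tiers: (minimum active users, incentive budget in ARB)
TIERS = [(10_000, 50_000), (5_000, 20_000), (1_000, 5_000)]

def incentive_for(users: int) -> int:
    """Return the budget for the highest tier a project clears, else 0."""
    for threshold, budget in TIERS:
        if users >= threshold:
            return budget
    return 0

print(f"{active_users} active users -> {incentive_for(active_users)} ARB")
```

In practice the DAO would likely run such checks from an indexer rather than raw RPC calls, but the principle is the same: payouts keyed to verifiable onchain thresholds rather than self-reported figures.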

Phase 6: Monitoring, Data, & Continuous Improvement

  1. Unified KPI Dashboards
  • Consolidate metrics from hackathons, grants, incubator checkpoints, and live protocols receiving incentives.
  • Publish monthly or quarterly updates on usage stats, amounts disbursed, ROI from investments, etc.
  2. Regular Team Syncs
  • “Grants Team” (now orchestrating hackathons, incubator, investments, incentives) meets every 2–4 weeks.
  • Evaluate any pipeline bottlenecks (e.g., too many or too few applicants at a certain stage).
  3. Feedback Loops
  • Surveys or open calls for dev teams at each funnel stage to provide direct suggestions on how to refine the process.
  • Use that feedback to adjust guidelines, budgets, or the timeline for each stage.
  4. Governance Updates & Refinements
  • If the pipeline is overperforming, or if certain aspects (like investment deals) need more resources, the Grants Team can propose expansions or pivot strategies.
  • If phases are underutilized (e.g., a lower-than-expected hackathon turnout), re-allocate resources accordingly.

Summary Slides

Please find the Google Slides presented to the DAO below.
ARDCV2 Research Deliverable #1: Grant Analysis


Huge props for this in-depth report @CastleCapital @DefiLlama_Research! Great work.


This is great to see and gives good context from which to improve future grant programs.

We support your recommendations and hope to see these implemented in the future.

Guys, thanks for the detailed report!

A few comments:

Although I can see how this option is tempting, I believe it goes against the development of the industry. Web3 is no longer a cohesive market but is becoming heavily differentiated across verticals, e.g., the Gaming track has very different objectives (mostly growth grants) from what can be asked of new protocols in DeFi or CollabTech.
If Arbitrum further consolidates by functions (e.g., Grants, Investments, etc.) it will struggle to tap into opportunities presented by new verticals and to offer specialised support for different vertical needs.
We have been advocating for an organisational design that’s structured around verticals so we don’t become too detached from the new multi-vertical reality of the industry.

In this sense, having some standardised processes across domains is good, and you have pointed out multiple opportunities for improvement, but we need to be careful not to over-standardise with only DeFi or Gaming in mind when the industry is a lot broader and new, valuable opportunities emerge all the time.

This is particularly important when thinking about KPIs, as getting this wrong will create a lot of rigidity and perverse incentives. E.g., TVL is largely seen as a vanity metric and we shouldn’t be pushing all DeFi grantees to optimise for that when, e.g., protocol fees are significantly more meaningful. I understand your KPI suggestions were not final but simply illustrative, so I mention this so we avoid quickly enforcing “3 KPIs per domain” and making the DAO detrimentally bureaucratic.

Wishlists are great for inspiration. Beyond that, we need to be really careful not to send builders down the wrong path. Web3 already suffers from a crippling lack of user centricity, instead launching pump-and-dumps and misguided “cool ideas” that fail to have real-world impact.
We need to evolve towards providing builders with market insights about real needs, not just wish lists coming from non-builders and non-customers (domain allocators are most frequently not the users of the tech, so they have limited understanding of what’s actually needed compared to real users).
I understand the DAO could have some good ideas, but ideally we have vertically-specialised research units that can map the landscape of problems. Said otherwise, a research-based process for generating wishlists is amazing, while a poor process is likely to be destructive.

This is a cool idea if it doesn’t become an extra hoop that applicants need to jump through. Other ecosystems already require popularity contests and making time for calls that clash with other commitments, and this adds complexity to the process of applying rather than making it easy.
I’d invite you to refine the idea with that constraint in mind, as extra hoops won’t help attract the best builders.

We have experimented with Hackathons as part of a builder funnel already, and Arbitrum has run multiple hackathons over the last few months. The results have been ok but not super exciting. When we look at root causes, there are many issues:

  • hackathons are these days well attended by professional bounty hunters who have no desire to build a real project but specialise in winning hackathons and then moving on to the next thing. This has become a large and well-practised population.
  • the many hackathons run by Arbitrum are already not producing significant results for the DDA funnel. This is partly because hackathons lack the user-research aspect that’s critical to producing ideas with real-world use cases, and partly because these hackathons are not well connected to other opportunities (the exception being the Hackathon Continuation Program, which is currently piloting continuity for hackathon participants with an emphasis on customer validation over just giving money to build stuff no one wants).
    Having more community and orientation so that participants are retained in Arbitrum would be a lower-hanging fruit than organising yet another hackathon, no?
    The solution here can also be these vertical communities where builders connect with real customers in their vertical, rather than trying to create more generalist programs that lack depth and connection with specialised knowledge.
  • There’s perhaps a misunderstanding around what the tracks are, as the Education track is not about building tech and hence likely poorly suited for hackathons. The education track in fact has already funded multiple hackathons and other events for builders.

This is a good starting point but I think further diving into the topic is needed.

  • Polygon and SAFE have had poor experiences with accelerators as the dealflow is not quite there.
  • Also, newer models like venture studios are offering better ROI than accelerators. (YC is of course the exception, but Arbitrum has no chance of recreating YC.)
  • Hackathons as the start of the funnel suffer from the problems described above related to dedicating builder attention to poor ideas. Previous research shows that 75% of hackathon winners abandon their projects, and this is not due to a lack of subsequent funding opportunities but because hackathon projects are often built without foundations (a team that wants to build a venture, user and market research, etc.).
  • I’d reframe DAO investments (which can include incubators, accelerators, venture studios, etc.) as DAO Pre-seed+ investments, just to clarify the stage of maturity and the modality (VC).
  • A few other steps are needed, including non-financial support programs outside of Grants and Incubators/Accelerators, research to identify opportunities (wishlist-related), and token launch support.
    • Non-financial support can offer a cost-effective method to attract and support builders. Our research on builder needs shows some promising opportunities here, and we hope to share it once it is ready to publish.
    • Launching a token is a critical step and doesn’t seem to be taken into account. This is an opportunity for Arbitrum to support builders and potentially get upside via token warrants or similar.
    • Research to identify opportunities is absent. This is a specialised task that would ideally be handled by those with expertise in each vertical. I can for example imagine you doing a deep dive on DeFi, while a different research group would be best suited for Gaming, and we are well positioned for CollabTech, etc.

I thank you again for all the work. I hope I’m not sounding overly critical, as I appreciate the effort put in. This is a big topic with many nuances that one learns about by being in the trenches, and so I’m hoping the feedback can serve to advance Arbitrum’s thinking in this area and avoid us rushing to conclusions that could be counterproductive.
