Web3 Grant Program Research and Analysis
Overview
This report provides an in-depth exploration of Web3 grant programs, analyzing them from various angles to understand their best practices, effectiveness, development, and overall impact. Our research examines those best practices, methods for measuring program success, and the development of a maturity framework for grant programs in this space. Additionally, we outline key impact metrics based on insights from a range of Web3 grant programs. A further aim of this study is to identify opportunities to improve reputation systems within these programs.
To inform our analysis, we conducted interviews with grant program operators, engaged with forums across the ecosystem, and revisited key findings from the Grant Innovation Lab's State of Web3 Grants report, along with other retrospective studies. Drawing from this data, we developed several tools: a framework to evaluate program effectiveness, a maturity model to assess different stages of grant programs, and a set of metrics to gauge impact. We advise operators to use these tools in combination for a more comprehensive assessment of program outcomes.
It's important to acknowledge that grant programs in the Web3 space are still experimental by nature. The entire industry is in a state of evolution, and grant programs have had to adapt accordingly. Even the most advanced programs, often in their 5th or 6th iteration, are still relatively young. Therefore, we don't recommend directly comparing them with long-established grant initiatives and programs outside of Web3.
Finally, this study is empirical, shaped by the realities of a fast-evolving industry, but it is also data-driven, incorporating our deep dives and analysis from a wide range of grant programs. Our hope is that the insights provided here will be valuable to the wider ecosystem, supporting the continued development of effective and impactful grant programs in Web3.
Team
ZER8: Solo-managed grant programs for ThankARB Milestone 1 and distributed over $500k. Former eligibility team lead at Gitcoin DAO: reviewed over 3,000 grant applications and helped save over $1M. I've spent the last 3 years in web3 full time, managing grant programs, launching QF rounds, reviewing grants (>3,000), and studying friction points and inefficiencies.
Twitter: @zer8_future
Linkedin: Popescu Razvan Matei
Email: synprm@gmail.com
Mashal Waqar is the Head of Marketing at Octant. She's also the co-author of the State of Web3 Grants report and the Retroactive Grants report.
Her governance experience includes work with the Octant Community Fund, Gitcoin Grants Council, RARI Foundation, and ThankARB. Her roles include heading partnerships at Bankless Publishing, operations at VenturePunk, and research for seedclub, Protein, Gitcoin, and the Ethereum Foundation's Summer of Protocols. To date, she has collectively reviewed 130+ grants across various ecosystems in web3.
Mashal previously co-founded a global media company (The Tempest), a revenue-focused accelerator program for early-stage founders, a femtech startup, and a community-building consultancy. Mashal holds a B.S. in Computing Security with a minor in International Business from Rochester Institute of Technology (RIT). She's a Forbes Middle East 30 Under 30 honoree and winner of the 19th WIL Economic Forum Young Leader of the Year award.
Linkedin: Mashal Waqar @mashal
Twitter: @arlery
Email: mashal@milestoneventures.co
This work was made possible by the Cartographers Syndicate via the RFP program funded by ThankARB ML1B.
Report Structure
1. Challenges & Best Practices
2. Program Effectiveness
3. Maturity Framework
4. Impact metrics
5. Integration, refinement and implementation
6. Acknowledgements
Challenges
Depending on the type of grant program, there are different challenges to contend with. Direct grant programs can be vulnerable to the principal-agent problem, while for grants issued through community voting or quadratic funding and voting mechanisms, a key challenge is tackling Sybil attacks.
For traditional grants programs, here are a few challenges, as shared in the State of Web3 Grants report:
- Lack of reporting from grantees
- Sourcing quality applications
- Measuring the impact of grants. Several factors can contribute to this, such as:
- Inability to measure and track grantee progress
- Lack of reporting from grantees
- Inability to measure grantee sustainability
- For programs that don't define categories, comparison of grantees can be difficult if grantees' projects are incomparable in nature
- Grant farming
- Crafting good RFPs, especially technical ones, is tough. Proper assessment takes resources.
- Manual due diligence takes time and resources. Even tougher to do at scale.
- Coordination between programs to sift out grant farmers
- Grantees want free money without any expectations for milestones or deliverables
- Finding quality applicants for programs with large applicant volumes is tough.
- Decentralizing to a community is tricky if the decision-making group does not have the relevant expertise. Another challenge specific to grants programs run by DAOs is that it's tough to coordinate with a larger DAO.
- Staying up to date with a large portfolio of grantees is also tough.
Solutions and Best Practices
- Consistent reporting: monthly reporting at a regular cadence is an excellent way to be transparent and to communicate updates to the larger community. Aave Grants DAO (AGD) is a good example; they share overall metrics such as:
- Total Grant Applications Received
- Total Applications Approved
- Acceptance Rate
- Percentage of projects that make it to the written stage and video call stage
- Total Amount Disbursed
- Percentage of Complete Grant Payments
- Percentage of In-progress/Incomplete Grant Payments
- Status of all grants (percentage of complete, in-progress, and inactive or sunset)
- Amount and Quantity Approved by Category
- A summary of the grantees each month, along with how much they were awarded, and a brief description of their grant.
- Transparency in the evaluation process and criteria helps limit duplicate proposals and helps applicants know what they need to improve in order to receive a grant in the first place.
- Having different grant formats and flavors is an effective way to deploy capital and to run grants experiments.
- Being adaptable and flexible with the changing needs of the program and of the industry is a good practice that prevents the program from becoming outdated and ineffective.
- A way to improve the quality of applications is to clearly outline the application process, making it easier for grantees to apply. An example of this is the Uniswap Foundation, whose website includes a checklist for potential applicants as well as tips and considerations to help applicants strengthen their submissions.
Learnings from Grants Programs, Operators, and L2s
The following insights were derived from a variety of ecosystems by combing through governance forum retrospectives, as well as through direct interviews with several operators, grants-tooling teams, and leadership from these projects.
Learnings from AGD (Aave Grants DAO) Grants Program
Program Overview
Aave Grants DAO (AGD) was a community-led grants program that fostered community growth and innovation in the Aave ecosystem by serving as a gateway for teams building on top of Aave Pools or with GHO. It started in 2021 and announced it was winding down in 2024.
Insights from the AGD forum
- Establishing a legal entity allows a DAO to begin operating more independently and with more protection and certainty for contributors by providing a legal structure for members and the ability to operate with traditional industries.
- "Introducing a rotating committee of community experts to serve short (3-6 month) time frames as grant reviewers allows for further involvement of the community while also bringing expertise from different community members."
- "A well-functioning grants team only works well if the team is nimble, can move fast, and act independently."
- "Term elections for choosing capital allocators from the community are a well-established norm in various reputed ecosystems such as ENS DAO, Gitcoin, etc. By implementing a more rigorous and clearly defined set of criteria for selection, an election process would offer a fair and unbiased chance for community members, including those who are new contributors and proposers, to join the AGD team and contribute meaningfully to the DAO."
- "A transparent process empowers the community to keep oversight on what proposals are getting accepted, rejected, and funded, and to get a deeper insight into the review parameters and process… A transparent process will empower the community to ask the right questions and allow for constructive feedback, thus improving further iterations of AGD. Additional deliberation and community participation is better than having none."
- Grantee growth parameters should be objective and tied to tangible value creation to better evaluate the performance of the grants program.
Insights from the AGD Operations Team
- The ecosystem context (stage of protocol, contributor maturity, developer interest/awareness) is a really important factor in structuring and setting up a grants program. Giving one-size-fits-all advice for grants programs is difficult. Grants programs should start with their mission/goals/values and then work backwards to design and implement a program that best serves them.
- Rotating committee: This is also something AGD explored. It is crucial that reviewers be trusted and recognized by the community; unless the program is set up with direct community involvement, it's pretty much necessary for the community to trust a grants team.
- A well-functioning grants team…: Being able to grow and evolve as a grants program alongside the respective ecosystem is key. Grants programs should complement the other stakeholders, not be a totally independent island.
- Elections: This was actually brought up to the community and was shut down, because it's hard not to make something like elections a popularity contest. It's tough, though, because the surface-level appeal is clear; however, there's skepticism about whether it would result in getting the best people into the role. Per the point above, having people who are independent seems difficult if elections are heavily leaned on. A (main?) priority of reviewers becomes reelection in cases like this.
- "Additional deliberation and community participation is better than none": The program mission, and even the individual grant, has a huge influence on this. If the goal is contributor growth and the ecosystem is relatively unknown, then taking a long time to review and engage the community on each grant doesn't scale, and the process does more harm than good. Additionally, there may not be much of a community to engage early on.
Learnings from ThankArbitrum (Arbitrum DAO) Grants Program
Program Overview
Thank Arbitrum is the first pluralist grants program under Arbitrum and has committed to allocating $3.2M in ARB through 10 grants programs to the top crypto builders and contributors. For this study, we've researched ThankARB's Milestone 1 program. ThankArbitrum can be seen as a multi-dimensional grant framework and a learning machine that evolves from iteration to iteration based on the inputs, learnings, and outputs of each round.
Insights from the Thank ARB Milestone 1 report in the Arbitrum Forum:
Legitimacy
- The Thank ARB strategic priorities found during #GovMonth (an initiative to reward people who want to help shape governance) did not receive support from top delegates, because the process did not include them in a way that was seen as legitimate.
- Without legitimate priorities and processes, it is hard for the DAO to enable more experimentation.
Communication
- Communicating the results of a multi-dimensional grant framework is a complex task. Segmenting by persona (delegate/builder/grantee/etc.) could be a solution.
- DAO members need communications to be simplified and aggregated. They're unsure of where to go for maintaining context, knowing about responsibilities and potential roles as a DAO member, and learning about other ways to become a good Arbitrum citizen.
Organization
- Potential contributors need a pathway to get a small grant to do narrowly scoped work, which is then assessed to give the contributor next steps and to let the DAO double down on high performers.
- The community is willing and capable; however, they're unsure of what to do next.
- Evaluation and organization of 13 grant programs makes for extreme complexity. A potential route to tackling this is with focused workstreams.
Experimentation
- Quadratic funding is likely not a good meta-allocation mechanism. Exploring options with Quadratic Voting and other custom algorithms could yield better alternatives.
- Successful tools usually solve more than one problem. For example, Hats Protocol provides many solutions, from workstream accountability to dynamic councils, and Hedgey streams are easy to use and can work for grants as well as salaries. When finding solutions, it's useful to understand context and to think through other ways the tools or solution can be used in the ecosystem.
- The DAO needs more indexed data available for the community to interpret. Overall, the DAO does not have any agreed-upon metrics to give definitive answers about what success looks like.
Funding Success
- There have been lots of learnings with the Foundation through compliance, processes, and systems implemented and funded within the ecosystem.
- As decisions are made, they are observing and documenting criteria to open up the process in the future. Confirming community-led review capabilities is a priority before removing decision-making power in the DAO. This applies at the framework level, as multiple programs are attempting to draft and use criteria-based approaches.
- The Firestarters program served a clear purpose and was the most successful program (as perceived by the DAO) because it drove tangible results for the DAO in a short time.
- Arbitrum partnered with legacy organizations such as the American Cancer Society, BlackRock, and Franklin Templeton, signifying important achievements.
- Data-driven funding will be critical in Milestone 2 (STIP support, OSO, etc.).
Learnings from ThankARB ML1 Programs
Thank Arbitrum is a plural grant framework involving multiple separate grant programs, which means each program reports separately.
List of programs with amounts funded:
Program Name | Amount |
---|---|
Thank ARB | 350,000 ARB |
Firestarters | 300,000 ARB |
MEV Research | 330,000 ARB |
Arbitrum's "Biggest Small Grants Yet" | 90,000 ARB |
Arbitrum Co. Lab by RN DAO | 156,000 ARB |
Open Data Community (ODC) Intelligence | 165,000 ARB |
Grantships by DAO Masons | 154,000 ARB |
Plurality Gov Boost | 552,500 ARB |
Questbook Support Rounds on Gitcoin | 100,000 ARB |
Arbitrum Citizen Retro Funding | 100,000 ARB |
Allo on Arbitrum Hackathon | 122,500 ARB |
Below are the programs that participated, along with the reporting and learnings for each:
1. Arbitrum Matching Fest
Program Description: This program added extra funds to the matching pool for quadratic funding rounds on Arbitrum. Top programs running on Arbitrum were selected to receive additional funding.
Why the Program Was Funded
This program was designed to:
- Make an agreement with Gitcoin to prioritize deployment of the Allo protocol on Arbitrum and integration into the Grants Stack interface.
- Attract new audiences to Arbitrum via Gitcoin, who may find a home on Arbitrum or generate transaction fees.
- Market Arbitrum in a positive way in the Web3 community.
Takeaways:
- Partnering with a legacy health organization (American Cancer Society) turned out to be a great move, bringing in a lot of positive attention and significant marketing benefits.
- Arbitrum could potentially handle all the main rounds on its own, given that it successfully hosted nearly half the rounds on Gitcoin.
- Many of the organizations involved, like Metagov and TEC, had little to no prior experience with Arbitrum, yet they were successful in bringing their donor bases to Arbitrum for these rounds.
2. Firestarters
Program Description: The Firestarter program was designed to address specific and immediate needs within the DAO. Teams tackling problems identified by delegates and Plurality Labs (acquired by Thrive Protocol) were given a grant to do the initial cat-herding and research required to kickstart action.
Expected Outcomes
- Quickly and effectively address urgent needs resulting in high-quality resolutions.
- Demonstrate the ability to create fast and fair outcomes that benefit the ecosystem.
- Frameworks for service providers to build a scalable foundation for Arbitrum growth.
- High-quality resolution of key, immediate needs; better, fairer outcomes for the ecosystem; and fairer frameworks for service providers, delivered with speed and fairness.
Takeaways:
- This program was well received and had significant positive impact because it directly addressed needs the community recognized.
- Firestarters need a clear next step rather than expecting indefinite continuation. The need for on-chain tools to assign roles for igniting and holding Firestarters accountable stood out.
- While giving someone like Disruption Joe decision-making power worked, it's worth exploring other models that would function well without relying on a single individual.
3. Thank Arbitrum
Program Description: This program introduced a novel way for the community to connect with the DAO. It provided a foundation for the community to continually sense and respond to the fast-paced crypto environment. Its aim was to provide a place for DAO members to learn about what is happening, to voice their opinions, and to learn about opportunities to offer their skills to Arbitrum.
Expected Outcomes
- Improve how grant funding is distributed
- Increase delegate and community confidence that funds are allocated as intended based on a collaborative process.
- Assure the community that we can manage the grant process inclusively and efficiently.
Takeaways:
- Despite efforts to remove Sybils, they struggled with engagement from participants who weren't genuinely invested in the DAO. Future efforts should focus more on engaging delegates and valuable contributors.
- While significant data was gathered on how token holders feel, it didn't fully reflect the disengagement among delegates.
- Better interfaces tailored to DAO members' needs are required.
- The initial attempt at a tARB reputation system didn't work as planned. Discussions with Hats Protocol revealed that dynamic councils based on Thank ARB contributions could enhance decision-making, particularly for tasks requiring a high level of effort, similar to what's needed in Optimism's appeal system.
4. MEV (Miner Extractable Value) Research
Program Description: This grant was awarded to establish the Plural Research Society and build the first forum and plural voting tools needed to support it. As part of the grant, an experimental plural research workshop would be hosted where the participants (researchers from across the MEV space) would convene to present their research proposals, discuss and debate using a variety of pluralistic tools, then ultimately vote to decide how to allocate 100,000 ARB in grant funding.
Why the Program Was Funded
This program was designed to:
- Create a scalable platform to build upon a successful proof of concept built for Zuzalu
- Leverage the expertise of highly respected advisors to conduct novel research
- Discover if a better forum can drastically improve the quality of conversation
- Use the forum under-the-hood design to allocate funding on a niche, high-expertise topic
- Ensure a platform for a broad selection of researcher opinions in the gas fee optimization and MEV space, which has incentives to platform only financially beneficial views
Expected Outcomes
- A new forum dedicated to high expertise discussions, pushing the boundaries of decentralized governance and credibility.
- The forum will be a place where qualified voices and thought leaders stand out and the most important discussion gets attention in a decentralized way.
- Radically amplify new ideas and technology.
Takeaways
- Having a dedicated, high-quality advisor who is truly invested in the project made a big difference in attracting top talent and ensuring adherence to specs.
- Some viewed the program as merely a "forum" or "conference," but it's actually a critical experiment aimed at improving our understanding of gas fee optimization and information sharing in the ecosystem.
- Due to delays in compliance, assembling the right team, and coordinating schedules for researchers, this program has taken longer than others, with more insights expected as it progresses.
5. Arbitrum Co.Lab by RNDAO
Program Description
RnDAO designed a programme to grow the Arbitrum ecosystem through a research fellowship focused on governance and operations challenges, with the goal of incubating sustainable Collaboration Tech ventures that build on Arbitrum. The ventures operate as a "business cluster," creating network effects to attract others through integrations, talent, investor networks, etc.
Why the Program Was Funded
This program was funded to:
- Explore proven methods of deliberation in a Web3 context, such as citizens' assemblies or Sociocracy 3.0
- Explore AI solutions to automate components of DAO contributor experience and/or governance design
- Provide a support network for researchers along with a pathway to future funding.
Takeaways:
- Ways of tracking applicants who aren't selected but remain enthusiastic about Arbitrum need to be explored.
- A clearer definition of the research topics for the participants would have ensured more targeted outcomes.
- The program is still ongoing, so learnings are still being observed.
6. Plurality Gov Boost
Program Description
This program comprised direct grants funded to increase the Arbitrum DAO's ability to effectively govern its resources.
Why the Program Was Funded
This program was designed to:
- Utilize Plurality Labs' delegated decision-making freedom to apply different decision-making modalities to fund needed work.
- Fund research and milestone-based work using milestone-based payouts.
- Anticipate future needs of the DAO and build processes, procedures, and programs which may have substantial impact on the DAO but lack other ways of being funded.
Takeaways:
- The program highlighted the need for clearer decision-making pathways and transparency, especially for specialized requirements.
- Some decisions were made based on expert input that wasn't widely understood by others. For example, Helika Gaming's data was crucial for analyzing acquisition costs across different verticals, which will save the DAO significant funds over time, even if these choices weren't always clear to everyone.
7. Questbook Rounds on Gitcoin
Program Description
This program started with two unique governance experiments. A "domain allocation" governance experiment allowed the community to direct funds to four matching pools based on Questbook program domains. Then, four quadratic funding rounds were run to assist the domain allocators in sourcing grants.
Why the Program Was Funded
This program was designed to:
- Support Questbook program success, based on the programs' own answers as to why they would not succeed without this support
- Show how pluralist programs can be complementary to each other.
Takeaways:
- The disparity between votes in the domain round and funding round, especially in categories like Gaming, suggests that Quadratic Funding might not be the best method for all cases.
- The true success of this program will be evident in how many projects receive additional funding after it ends. This will be most telling if the initial allocation isn't fully used.
- Offering a small ARB incentive for donations over $10 could nearly double the average donation amount on the Thank ARB platform.
8. Arbitrum Citizens Retro Funding
Program Description
The goal of this program was to distribute 100k $ARB to the best Arbinauts and citizens who have proactively worked on and truly impacted the Arbitrum DAO since its launch. They aimed to reward work that helped kickstart the DAO and/or contributed directly to Arbitrum's strategic goals.
Why the Program was Funded
This program was designed to:
- Reward DAO contributors who had stepped up to contribute without reward since the DAO started
- Encourage future participation by setting a precedent for retroactive rewards
- Experiment with using Quadratic Funding retroactively
- Market the Arbitrum community in a highly visible way
Takeaways:
- By focusing only on individuals, certain organizations that have made significant contributions were not able to participate. Involving delegates in creating eligibility criteria could help address this.
- A dynamic pot size might be worth considering in future rounds to ensure higher participation.
9. Allo on Arbitrum Hackathon
Program Description
This program was all about expanding what Gitcoin's Allo protocol could do on the Arbitrum One Network. New funding strategies, interfaces, and curation modules were available for all 490 protocols on Arbitrum to use freely to fund what matters.
Why the Program Was Funded
This program was designed to:
- Enable new on-chain allocation strategies and interfaces which can be used by all 500+ protocols building on Arbitrum
- Support more projects to exist on the Allo protocol project registry to create consistent open data standards for future community review and evaluation
- Discover new grant evaluation interfaces and mechanisms
- Support more decision making modalities to be available for use and testing in Thank ARB governance and potentially for Arbitrum DAO governance if successful.
Takeaways:
- It's crucial to clearly communicate the judging mechanism from the start. Even if the judging happens at the end, sticking to announced prize amounts is key to maintaining trust.
- A lack of clarity around eligibility led to some undesirable situations, which were fortunately resolved through direct conversations with the teams involved.
Learnings from ThankARB leadership
Web3 grants are decentralized resource allocation mechanisms, broader than traditional grants, and designed to address ecosystem needs. These grants foster innovation, infrastructure, and ecosystem growth through decentralized decision-making.
ThankArb functions as a learning machine, iterating and improving with each funding round. Its focus on adaptability allows it to identify and expand successful mechanisms, decentralizing decision-making by empowering competent individuals.
The future of Web3 grants also involves refining decentralized funding mechanisms, such as quadratic funding (QF), possibly paired with other on-chain allocation mechanisms (e.g., conviction voting), but even more important is applying tailored solutions that address crucial ecosystem needs (e.g., direct funding, councils, etc.). Experimentation with off-chain and on-chain resource allocation remains critical.
Notable programs include ThankArb and ThankApe (ApeCoin community), both of which emphasize strong community-building before allocating grants. Iterative learning and adaptability make these programs stand out.
A natural comparison would be RPGF vs. ThankARB, as they both serve as the main allocators for their L2 ecosystems (Optimism and Arbitrum). In reality, though, ThankArb and Optimism RPGF1 serve distinct roles, with ThankArb focusing on decentralized allocation and RPGF1 on centralized decision-making. Success in both can be measured through ecosystem impact and milestone achievement.
Key best practices include:
- Ecosystem Needs: Tailoring grants to the ecosystemâs specific requirements.
- Multiple Pathways: Offering diverse funding mechanisms for different projects, protocols, etc.
- Iterative Learning: Running experiments to refine grant strategies and address evolving needs.
Grant program effectiveness can be assessed through metrics such as milestone achievements, impact on ecosystem needs, and efficiency in resource distribution. Allocators must play a proactive role in managing and growing projects.
While Web3 aims for decentralization, most grant programs are still centralized in decision-making. Programs like ThankArb are pushing the boundaries by experimenting with on-chain mechanisms to distribute decision-making power more effectively.
Learnings from Karma GAP
About GAP:
The Grantee Accountability Protocol (GAP) is a tool designed to address grants funding challenges by aiding grantees in building their reputation, assisting communities in maintaining grantee accountability, and enabling third parties to develop applications using this protocol.
Takeaways
Effective grant programs should have clear goals, transparent evaluation rubrics, and involve the community in decision-making. A maturity index can rate programs based on factors like financial sustainability and community support. Feedback and marketing assistance are also important for successful grant programs.
- Articulating what an operator wants out of the grants program is crucial. This can help set the tone and guide the subsequent process of applications, grantee selection, and evaluation.
- Having a rubric to evaluate applications against is also especially useful. It allows applicants to understand the reasons behind decisions.
- Providing some form of feedback, even generalized feedback, is helpful to applicants and is an extra step that can improve the process.
- Post-funding support is an area most programs fall short in. Following up with grantees and collaborating or sharing feedback where useful, and amplifying and providing marketing support to grantees can go a long way in the grantee journey.
Learnings from Optimism RPGF 3 and 4
About Retroactive Public Goods Funding Rounds:
RetroPGF (Retroactive Public Goods Funding) is a mechanism that rewards public goods based on proven impact.
Retro Funding is being run as an ongoing experiment where insights and learnings from each round feed the next iteration of the program and inform the design of future experiments.
Takeaways
- "Having standardized, verifiable, and comparable impact metrics is crucial to objectively measure the impact of applications.
- Having stronger eligibility criteria results in less spammy applications.
- Applications for funding can be shortened, as deadlines are the real driver of submissions
- Defined grants rounds set the stage for a more focused and impactful approach to incentivizing contributions
- The broad round scope overwhelmed badgeholders and applicants.
- The absence of standardized, verifiable, and comparable impact metrics, and the reliance on individual subjective review criteria, made it difficult to objectively measure the impact of applications.
- The self-selection of applications to review by badgeholders did not ensure a fair review by a minimum number of badgeholders of each application.
- The sheer volume of applications, combined with weak eligibility criteria, complicated the voting process for badgeholders. Lists were not effective in scaling the ability of badgeholders to accurately vote on more applications.
- These learnings were derived from 300+ crowdsourced pieces of feedback, collected via a survey among badgeholders, the RetroPGF 3 feedback Gov Forum post, as well as the badgeholder Discord/Telegram channel and will inform the next iteration of Retro Funding.
- Below we document these learnings in detail as part of a gradual process to open source Optimism governance design. This post is non-exhaustive and aims to focus on the core learnings and most popular requests."
Learnings from the RPGF Team
Grants in Web3 are designed to support future innovations and projects by providing funding based on anticipated impact. Programs like RetroPGF (RPGF) offer rewards for past achievements, contrasting with traditional grants that focus on future outcomes.
Takeaways
- Optimism's approach includes rewarding ecosystem impact through grants, with a commitment from the Collective to recognize and compensate valuable contributions. Optimism aims to transition towards a fully on-chain grant system as the social layer and metrics become more integrated with proper attestation mechanisms.
- The future of Web3 grants involves transitioning to more on-chain systems, improving transparency, and creating more sustainable models. This includes developing grant systems that can effectively measure and reward impact over time, rather than just focusing on short-term growth.
- Top grant programs, such as RPGF and the Grants Council, vary in their approach. RPGF has a larger budget and compensates for past impact, while the Grants Council emphasizes a high standard of professionalism. Different programs may have different strengths depending on their focus and metrics.
- The effectiveness of a grant program depends on its goals and metrics. RPGF may be more effective for compensating past achievements and ensuring sustainability, while programs like the Grants Council focus on future-oriented projects and maintaining high standards of professionalism.
- Best practices include maintaining clarity and transparency throughout the grant process, avoiding rule changes mid-term, and ensuring fairness through multiple scorers and feedback steps. Establishing clear objectives and metrics for evaluation can enhance the quality of grants and outcomes.
- Effectiveness is evaluated through metrics assigned to each grant mission, ensuring that grantees meet specific criteria. This includes tracking the long-term impact and sustainability of projects, rather than just short-term growth. For educational grants, follow-up on the career progression of educated individuals is also important.
- Challenges in decentralization include ensuring fairness, transparency, and effective decision-making processes. There is a need for improved coordination and understanding among ecosystem participants to address issues such as misaligned incentives and ensuring that grants genuinely solve real-world problems.
Learnings from Badgeholders
RetroPGF through Optimism is considered one of the best grant programs due to its low bureaucratic overhead and flexibility for grantees. Retro Quadratic Funding (QF) rounds also stand out for their ability to involve the community and simultaneously provide financial support and marketing exposure to projects that have already made significant contributions.
RetroPGF is an interesting model because its post-completion funding model ensures that projects have already delivered value before receiving grants. This reduces the pressure to meet predefined milestones and offers projects more freedom to adapt to the fast-moving industry landscape. The Optimism program's simplicity, large funding rounds, and low overhead make it highly efficient for both projects and the ecosystem.
Learnings from Gitcoin Grants Rounds
Grants in Web3 are funds provided without an expectation of return or equity stake. Their historical purpose is to support public goods, infrastructure, or research that can advance human progress. In the crypto space, grants are often funded through token generation events (TGEs) and are meant to drive ecosystem growth by funding various initiatives.
The purpose of grants is to ignite growth within ecosystems by using incentives. They aim to foster innovation and development in the Web3 space, although many grants have become part of marketing strategies rather than solely focusing on growth.
Gitcoin's grants are funded by the Ethereum community and the wider Web3 community, rather than through Gitcoin's own treasury. This is different from many crypto grant programs that use funds generated through TGEs.
- Gitcoin employs a novel funding mechanism called quadratic funding, which is designed to enhance capital efficiency and support digital public goods.
- Gitcoin has its own treasury that funds development but does not run grant programs at scale. The grants are primarily for digital public goods and open-source software.
Gitcoin is developing systems for better capital efficiency and allocation, with the goal of influencing broader grant programs and ecosystems. There's a trend towards grants that can be converted into equity, which could provide a return on investment for grantors and enhance sustainability.
- The future may include more automated incentive structures based on on-chain network metrics, reducing reliance on human decision-making. Projects like Gitcoin's work with Optimism on direct-to-contract incentives are examples of this trend.
- Optimism is noted for its mature evaluation process and comprehensive review system. The RPGF program is well structured, with multiple stages of evaluation and metrics review.
- ENS is recognized for its strong ecosystem and effective use of its treasury. ENS manages its grant programs creatively, balancing impact and capital allocation.
- Solana is praised for its diverse programs and innovative approaches, including grants convertible to equity. The Solana Foundation has demonstrated significant growth and effectiveness in its grant strategies.
- Each program excels in different areas, making direct comparisons challenging. Optimism is noted for its evaluation maturity, ENS for its effective ecosystem management and creative grant use, and Solana for its innovative approaches and recent growth. The best program depends on specific criteria and goals.
- Clearly define the desired outcomes and return on investment (ROI). It's crucial to understand how funding will translate into impact.
- Strong eligibility criteria are essential for efficient allocation of funds. Protocol Guild's success is attributed to its well-defined criteria.
- Know your builders and provide additional support beyond funding. Implement milestones and accountability measures to ensure progress and alignment with goals.
- Effectiveness is best measured by defining clear outcomes and ROI, and by using appropriate metrics and accountability controls. Tools alone are not enough; understanding the underlying questions and processes is key.
- The potential of decentralized grant programs lies in creating reputation systems that can track and evaluate the effectiveness of both programs and grantees. Open systems and public ledgers offer opportunities for developing these systems.
- Building a comprehensive map of grant programs, their impact, and their connections can improve the efficiency of the grant process. This includes understanding how programs and grantees interact and contribute to the ecosystem.
- Gitcoin's approach to grant funding: the ecosystem has a vision, and the grant program derives its mission from it. Next, there are granular objectives derived from the mission, followed by key results for each, followed by leading indicators.
- The Gitcoin leadership strongly believes in an evolutionary approach: no metric stays good forever, because it becomes redundant over time as people learn how to "hack" it. This is aligned with other thought leaders in the ecosystem, such as Metrics Garden.
- Incorporating feedback from participants is invaluable. Gitcoin's products have evolved significantly since their inception due to feedback. They put emphasis on streamlining the application process and introduced more robust support systems to assist grantees. As a result, there was an 80% grantee satisfaction rate in GG20, the highest they've seen in at least the past year.
- Gitcoin is pioneering allocation mechanisms with Allo and growing the EVM grant pie as more and more L2s are supported by Grants Stack (the permissionless platform to manage grant programs).
Learnings from Octant
- One big takeaway is that people often don't read and pay attention to details. This observation applies to communities and users across the board, from grantees to donors. Unless there's repeated, consistent emphasis on communication and a strong emphasis on any detail worth noting, people are likely to ignore key pieces of information. A learning here has been to make it easier for people to understand, and to double down on communicating often and across channels.
Learnings from Giveth
Grants in Web3 are decentralized financial mechanisms aimed at stimulating ecosystem growth by supporting projects that contribute to public goods, open-source infrastructure, and innovation. Unlike traditional grants, they should emphasize decentralized decision-making and often align with token economies, enabling project sustainability through economic models and community/ecosystem alignment rather than through direct funding alone.
Giveth challenges the traditional grant model, viewing grants as inefficient and unsustainable. Instead, Giveth will soon propose a more innovative approach by enabling projects to tokenize and create self-sustaining economic models, allowing for long-term growth and a better alignment of incentives. This approach shifts from one-time grants to fostering ecosystems where projects can generate value through bonding curves and tokenized economies.
The future of Web3 grants is seen as pluralistic, with a mix of decentralized governance models and council-led decisions. While grants might not ever be fully on-chain, the importance of efficiency, decentralized governance, and community involvement is clear. Retroactive funding mechanisms like RetroPGF (Retroactive Public Goods Funding) are predicted to become more prominent, reducing the bureaucratic burden on startups and fostering entrepreneurial freedom.
Tips for improving grant programs:
- Eligibility clarity: Clear eligibility criteria and streamlined processes reduce wasted time for applicants.
- Community involvement: Programs that incorporate community feedback, such as Quadratic Funding rounds, can foster broader project support.
- Multiple program types: A pluralistic approach to funding (e.g., growth grants, builder grants, mission proposals) can cater to various ecosystem needs.
- Growing the ETH Pie: Giveth's diverse grant programs run on multiple chains and ecosystems.
Evaluating the net value of grant programs to the ecosystem is more relevant than individual project outcomes. This calls for a tolerance for failures, especially with small grants, and highlights the importance of viewing programs in aggregate. Decentralized impact measurement can help avoid bureaucracy, especially for smaller grants, by favoring metrics that reflect broader ecosystem health.
Decentralized governance in grant programs is seen as difficult to implement efficiently. While decentralized decision-making can bring greater community participation, the complexity of ensuring effective, scalable governance remains a significant challenge.
The primary issue lies in the misalignment of incentives within traditional grant structures. Grantees may only meet the minimum requirements for receiving funds, creating inefficiencies. However, the ecosystem as a whole, particularly the reliance on grants without long-term sustainability models, is a bigger problem. By enabling projects to create their own token economies, the incentive structure can shift toward long-term value creation.
2. Program Effectiveness
Evaluating a grant program's effectiveness should, in theory, be coupled with its alignment with the ecosystem that funds it, and the program's objectives should be derived from that ecosystem.
Ideally, the ecosystem has an overarching strategy already available at the inception of a grant program, but it is also true that grant programs are growth tools that can shape the ecosystem's strategy itself. We can assume there is a certain degree of interdependency here, but usually this becomes a reality only after a couple of iterations of grant programs.
While we recognize that every grant program is different in approach and objective (some are highly experimental, some are mature, some are more transparent, some more opaque), we believe that the only way to evaluate a grant program's effectiveness, in the decentralized context of Web3, is in a way that covers the whole spectrum of possibilities. Our recommendation is to consider a mix of the following approaches:
a) Diverse metrics, outcomes and KPIs (Quantitative and qualitative)
A comprehensive evaluation of a grant program's effectiveness requires incorporating a broad range of metrics, both quantitative and qualitative. Utilizing multiple KPIs and clearly defined outcomes ensures a more holistic assessment. Limiting the evaluation to only one set of metrics can ignore other aspects of the program and can introduce vulnerabilities, particularly for ongoing programs, as it may fail to capture the full scope of impact and long-term sustainability. Allowing the community to co-propose part of these metrics would be a step towards a more optimal pool of metrics.
b) Prioritizing key accomplishments to reduce signal-to-noise ratio
By isolating and emphasizing the most significant achievements of a grant program, stakeholders can better assess the program's overall value. This approach allows for a clearer understanding of whether the program's outcomes justify the allocated resources. A fundamental principle is that a grant program demonstrating multiple high-value successes is a strong signal in favor of continued funding and support.
c) Assessing program effectiveness relative to maturity
As a grant program matures, the complexity and depth of its evaluations should increase accordingly. More mature programs should have more complex metrics and also undergo more sophisticated reviews to accurately assess their long-term effectiveness and sustainability, accounting for their growth and evolving impact over time. The implications of Goodhart's law need to be considered when attempting to define the leading metrics for evaluating a grant program's effectiveness, especially if the program is not at its first iteration.
d) Independent audits by neutral third parties
Given the capital-intensive nature of grant programs, neutral third-party audits, alongside ecosystem evaluations, are crucial to determining whether funds are being deployed efficiently and equitably. These audits help ensure that the distribution of resources aligns with the goals of fostering growth and inclusivity across the potential applicant pool.
For the purpose of this study, we have considered three main areas that cover a large proportion of the quantifiable information about a grant program, while also enabling a more direct and neutral approach to assessing a grant program's effectiveness. The three aspects we will explore are: alignment with ecosystem goals and strategic impact; ecosystem growth, community development and sustainable impact; and innovation and value creation. Each section contains metrics and formulas for assessing these areas in a grant program. Milestone achievements are not covered in this study, but they are prerequisites.
The metrics and formulas explored in detail in the sections below should be seen as one relevant subset of the total number of possible metrics that can measure program effectiveness; the main goal is to both equip and inspire grant program managers and ecosystems with a more advanced and neutral set of formulas for determining a grant program's effectiveness.
It is important to mention that this is an exploratory approach that attempts to evolve the metrics and formulas in use today; it is not exact science. The formulas presented below can be applied per grant and averaged per total number of grantees or per program. Not all possible scoring options are explored below, but some formats are suggested.
1. Alignment with ecosystem goals and strategic impact
The goal of this section is to explore and attempt to determine, at least to a degree, how well the grant program aligns with broader strategic objectives and priorities within the ecosystem that funded it. Below we present and explore formulas and indicators that can be adapted on a case by case basis to help measure goal alignment and strategic impact.
- Strategic Alignment Score (SAS)
Evaluating whether the funded projects align with the strategic goals of the ecosystem or community (e.g., driving adoption, improving infrastructure, ensuring scalability, security, decentralization, etc) is one of the most important aspects when analyzing grant program effectiveness. Projects should in theory directly contribute to achieving key milestones or overcoming critical challenges that the ecosystem faces.
The Strategic Alignment Score (SAS) can help quantify how well projects align with the strategic goals of the ecosystem that funded them. The availability of clearly defined goals and project reporting is necessary for the calculation.
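One way to write the score, consistent with the definitions and the worked example below (our reconstruction, assuming the goal weights sum to 1):

$$\mathrm{SAS} = \sum_{g=1}^{G} W_g \cdot \left(\frac{1}{N}\sum_{i=1}^{N} A_{g,i}\right)$$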
Where:
- Agi = Alignment score of project i for strategic goal g (on a 1-5 scale), based on project reports, objectives, milestones, and external reviews.
- Wg = Weight of the strategic goal g (on a 0-1 scale) based on its importance in the ecosystem.
- N= Number of projects
- G= Total number of relevant strategic goals the grant program addresses (e.g., adoption, innovation, problem-solving, scalability, security, etc.)
- Min score =1, Max score =5
Grant applications, project reports, forums, roadmaps, and ecosystem documents could be used as data sources if the information is not directly available. Weightings can be derived through stakeholder feedback or predefined priorities.
Example: A grant program has funded 5 projects. The ecosystem that funded it had the following goals: 1) Grow the ecosystem in a sustainable way; 2) Fund projects that develop infrastructure. Goal 1 is more important than Goal 2.
Number of Projects (N) = 5
Number of Goals (G) = 2
For each goal g, we'll assume the alignment scores (in practice these scores need to be assigned by reviewers) for the 5 projects and the weights (G1 > G2) for the two goals:
Goal 1: Grow the ecosystem in a sustainable way
Alignment Scores (on a scale of 1-5):
- Project 1: A1,1=4
- Project 2: A1,2=5
- Project 3: A1,3=3
- Project 4: A1,4=4
- Project 5: A1,5=2
Weight of Goal 1: W1 = 0.7
Goal 2: Fund projects that develop infrastructure
Alignment Scores (on a scale of 1-5):
- Project 1: A2,1=3
- Project 2: A2,2=2
- Project 3: A2,3=4
- Project 4: A2,4=5
- Project 5: A2,5=3
Weight of Goal 2: W2 = 0.3 (less important than Goal 1)
Calculation:
1. Average alignment score for Goal 1: (4+5+3+4+2)/5 = 3.6
2. Average alignment score for Goal 2: (3+2+4+5+3)/5 = 3.4
3. Weighted score for Goal 1: 3.6 × 0.7 = 2.52
4. Weighted score for Goal 2: 3.4 × 0.3 = 1.02
5. SAS for the program: SAS = 2.52 + 1.02 = 3.54
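For illustration, here is a minimal Python sketch of the same calculation (the function name and data layout are ours, not from any particular grants tool):

```python
# SAS sketch: alignment_scores[g][i] holds project i's 1-5 score on goal g;
# weights[g] is goal g's 0-1 importance weight (assumed to sum to 1).
def strategic_alignment_score(alignment_scores, weights):
    sas = 0.0
    for goal_scores, weight in zip(alignment_scores, weights):
        avg = sum(goal_scores) / len(goal_scores)  # average alignment across projects
        sas += weight * avg                        # weight by goal importance
    return sas

# Worked example from above: 0.7 * 3.6 + 0.3 * 3.4 = 3.54
print(strategic_alignment_score([[4, 5, 3, 4, 2], [3, 2, 4, 5, 3]], [0.7, 0.3]))
```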
- Problem-solving capacity (PSC)
Grant programs can encounter a diverse spectrum of "issues," such as lack of coordination, KYC/KYB problems, extractive behaviors, and suboptimal processes, all experienced in a live environment. To thoroughly analyze the effectiveness of a program, the team's or managers' problem-solving capacity should be considered.
The Problem-solving capacity (PSC) formula can help measure how well managing teams respond to issues. To keep it realistic, we focus on available data such as time to resolve issues and reported effectiveness.
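One plausible way to write it, weighting resolution effectiveness by issue complexity and discounting by time to resolve (our assumption from the definitions below; other weightings are possible):

$$\mathrm{PSC} = \frac{1}{I}\sum_{i=1}^{I}\frac{E_i \cdot C_i}{T_i}$$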
Where:
- I = Total number of issues encountered (e.g., system errors, project obstacles, technical challenges).
- Ti = Time taken to resolve issue i (measured in days)
- Ei = Effectiveness score for resolving issue i, typically rated on a scale (e.g., 1-5) based on the improvement or feedback after the issue is resolved.
- Ci = Complexity score of issue i (e.g., on a scale of 1-5).
Incident reports, issue tracking systems (e.g., GitHub, JIRA, Notion, etc.), and team performance reviews can be used as data sources. The resolution time is likely to be available or sampled via feedback forms for grantees. Effectiveness may need to be based on subjective feedback and qualitative reporting.
- Long-term vision and roadmap contribution score (LTRS)
Examining how grant-funded projects contribute to the long-term vision and roadmap of the ecosystem is one of the most important areas to explore. This involves assessing the project's vision, sustainability, potential for long-term impact, and ability to evolve with the ecosystem's needs. Projects that align well with the ecosystem's vision, and that can solve critical problems, innovate, scale, or simply adapt over time, tend to be more valuable.
The Long-Term vision and roadmap contribution score (LTRS) evaluates how well a project or program contributes to the ecosystemâs long-term goals, which requires assessing sustainability, scalability, and flexibility to evolve.
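A sketch of the formula from the definitions below, assuming vision and scalability scores are averaged over every project-goal pair:

$$\mathrm{LTRS} = \frac{1}{I}\sum_{i=1}^{P}\sum_{g=1}^{E}\frac{V_{g,i} + S_{g,i}}{2},\qquad I = P \times E$$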
Where:
- Vgi = Vision alignment score for project i on goal g (1-5 scale).
- Sgi = Scalability potential for project i on goal g (1-5 scale).
- E = Number of long-term ecosystem goals assessed in the grant program.
- I = Total number of project-goal pairs across all projects, i.e., I = P × E
- P = Total number of projects in the grant program.
- Min score=1, Max score=5
- Collaboration index (CI)
Collaboration generally leads to innovation and should be seen as a sign of a healthy ecosystem. The Collaboration index (CI) measures the level of cooperation between different ecosystem participants (partnerships, joint ventures, etc.). It can be defined as:
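Interpreting the definitions below as collaborations per funded project (our assumption):

$$\mathrm{CI} = \frac{N_p + N_j + N_{co}}{P}$$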
Where:
- Np= Number of active partnerships due to the grant program
- Nj= Number of joint ventures due to the grant program
- Nco = Number of cross-organization/chain/protocol collaborations
- P= Total number of grant-funded projects in the program
- C=Total number of collaborations per project
- Min score = 0, no fixed max; any score over 1 is a sign of a healthy grant program
A higher CI suggests a more interconnected and collaborative ecosystem.
- Developer Reputation Score (DRS)
Web3 revolves around reputation, although this aspect is usually not directly discussed in the context of web3 grant programs. The Developer Reputation Score (DRS) can evaluate developers' histories based on their contributions and influence within the ecosystem. This formula combines multiple factors like contributions, peer reviews, and community engagement:
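One plausible form, averaging the three 1-5 factors per developer and then across the N developers (our assumption, which preserves the stated 1-5 range):

$$\mathrm{DRS} = \frac{1}{N}\sum_{i=1}^{N}\frac{C_i + Q_i + D_i}{3}$$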
Where:
- Ci = Number of contributions by developer i (1-5 scale)
- Qi = Quality score of contributions, based on peer reviews, analytics or consensus (1-5 scale)
- Di = Developer's influence (e.g., GitHub stars, forum followers, peer recognition on a 1-5 scale)
- N = Total number of developers being evaluated per program/project
- Min score=1, Max score=5
This score aggregates contribution volume and quality, giving weight to both output and influence. The most accessible and reliable data will likely come from open-source repositories (e.g., GitHub), developer forums, peer reviews, and other metrics such as stars, followers, and reputation on developer platforms. Projects such as Open Source Observer coupled with other tools can help us track developer activity and calculate the DRS.
- Ecosystem loyalty score (ELS)
In web3 there is a phenomenon known as "grant hopping": projects tend to apply to multiple grant programs from different ecosystems solely for the purpose of raising funding. The Ecosystem loyalty score (ELS) evaluates the degree of "loyalty" a project has within an ecosystem and can represent part of a solution to this issue:
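From the definitions below, the natural form is the share of projects funded by a single ecosystem:

$$\mathrm{ELS} = \frac{P_l}{P_l + P_h}$$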
Where:
- ELS = Ecosystem Loyalty Score (between 0 and 1).
- Pl = Number of projects that have received grants from only one ecosystem.
- Ph = Number of projects that have received grants from multiple ecosystems.
- Min score=0, Max score=1
- The closer the score is to 1, the higher the degree of loyalty
Some non-botable metrics (metrics that are hard for bots or bad actors to game) include:
- Community sentiment and trust score (CSTS)
The Community sentiment and trust score (CSTS) attempts to quantify the overall trust and sentiment of the community towards grant-funded projects. Data can be collected from sentiment analysis tools on community forums, social media, and governance platforms. Several platforms can be considered as data sources: X, Discourse, Tally, Karma, etc. This metric is harder to manipulate because it involves parsing through large amounts of diverse, user-generated content.
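Consistent with the weighted-average description that follows, one way to write it is:

$$\mathrm{CSTS} = \frac{\sum_{i=1}^{N} S_i \cdot T_i}{\sum_{i=1}^{N} T_i}$$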
Where:
- Si = Sentiment score of post i (e.g., from sentiment analysis tools, ranging from -1 for negative to +1 for positive)
- Ti = Trust score of post i based on user engagement (e.g., upvotes, likes, or similar metrics)
- N = Total number of analyzed posts or user-generated content per project/program
- Min score=-1, Max score=1
This formula uses a weighted average of community posts and engagement to gauge overall sentiment.
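A small sketch of that engagement-weighted average (sentiment scores would come from an external sentiment-analysis tool; the function name is ours):

```python
def community_sentiment_trust_score(posts: list[tuple[float, float]]) -> float:
    """CSTS sketch: engagement-weighted average sentiment.
    Each post is (sentiment S_i in [-1, 1], trust/engagement T_i >= 0)."""
    total_trust = sum(t for _, t in posts)
    if total_trust == 0:
        return 0.0  # no engagement signal to weight by
    return sum(s * t for s, t in posts) / total_trust

# Two well-engaged positive posts outweigh one negative post -> ~0.51
print(community_sentiment_trust_score([(0.8, 40), (0.5, 25), (-0.6, 10)]))
```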
b) Grant Impact Score (GIS)
To assess the effectiveness and impact of a grant-funded project, we can calculate a Grant Impact Score (GIS) that incorporates innovation, collaboration, developer reputation, and community sentiment:
GIS = (CI × w1) + (DRS × w2) + (CSTS × w3) + (II × w4) + (ELS × w5) + (LTRS × w6) + (Li × w7) + (SAS × w8) + 

Where:
- SAS, CI, DRS, CSTS, and ELS are as defined above
- II = Innovation Index, described in the Innovation and value creation section
- w[1
n] = Weighting factors based on the relative importance of each component (e.g., collaboration, developer reputation, community trust) for the grant program. Program managers and relevant stakeholders would define them.
- Additional metrics can be added here, and interpretation depends on the weighting factors
- The scale is optional; it can be normalized to a scale of 0 to 5 or a different one
This composite score gives an overall view of the project's impact and effectiveness, balancing collaboration, developer reputation, and community trust.
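Because the weights are program-specific, the GIS can be expressed as a generic weighted sum. A sketch with entirely hypothetical component scores and weights:

```python
def grant_impact_score(metrics: dict[str, float],
                       weights: dict[str, float]) -> float:
    """GIS sketch: weighted sum of any subset of the component
    metrics (CI, DRS, CSTS, II, ELS, LTRS, SAS, ...). Weights are
    set by program managers and stakeholders."""
    return sum(metrics[name] * weights.get(name, 0.0) for name in metrics)

# Hypothetical component scores and stakeholder-chosen weights
scores = {"CI": 1.3, "DRS": 3.5, "CSTS": 0.6, "II": 2.8, "ELS": 0.75}
weights = {"CI": 1.0, "DRS": 0.5, "CSTS": 2.0, "II": 1.0, "ELS": 1.5}
print(grant_impact_score(scores, weights))
```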
2. Ecosystem growth, community development and sustainable impact
Defining and measuring how the grant program contributes to the long-term growth and sustainability of the Web3 ecosystem should be one of the priorities in grant programs.
The list below captures broad formulas that can be used to evaluate a grant program's effectiveness in terms of ecosystem growth. It is important to mention that each formula can and should be adapted to the scope and timeline of the grant program analyzed. "Users" is a broad term and can refer to: dapp/protocol users, lenders/borrowers, contributors, marketers, developers, community members, artists, people onboarded, etc.
- User activity and engagement depth (UAED)
Grants are growth tools, and growth involves increased user activity. This metric broadly helps us calculate the average number of meaningful interactions (e.g., transactions, votes, contributions) per daily active user:
UAED = Σg (Wg × MIUg × Ng) / DAU
Where:
- DAU = The total number of daily active users, i.e., the sum of users across all groups g
- MIUg = Meaningful interactions per user in group g (as defined by the grant program)
- G = The total number of groups or clusters of users; each group has similar activity patterns or engagement levels
- Wg = The weight assigned to interactions by users in group g (if needed, to represent the importance of the group's engagement)
- Ng = The number of users in group g
- Min score = 0, Max score = N
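A sketch of the group-weighted UAED calculation, assuming UAED = Σg (Wg × MIUg × Ng) / DAU as reconstructed from the definitions above:

```python
def uaed(groups: list[tuple[float, float, int]]) -> float:
    """UAED sketch: weighted meaningful interactions per daily active
    user. Each group is (weight W_g, interactions-per-user MIU_g,
    users N_g); DAU is the sum of users across all groups."""
    dau = sum(n for _, _, n in groups)
    if dau == 0:
        raise ValueError("no active users")
    weighted = sum(w * miu * n for w, miu, n in groups)
    return weighted / dau

# Power users (weight 2, 10 interactions/day) vs casual users -> 3.8
print(uaed([(2.0, 10.0, 100), (1.0, 2.0, 900)]))
```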
- Community sentiment score (CSS)
This is a holistic metric that can be complex to quantify and collect data for; the assumption is that stakeholders such as delegates, grantees, and ecosystem contributors can help define it. Surveys and polls may help clarify it further, and community forums and social media platforms could also be considered as data sources.
CSS = (Ps − Ns) / T
Where:
- Ps = Number of positive-sentiment mentions
- Ns = Number of negative-sentiment mentions
- T = Total mentions, i.e., the total number of opinions collected from multiple sources: contributors, surveys, forums, comments, etc.
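A one-function sketch of the CSS, assuming neutral mentions count toward T but not the numerator:

```python
def community_sentiment_score(positive: int, negative: int, total: int) -> float:
    """CSS = (Ps - Ns) / T: net sentiment over all collected mentions."""
    if total == 0:
        raise ValueError("no mentions collected")
    return (positive - negative) / total

# 120 positive, 30 negative out of 200 total mentions -> 0.45
print(community_sentiment_score(120, 30, 200))
```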
- Ecosystem integration index (EII)
This describes the integrations achieved by grant-funded projects with other ecosystem projects: the total number of integrations with other projects, protocols, initiatives, etc. achieved during or after the grant program.
Where
- Ii = Number of integrations achieved by project i with other ecosystem projects, protocols, initiatives, etc. (an integer)
- Wi = Weight or significance of the integrations achieved by project i (a score from 1 to 5, representing the impact or relevance of the integration)
- P = Total number of grant-funded projects in the ecosystem that are being assessed
- T = Total number of possible integrations (this could be a benchmark or an estimated number of integrations expected within the ecosystem)
- A value below 1 is a sign of a healthy grant program; normalization is difficult because T is not standardized
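Because the formula itself is not given above, the sketch below encodes one plausible reading of the definitions: weighted integrations averaged per project, normalized by the benchmark T. This is an assumption on our part, not a canonical form:

```python
def ecosystem_integration_index(integrations: list[tuple[int, float]],
                                total_projects: int,
                                possible_integrations: float) -> float:
    """EII sketch: sum of I_i * W_i over projects, averaged per
    project (P) and normalized by the benchmark T of possible
    integrations. Values below 1 read as healthy per the text above."""
    if total_projects == 0 or possible_integrations == 0:
        raise ValueError("total_projects and possible_integrations must be > 0")
    weighted = sum(count * weight for count, weight in integrations)
    return (weighted / total_projects) / possible_integrations

# 3 projects, benchmark of 4 possible integrations each -> ~0.92
print(ecosystem_integration_index([(2, 3.0), (1, 5.0), (0, 0.0)], 3, 4))
```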
- Retention and churn analysis (RCA)
Measures how many users remain active over time using cohort analysis. Can be applied per project or per grant program.
RCA = (Cohort of users active after the program / Cohort of users active before the program) × 100
Tools such as Dune and other user data tracking tools can be used for data collection.
- Developer contribution quality index (DCQI)
Measures the quality of code contributions on GitHub using code reviews, bug reports, and test coverage per project. The DCQI should be computed before and after the grant program and compared.
DCQI = ((PCR − BR) / TC) × 100
Where:
- PCR - Positive Code Reviews = The number of code reviews with positive feedback per program/project
- BR - Bug Reports = The number of bug reports generated from the contributions per program/project
- TC = The total number of contributions made by developers (e.g., commits, pull requests) per program/project.
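A direct translation of the DCQI formula, with a hypothetical before/after comparison as recommended above:

```python
def dcqi(positive_reviews: int, bug_reports: int, total_contributions: int) -> float:
    """DCQI = ((PCR - BR) / TC) * 100, per program or project."""
    if total_contributions == 0:
        raise ValueError("no contributions to evaluate")
    return (positive_reviews - bug_reports) / total_contributions * 100

# Hypothetical numbers: quality improved after the grant program
before = dcqi(positive_reviews=40, bug_reports=15, total_contributions=120)
after = dcqi(positive_reviews=85, bug_reports=20, total_contributions=200)
print(f"DCQI before: {before:.1f}, after: {after:.1f}")
```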
- Community engagement rate (CER)
Percentage of new users actively participating in community governance and forums. The CER should be computed before and after the grant program and the results should be compared.
CER = (ACP / TU) × 100
Where:
- ACP - Active Community Participants = The number of active community participants involved in governance, forums, etc. after the grant program
- TU = The total number of users
Tally, Karma, Snapshot, and forum APIs are the required data sources.
- Ecosystem diversity index (EDI)
Ratio of unique project categories funded compared to the total existing categories, indicating diversity. Categories can be new verticals, or new domains within a vertical; for example, within the DeFi vertical we can have multiple domains (social-fi, meme-fi, etc.), while ReFi would be considered a new vertical.
EDI = UPCF / TCP
Where:
- TCP - Total Project Categories = The total number of project categories that exist or are recognized within the ecosystem prior to the grant program.
- UPCF - Unique Project Categories Funded = The number of distinct project categories that have received funding. This includes all unique categories that projects fall into within the grant program.
- A ratio closer to 0 indicates that the grant program brought in few new domains or verticals.
- Target user (developer, client, builder, etc.) onboarding rate (TUOR)
Percentage of new target users entering the ecosystem through grant-funded projects. Target users are defined by the grant program's goal; they could be developers/builders, users, marketers, artists, degens, regens, etc.
TUOR = (NU / T) × 100
Where:
- NU - New Target Users in Ecosystem = The number of new users (developers, builders, users, marketers, artists, etc.) that have joined the ecosystem through grant-funded projects
- T - Total Target Users Pre-Grant = The number of target users in the ecosystem before the grant program started
- Min score = 0% (if no new target users have entered the ecosystem).
- There is no hard upper bound: scores above 100% mean more new users joined than existed in the initial user base.
3. Innovation and value creation
The objective of this section is to assess the extent to which Web3 grant programs stimulate innovation and innovative solutions and generate tangible value within the ecosystem. Key indicators such as innovation, economic impact, and cross-protocol synergies will be evaluated. As in the sections above, the indicators and formulas below can be derived and adapted to a specific grant program's mission, timelines, and context:
a) Innovation Index (II)
To measure innovation, we will focus on available data such as technological breakthroughs (based on publications or major releases), patents filed, and the introduction of novel protocols or use cases within the ecosystem.
Where:
- Pi = Number of innovations, technological breakthroughs, patents or intellectual property filings by grantee i, normalized on a scale (e.g., 0-1 if binary data, or 1-5 if more qualitative)
- Ti = Technological breakthroughs achieved by project i, based on peer reviews, expert assessments, protocol releases, new dapps, use cases, etc. (on a scale of 1-5)
- N = Total number of projects evaluated for innovation.
- Min score=1, Max score=5
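The aggregation rule for the II is left open above, so the sketch below assumes an equal-weight average of Pi and Ti per project, averaged across N projects (our assumption; it keeps the score on a 1-5 range when both inputs use 1-5 scales):

```python
def innovation_index(projects: list[tuple[float, float]]) -> float:
    """II sketch: each project is (P_i, T_i); average the two factors
    per project, then average across all N projects."""
    if not projects:
        raise ValueError("no projects to evaluate")
    per_project = [(p + t) / 2 for p, t in projects]
    return sum(per_project) / len(per_project)

# Three projects with varying innovation profiles -> II ~ 2.83
print(innovation_index([(4.0, 5.0), (2.0, 3.0), (1.0, 2.0)]))
```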
b) Economic Impact (EI)
Economic impact is a complex metric to analyze as it involves multiple sub-metrics. It is easiest to quantify based on revenue, user growth, transaction volume, and liquidity, as this data is usually available from multiple sources (project reporting, blockchain analytics platforms, and market cap aggregators). The formula below should be considered a multi-dimensional approach to quantifying economic impact.
EI = ((ΔR + ΔU + ΔTV + ΔL + ΔM) / Ti) × Fs × 5
Where:
- ΔR = Percentage change in revenue or economic value created by the project
- ΔU = User growth rate over time (e.g., percentage increase in users)
- ΔTV = Change in transaction volume contributed by the grantees (in terms of on-chain activity)
- ΔL = Change in liquidity added to the ecosystem by the grantees
- ΔM = Market capitalization growth of the project or related tokens
- Fs = Sustainability factor that adds weight to projects showing sustained growth over time: Fs = 1 for projects with short-term impact and Fs > 1 for those that maintain or grow over longer periods. This rewards projects that contribute long-term stability rather than short-term spikes.
- Ti = Total number of sub-metrics used
- Min score=0, Max score=5
Revenue and user growth can be found in project reports, user analytics tools, or tokenomics data. Transaction volume and liquidity data can be retrieved from blockchain explorers, DeFi platforms, or public blockchain metrics. Market cap data is available from aggregators such as CoinGecko or CoinMarketCap.
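A sketch of the EI formula, assuming the Δ terms are expressed as fractional changes (0.2 = +20%) and clamping the result to the stated 0-5 range (both are assumptions on our part):

```python
def economic_impact(deltas: list[float], sustainability: float = 1.0) -> float:
    """EI = (sum of deltas / Ti) * Fs * 5, where Ti is the number of
    sub-metrics actually used and Fs >= 1 rewards sustained growth."""
    if not deltas:
        raise ValueError("at least one sub-metric required")
    raw = sum(deltas) / len(deltas) * sustainability * 5
    return min(max(raw, 0.0), 5.0)  # clamp to the stated 0-5 range

# Revenue +20%, users +50%, volume +10%; sustained-growth factor 1.2
print(economic_impact([0.2, 0.5, 0.1], sustainability=1.2))
```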
d) Important successes/Total projects ratio (ISR)
This metric measures the effectiveness of a grant program by analyzing the ratio of high-success projects to total funded projects. âSuccessâ is based on objective (e.g., revenue generation, user growth) and subjective (e.g., community impact, innovation) measures.
ISR = S / T
Where:
- S = Number of âhigh-successâ projects, based on both objective (e.g., revenue, users, technical advancements, partnerships, etc) and subjective criteria (e.g., expert assessment, community feedback).
- T = Total number of grant-funded projects.
- A common rule of thumb is that if at least 5-10% of the grantees are considered high successes, the program is deemed effective.
- Min score=0, Max score=1
Success criteria can be derived from project performance reports, revenue data, user metrics, or subjective assessments from community feedback or expert panels. Total number of projects is easily available from the grant programâs records.
e) Comprehensive ecosystem innovation score (CECS)
To evaluate a projectâs overall contribution to innovation, we can combine the previous metrics into a weighted score based on the importance of innovation, economic impact, network effects, and success ratio.
CECS = (II × w1) + (EI × w2) + (NEXS × w3) + (ISR × w4)
Where:
- II, EI, NEXS, and ISR are the previously defined metrics.
- w1, w2, w3, and w4 are weighting factors that reflect the relative importance of each metric in evaluating a project's success and contribution to the ecosystem.
- The scale is optional; it can be normalized to a scale of 0 to 5 or a different one.
Each component can use data from public project reports, blockchain analytics, collaboration records, and ecosystem engagement. Weightings can be adjusted depending on the priorities of the grant program or the ecosystem.
Innovation is a key driver of growth and evolution in the Web3 ecosystem. Grant programs that successfully fund innovative projects can ensure continuous advancement, addressing both emerging challenges and opportunities. By evaluating the effectiveness of these programs in fostering innovation and economic growth, the study aims to provide a comprehensive view of their long-term value.
Examples of interpretations for the metrics above:
- A project with a high Innovation Index (II) but a low Economic Impact (EI) score could indicate that the project is potentially groundbreaking but has yet to achieve commercial success or adoption. This could be a sign of high potential if properly nurtured.
- If a grant program has a high ISR, this indicates that it is funding projects with notable successes, justifying continued investment into the grant pool. Conversely, a low ISR suggests many funded projects are failing to achieve their intended outcomes.
For most grant programs, gathering comprehensive data into a single source of truth may present challenges. Adaptations to data collection methods and formulas should be made based on the maturity and scope of the grant program.
The aspect of maturity in grant programs
When analyzing program effectiveness, it is best to consider how "mature" a grant program is before attempting to derive proper KPIs/metrics/formulas. We recommend increasing the complexity of the metrics used in direct proportion to the maturity level. Generally, programs can be categorized into three maturity stages, and it is important to consider these stages when deriving metrics:
- Pilot Programs or early experiments
Examples: ThankARB, Polygon's Quadratic Accelerator, and StarkNet's Seed Grant Program. These are in their first or second iterations.
- Programs Reaching a Certain Level of Maturity
Examples: ENS and Octant, which are in their third or fourth iterations.
- Mature Programs:
Examples: Aave Grants DAO, Optimism's RPGF (Retroactive Public Goods Funding), and Gitcoin Rounds (GG1-GG21), which have undergone five or more iterations.
3. Maturity Framework:
Categories for Indicators
- Program stability and growth: Indicators that measure the longevity, expansion, and adaptability of the program.
- Governance and decision-making: Indicators that assess the structure, transparency, and inclusivity of decision-making processes.
- Support for grantees: Indicators that reflect the various types of support provided to grantees, both financial and non-financial.
- Operational efficiency and sophistication: Indicators that evaluate the effectiveness and sophistication of the programâs processes and operations.
- Community and impact: Indicators that measure the programâs engagement with the community and its broader impact on the ecosystem.
The indicators in each category have different weights, with our recommended indicator weights in the Refined Formula subsheet. We've defined each indicator with a score ranging from 1 to 4 where applicable. To calculate your program's maturity score, multiply each indicator's weight by its score and sum the results.
As a scoring design principle, we don't recommend changing the weights; but if you feel they are inapplicable to you and you do change the indicator weights, make sure the weights still differ across indicators. Some indicators (those with higher weights like 2 or 3) should have a more significant impact on the overall score, so reflect this in the calculations.
Indicator Scoring Logic
- Basic/Entry Level: Indicates minimal maturity where the program is either new, inexperienced, or lacks robust systems.
- Developing: Represents moderate maturity with some operational processes and governance in place, but with room for growth.
- Mature: Demonstrates a mature program, where systems, decision-making, and outputs are well-developed and effective.
- Highly Sophisticated: Reflects a fully optimized and sophisticated program that excels in multiple dimensions, often leading the industry.
How to Use This Framework in Scoring
- Assess each program indicator: For each indicator (e.g., Time frame, Grantees, Demand for program, etc.), assign a score (1 to 4) based on the program's current state.
- Multiply each score by its weight: For example, for "Demand for program," if the score is 3 (because the program has 200 applicants) and the weight is 2, the weighted score is 3 × 2 = 6.
- Sum the weighted scores: Once you've assigned scores to all indicators, multiply each by its respective weight, then sum them to get the total maturity score.
Example Calculation
Let's say a program has the following characteristics:
- Time frame: 3+ years → Score = 3 (Weight = 1) → Weighted score = 3 × 1 = 3
- Grantees: 150+ → Score = 3 (Weight = 1) → Weighted score = 3 × 1 = 3
- Demand for program: 200 applicants → Score = 3 (Weight = 2) → Weighted score = 3 × 2 = 6
- Program Review/Audit: Community feedback → Score = 3 (Weight = 2) → Weighted score = 3 × 2 = 6
- Non-financial support: Mentorship and guidance → Score = 3 (Weight = 2) → Weighted score = 3 × 2 = 6
Total score for these indicators: 3 + 3 + 6 + 6 + 6 = 24
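The same calculation expressed as a small Python sketch (the indicator names and weights mirror the worked example above, not the full Refined Formula subsheet):

```python
def maturity_score(indicators: dict[str, tuple[int, int]]) -> int:
    """Maturity score: sum of (indicator score 1-4) x (indicator weight)."""
    return sum(score * weight for score, weight in indicators.values())

example = {
    "Time frame":            (3, 1),
    "Grantees":              (3, 1),
    "Demand for program":    (3, 2),
    "Program Review/Audit":  (3, 2),
    "Non-financial support": (3, 2),
}
print(maturity_score(example))  # -> 24, matching the example above
```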
Indicators
Indicator | Definition |
---|---|
Time frame each program has been in existence | Measures the duration of the grant program's operation, reflecting its stability and ability to evolve over time. |
Team maturity | Refers to the experience, expertise, and stability of the team managing the grant program. This accounts for the fact that growing programs and application intakes will require bigger teams who take responsibility for comms, marketing, and community efforts, grantee support, application processes, and events. |
Mode and process of decision-making | Indicates the decision-making processes used by the grant program, focusing on inclusivity, transparency, and community involvement. |
Grant Sizes | Reflects the range and scale of the financial grants offered by the program, indicating its capacity to support various project scopes. |
Process sophistication (how are GOs tracking grantees) | Measures the sophistication of the processes used by the grant operator (GO) to track and monitor granteesâ progress. |
Program Review /Audit | Assesses the extent and quality of reviews and audits conducted on the grant program to ensure accountability and continuous improvement. |
Speed of changes and implementation | Measures how quickly the grant program can adapt to changes and implement new processes or improvements. |
Grantees | Evaluates the number of grantees and grant applications supported by the program, reflecting its reach and impact. |
Success stories | Assesses the long-term success of grantees, particularly in their ability to become financially sustainable and contribute to the ecosystem. The ultimate marker of a success story is a grantee project that has grown to develop its own grants program. |
Non-financial systems of support for grantees | The availability and quality of non-financial support systems provided to grantees, such as mentorship, marketing, and technical assistance. |
The stage of operations | Reflects the complexity and variety of the grant programâs operations, including the diversity of tracks and categories. |
Resources needed to run the program | Assesses the resources required to effectively run the grant program, including human resources, advisory councils, and other support structures. |
Accessibility and playbooks for programs | The availability and quality of playbooks, best practices, and resources provided to applicants and grantees. |
Output and outcome measurement | The extent and sophistication of measuring the outputs and outcomes of the grant program effectively. |
Transparency, Learnings and Reporting | The transparency of the grant programâs processes, the effectiveness of sharing learnings, and the thoroughness of reporting practices. |
Program Design | The structure and inclusiveness of the program design, including the involvement of different stakeholders in the design process. |
Demand for program (Applicants) | Measures the level of interest and demand for the grant program, indicated by the number of applications received. |
Community | Assesses the extent of community involvement and support in the grant program, including how the program engages and integrates community feedback. |
Maturity Score Results
Maturity Score Range | Maturity Level | Program Characteristics | Recommendations |
---|---|---|---|
0 - 20 | Emerging (Level 1) | Program is in its early stages; limited or no systems in place; minimal support for grantees; no transparency or only basic reporting. | Focus on building basic systems for program management and support; define roles within the team and add a structured decision-making process; start collecting data for basic reporting and transparency; expand grantee support with non-financial assistance (e.g., marketing, mentorship). |
21 - 40 | Developing (Level 2) | Program is maturing but still has room for growth; some structures in place for governance, decision-making, and grantee support; basic processes for reviews and audits exist; limited non-financial support for grantees. | Strengthen decision-making processes by incorporating more feedback loops (team + community); increase transparency in program reporting and decision-making; improve non-financial support for grantees, such as mentorship or community support; streamline resource management and program scalability. |
41 - 60 | Mature (Level 3) | Well-established program with strong governance and decision-making systems; a history of sustained operations; grantees supported both financially and non-financially; moderate to high transparency in program operations. | Refine the decision-making process by including external, neutral audits or community feedback; enhance grantee support by formalizing mentorship programs or establishing accelerator-like structures; measure outcomes more comprehensively to assess the program's true impact; leverage the maturity to expand the program (e.g., more tracks or categories for funding). |
61 - 80 | Sophisticated (Level 4) | Highly sophisticated program with robust processes in all areas (governance, support, transparency); operates across multiple tracks or categories; strong financial and non-financial support for grantees, including access to accelerators, mentorship, and comprehensive systems; highly transparent, with detailed reporting and outcome measurement. | Continue to innovate by implementing cutting-edge technologies or methodologies to improve operational efficiency; build global partnerships and networks to increase program reach and impact; keep iterating based on grantee and community feedback loops; push for stronger scalability while maintaining quality. |
81 - 100 | Leader/Exemplar (Level 5) | Leading-edge program that sets standards in governance, transparency, and support; highly transparent and fully accountable; provides extensive support, helping grantees grow into funders themselves or start grant programs; operates in a highly decentralized, community-driven model. | Use your status as a leader to guide and mentor other programs; share best practices with other organizations looking to improve their grant programs; maintain quality while pushing boundaries in innovation and program expansion; continuously assess and adapt to the changing needs of grantees and the broader community. |
Measuring Over Time
The score can be used as a way to compare and analyze performance over time. We recommend keeping the same indicator weights when comparing a program over a period of time.
When using the framework, create a separate table analyzing the scores with reasoning about what went well and what didn't. After scoring, this subsheet can be used to retrospect on the program and to create an action plan for improving it.
We caution against a "vanity" score, i.e., the bias to score high despite a lower actual indicator score. To reduce bias, we recommend that scoring (with the same indicator weights) be done by several different stakeholders in the program, including trusted grantees, operators, the core team, and the community. Compare and average the scores, with analysis of the reasoning behind each.
To improve the indicator scores, we recommend creating an action plan that tackles the root cause of each lower score. An honest, even brutal, assessment ultimately allows for a more effective review and improvement plan.
4. Impact Metrics
Tackling impact, and understanding and finalizing the right impact metrics, is one of the toughest tasks a grant operator can undertake. No one has cracked the answer to this, but in this study we've attempted to tackle it in a holistic way. Relying solely on quantitative or qualitative data leads to one-dimensional reporting and a very limited view of the program's impact and of its end-user beneficiaries (i.e., people and projects benefiting from the grantee's work). A challenge with quantitative metrics is that they can be gamed through measures that include Sybil attacks, bots, and fraud. A project that aims to tackle this is Impact Garden by Metrics Garden (Impact Garden's mission is standardized data generation, specifically for qualitative data).
A challenge with qualitative data is the lack of reliability and the lack of standards in collecting, reporting, and verifying this data.
There is a further issue with vanity metrics, i.e., metrics that don't serve any purpose other than to paint the program in a positive light and give the illusion of progress or impact. This can be severely harmful to a program's growth.
With that said, below we've listed a comprehensive set of metrics in different categories and areas that can be useful to grant programs.
We'd like to point out the need for a standard set of metrics that can be used at large by all programs, which would allow for the best comparisons.
Another important consideration is that grant programs can also have extremely positive-sum side effects (accomplishments) created directly or indirectly by the grant program, which cannot be accurately quantified solely by impact metrics.
Layer-2 / Blockchain Metrics
Metric | Definition |
---|---|
TVL | Total Value Locked |
Wallets | Wallets created on the L2 through grantee projects |
Users | Daily Active Users (DAU) |
Weekly Active Users | |
Monthly Active Users | |
Daily New Users | |
Weekly New Users | |
Monthly New Users | |
Meaningful Active Users (MAU) | |
User activity and engagement depth (UAED) | |
Retention and churn analysis (RCA) | |
Target user onboarding rate (TUOR) | |
Transactions | Number of transactions that take place on the L2 |
Weekly Transactions | |
Monthly Transactions | |
Revenue | Daily Network Revenue ($USD) |
Monthly Network Revenue ($USD) | |
Weekly Network Profit ($USD) | |
Monthly Network Profit ($USD) | |
Profit: L2 Transaction Fees | |
Community
Metric | Definition |
---|---|
Community Size | Number of people in the community |
Active Community Members | Active community members, as defined by the grant program's objective |
Community Retention | Defined in alignment with the grant program's objectives |
Net Community Score | Number of people who feel connected as part of the community |
Community Sentiment and Trust Score (CSTS) | The overall trust and sentiment of the community towards grant-funded projects. |
Community Sentiment Score (CSS) | Attempts to quantify the community's perception of certain initiatives |
Community Engagement Rate (CER) | Percentage of new users actively participating in community governance and forums. |
Marketing
Metric | Specifics |
---|---|
Followers | Number of followers over a period of time on a platform |
Impressions | Number of views on a post |
Subscribers | Number of subscribers to a newsletter |
CTR (Click-through Rate) | The percentage of clicks on a piece of content |
Engagement | Likes, replies, comments, bookmarks |
Time Spent | Duration spent engaging with content |
Conversion rate | Total number of sign-ups for or interactions with the program versus the total reach |
Community Engagement | Number of community members that post about the program |
Events
Metric | Definition |
---|---|
Signups | Number of signups to the event |
Confirmed Attendees | Confirmed number of attendees to an event |
POAPs | Number of POAPs minted during an event |
Neutral Attendees | Number of attendees who are not in the same social graph as the organizers |
Onchain activity | Spikes in on-chain activity directly linked to the event |
Grant Program
Metric | Definition |
---|---|
Total Grant Applications Received | Total number of grant applications received by the program |
Total Applications Approved | Total number of grants given out by the program |
Completed Projects | Grantees who successfully completed the project they received a grant for |
Sustainable Projects | Grantees who have been able to achieve financial and project sustainability after receiving a grant |
Innovation Index (II) | Attempts to measure innovation, focusing on official and unofficial technological breakthroughs of a grantee |
Important successes/Total projects ratio (ISR) | Measures the effectiveness of a grant program by analyzing the ratio of high-success projects to total funded projects. |
Ecosystem diversity index (EDI) | Ratio of unique project categories funded compared to total existing categories, indicating diversity. |
Collaboration Index (CI) | Measures the level of cooperation between ecosystem participants; collaboration generally leads to innovation and is a sign of a healthy ecosystem |
Qualitative Metrics
Metric | Definition |
---|---|
Grant Impact Score (GIS) | Incorporates innovation, collaboration, developer reputation, community sentiment, and other metrics into a single score. |
Developer Reputation Score (DRS) | Evaluates a developer's history based on their contributions and influence within the ecosystem. |
Most Significant Change (MSC) as a result of the project | A 10-step process for ongoing monitoring and for evaluation purposes |
Team reputation | Public speaking and community engagements, press features and mentions, public-facing writing, and social media mentions by people of notability. |
Long-term vision and roadmap contribution score (LTRS) | Evaluates how well a project or program contributes to the ecosystem's long-term goals, which requires assessing sustainability, scalability, and flexibility to evolve. |
Effect on end-users' web3 trajectory | Measures the effect of the grant in the form of onboarding new users into the industry and ecosystem. |
Strategic Alignment Score (SAS) | Quantifies how well projects align with the strategic goals of the ecosystem that funded them. |
Ecosystem Loyalty Score (ELS) | Evaluates the degree of "loyalty" a project has within the ecosystem that funded it. |
5. Integration, Refinement & Implementation
Our proposal for integration with RFP1 is to solve for reputation and ranking from both a grantee and a grants-program perspective. For example, tiered ranking where grantees who've received grants are given categorized questions to give feedback on different components of the grants program.
The feedback and answers can be used to score the program on a scale, and can be used comparatively, keeping the Maturity Framework in mind.
An example of this would be a survey with questions about the resources given to grantees (e.g., marketing and amplification support, partnership support) and questions assessing the level of that support. Conversely, grant program managers can assess certain qualities of their grantees via surveys, proof of work, or other feedback-gathering mechanisms.
This could be used comparatively for ranking grant programs across industries on one index.
Note: This section is not final as we are researching multiple implementation options.
We would like to extend our deepest gratitude to the following interviewees, who have been gracious and generous with their time and contributions to enhance the study. We'd also like to thank the numerous reviewers who shared critical feedback and helped refine this report.
List of Interviewees
- Sov, Gitcoin and the Cartographer Syndicate @Sov
- Eugene Leventhal, Metagov
- Griff Green, Giveth @Griff
- Mahesh Murthy, Karma GAP @mmurthy
- 0xbill, Aave Grants DAO (AGD)
- Gonna, Grant Council Lead, Optimism
- DisruptionJoe, Thrive and ThankARB (Arbitrum DAO) @DisruptionJoe