Mid-program Retro
Summary
We are very excited to have reached a pivotal moment in the pluralistic grants program, Thank ARB Milestone 1(b). To date, we have approved 8 grant programs through our RFP process and 3 grant projects via our Firestarters program, sourced from over 200 applications. We have also taken significant steps towards decentralizing the review process, with 50+ reviewers from the Arbitrum community.
On a personal note, I want to take a moment to say how proud I am to be part of this extended community that genuinely values decentralization and is excited to explore innovative mechanisms for allocating funds. This grants program is attracting a wide range of visionary ideas, and it was truly painful not to be able to fund more of them. With so many fantastic ideas coming forward, trusting the community to assess which projects to fund felt like a leap of faith at times, but an important one.
In truth, I might personally have selected one or two different proposals that I was excited by, but that is the beautiful thing about harnessing the wisdom of the crowd: it's not the views of any one person that matter, it's the collective wisdom. At the end of the day, all the shortlisted projects could have done amazing things, and I hope we are able to fund many more similar projects, inspired by this process, in the months ahead.
This document outlines key learnings from the M1b application and review process, which ran from April to June 2024. Where possible, some of these takeaways have already been applied, while others are highlighted for future implementation. Please share your thoughts and feedback in the comments section below. Building in public is important and we welcome your ideas and observations.
RFP overview
We published an open call for applicants on the Arbitrum DAO Forum and Twitter, looking for grant program ideas. We focused on funding a plurality of allocation methodologies to test with the Arbitrum community.
Program idea applications
- These applications were screened by the Thank ARB team.
- We received 120 program idea applications, of which 64 (53%) passed screening.
- 26 of the 64 (41%) scored high enough on our rubric to be considered strong applicants.

After deliberating on harder-to-quantify criteria, such as risk of failure, and conducting applicant background checks, we selected 22 applicants to receive a 1,000 ARB planning grant to refine their proposals before we made a final selection.
Planning grants
In contrast to M1a, M1b piloted planning grants as part of the program applicant cycle. The thinking behind this draws on best practices established in the traditional grants industry, where strong candidates are paid to develop their idea in alignment with the goals of the program. By compensating applicants to put time and resources into designing a good program, we have a stronger framework for making future “cut, coach, grow” decisions.
We gave out 22 planning grants and hosted two workshops geared towards helping applicants refine their allocation mechanisms, metrics, and overall execution plans. We received overwhelmingly positive feedback: applicants found the process both helpful and educational, including those who weren't selected for final funding.
Program manager applications
Note: Originally, these applications were intended to fill gaps where programs were missing a manager. Because all shortlisted programs already had a manager, we decided to a) reserve shortlisted managers for future consideration and b) onboard very strong applicants as Tier 1 or 2 reviewers (see below for details).
- These applications were screened by the Thank ARB team.
- We received 62 program manager applications, of which 49 (79%) passed screening.
- 44 (90%) of these were approved as qualified candidates to run a grants program.
- 23 of the 44 (52%) scored 5 out of 5, marking them as very strong applicants according to our rubric.
Final program applications
- Each application was reviewed by a minimum of 5 reviewers from a pool of 18 external reviewers sourced from the Arbitrum community; that pool has since grown to over 40 reviewers as a result of our post on the Arbitrum DAO Forum.
- Successful applicants had to score a minimum of 7/12 using our adapted Likert rubric and receive at least 3/5 “approve” votes (see details below on review methodology).
- 8 programs were selected for funding – see Appendix A for a summary of each.
Allocation mechanisms in final program applications (link to dashboard).
General takeaways and things to do differently
- Better communication around the purpose of M1b. Many applicants were clearly seeking seed funding for individual projects, so they framed themselves as grant programs. Although some of these were very strong proposals, they could not be considered for funding. This highlighted the need for a general catch-all grant program to fund individual projects adding value to the Arbitrum ecosystem.
Going forward, there should be clearer communication around what constitutes a project versus a program, with this criterion included at the screening stage. We redirected many of these applicants to the Firestarters program so they could access funding to develop mechanisms that could be of service to the DAO community.
- Greater transparency around the entire application and review process.
We received feedback that too little of the review process was available for public scrutiny early in the rollout. Several applicants requested a summary of why they were not selected; we responded by providing summaries, but we could have been more proactive.
We are now testing the following changes aimed at improving transparency:
- All applications that pass screening are made publicly viewable and open for commentary (with personal identifying information removed). See shortlisted programs and Firestarters as examples. Applicants are told to expect this when submitting their applications.
- While we decided it was best not to share reviewer scores or comments, to prevent personal attacks, bribery, or other shenanigans, we have implemented AI-generated summaries that will be shared with all applicants who pass screening. This process is being automated.
- Refined and targeted application questions. In the future, we recommend including the following questions at the program idea sourcing stage:
- Demographics, plus more specific questions about whether the applicant has run a program or grant-funded project before.
- A drop-down list of allocation mechanisms so applicants can self-identify, enabling easier sorting.
- Budgets, which would be helpful to know as early as stage 1 so we can work with applicants on any issues during the Planning Grants stage.
- Better rubric scoring. A -3 to +3 rubric score is more effective than a 1-5 Likert scale. We're not just interested in positive measures of popularity (i.e. which applicant got the highest score): negative numbers indicate varying degrees of disagreement and positive numbers varying degrees of agreement, avoiding middle-value ambiguity. This change in rubric scoring was implemented for the final program reviews and Firestarters.
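To make the scale concrete, here is a minimal sketch of how such a rubric might be encoded. The four-prompt assumption (giving totals from -12 to +12, which lines up with the 7-out-of-12 threshold described below) and the omission of a neutral 0 are illustrative assumptions, not the exact M1b rubric.

```python
# Illustrative encoding of a -3..+3 agreement scale (assumptions, not the
# exact M1b rubric): four prompts, so totals range from -12 to +12, which
# lines up with the "7 out of 12" funding threshold described below.
SCALE = {
    -3: "strongly disagree",
    -2: "disagree",
    -1: "slightly disagree",
    # 0 is omitted here to avoid an ambiguous middle value (an assumption;
    # the actual rubric may allow a neutral score).
    1: "slightly agree",
    2: "agree",
    3: "strongly agree",
}

def rubric_total(prompt_scores: list[int]) -> int:
    """Sum a reviewer's per-prompt scores into a single rubric total."""
    assert all(s in SCALE for s in prompt_scores), "score outside the scale"
    return sum(prompt_scores)

print(rubric_total([3, 2, 1, 3]))  # -> 9, out of a possible 12
```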
Review process & methodology
Stage 1: Screening
The purpose of screening is to cut out obviously unsuitable applications, such as spam (the applicant has made no effort whatsoever to provide a comprehensible program idea) and ChatGPT junk (the application is not only poor but appears to have been generated entirely by AI). For this stage of the review process, we felt it was reasonable to optimize for speed over plurality of decision-making, so we hired an external contractor with extensive experience as a Gitcoin grants reviewer.
Stage 2: Community reviews
Types of reviewers:
- Tier 1 reviewers are paid 100 ARB per review. These are people with deep knowledge of the Arbitrum ecosystem. They have high context and expertise in specific verticals such as web3 gaming, RWAs, oracle technology, etc. Compensation is akin to hiring a talented plumber – we are paying for quality of the task completed, not the number of hours worked.
- Tier 2 reviewers are paid 10 ARB per review. These are people with sufficient context and skill, but who may lack expertise in the Arbitrum ecosystem and/or specific verticals.
- Note: In the future, we will pay Tier 3 reviewers 1 ARB per review for initial screening.
Reviewer tasks:
- Score each applicant (-3 to +3) and provide specific comments on each rubric prompt
- Answer “yes” or “no” for whether they think the applicant should be funded
- Rank applicants 1 to n (most preferred to least preferred)
Note: We are developing a process where reviewer scores are compared to the consensus view. Reviewers who most often reflect the consensus view will be awarded "points", as will those who provide useful commentary justifying views outside the consensus. This assessment will be done by other reviewers, creating a feedback loop for the process itself. Reviewers who reach a certain threshold of points will be promoted to higher tiers, or demoted if they fall below it.
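As an illustration only, the points mechanism could look something like the sketch below. Treating the median as the consensus, the one-point tolerance, and the promotion/demotion thresholds are all assumptions made for the sake of the example.

```python
from statistics import median

# Illustrative sketch of consensus-based reviewer points. The median-as-
# consensus choice, the 1-point tolerance, and the thresholds below are
# assumptions, not the final design.
PROMOTE_AT = 10   # cumulative points needed for promotion (assumed)
DEMOTE_AT = -5    # cumulative points below which a reviewer is demoted (assumed)

def award_points(totals_by_reviewer: dict[str, int]) -> dict[str, int]:
    """Award 1 point to each reviewer whose rubric total for a single
    application lands within 1 point of the consensus (median) view."""
    consensus = median(totals_by_reviewer.values())
    # Reviewers outside consensus could still earn points if peers judge
    # their written justification useful; that review-of-reviews step is
    # omitted from this sketch.
    return {r: int(abs(t - consensus) <= 1)
            for r, t in totals_by_reviewer.items()}

print(award_points({"alice": 8, "bob": 7, "carol": -2}))
# -> {'alice': 1, 'bob': 1, 'carol': 0}
```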
Stage 3: Aggregating reviews to make funding decisions
Funding decisions were based on the following criteria:
- Applicant received a minimum median total score of 7 out of 12, AND
- Applicant received at least 3 out of 5 "yes" votes on the funding approval question
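Expressed as code, the decision rule is straightforward. This sketch assumes exactly the minimum of five reviews per application; with a larger panel, the 3-of-5 approval threshold would presumably scale proportionally.

```python
from statistics import median

# The Stage 3 decision rule: fund an applicant only if the median rubric
# total is at least 7 (out of 12) AND at least 3 of the 5 reviewers voted
# "yes" on the funding approval question.
def should_fund(reviews: list[tuple[int, bool]]) -> bool:
    """Each review is a (rubric_total, approve) pair from one reviewer."""
    totals = [total for total, _ in reviews]
    yes_votes = sum(1 for _, approve in reviews if approve)
    return median(totals) >= 7 and yes_votes >= 3

reviews = [(9, True), (7, True), (8, True), (5, False), (11, True)]
print(should_fund(reviews))  # -> True (median 8, 4 "yes" votes)
```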
Stage 4: Communicating decisions
- Next steps for funded programs
- Next steps for unfunded programs
Appendix A
Final selected programs for Thank ARB Milestone 1b
Amplifying Impact – Bonus Funding for Outstanding GG20 Arbitrum Round Projects
Tarah, Life on Mars
Allocation mechanism
- Type: Retroactive
- Mechanism(s): Expert multi-criteria voting with anti-polarization algorithm
- Voting rights: Badged reviewers
- Platform(s): Ethelo
AI-generated program summary (see final application here)
The program uses Ethelo’s platform for identifying impactful projects through expert reviews, a predefined rubric, and the Ethelo algorithm. The weighted evaluation avoids popularity contests and cronyism, prioritizing projects exceeding expectations in the Arbitrum ecosystem. Blind expert reviews provide unbiased data, and Ethelo’s algorithm ensures transparent, consensus-driven allocation.
Cartographer Syndicate
Sov
Allocation mechanism
- Type: Proactive and retroactive
- Mechanism(s): Quadratic voting, conviction voting
- Voting rights: Program managers
- Platform(s): Web3 Grant Registry, Gitcoin, 1Hive/Gardens
AI-generated program summary (see final application here)
Direct grants are allocated through RFPs for registry development, such as reputation systems and advanced analytics, with fund allocation via conviction voting and delivery through Gitcoin Grants Stack. Contributor incentives reward those maintaining the registry and conducting analyses. Novel funding mechanisms like MACI conviction voting are explored, positioning the program as an innovator within the Arbitrum ecosystem.
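For readers unfamiliar with conviction voting (the 1Hive/Gardens mechanism named above), the core idea is that support compounds over time: a proposal's conviction grows while tokens remain staked on it and decays once they are removed, so sustained preferences count for more than momentary ones. A minimal sketch, with an illustrative decay parameter:

```python
# Minimal sketch of conviction accumulation in conviction voting
# (1Hive/Gardens style). ALPHA is illustrative, not the program's setting.
ALPHA = 0.9  # fraction of conviction carried over each period (assumed)

def update_conviction(prev_conviction: float, staked_tokens: float) -> float:
    """One period of conviction growth: prior conviction decays by ALPHA,
    then currently staked tokens add to it."""
    return ALPHA * prev_conviction + staked_tokens

conviction = 0.0
for _ in range(20):  # 20 periods of continuous support with 100 tokens
    conviction = update_conviction(conviction, staked_tokens=100.0)
print(round(conviction, 1))  # ~878.4, approaching 100/(1-0.9) = 1000
```

In the actual mechanism, a proposal passes once its conviction crosses a threshold that grows with the share of the funding pool requested.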
Farcaster builder program
JB Rubinovitz
Allocation mechanism
- Type: Proactive
- Mechanism(s): Voting
- Voting rights: Public, badged reviewers
- Platform(s): Farcaster
AI-generated program summary (see final application here)
Funding will involve partnering with Farcaster builder KOLs (key opinion leaders) and channels, allowing real users to vote by "Liking" on Farcaster. Popular voting is done by quality users and work streams; expert voting by top builders as measured by GitHub, onchain activity, and the social graph. Tools include Farcaster and a Tenfold whitelabeled Arbitrum builder client. The methodology differs from contests like Rounds.wtf and Jokerace, addressing past issues like spam. No further development funding is necessary.
GIV-Arb Ecosystem Accelerator Program
Kieran O’Day
Allocation mechanism
- Type: Proactive
- Mechanism(s): Quadratic funding, connection-oriented cluster match, bounties
- Voting rights: Public, badged reviewers
- Platform(s): Giveth
AI-generated program summary (see final application here)
The allocation methodology involves two QF rounds and a bounty component for Giveth projects adding an Arbitrum address. This prepares them for the Arbitrum ecosystem. The four program phases are:
- Onboarding/ARB address bounty phase: Rewards existing and new Arbitrum projects for adding an Arbitrum address, with bounties distributed at phase end.
- Initial and follow-up QF rounds: Operated on the Giveth dApp using quadratic funding with connection-oriented cluster match (COCM), featuring Sybil defense and a $0.90 donation value threshold (a sketch of standard QF matching follows this list).
- Between rounds: Encourage recurring donations for sustainable funding.
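For context, here is a minimal sketch of the standard quadratic funding match that COCM builds on: a project's match is proportional to the square of the sum of square roots of its individual donations, so many small donors attract more matching than one large donor. COCM's cluster-based discounting and Giveth-specific details are omitted, and the names here are illustrative.

```python
from math import sqrt

# Standard quadratic funding match (the base that COCM refines): a
# project's raw match is (sum of sqrt(donation))^2, then all raw matches
# are scaled to fit the matching pool. Donations below a value threshold
# (e.g. $0.90 in this program) would be filtered out first as basic Sybil
# defense; COCM's cluster discounting is omitted here.
def qf_match(donations_per_project: dict[str, list[float]],
             matching_pool: float) -> dict[str, float]:
    raw = {p: sum(sqrt(d) for d in ds) ** 2
           for p, ds in donations_per_project.items()}
    total = sum(raw.values())
    return {p: matching_pool * r / total for p, r in raw.items()}

# 100 donors giving $1 each out-match a single $100 donor.
donations = {"A": [1.0] * 100, "B": [100.0]}
print(qf_match(donations, matching_pool=1000.0))
# -> {'A': 990.09..., 'B': 9.90...}
```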
Oasis Onchain Quick Grants
Oasis Onchain (Stefen Deleveaux, Mashiat Mutmainnah, Estefania Ochoa, Amira Gariba)
Allocation mechanism
- Type: Proactive
- Mechanism(s): Direct
- Voting rights: Tokengated
- Platform(s): CharmVerse
AI-generated program summary (see final application here)
Oasis Quick Grants aims to expand Arbitrum's tech, community, and governance in the Global South. It provides quick funding (1,000-5,000 ARB) to 15 projects, enhancing contributors, governance, and education within 8-9 weeks. The 75,000 ARB pool supports DeFi, NFTs, governance, and more. Proposals are submitted on CharmVerse and reviewed in 3-5 days, with biweekly check-ins and a final report by August 15.
Oxcart Delegation Engine
Mel Oxenreider (mel.eth)
Allocation mechanism
- Type: Retroactive
- Mechanism(s): Targeted allocations
- Voting rights: Tokengated
- Platform(s): None
AI-generated program summary (see final application here)
- Delegate Redelegation Incentive Program (DRIP): Weekly lottery prize for redelegated delegates performing useful activities, with Twitter announcements and outreach.
- SC Registry and API (SCRAPI): RFP and deployment of redelegation registry.
- Delegation Applied Research Tracking (DART): Incentivizes dashboards and tools for community information, featuring a virtual hackathon and workshops to award the best community tools.
ReFi in Arbitrum
Alex Poon
Allocation mechanism
- Type: Proactive
- Mechanism(s): Direct milestone-based
- Voting rights: Badged reviewer
- Platform(s): CharmVerse
AI-generated program summary (see final application here)
Funds ReFi projects based on milestone achievements, rewarding overperformance. CharmVerse powers the grant process, ensuring transparency and ease of use. Expert reviewers, selected for their ReFi and Arbitrum knowledge, will choose grantees using an evaluation rubric designed by the Program Manager.
RWA Innovation Grants (RWAIG)
Bernard Schmid
Allocation mechanism
- Type: Proactive
- Mechanism(s): Quadratic funding, direct
- Voting rights: Tokengated, badged reviewers
- Platform(s): Gitcoin
AI-generated program summary (see final application here)
The program integrates quadratic and direct funding mechanisms using Gitcoin’s tools, which have distributed over $60M since 2017. No additional development funding is needed. After demonstrating pilot outcomes, a retroactive round will follow. Quadratic Funding (100K ARB) allocates matched funding via community donations. Direct Funding (200K ARB) involves committee-defined eligibility, on-chain voting, and project execution, decentralizing decision-making and targeting specific areas for RWA development on Arbitrum.