Plurality Labs Milestone 1 Review

Table of Contents

  1. Key Learnings & Takeaways
  2. Our Goals After Three Milestones
  3. How to Review Milestone 1
  4. Deliverables from AIP-3
  5. Grant Program Selections

We have two ongoing projects, expected to be delivered in January, which will make it simple for anyone to look up and review individual grants across all the programs. For this reason, we didn’t include the details of individual grants here, as the document is already 30 pages long. :grin:

Key Learnings & Takeaways

DAO Needs

  • The Thank ARB strategic priorities found during #GovMonth did not receive support from top delegates because we did not include them in the process in a way that was seen as legitimate.

  • Without legitimate priorities and a process for setting 3-6 month strategies, it is hard for the DAO to enable more experimentation.

  • Potential contributors need a pathway to get a small grant for narrowly scoped work, which is then assessed to give the contributor clear next steps and to let the DAO double down on high performers.

  • DAO members need communications to be simplified and aggregated. They don’t know where to go to maintain context, learn what they should do, or discover other ways to be a good Arbitrum citizen.

  • The DAO needs workstreams ASAP because we are lacking people who organize and contextualize situations.

  • The decentralized nature of Arbitrum means that things move slowly. We are in a position to help move things faster.


  • We need to segment by personas (delegate/builder/grantee/etc) and match our channels to the user. Many people do not know what we have done.

  • We need to double down on user-centered design on the Thank ARB platform.

  • Thirteen grant programs make for too much complexity. It would be better to have a few focused workstreams.

  • The community is willing and capable. They just don’t know what to do.

  • Common delegate feedback: “I like the energy Plurality Labs brings. You have been a spark that brought us from 0 to 1. However, I don’t exactly know what you’ve done.”


  • The strategic priorities from GovMonth did not achieve legitimacy. Had we involved delegates earlier in the process, they would have understood what we did and likely would have seen the results as legitimate even without ratifying them.

  • Quadratic funding is likely not a good meta-allocation mechanism. We are going to try again with Quadratic Voting and another custom algorithm.

  • Successful tools usually solve more than one problem. Hats protocol provides many solutions from workstream accountability to dynamic councils. Hedgey streams are easy to use and can work for grants as well as salaries. When we find solutions, we should double down to see what other ways we can use them.

  • The DAO needs more indexed data available for the community to interpret. Overall, the DAO does not have any agreed upon metrics to say what success looks like.

Funding Success

  • A huge amount of learning happened with the Foundation (compliance, process, etc.). Some funding didn’t happen until months after grant approval. We are now down to 2-3 weeks from grant approval to funds sent, but we need better systems.

  • We need to balance the desire for clear criteria with common-sense decision making (e.g., NFT Earth’s removal from grant eligibility). As we make decisions, we are observing and documenting criteria so we can open up the process in the future, but confirming community-led review capabilities is a priority before removing decision-making power. This applies at the framework level, as multiple programs are attempting to draft and use criteria-based approaches.

  • The Firestarters program served a clear purpose and was, as far as we can tell, our most successful program. It drove tangible results for the DAO in a short time. Now we need next steps.

  • Data-driven funding is critical in Milestone 2 (STIP support, OSO, etc.)

Our Goals After Three Milestones

Unlike most grant programs, Plurality Labs is not only responsible for allocating grant funds. AIP-3 also directs us to spearhead the design of a pluralist & capture-resistant grants governance framework. This means that over the course of three milestones we are committed to, and accountable for, the work to design the framework, the ultimate deliverables of the three-milestone effort, and the metrics that signal our success.

Let’s break that down for a quick reminder. Our team sees this daily:

These metrics will be baselined in January and revisited twice a year from here forward. We share them here as a reminder of where we are going and to bring intention to how you review our progress toward the overall goals after three milestones versus the work done in Milestone 1.

How to Review Milestone 1

Milestone 1 can be evaluated along the following three dimensions:

Did they do what they said they would do in Milestone 1?

In hindsight, the scope of our Milestone 1 efforts was downplayed in our initial proposal. The substantial work required for initial setup, including coordination with the Foundation on compliance and fostering effective collaboration, was not adequately detailed in AIP-3.

Nevertheless, a comprehensive overview of our commitments and deliverables for Milestone 1 can be found in the section labeled “Deliverables from AIP-3”.

Did grant funding go to worthwhile programs & projects?

Plurality Labs predominantly opted for external program managers to oversee grant programs aimed at achieving diverse objectives. While some programs were internally designed by Plurality Labs and then assigned a program manager, others emerged through an open application process. Notably, GovBoost and Firestarters were overseen by Disruption Joe as program manager, given their tailored focus on specific needs of the DAO.

For a comprehensive overview of all program selections, please refer to the section titled “Grant Program Selections”.

Higher up indicates alignment to funding DAO needs. To the right indicates a focus on experimentation. Circle size indicates the amount of funding given to the program.

Did they prioritize experimentation & learning for the future?

Within each grant program, varying levels of experimentation exist. Those depicted on the right side of the bubble chart above are particularly high in experimentation. In these instances, the allocation of grant funding is not only a financial endeavor but also a means of acquiring insights into new and effective mechanisms and processes. These insights contribute to a comprehensive and holistic approach to capture-resistant governance frameworks.

If successful, our grants program has the potential to endure beyond Plurality Labs’ active involvement. Furthermore, the knowledge gained in developing capture-resistant governance within the grants framework can be extended to governance related to on-chain upgradeability, enhancing the security and value of Arbitrum. Additionally, it lays the groundwork for a future characterized by genuinely neutral digital public infrastructure.

Detailed insights into these learnings are provided under each grant program in the section titled “Grant Program Selections” below.

Deliverables from AIP-3

13 of 20 (65%) have been completed

Our biggest deliverable has been allocating 3 million ARB in grants while building the processes to do so. Our strategic framework was built in a DAO-native way and has been widely seen as an innovation in how DAOs can function. We had plenty of takeaways and learnings, which are detailed below.

We took a lot of shots, and while not all of them landed, we feel most were quality attempts to make progress. Even though not every experiment provides the insights we hope for, we believe an objective review of what was funded will show that the Plurality Labs grants framework allocated funding as well as any other grants program out there, while running the experiments in tandem.

5 of 20 (25%) will be completed by the end of the milestone, which ends on 1/31

  • Establish and confirm key success metrics for the Grants Program
  • Establish clear communications cadences & channels for all key stakeholders to engage with the program
  • Design approach, process and channels for sourcing high impact grants ideas
  • Design Grant Program manager application process and assessment criteria
  • Onboard and coach Pluralist Program Managers in grant program best practices

2 of 20 (10%) will not be completed

The fulfillment of these deliverables hinges on reviewing the success of individual grants. Due to the distribution timeline, it was not possible to complete grant-success review prior to grant-program review during this milestone. The following items were replaced during this milestone and are on the roadmap for Milestone 2.

  • Collate community feedback and input on grants programs efficacy and success
  • Evaluate, review and iterate based on this feedback to continually improve the overall impact of the Arbitrum DAO grants program

Because of this timing, we decided not to evaluate success at the program level and instead use January to review every grant that was funded, beginning a continuous process of community-led evaluation. In hindsight, this review was probably needed before evaluating at the program level anyway.

Grant Program Selections

Each section below is a grant program. Programs are listed in chronological order by start date. Within each program you can review the following:

  • Program Description
  • Why the Program was selected
  • Alignment to Thank ARB Strategic Priorities
  • Program manager
  • Amount funded
  • Types of projects funded
  • Expected outcomes
  • Experiments conducted
  • Learnings (if program has concluded)

Questbook Support Rounds on Gitcoin

Program Manager: Zer8

Allocation Amounts: 100K ARB

Status: Completed & Paid


This program started with two unique governance experiments. A “domain allocation” governance experiment allowed the community to direct funds to four matching pools based on Questbook program domains. Then four quadratic funding rounds were run to help the domain allocators source grants.

Why PL Funded This Program

We wanted to quickly start getting ARB into the hands of builders. When asked, “What would be the top reason your program might fail?”, Questbook answered that sourcing enough quality grants was their top risk.

At the same time, our proposal had introduced the concept of horizontally scaling a pluralist grants program. This program was a great opportunity to show how multiple grants programs could collaborate to provide better outcomes. Quadratic Funding is great for sourcing new grants. Delegated Domain Authority, the system used by Questbook, is better for quickly doubling down on quality performance.

This program was funded to support and bring awareness to Questbook’s grant programs and as a governance experiment in funding distribution.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

The outcomes we hope to achieve are…

  • Support initiatives on Arbitrum.
  • Find high-quality grants that might get additional funding from Questbook if the grant is successful.
  • Minimize the risk of choosing grants that might fail

The program will support experiments to see if…

  • Decentralized meta-allocation, using quadratic funding to split funds between matching pools, improves how the community allocates funds.
  • Allow-list-restricted voting in quadratic funding is useful.
  • Collaborative efforts between different programs have positive effects.
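The quadratic funding mechanic referenced in this program can be sketched minimally as follows. This is an illustrative, simplified model (function names, the pool size, and contribution amounts are ours, not from the program): each project’s matching weight is the square of the sum of the square roots of its contributions, which favors broad bases of small donors over single large donors.

```python
import math

def qf_match_weights(contributions_per_project):
    """For each project, weight = (sum of sqrt(contributions))^2.
    The matching pool is then split in proportion to these weights."""
    return {
        project: sum(math.sqrt(c) for c in contribs) ** 2
        for project, contribs in contributions_per_project.items()
    }

def allocate_matching_pool(contributions_per_project, pool):
    """Split `pool` across projects proportionally to their QF weights."""
    weights = qf_match_weights(contributions_per_project)
    total = sum(weights.values())
    return {p: pool * w / total for p, w in weights.items()}

# Many small donors beat one large donor of the same total amount:
contribs = {
    "project_a": [1.0] * 100,  # 100 donors giving 1 ARB each -> weight 10000
    "project_b": [100.0],      # 1 donor giving 100 ARB       -> weight 100
}
match = allocate_matching_pool(contribs, pool=1000)
```

Under this rule, project_a receives the overwhelming share of the 1,000-unit pool despite both projects raising the same 100 ARB in direct contributions, which is exactly the community-signal property that makes QF attractive for sourcing new grants.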

Thank ARB (Sense and Respond)

Program Manager: Payton / Martin

Allocation Amounts: 350K ARB

Status: 1 of 3 modules completed - Ends early February


This program introduces a novel way for the community to connect with the DAO. It provides a foundation for the community to continually sense and respond to the fast-changing crypto environment, and a place for DAO members to learn what is happening, voice their opinions, and find opportunities to offer their skills.

Why PL Funded This Program

This program was selected to ensure an engaged and informed community is ready to mobilize to solve Arbitrum’s biggest challenges. The most powerful advantage a DAO has is its community.

To this day, DAOs have not solved communications in a decentralized way. Thank ARB creates a conversation by filtering signal from community input to identify expertise and effort from community members and offers them opportunities to engage as Arbitrum citizens.

Thank ARB is also used to reach either statistical significance in surveys or minimum viable decentralization in curation. This allows Arbitrum to refrain from delegating authority for critical functions, instead using swarm mechanics to resolve conflicts in curation.

DAOs are complex by nature; they require complexity-aware solutions based on a continuous practice of sensing and responding in order to grow.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Rewarding community members actively participating and adding value.
  • Inviting stakeholders, delegates and Arbinauts to join activities that help make sense of our priorities and contribute to governance processes, like grant evaluation.

The outcomes we hope to achieve are…

  • Improve how grant funding is distributed.
  • Increase delegate and community confidence that funds are allocated as intended based on a collaborative process.
  • Assure the community that we can manage the grant process inclusively and efficiently.

The program will support experiments to see if…

  • Building non-transferable reputation is feasible and viable.
  • There are new ways to distribute power.
  • We can establish a baseline for contributor engagement data across different platforms and activities.
  • It is effective to create awareness of DAO activities and priorities through experience vs. marketing.


Firestarters

Program Manager: Disruption Joe

Allocation Amounts: 350K ARB

Status: Open


The Firestarter program is designed to address specific and immediate needs within the DAO. Problems identified by delegates and Plurality Labs receive a grant for the initial cat-herding and research required to kickstart action.

Why PL Funded This Program

We started this program due to an acute need that arose after Camelot’s liquidity incentive proposal was voted down. After conducting the first workshop to understand whether there was potential for a short-term triage solution, we realized there was no existing mechanism to fund the research and facilitation necessary to find a non-polarizing solution.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • STIP Facilitation
  • Treasury & Sustainability Working Group
  • Security Audit Provider RFP Working Group

The outcomes we hope to achieve are…

  • Quickly and effectively address urgent needs resulting in high-quality resolutions.
  • Demonstrate the ability to create fast and fair outcomes that benefit the ecosystem.
  • Frameworks for service providers to build a scalable foundation for Arbitrum growth.

The program will support experiments to see if…

  • Self management and autonomy advance DAO outcomes
  • There are new ways to implement checks, balances, and feedback loops between different parts of the system, like Plurality Labs, the grant security multi-sig, and the DAO.
  • Using on-chain payments such as Hedgey can improve operational efficiency. (Due to compliance timing, some Firestarters were given as direct grants.)

MEV (Miner Extractable Value) Research

Program Manager: Puja

Allocation Amounts: 330K ARB

Status: Ends late March or April


This research program focuses on Miner Extractable Value (MEV), a topic where fewer than 100 people are truly qualified to hold high-expertise conversations.

Why PL Funded This Program

This program was selected to further Puja Olhaver’s work on creating better mechanisms to propagate quality conversations at the information frontier. The work convenes the top MEV research thinkers in order to benefit Arbitrum as well as the broader Ethereum ecosystem. It is the next step from her prototype design, built and tested at Zuzalu.

To improve our resource allocation, we need to bridge the information asymmetries between scientists/academics and those allocating funds.

Adverse selection & moral hazard are fundamental problems that tokenizing and decentralizing can’t solve on their own. A goal of this program is to elevate the quality of conversations about high-expertise topics and promote breakthroughs using mechanism design.

Puja’s talk at DeSci Berlin

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Building the MEV research forum
  • Conducting research (data gathering and synthesis throughout the program)
  • Covering operational costs for hosting a conference with 50 top MEV researchers

The outcomes we hope to achieve are…

  • A new forum dedicated to high expertise discussions, pushing the boundaries of decentralized governance and credibility.
  • The forum will be a place where qualified voices and thought leaders stand out and the most important discussions get attention in a decentralized way.
  • Radically amplify new ideas and technology.

The program will support experiments to see if…

  • External data can be used to measure social distance
  • Academic and professional credentials can inform up- and down-voting
  • Group thinking can be evaluated
  • Quadratic voting is effective in this setting
  • Cluster mapping & correlation discounting improve signal quality
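Quadratic voting, one of the mechanisms this program experiments with, can be sketched minimally as follows. This is a hypothetical illustration (the function names, ballot format, and 100-credit budget are ours): casting v votes on an option costs v² voice credits, so expressing a strong preference is possible but quadratically expensive.

```python
def qv_cost(votes: int) -> int:
    """Credits spent to cast `votes` votes on a single option."""
    return votes ** 2

def tally(ballots: list, budget: int) -> dict:
    """Sum votes per option, discarding any ballot that overspends its budget."""
    totals = {}
    for ballot in ballots:
        if sum(qv_cost(v) for v in ballot.values()) > budget:
            continue  # invalid ballot: spent more credits than allowed
        for option, votes in ballot.items():
            totals[option] = totals.get(option, 0) + votes
    return totals

# With 100 credits, a voter can cast at most 10 votes on one option (10^2 = 100),
# or spread them, e.g. 3 votes + 7 votes (9 + 49 = 58 credits).
result = tally(
    [{"opt_a": 10}, {"opt_a": 3, "opt_b": 7}, {"opt_b": 11}],  # last overspends
    budget=100,
)
```

The quadratic cost curve is what pushes voters to reveal the intensity of their preferences honestly rather than dumping all their weight on a single option.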

Arbitrum Citizen Retro Funding

Program Manager: Zer8

Allocation Amounts: 107K ARB

Status: Completed & payout expected in December ‘23


This round supported the outstanding individuals who have contributed to the DAO in a proactive way since its launch in May 2023. In simple terms, we want to reward people, not big organizations, who have helped our community grow by contributing to the DAO. We want to show that hard work will be recognized and rewarded fairly, with both recognition and ARB.

Why PL Funded This Program

During the execution of STIP, it became clear that many people were putting time and effort into improving the DAO. This program was designed to motivate others to step up by setting a precedent that work for Arbitrum will be rewarded. Many DAOs rely on free labor, some to the point of exploitation, as a normal part of their process.

Arbitrum can set standards that lead the evolution of decentralized governance and technologies by setting an example.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Making the Arbitrum DAO more efficient
  • Supporting long-term success for projects using Arbitrum
  • Providing insights & analysis

The outcomes we hope to achieve are…

  • Individuals are recognized by the community and awarded ARB for contributing to our growth in a meaningful way, establishing a precedent that meaningful work is fairly and appropriately rewarded.

The program will support experiments to see if…

  • Rewarding individuals is more effective than rewarding organizations
  • Quadratic funding is useful for distributing grants retroactively

Allo on Arbitrum Hackathon

Program Manager: Annalise (Buidlbox)

Allocation Amounts: 122.5K ARB

Status: Ends mid January


This program is all about expanding what Gitcoin’s Allo protocol can do on the Arbitrum One network. New funding strategies, interfaces, and curation modules will be available for all 490+ protocols on Arbitrum to use freely to fund what matters.

Why PL Funded This Program


Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Our goal is to give the 490+ protocols building on Arbitrum new ways to fund their communities. We want these funding methods to be fully on-chain, transparent, and auditable.

The outcomes we hope to achieve are…

  • Fresh funding strategies using the Allo protocol (like direct vs. quadratic approaches)
  • Creating modules to clarify and simplify grant round managers’ work
  • Customizing solutions using Gitcoin Passport

The program will support experiments to see if…

  • Using innovative voting methods to decide how to distribute prizes for the hackathon is effective.

Arbitrum’s “Biggest Small Grants Yet”

Program Manager: Diana Chen

Allocation Amounts: 90K ARB

Status: Ends early March


This program uses Jokerace to give out small grants in a fully on-chain and decentralized way.

Why PL Funded This Program

This experiment in fully decentralized decision making using Jokerace is managed by Diana Chen, host of the Rehashed podcast. As one of Jokerace’s first successful users, Diana used Jokerace to select the guests for her podcast.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • The program funds community-run mini grants using Jokerace: the community selects a grants council, which decides on the winning projects. 10,000 ARB will be dispersed each week to the 4 most deserving projects (2,500 ARB each), as decided by the Arbitrum DAO grants council.

The outcomes we hope to achieve are…

  • Make governance fun! Achieve governance optimization by identifying and iteratively improving key capabilities to increase DAO performance and accountability
  • A community-run mini grants initiative
  • Support for projects building on Arbitrum
  • A stronger, more engaged, and more valuable community working together toward a shared goal (funding worthwhile web3 projects)

The program will support experiments to see if…

  • Making governance fun and properly incentivizing participation strengthens not only the bond a community member feels to Arbitrum but also the bonds between community members. The compounding effect of this over time is a stronger, more engaged, and more valuable community working together toward a shared goal (funding worthwhile web3 projects).

Matching Match Pools on Gitcoin

Program Manager: Zer8

Allocation Amounts: 300K ARB

Status: Open until funds are used. Currently 150K of 300K ARB allocated


This program adds extra funds to the matching pool for quadratic funding rounds on Arbitrum. Top programs running on Arbitrum will be selected to receive additional funding.

Why PL Funded This Program

This program was funded to bring users to Arbitrum while also convincing Gitcoin’s Allo protocol and Grants Stack teams to prioritize deployment on Arbitrum. Gitcoin rounds generate gas fees and run on an open data substrate. Their Swiss Army knife of grants funding mechanisms is now available for any of Arbitrum’s 490+ protocols to use.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Protocols building on Arbitrum to improve the
  • Encouraging open-source dependencies on Arbitrum
  • This includes rounds that were previously evaluated through Gitcoin rounds

The outcomes we hope to achieve are…

  • Boost quadratic funding rounds which can generate gas fees on Arbitrum
  • Encourage Arbitrum protocols to organize quadratic funding for their communities
  • Convert more end users to Arbitrum

The program will support experiments to see if…

  • There are new ways to improve matching pools
  • We can identify unconventional marketing channels for Arbitrum, especially for Gitcoin to deploy on Arbitrum

Not only did the American Cancer Society run their first-ever quadratic funding round on Arbitrum, but we also supported their round by offering tARB & ARB rewards for donating more than $10 to their cause!

Open Data Community (ODC) Intelligence

Program Manager: Epowell

Allocation Amounts: 165K ARB

Status: Ends late March


The Open Data Community has established a base of data analysts & data scientists hoping to bring about a new paradigm using open data. This program brings experimentation with increasingly decentralized allocation methods, leading to new ways of sharing resources and insights.

Why PL Funded This Program

Open data plays an important role in pluralist governance frameworks. Through this program, we can tap into a community of data analysts & scientists familiar with onchain forensics to help us understand our ecosystem.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Data science insights
  • Data analysis regarding grant program and grantee evaluation
  • Custom scoring mechanisms for grants
  • Dashboards for infrastructure

The outcomes we hope to achieve are…

  • Encourage the Arbitrum community to create and share tools that help Arbitrum grow.
  • Expand the group of data analysts and scientists working with Arbitrum.
  • Make community analysis more open and decentralized.
  • Deliver practical open-source Sybil and grant analysis, including reviews and program management of the STIP analysis.
  • Support critical ad-hoc data needs of the Arbitrum community.

The program will support experiments to see if…

  • Permissionless suggestion boxes are useful
  • On-chain mechanisms can be shaped using Hats protocol
  • On-chain hackathons can be run using Allo voting

Arbitrum Co-Lab by RN DAO

Program Manager: Daniel

Allocation Amounts: 185K ARB

Status: Ends late March


This program explores applying proven deliberation methods, like Citizens’ Assemblies and Sociocracy 3.0, in the Web3 space, and also explores AI-based tools to automate the facilitator’s cycle of asking, synthesizing, and echoing.

Why PL Funded This Program

RN DAO has a reputation for user-focused design. Their venture-studio model lets us start the conversation around potential pathways for builders to kick-start an idea, access a network of experts to help them grow, and then potentially receive investment from Arbitrum DAO. This pathway will provide key learnings into what is needed to make Arbitrum a home for builders.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Refining the interaction and usability of the Harmonica bot for large groups.
  • Crafting and coding LLM prompts for distinct facilitation methods
  • Testing these methods with participating communities

The outcomes we hope to achieve are…

  • Build collaboration technology that enhances organizational, community, and DAO operations and governance tools.
  • Improved, more responsive group decision making.

The program will support experiments to see if…

  • Applying traditional deliberation methods in the Web3 space in new ways is useful, viable, and feasible.
  • AI-based tools can help scale and automate the facilitation and discovery cycles.

Grantships by DAO Masons

Program Manager: Matt (DAO Masons)

Allocation Amounts: 154K ARB

Status: Ends late February


This program emerged from a Jokerace contest on grant allocation methods. It involves building and dogfooding software that turns grant program allocation into a game the whole DAO can play. Through a series of algorithmic rounds, the game shifts the eligibility and funding amount for each grant program.

Why PL Funded This Program

Not only did this team win the original Jokerace contest last summer, they are also the only project addressing both continued autonomy through fully on-chain subDAOs AND meta-allocation.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Developing software
  • Funding testing rounds to iteratively improve the algorithm
  • Data analysis

The outcomes we hope to achieve are…

  • Meta-allocation software is built and tested, making community-led allocation decisions possible

The program will support experiments to see if…

  • The community can allocate well at the grant-program level
  • What kinds of wallets, and how many, need to participate to produce credible outcomes
  • The combination of Hats protocol and MolochDAO v3 creates emergent effects, novel outcomes, and new insights

Plurality GovBoost

Program Manager: Disruption Joe

Allocation Amounts: 540K ARB

Status: Completed & Paid


This program included direct grants funded to increase the Arbitrum DAO’s ability to effectively govern its resources.

Why PL Funded This Program

We funded this program to address critical governance needs of the DAO as they emerged. These grants have specific deliverables, and payment is milestone-based.

Alignment to Thank ARB Strategic Priorities

The types of projects include…

  • Open Source Observer - Grant Impact Evaluation Data
  • Open Block Labs - STIP Monitoring
  • Blockscience - Governance Needs Assessment

Image from bi-weekly STIP update post from Open Block Labs

The outcomes we hope to achieve are…

  • Reduce the potential for fraud
  • Increase delegate awareness of known governance best practices
  • Inform on framework design
  • Enable data-driven decision making

The program will support experiments to see if…

  • Data-driven approaches to funding grants are effective
  • Research can show how to avoid governance capture


While we have a few items still on the to-do list for January, this review should provide delegates with enough information to assess whether we should be offered more or less responsibility for our second milestone.

We are confident that we will address any open concerns. We are excited to unleash the potential of the Arbitrum ecosystem in 2024.


Welcome to a quick update from Plurality Labs. We have a tremendous amount of work under way and wanted to hit some highlights as we look to the future.

Table of Contents

  1. Communications update
  2. How individual grants decisions were made:
  3. Delivering a Best in Class framework:
  4. Deliverable tracking, Milestone 1:
  5. Financial update:
  6. Looking forward:

Communications update:

Communications still leads as our largest opportunity area. We have increased the volume and diversity of the Twitter/X Spaces we host with the Foundation, and we have moved to hosting DAO calls every week, but our delegates want more. To continue improving communications, we launched a Plurality Labs Grants Hub, hired additional communications help, and committed to more regular and detailed updates.

More information will be coming on these changes soon, but let’s look at the Grants Hub:

This site was launched as a “quick and dirty” program communications site. It is intended to provide a single source of truth for every program, and many of the projects, funded through the DAO and Plurality Labs.

From the homepage, you get a bird’s-eye view of the Plurality Labs programs. Here you can see each program, its status, the amount allocated to it, and spending to date. Note that some projects are still in progress, so spending is not yet complete. You can also link directly to the program owner if you would like more information.

On the program details page, we try to give you the next level of information as well as insights into the mechanics of the program. Here you can find the descriptions, milestones, why the program was funded, the decision mechanisms it uses, and its experiments and intended outcomes.

As we add more content to the site, we will go into deeper detail in subsequent updates:

Finally, within program details you can see the projects funded through each specific program. As existing programs come to a close, more project details will be added.

How individual grants decisions were made:

In Milestone 1, Plurality Labs worked to identify the right priorities and build the processes to decentralize funding based (mostly) on those priorities. What we learned was that we need to be more transparent about how grant funding decisions were made.

For background: after working through program selections, Plurality Labs gave individual programs and program managers the autonomy to decide how to fund their grant recipients (in relation to the priorities defined during GovMonth). Looking across grant programs, we found they fit loosely into the following grant-decision categories: expert decision, expert panel, community quadratic funding, and new or novel mechanisms. The illustration below is not intended for academic debate, but to illustrate how Plurality Labs has attempted to use several different allocation mechanisms.
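Of these mechanisms, community quadratic funding is the least self-explanatory. As a rough sketch (the project names and amounts below are hypothetical, not drawn from any actual Arbitrum round), a matching pool is split in proportion to the square of the sum of the square roots of individual contributions, which rewards broad support over a few large donors:

```python
import math

def quadratic_match(contributions: dict, matching_pool: float) -> dict:
    """Split a matching pool using the standard quadratic funding formula."""
    # QF score per project: (sum of sqrt of each individual contribution)^2
    scores = {
        project: sum(math.sqrt(c) for c in donors) ** 2
        for project, donors in contributions.items()
    }
    total = sum(scores.values())
    # The matching pool is divided pro rata by QF score.
    return {project: matching_pool * s / total for project, s in scores.items()}

# Both projects raised 4 ARB, but "tooling" has four small donors while
# "dashboard" has one large donor, so "tooling" attracts 4x the match.
rounds = quadratic_match({"tooling": [1, 1, 1, 1], "dashboard": [4]}, matching_pool=100)
# rounds == {"tooling": 80.0, "dashboard": 20.0}
```

Real rounds (e.g. on Gitcoin) layer sybil resistance and per-project caps on top of this base formula, since the mechanism's bias toward many small donors is exactly what bot farms exploit.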

Plurality Labs did act as the program manager for the Firestarter and GovBoost programs. Plurality Labs always intended to manage the Firestarter program, which provides a mechanism for rapidly funding emerging DAO needs (e.g. the Treasury & Sustainability Working Group, the Procurement Framework & Security Service, the Finance & Transparency Report), but not the GovBoost program. GovBoost was only picked up by Plurality Labs after we lost our intended program manager and did not want to stop the work underway (STIP data monitoring, Open Source Observer, Hats for TreasureDAO).

Delivering a Best in Class framework:

In Milestone 1, Plurality Labs committed to delivering a best-in-class grants framework. One interpretation of a “grants framework” is a system, or collection of documented processes, that could be handed over to a greenfield team, which could then run those processes to success. That is precisely what Plurality Labs did not do. We invented the capabilities and processes required to run a plurality of grant programs, but we have not documented this work in a way that could be handed over to another team. It is our intention in Milestone 2 to build out the successful experiments in a way the DAO can take over and expand, but that was not in scope for Milestone 1.

Deliverable tracking, Milestone 1:

As we improve our communications, we want to share our view of performance against the Milestone 1 deliverables. This dynamic tracker is intended to track progress against Milestone 1. Note that the chart includes several TBDs, but it should be complete shortly.

Deliverable tracking doc

Financial update:

As Milestone 1 draws to a close in February, the team is pushing to complete existing programs and projects. Several initiatives will carry past the end of February, and more details on those programs will appear in later updates. Additionally, within the GovBoost program, R3Gen Finance has been contracted to provide a detailed report on DAO and grant spending; that report is expected in January. Until then, Plurality Labs is including a quick overview of current spending.

This chart compares grant allocations to actual grant spending. As most programs come to a close in February, we expect the velocity of spending to continue into February, closing the gap between spending and allocation.

The spreadsheet below shows spending at a program level. It also illustrates the increased velocity of grant spending over time. Here you can see that it took until November to work out the program contracts and the KYC/KYB compliance processes.

And the program detail behind the two Plurality Lab managed programs:

Looking forward:

The January report will include:

  • The updated assessment of Milestone 1 deliverables
  • Spotlight on decentralizing program and project reviews
  • Financial report - spending to date by program
  • A summary of January communications updates

The below post reflects the views of L2BEAT's governance team, composed of @krst and @Sinkas, and is based on the combined research, fact-checking, and ideation of the two.


We want to start by clarifying that in publishing our review of Milestone 1 of Plurality Labs' grants program, our goal is to share our perspective on its outcome, which we hope will help delegates in their assessment and enable them to make an informed decision on the Milestone 2 proposal. We do not in any way intend to criticise or offend Plurality Labs or any of the associated parties.

We did our assessment in good faith when the Milestone 2 proposal was published on the forum, based on the data available at that time. We shared our findings with Plurality Labs and communicated both our concerns and some suggestions on how to proceed going forward. As the Milestone 2 vote went to a temperature check, we feel obliged to share our learnings with the broader community.


  • We spent more than a full week looking into Milestone 1 and we're using this post to share our findings.
  • Plurality Labs has also produced their own Milestone 1 Review which they recently updated to include additional information, following discussion with and feedback from delegates
  • In our opinion, Milestone 1 hasn't been fully completed and its results are below our expectations.
  • There are still numerous initiatives in progress (e.g. RnDAO Co.Lab, MasonDAO, Biggest Minigrants Yet), and in our opinion those are some of the most meaningful ones from the point of view of the original proposal. We should probably wait for them to conclude, or at least show meaningful progress, to be able to assess Milestone 1 better.
  • Most importantly, the deliverables outlined in the original proposal of Milestone 1 weren’t fully completed.
  • The biggest expected outcome of Milestone 1, which was to deliver a grants-framework, wasn’t met. As we understand it, a framework is a set of structures and procedures that can be followed without the involvement of the original team, and the process to do so as well as the expected results are repeatable and predictable.
  • We believe that right now, the best course of action would be to wrap up Milestone 1, compile the lessons learnt, and plot actionable next steps from there (ideally restarting Milestone 1 or doing a follow-up program to continue working on the Milestone 1 vision, within a similar budget).
  • We do not think we should be talking about scaling at this moment because a) we do not have a framework, as mentioned above, and b) allocating in Milestone 1 proved challenging, and some of the amounts were greater than we'd feel comfortable with given the experimental nature of the programs.
  • As things stand, and even though we appreciate the work of Plurality Labs, we do not feel comfortable supporting Milestone 2 for the proposed amount.
  • If a framework was in place and it had been ratified or somehow approved by the DAO via a vote, we’d be much more comfortable with supporting an increase in the budget available to Plurality Labs to continue the work on creating the grants framework for the ArbitrumDAO.
  • We will also be conducting and publishing a similar review for Questbook’s Grant Program in the coming weeks as well as for the STIP program.

Our Full Review


The way we approached the review of Milestone 1 was to go through the expected deliverables outlined in the original proposal and see whether or not they have been delivered, and to figure out how the envisioned grants framework works in light of chosen programs and funding decisions made.

To do so, we needed to go through all the individual programs PL funded, and check their respective deliverables, their progress, the impact they had (expected and actual), and verify all accompanying data, including financials.

The review proved challenging, since a lot of the data wasn’t readily available in the public domain, was not easy to verify, or was simply outright missing.


Our current understanding and belief is that Milestone 1 hasn’t been fully completed yet, and the results of the programs that have been completed weren’t up to the expectations that were set when we voted to fund Plurality Labs to deliver a ‘best-in-class’ grants framework.

First of all, when thinking about a grants ‘framework’, we think it’s safe to assume that most people, including us, envision structures and processes that are repeatable in a predictable manner, regardless of the future involvement of the team creating them. As a reference, you can think of the different frameworks being tested on Optimism, or even in different grants programs in the European Union — specific processes to follow, structures that make accountability and responsibility obvious, and predictable expectations.

We cannot say that we saw a framework like that delivered in Milestone 1, and we’re unaware of any such frameworks existing even internally within Plurality Labs as some of the experiments and programs funded seem arbitrary. Also, despite information about which projects got funded being available, there’s no information on which programs did not get funded and why.

Furthermore, it’s not clear how the different programs funded fit together in a greater, coherent, plan to create a grants framework. If the DAO was to stop funding Plurality Labs tomorrow, we do not think we’re left with any sort of ‘grants framework’ that the DAO could pick up and replicate.

Having said that, we want to clarify that we do believe that the work Plurality Labs has done, and continues to do, within Arbitrum DAO is valuable. We simply think that moving forward with Milestone 2 when Milestone 1 didn’t exactly deliver the expected results isn’t the right way to go.

Our understanding and assumption is that somewhere down the line, Plurality Labs pivoted from trying to create a grants framework to trying to create an Ops & Facilitation Program that reacts to current DAO events. Maybe that's why most delegates only associate Plurality Labs with the “Firestarters” grants. These initiatives are definitely valuable and something we should double down on, but not without first creating a structure that doesn't rely on a single person making decisions solely at their discretion.

Lastly, when trying to review Milestone 1, we started by going through Plurality Labs’ own review. We were disappointed to see that the report was more of an overview of the different programs funded, with crucial details for each omitted.


Before jumping into the specifics of each program, let's first look at the deliverables of Milestone 1 (as described in the original proposal). Since we first started our review, Plurality Labs has created and shared a Deliverables Tracker. At the time of writing, we're going over our initial analysis and taking the new information into consideration.

Discover | Facilitate DAO native workshops

  • Conduct DAO native sense making to find the Arbitrum DAO Vision, Mission, Values

While we appreciate the effort to find the DAO's vision, mission, and values, we're not confident that we got any meaningful, DAO-native insights. Both the JokeRace and ThankARB initiatives (which are relevant here) seem to have been heavily botted, as we'll see further down this post. Furthermore, it seems that most of the DAO's active contributors did not participate in these campaigns, making it difficult to claim the results are DAO-native. Lastly, we believe any results should be ratified by the DAO through a vote before being labelled ‘official’ and used as the foundation for different initiatives.

  • Clearly define funding priorities including short & long term goals & boundaries

We couldn’t find any clearly defined priorities, except for some vague insights in the GovMonth Report.

  • Establish and confirm key success metrics for the Grants Program

While some information is available around why a program was funded and its expected results, there are no obvious key success metrics for each program.

  • Scope out requirements for a Gitcoin Grants round on Arbitrum

Given there were three Gitcoin rounds included across the different programs, we consider this to have been delivered. It is worth noting, though, that there is no written requirements specification for running Gitcoin Grants rounds in the future.

  • Establish clear communications cadences & channels for all key stakeholders to engage with the program

As became apparent to us after the fact while reviewing, there were no clear communication cadences and channels. While we appreciate Plurality Labs' availability, engagement, and proximity to the DAO, that doesn't really constitute a communication channel through which delegates could be regularly updated, in a comprehensible way, on the progress made.

Design | Construct best in class Pluralistic Grants Program

  • Identify suitable tools and technology to support a robust, secure and efficient grants program (i.e. Allo)

As far as we can tell, there is no mention of the process used to identify the aforementioned tools and technology, no clear mention of which tools were identified, nor a review of their pros and cons. We find it hard to agree that the mere fact that PL used some tools and technology in Milestone 1 means this deliverable has been completed.

  • Design and process map the end to end grant funding flows
  • Design approach, process and channels for sourcing high impact grants ideas
  • Design Grantee Registration process and grant pipeline management structure
  • Design Grant Program manager application process and assessment criteria

For the above four bullet points, there have been some deliverables (like Miro boards), but we're not entirely sure they reflect the points mentioned above. It's not clear who was involved, what the process behind them was, or what the final structure should look like outside of Plurality Labs' programs.

  • Work with Gitcoin to set up and launch a Gitcoin Grants round on Arbitrum

This was delivered.

  • Design credibly neutral grant funding evaluation criteria, reporting structure and cadence

This wasn’t delivered at all.

Execute | Facilitate the successful execution of Pluralist Grants Programs

  • Onboard and coach Pluralist Program Managers in grant program best practices

What are the best practices for grants programs? Who are the Pluralist Program Managers being onboarded? The document provided in PL's Deliverables Tracker doesn't include any of that information.

  • Deploy 2.6 million ARB in funding to programs selected via the Pluralist Program Managers

At the time of writing, the multisig still contains 1.9M ARB. Of that, 1.535M ARB appears to be allocated but not yet paid (as shown in a screenshot here). The same screenshot mentions that 0.151M ARB is unallocated. Which begs the question: what is the remaining 0.214M ARB? There's a report from R3gen Finance due in January, but other than that, there's little insight into the financial data around Milestone 1.
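The arithmetic behind that question, using the figures cited above, can be made explicit (the variable names are ours, and we are assuming the screenshot's two figures are the only categories reported):

```python
# Reconciling the reported multisig figures (all values in millions of ARB).
in_multisig = 1.900       # ARB still held in the multisig
allocated_unpaid = 1.535  # allocated to programs but not yet paid out
unallocated = 0.151       # reported as unallocated in the screenshot

# Whatever is neither allocated nor reported as unallocated is unexplained.
unexplained = round(in_multisig - allocated_unpaid - unallocated, 3)
print(unexplained)  # 0.214
```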

  • Deploy 300k ARB to Gitcoin Grants round recipients

Of the 300k ARB for matching pools, only 40k has been spent so far, which means this deliverable hasn't concluded yet. Furthermore, if we also include the Questbook Support Round and the Arbitrum Citizen Retro Funding, the total budget allocated to Gitcoin Grants rounds exceeds 500k ARB, almost double the initially planned amount.

  • Oversee grants rounds to ensure they are free from fraud or abuse

We understand that sybil protection is difficult in crypto. However, when we went to check the ThankARB program, to which 350,000 ARB was allocated, it became readily apparent that the vast majority of contributions were bots. Why did we allocate such a significant amount (worth over $500,000 at the recent $ARB price) to something that is so difficult to protect from abuse? In contrast, Firestarters, arguably one of the most successful programs, was allocated 300,000 ARB.

Evaluate | Report back on grant funding outcomes

  • Publish financial & analytics reports on grant funding value, volume, outcomes and other relevant metrics requested by the community

We’re waiting for the financial report from R3gen Finance, but other than that, we haven’t seen a lot of information delivered on this front. Transparency and accountability is typically a prerequisite for any grantee, let alone a grants program itself.

  • Share key learnings and grants program best practices with Arbitrum DAO and the wider web3 community

While there has been a plethora of insights shared with the DAO, we're not sure any of them constitute ‘key learnings’ or ‘grants program best practices’ with actionable, material impact. We found most of the insights shared (e.g. in the GovMonth report) to be vague and not at a point where they can be used to inform the DAO's decision-making.

  • Collate community feedback and input on grants programs efficacy and success
  • Evaluate, review and iterate based on this feedback to continually improve the overall impact of the Arbitrum DAO grants program

The last two items have not been delivered, in the sense that the community has not been asked to provide input on the programs, and therefore the programs have not been iterated based on that input. Furthermore, in the Milestone 1 review, PL stated that these deliverables will not be delivered at all, which is quite worrisome given that the DAO is being asked to fund a 10x follow-up in Milestone 2.

Our assessment of specific programs run by Plurality Labs

First of all, we need to note that the original proposal for Milestone 1 described a comprehensive process for sourcing and selecting program managers. Plurality Labs pitched the idea of validating different grant distribution mechanisms by running various grant programs in parallel and comparing their results to choose those that perform best. Looking at the results of Milestone 1, we believe this has been delivered only to a very limited extent, with most of the actual grant programs starting only after Plurality Labs announced the completion of Milestone 1. The process of sourcing and selecting specific programs was barely communicated and is not documented.

The Milestone 1 review presented only the general assumptions behind each selected program, not the process behind the selection of those specific programs, and it did not present any funding details, outcomes, or lessons learned from these programs. Moreover, in some cases, while we understand the overall value of the funded initiatives, we struggle to understand their relationship to the scope of the original proposal.

Below we present our own assessment and point of view on each individual program in relation to the Milestone 1 goals, as we've come to understand them. We've divided the programs into three categories, based on our assessment of their overall alignment with the goals of Milestone 1 and their potential value in a grants framework.

1. Programs that we consider valuable experiments for a future grants framework:

Questbook Support Rounds on Gitcoin

We find this program to be successful in the way it:

  1. Managed to distribute funds to builders, and
  2. Helped provide a more meaningful division of fund allocation between different domains for the Questbook program.

It's interesting to note, though, that the final community allocation to projects in specific categories in the second round differed from the community's original allocation by category. For example, “Developing Tooling on Nova” ranked 2nd in the category allocation but ranked last in the final allocation to specific projects. It's also worth noting that the matching pool was significantly larger (6x higher) than the amount collected from the community.

What we found lacking is any kind of conclusion and lessons learned from this program, especially given that it was completed quite a long time ago.

Allo on Arbitrum Hackathon

We found this initiative a valuable experiment in grants distribution; however, at the time we collected the data for our review, the program was still in progress, so we weren't able to assess its outcomes.

One thing we found confusing is that the program was officially communicated as sponsored by the Arbitrum Foundation and Allo Protocol. Moreover, we couldn't properly match the funds allocated by PL to this program with the funds distributed in the hackathon, so it's hard to assess its overall effectiveness.

Arbitrum’s “Biggest Small Grants Yet”

At the time we collected the information for this assessment, this program had not yet started, so it's hard to say anything concrete about the results. However, we find the overall structure of the program to be a valuable experiment that might provide insights for the future grants framework.

Arbitrum Co-Lab by RnDAO

Again, this program had just started while we were collecting materials for the assessment, so we can't say much about its outcomes, but, as with the previous one, we find it promising within the scope of the Milestone 1 goals.

One thing we noticed, though, is that even though the incubated projects are funded by the ArbitrumDAO, it is RnDAO that receives equity in these projects through this program. We are not against this particular setup, but if the program is set to continue, we think it would be worth considering transferring some of the equity in accelerated projects to the ArbitrumDAO as well.

Grantships by DAO Masons

We can't say much about this program as it is planned to start mid-February. However, from what we could learn, it seems like a valuable addition to the lessons learned within the scope of Milestone 1.

2. Programs that we find valuable but whose connection to the scope of the original proposal is unclear


We see Firestarters as an answer to the DAO's need for facilitating workstreams that solve specific problems. We believe it's a very valuable initiative, and we think many contributors perceive it as such. However, viewed against the goals of Milestone 1, it does not seem to be an organised program in itself. As far as we can tell, within this initiative Plurality Labs funds workstreams needed by the ArbitrumDAO based on their own judgement rather than any organised framework.

Some of the remarks we had while reviewing this program:

  • There's a lack of clarity about what can and should be funded through a Firestarter grant. We couldn't find any instructions on how to apply or any criteria by which future grantees are selected.
  • We see a big disparity between grant amounts, ranging from 8k ARB for work on the Arbitrum DAO Procurement Committee to almost 80k ARB for the Treasury Sustainability working group.
  • We couldn’t find any info on the scope of some Firestarters grants.
  • We think the program overall lacks oversight of grantees' work and deliverables. From what we can tell, these are rather “fire & forget” grants, which is understandable for an experiment but not acceptable for long-term scaling.

Open Data Community (ODC) Intelligence

We couldn't find any specific information on this program in the forum. From the review post it seems that the outcomes of this program might be valuable, but it's hard to say what they will be and how they add to the lessons learned for the grants framework.

Matching Match Pools on Gitcoin

While we find participation in Gitcoin matching pools valuable overall, it's hard to frame this as an experiment in grants framework design. From what we can tell, these are just regular Gitcoin rounds, well known in the crypto grants space, and there's not much we can learn from them in terms of grants framework design. We didn't find any lessons learned from this experiment in the forum, just the post announcing it three months ago.

3. Projects that we struggle to position within the scope of the original proposal

MEV (Miner Extractable Value) Research

While we understand the value that lies in MEV research, we don't understand how a MEV-related conference fits into grants framework experiments, especially at an amount representing more than 10% of the whole program budget. Moreover, while MEV research is important in general, its implications for Arbitrum as a rollup are quite specific in nature. We therefore feel that funding this conference from the ArbitrumDAO should be approved by the DAO itself, and that we should set some expectations regarding the topics covered, so the implications are relevant not just to the crypto industry in general but specifically to Arbitrum.

Arbitrum Citizen Retro Funding

While we find rewarding active DAO participants important and valuable, we struggle to understand how this experiment, in the form and structure in which it was executed, fits into the development of the grants framework. The eligibility criteria seemed quite arbitrary, and we found the final results quite mixed: rewarding valuable contributors, rewarding some random projects, and not rewarding some contributors who definitely had a huge impact on the DAO.

Thank ARB (Sense and Respond)

This is probably the most problematic program for us. The results seem somewhat random, with a few hundred participants in some campaigns and tens of thousands in one of them. Looking at the contributor lists, we get a strong feeling that they are composed mostly of bots and farmers. We tried to find any contributor we know from the DAO on the list, but the only familiar names were those of Plurality Labs or Thrivecoin team members.

We also got the impression that the program was abandoned after the first few iterations (which was then more or less confirmed in discussions with Plurality Labs), but this hasn't been communicated, and in all the review materials the whole budget is still described as allocated to this program.

We couldn’t find any lessons learned or outcomes from this program, and the original introductory post on the forum has not been updated.

Keeping in mind that a big portion of the overall budget was allocated to this program (350k ARB), and that it serves as a backbone for Milestone 2, we believe there is a need for further discussion of the program's outcome, or at least a publication of the lessons learned.

Plurality GovBoost

This program seems to be just a (from our perspective, random) set of open data platforms. While we do not deny there is significant value in having these platforms and in having open data, we do not fully understand the rationale behind choosing these particular platforms, or how they relate to this particular grant program (the biggest grant here relates to STIP, which is a separate proposal).


The original scope of Milestone 1 was to discover, design, execute, and evaluate a best-in-class grants framework for the Arbitrum DAO. Based on our assessment above, we believe Milestone 1 has so far failed to deliver on its original scope, and the programs expected to finish in the following weeks are not in a position to change this view regardless of whether or not they're successful.

Although we believe there were valuable experiments during the course of Milestone 1, we do not believe these justify a vote of confidence in Plurality Labs’ plan to move forward with a proposal for Milestone 2. If anything, we should take a step back and collectively reassess what the mandate for creating a grants framework means. We should seek to double down on initiatives that have proven to be successful in allocating capital to the right contributors who are able to deliver value to the DAO in an impactful way.

Also, going through the process of reviewing the progress of Milestone 1 has made it clear to us that we need to do a better job as a DAO to incorporate accountability, transparency and reporting practices within proposals, and not have them as after-the-fact add-ons. We found the lack of data disturbing, especially given the amounts involved.

In their Milestone 1 Review, Plurality Labs actually shared three dimensions by which delegates could evaluate Milestone 1. These dimensions are captured in three questions delegates can ask themselves while assessing the impact of Milestone 1. We tried answering these questions based on all of the information above.

  • Did they [Plurality Labs] do what they said they would do in Milestone 1?

Even if some deliverables were completed, we believe the ‘headline’ of Milestone 1 was to deliver a best-in-class grants framework. As we explained throughout this post, we don't believe PL delivered the grants framework, so we can't say that they did what they said they would in Milestone 1.

  • Did grant funding go to worthwhile programs & projects?

The answer to this question is debatable. Both worthwhile and questionable programs and projects were funded, with the latter, unfortunately, being the majority.

  • Did they [Plurality Labs] prioritise experimentation & learning for the future?

While we can certainly say that experimentation was prioritized, we cannot confidently say it was done in a way that equipped us with learnings for the future, as many of the supposed learnings were not documented or were not made publicly available, as mentioned above.

Milestone 2 Proposal

We will be commenting on Milestone 2 under its respective thread, but we want to take this opportunity to explain that we see Milestone 2 as an extension of Milestone 1. With all the points of concern raised through our assessment of Milestone 1, we do not feel comfortable voting for Milestone 2 to be funded with any amount. We'd much rather invite Plurality Labs to go back to the drawing board with the DAO and figure out how we can best leverage their experience and learnings so far to continue working on Milestone 1.

Invitation to Discussion

There’s a call being hosted to discuss Plurality Labs’ Milestone 2 proposal on Wednesday 31st of January at 4:30 pm UTC. We’d also like to invite all delegates and Arbitrum DAO participants to discuss Milestone 1 and our review during our Arbitrum Office Hours on Thursday 1st of February at 4 pm UTC.


TL;DR Response

  • It generalizes opinions that we don't think the DAO in general agrees with, and then refers to those opinions as fact throughout the document. They cited the Treasury & Sustainability Working Group as a failed grant. It was both an experiment and, we think, a successful grant. I was at Gitcoin when the treasury didn't get diversified as GTC went from 15 to 3 before they took action. We got something moving with this grant. There are three key research artifacts, a proposal, and next steps.
  • The entire review is a cost analysis; there is no benefit analysis. For example: they don't address that STIP likely wouldn't have happened without us, or the value of the Open Block Labs grant.
  • The review makes many assumptions, from the importance of checking boxes against what we initially said was likely work, to assuming there shouldn't be flexibility in how we execute on deliverables whose importance we determined had changed during the campaign.
  • They review the experimental programs as though success and failure are the only outcomes. In experimentation, there is nuance: some failures are successes based on what was learned. They claim there were no learnings from some of the programs, but there is quite literally a database, in the comment above the one they posted, with a section for each program titled “Lessons Learned”!
  • Many of these critiques target the review we put out in December. In some places they add a sentence saying “they have since updated this” without a new evaluation of what was done. In most places they simply have factually incorrect info.
  • They don't seem to understand the core value proposition of our proposal: delivering capture-resistance over three milestones, based on experimenting with the governance of grant programs and how they operate. The misunderstanding can be seen here:

How are we supposed to deliver capture-resistant grants governance without experimenting with the operations that support the governance which decides how to deliver resources?

We are currently the only program with funding available for DAO operations. Please read my companion piece to the proposal, which assesses this need and how we are directly addressing it.

Funding governance operations experiments is not a pivot; it is the primary deliverable across three milestones.

Overview Issues

This was mostly because we had a Jan 31st deliverable date for building the database to consolidate the information. We did make it available upon your request, and later in this document you acknowledge this.

We did take longer to get some aspects of the proposal moving. Compliance alignment was difficult to figure out, and we didn’t get payments out to grantees until late November. Therefore some programs started later than intended and will run past the milestone 1 end date.

To learn quickly, our milestone 1 experimentation was designed to be decided by Plurality Labs. We would then assess the programs and how each worked, and use that to craft specific experimentation around the needs of solving capture-resistance. We used our professional discretion to select the programs.

We needed an initial set of programs to develop the assessment framework. This is a principle of iterative design: we didn’t want to create a framework first and then be constrained by an arbitrary set of self-imposed limitations.

While the expectation was that a greenfield team could run the framework after 6 months, the reality is that we are designing the first-ever pluralist program. We’ve learned about considerations we didn’t know about at the beginning. Any entrepreneur will recognize this: we learn as we go. We sense and respond. In this review, it seems that our sensing the needs of the DAO and responding to them is treated as a bad thing.

imho - There is no point in designing capture-resistant governance that doesn’t sense and respond to the actual needs of the DAO.

That said, we did not deliver a framework that can be handed off, but this is directly addressed in the milestone 2 proposal.

This was posted in December. We’ve told you multiple times that we posted a review with what we had at the time in order to start the discussion. Multiple parts of your review refer back to this point, which is no longer true.

Deliverable Issues

As we explained, and can share the data for, the fact that bots submitted items on Jokerace does not mean they got paid OR that the data from their submissions was used. In the GovMonth report, we discussed the methodology used to remove them.

As for the farmers on Thank ARB: there is a big difference between “farmers” and sybils. We conducted a thorough review with TrustaLabs to remove the sybils from the allowlist. This means they didn’t get paid, though not all applications are gated.

Now, for the engagement farmers: if it is one human with one account, who are we to say they are illegitimate? Here is an article about the methodology TrustaLabs used to identify 96,000 sybils that got past the sybil detection efforts done for the airdrop. Here is an older article showing the nuance between farmers and sybils.

We hired TrustaLabs to repeat this review work in August, creating the GovMonth allowlist with sybils removed and only ARB holders eligible.

We acknowledged as a failure on our part that many delegates and active contributors didn’t participate. We also tried to explain that this is an iterative process: since the findings are high-level, we wouldn’t need to redo them. We would continue to sense and respond through a variety of methods, including the Tuesday workshops and progress from IRL events like Istanbul.

We disagree that the findings need to be ratified by the whole DAO. In a pluralist model we can use these strategies to guide our program without DAO-wide ratification. Like all the governance mechanisms we are discovering and using, if they work well enough the DAO will opt in to their usage.

We understand if you think these don’t count because they weren’t ratified. We do think they count.

We ran programs we thought would be interesting, using professional discretion. Afterwards, we learn which things are truly successful. This is innovation work.

There are clear runbook guides available on the Gitcoin website. What our framework must figure out is which “settings” to use in which situation, based on how they complement each other. Running one experiment and delivering end-to-end how-to guides is unrealistic.

We agree. While we had regular Twitter spaces with the Foundation, tweet threads, Tuesday workshops, all of our programs posting updates on the forum, and our own forum updates, we did not find a method that achieved the outcome of delegates being informed. We gave ourselves a red on this deliverable and consider it a challenge for milestone 2.

We agree this is not in the desired documentation format. We have hired a team member to own the public-facing organization of this research.

Here we go:

What deliverable should there be other than a mural board?

We realized this happens at a program level. We are observing and documenting what works; it turns out this is an ongoing task at the program-provider level.

At the pluralist framework level, we realized this is infrastructure: a database being jointly designed with the Foundation. We shared this work with Krystof. There is a single link to “pre-apply” for grants, which routes applications to the applicable programs. It automates some of the compliance comms, and it also provides community dashboards and a database for review.

We picked using our judgement this time. When we see what works and what doesn’t, we can then assess what criteria to use.

For the last 4 weeks we’ve been onboarding grantees onto Karma GAP. We also funded Open Source Observer for reliable data, which is now integrated with Karma.

For the pluralist framework, we are interested in decentralized review. Capture-resistance depends on removing single points of failure.

This is another clear example of how this review misses the forest for the trees. If you review us as a department in a corporation or a government entity, this review will seem like quality work. However, there is a reason those organizations can’t access innovation. If you evaluate our work like a startup innovating highly valuable solutions, we think it is hard to say we shouldn’t be followed up with a seed investment.

This was a bad deliverable to include, and we learned that during the course of milestone 1. If we run an experiment, how can we know best practices before we clearly define the next stage with a hypothesis? A single instance cannot show best practices. Once enough programs finish, we can begin documenting the consistencies across programs.

We are paying out a lot every day now. Would this explain it?

Yes, the matching program, which brought the American Cancer Society’s first round to Arbitrum along with MetaGov and the TokenEngineering Commons, is still going! We couldn’t start it until December, and Gitcoin has postponed their GG20 round until April.

To include the Questbook support rounds and citizen retrofunding here is disingenuous. Gitcoin has a plurality of mechanisms available now. Using their ready-made smart contracts for deploying funds to Arbitrum community members is a totally different thing, especially since no fees were paid.

Again, this doesn’t account for the discussions we have had addressing this. We did run sybil protection, and you are mistaking many airdrop farmers for “bots” or bad actors. Some may get through, but I’ve also had people thank me for the campaigns at IRL events.

This says “with the recent price increase in ARB”, but it was budgeted before the recent price increase in ARB. We’ve only spent 87k of it so far, with 100k earmarked for grant reviews.

The new Thrive Protocol addresses this problem with human validations. While still at an MVP stage, we think we can get meaningful crowd validations and only pay out the users who put in the effort to align with the crowd. This is our problem to solve!

The information we have, including finances, is in the comment literally above yours. Yes, R3gen is contracted to do a full DAO financials report. We will receive the one-year report of financials to date on the DAO’s birthday (in March), and monthly reports thereafter.

There is quite literally a “lessons learned” field in the review this is responding to, and in the database with all the grant information.

We did conduct community reviews of what people thought about the grant program selections, and this report will be out soon.

We realized that understanding the success of the grant programs we funded depends on understanding the success of the grants each program funded. Because the decentralized review is still underway, this dependent step cannot start yet. It is a different deliverable, which we would argue makes sense.

Program Assessment Review Issues

2/12 programs will have started during January or later. I don’t believe this constitutes “most”.

Because of the delays with compliance, many programs which we had hoped would end before our milestone are now extended beyond it.

AGAIN, this assesses the post we put up in December!

We put it up to begin communicating what we had, knowing full well that our milestone wasn’t done and that there should have been NO expectation of complete information. It was designed to be a start. All the info you mention is in the latest update.

Here is a review of the event, which ended last week: Allo / Arbitrum Hackaton Hosted by BuidlBox, Allo & Arbitrum - 👋 News and Community - Gitcoin Governance

Here is the week 2 recap

Of course there was. We were defining it as we went, and we have quite clearly stated that this was a response to DAO needs. In our milestone 2 proposal, we discuss the need to build accountability structures and pathways to success. We also discuss creating a role with the power to initiate these grants.

We were figuring it out as we went. Yes, we need to add structure now.

Exactly. It’s an experiment, and we clearly discuss next steps in the milestone 2 proposal.

This was an explicit part of the milestone 1 proposal. Gitcoin expedited deployment and integration of their protocol on Arbitrum.

You won’t understand without watching this, which explains how we can improve our conversations. It is based on a paper by Puja, Vitalik, and Glen Weyl. It is the next step in protecting the information frontier.

MEV constitutes an existential risk for Ethereum, and Arbitrum is scaling Ethereum. Not to mention there are potential revenue opportunities.

We will not achieve capture-resistance without being able to support these kinds of ideas from the best minds in the space.

The other programs

It seems like the criticism is undefined; it amounts to something like “I have preset assumptions about the scope of your program which everyone else should adopt.”

Please point out a better pluralist program. It has been 6 months since approval and 2 months since we have been able to fund things. Remember when the DAO, partly driven by you, negotiated our milestone 1 fee down from 750k to 336k? Had we been properly staffed, do you think we wouldn’t have buttoned up many of these issues you raise?

There seems to be zero appreciation for the value we did add. I’d advise others in the DAO to think critically about the effect we have had on the DAO. The opinion that the majority of grants were unworthy comes from a source which thinks the Treasury & Sustainability and Open Block Labs grants were not worthy. If you think those were worthy, you would likely disagree with this assessment overall.

Additionally, the point of experimentation is that some experiments will not work out. This critique lacks any benefit analysis and focuses only on cost.

There is literally a database with every program and grant that has a section on what we learned!

It looks like this:

Then you click into a program:

Then scroll down, and there is LITERALLY a “Lessons Learned” section.

Then you can see ALL the grants in that round.


L2BEAT shared this information with us weeks ago. We asked them to share it publicly so we could address it then. For some reason, it was held until today.

I appreciate L2BEAT. They dig in and find things no one else would. They’re like a coach who makes us all better and holds us to the highest standard. We will look back and reminisce about how real they were.

However, I see problems here that delegates must take into account.

  • The amount of outdated info, which doesn’t consider the work we’ve done in the past month.
  • The clear misunderstanding about the purpose and intention of our work overall.
  • The unrealistic expectation that we stick to deliverables stated 6 months ago under changing circumstances, even when a deliverable was replaced or is easily explained.
  • The lack of empathy for the role of a founder pushing new innovative solutions to actual problems.

These considerations suggest that we should take L2BEAT’s criticism to heart, but we should not stop critical work or cut funding to people committed to Arbitrum who are going above and beyond to deliver a first-in-class pluralist grants program. Of course there will be road bumps along the way. To expect there wouldn’t be, especially after the rate was negotiated to less than half of what we initially said it would take, is simply unrealistic.


I really appreciate the time taken by @krst & @Sinkas in performing such an extensive review. I had a call with them yesterday regarding comments made about our Treasury Working Group, details of which I wanted to share ahead of the review call tonight.

Here are the valid concerns raised by @krst about how we operated:

  1. We did not have an end-of-season report, just conclusions and a way ahead. I am waiting for my WG co-lead @sids2000 to complete his final deliverable before sharing that; I should have made this clear rather than announcing the conclusions as our wrap-up.

  2. As program manager, @DisruptionJoe should be keeping us on our toes and raising the above point, not @krst.

  3. Our expertise is not treasury management (I’m a journalist), so he was initially skeptical about how we would lead this group. He has come around to seeing that we add value by fostering debate and discussion, and he wants to see more frequent, shorter communication rather than just polished reports.

His invalid point was about separating the budget for our WG (48k ARB) from that of our partners (30k ARB for the research artifacts). The point being, someone might say let’s get rid of the WG but keep the partners.

As @Aera, @karpatkey & @Avantgarde would attest, we were not absentee managers but spent time in the trenches with them. Their success was our success, and separating their deliverables from our WG is illogical. It costs money to have good managers/marketers who ensure that commissioned work is relevant to our needs and widely disseminated in a digestible format.

Our WG budgeted 20 hours a week each from @sids2000 and me for 3 months, at 100 ARB per hour. If we were forced to give a breakdown of how we spent these hours, here are some rough numbers:

  1. 30k ARB to each of the partners

  2. 15k in time spent onboarding partners, determining the agenda, giving feedback before final release, writing summaries of the reports ( 1 2 ), and a tweet thread post-release. This includes a recruiting trip I made to Berlin DappCon to recruit Karpatkey and Avantgarde.

  3. 15k towards the STEP framework and the extensive time spent recruiting the highly qualified committee members, speaking to RWA providers, drafting the framework, holding calls about it, incorporating feedback, etc.

  4. 10k towards @sids2000’s price ceiling analysis and the upcoming CDP research report (still pending)

  5. 2k to Centrifuge for their report

  6. 6k in miscellaneous work (KYC/admin, meetings with projects and people reaching out to us about STEP 2/grants, reviews of other proposals, etc.)

We welcome discussion based on these numbers on whether we’ve delivered sufficient value!
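As a rough sanity check of the figures above, the itemized WG spend does reconcile with the stated time budget, under two assumptions not stated explicitly in the post: that the 20 hours a week is per co-lead, and that "3 months" is roughly 12 weeks. A minimal sketch:

```python
# Hypothetical reconciliation of the WG budget quoted above.
# Assumptions (mine, not the post's): 20 hrs/week is PER co-lead,
# and 3 months ~= 12 weeks. Item 1 (partner funding) is treated as
# separate from the WG time budget, per the post's own breakdown.
RATE_ARB_PER_HOUR = 100
HOURS_PER_WEEK = 20   # per co-lead
WEEKS = 12            # ~3 months
CO_LEADS = 2

budgeted_wg = CO_LEADS * HOURS_PER_WEEK * WEEKS * RATE_ARB_PER_HOUR
itemized_wg = 15_000 + 15_000 + 10_000 + 2_000 + 6_000  # items 2-6

print(budgeted_wg)  # 48000
print(itemized_wg)  # 48000
```

Under those assumptions, items 2-6 sum to exactly the 48k ARB WG budget mentioned earlier.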


note: This is my private response and not a statement from the L2BEAT gov team.

First of all, thank you @thedevanshmehta for your kind words.

Actually, I was a little surprised to read in Joe’s response that we called T&SWG a failed grant, because I believe we never said anything like that.

In fact, the only place we mentioned T&SWG in our summary was regarding the wide disparity in Firestarter grant sizes. Other than that, we never mentioned T&SWG in our post, and we never rated it as a failed grant, for the very simple reason that we don’t actually think it is one.

I find the outcomes of this working group valuable: you stirred a discussion on treasury management, provided a recommendation on treasury spending, and got us four different reports from active participants in the RWA field. I also value the STEP proposal that you pushed forward; I engaged in Twitter spaces dedicated to it, reached out directly with additional feedback, agreed to post it on Snapshot later today, and declared support for it during the vote.

One thing that I did criticize openly and publicly on Telegram is the lack of regular communication on your side that would allow us (delegates) to properly assess the progress and status of the work you set out at the very beginning in the kickstart post. I find your conclusion post well below my expectations of what a conclusion post for a three-month working group should be. It didn’t even mention some of your activities (like the treasury spending recommendation).

You mentioned above that I only took into consideration the simplified cost breakdown between your working group and service providers, and not the detailed work breakdown you presented above. Yes! But the simplified cost breakdown was the only thing I had until your response above; how was I supposed to know about the detailed one if you hadn’t shared it before? How are other delegates supposed to know about it and take it into account (when considering extending your working group for the “season 2” you mentioned in the post) if your conclusion post contained no specific details and didn’t mention that a more detailed report was coming?

I think we can do better as a DAO in terms of accountability. And I know for a fact that you can do better as well; that’s the sole reason I limited my criticism and instead asked you directly for more information (first in the TG group just after you published your conclusion, and again yesterday when we spoke).

And I do in fact believe it should not be me (or any other delegate) calling that out in this case, but that’s a topic for a totally separate discussion.


Hello Krystof, I just wanted to help clarify some details about the Gitcoin grant rounds on Arbitrum, the 300k Matching Fest, and how they fit into the framework:

I. Information about the 300k Matching Fest was updated two weeks ago: The Arbitrum Matching Fest for running rounds on Arbitrum One 💙 - #2 by ZER8. Hope this helps!

As Joe said above, these programs also made Grants Stack available on Arbitrum, allowing anyone to launch and host grant programs using it, such as the GMX Direct Grants. Personally, I think it will be really interesting, and I want to help pave the way to a future in which all major protocols on Arbitrum leverage Grants Stack for their grant programs.

Related to the framework: the fact that anyone can launch grant programs using Grants Stack on Arbitrum today, plus the decentralized reviews that are live, plus education/awareness, will enable fully permissionless, onchain, and transparent grant programs, making the Arbitrum-on-Gitcoin programs an important piece of the grant framework puzzle, imo. It gets even better, as this is not restricted to programs that want to use Grants Stack but extends to some of the other programs PL funded, for example Firestarters, Grantships, Minigrants, etc.

II. Some thoughts around the three Gitcoin rounds

The Domain Round, the Arbitrum Grant Funding Fest, and the Arbitrum Citizens Retrofunding Round, besides yielding results and delivering on their set goals, have also proved sustainable: they basically paid for themselves and more. This is one of the hidden values of QF; if rounds grow in a strategic and sustainable manner, the community can actually co-fund them!

I agree it would be cool if the team and I hosted some workshops in the future to teach people how to run their own rounds on Arbitrum, with the experience and learnings we have gathered. We have a couple of new rounds launching in February which I had to consult on and teach for anyway, so I agree this would be a natural move.

I really appreciate the way you and L2BEAT conduct your role as a delegate: investigating and asking hard questions. If all of Arbitrum’s top 50 delegates were as active, analytical, and thorough as you and some others are, the DAO would be set for 1000% success.