This is an article I was going to post on Mirror once we got Plurality Labs rolling, but I think it could help now with the vote on AIP-3 [Non-Constitutional]: Fund the Arbitrum Grants Framework Proposal Milestone 1
Understanding Grants Frameworks Design at MetaCamp
Costa Rica is a perfect place to ride the waves, watch the sunset, and workshop DAO grants frameworks.
At MetaCamp, a gathering that sprang out of MetaCartel, this is exactly what we did. Magic happens between cooking Korean food or tamales for a group of 50, riding motorcycles across the Guanacaste desert searching for volcanoes to climb, 5-hour fishing charters that caught no fish but were totally worth it, playing soccer against the hotel staff, and generally vibing at the pool to eclectic, multi-generational playlists.
Rediscovering your inner child is easier when there are children around. The right environment sparks hope that most people don’t dare to entertain. This is what separates the DAO natives, the MetaCampers, from your average product manager: fearless belief that the solutions could exist. The belief that we can solve many of the world’s biggest problems. And the action orientation to participate in workshops with some of the brightest minds in the space when we could be doing anything else in paradise.
About Legitimacy
Imagine being in a room with 10 friends trying to decide what to eat. The loudest two people drive the conversation. Introverts don’t stand a chance of being heard. After one of the loud people gets sick of arguing, we all agree to go for pizza… but did we really agree?
If we make decisions like this, some people will leave the group. They will find a group where their voice counts. Their voice doesn’t have to be the loudest, but it must be counted.
This is what legitimacy is. It is the thing that keeps the group together across many group decisions. It is the feeling that we are all equal in our voice. It is the feeling that we are respected. It is the feeling that a fair process was followed.
Design thinking workshops can give us a fair process which maintains legitimacy.
Think about democracy. Half the people don’t get their way half the time! So how does a democracy stay together? Why doesn’t it split up?
Governance processes that feel fair keep people feeling they have an equal voice in the system. First comes the sensemaking, then the decision. In our example of 10 friends deciding what to eat, they could have used design thinking principles to maintain legitimacy.
How would this work? First, all 10 friends would write their preferred food and put it on the wall. Then each person would get 2 votes to place on the two options they like best. After counting the votes, a tie could be handled by a game of rock paper scissors.
Is this a good process? Well, rather than arguing for 10 minutes and not having an answer, the group would have participated in an activity (a sensemaking process) together. The decision wasn’t made by the loudest or most aggressive person; it was made by a process. If you don’t like the process, you can suggest an alternative for the part that seemed unfair. It might not be the best process, but it gives the group an answer. Most importantly, for the people who didn’t get their way, it lets the scapegoat be the process rather than another person in the group.
Who knows - maybe the loudest person in the group was only talking the most because they were trying to fill the void left by 10 introverts unwilling to speak up!
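For fun, the whole process is simple enough to sketch in code. The snippet below is purely illustrative (the options, ballots, and the random draw standing in for rock paper scissors are all made up), but it makes the point: the answer comes out of an agreed ruleset, not out of whoever argues longest.

```python
import random
from collections import Counter

def dot_vote(options, ballots, votes_per_person=2):
    """Tally a simple dot vote: each person places `votes_per_person`
    votes on the options they like best. Ties are broken by a random
    draw (standing in for rock paper scissors)."""
    tally = Counter()
    for ballot in ballots:
        assert len(ballot) == votes_per_person
        for choice in ballot:
            assert choice in options
            tally[choice] += 1

    top_score = max(tally.values())
    finalists = [opt for opt, score in tally.items() if score == top_score]
    # The tie-break is part of the agreed process, so losing to it
    # feels fairer than losing an argument.
    return random.choice(finalists)

# Example: 10 friends, each with 2 votes
options = ["pizza", "tacos", "sushi", "ramen"]
ballots = [("pizza", "tacos"), ("sushi", "ramen"), ("pizza", "sushi"),
           ("tacos", "ramen"), ("pizza", "ramen"), ("sushi", "tacos"),
           ("pizza", "sushi"), ("ramen", "tacos"), ("sushi", "pizza"),
           ("tacos", "sushi")]
print(dot_vote(options, ballots))
```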
The Grants Framework Workshop
Level Setting & Finding Common Ground
We started by thinking about the biggest problems we’ve seen DAOs face with their grants frameworks AND the best outcomes and solutions we’ve seen so far. After 10 minutes of Spotify’s Retrowave playlist inspiring individual creativity, we shared our answers with the group.
This helped us level set. Some of the things we heard people say as they listened to others:
- I’ve seen that happen before - many times
- I fully agree with that
- Could you explain more about why that happened?
- Yep
It was important for us to establish a baseline, to understand that while our experiences differed, the patterns that played out were similar. Almost as though the outcomes were simply following natural patterns.
Each person quickly read through their “heaven & hell” scenarios. As the group listened, people would hear a new theme and add it as a Post-It on our primary wall. At the end, we had gathered all the insights, both good and bad, into one place.
Finding the Root Causes & Major Themes
Next, we split into two groups. One group took heaven, the other hell. Their task was to take all of the relevant Post-Its and find 2-4 major themes.
An interesting takeaway was that the themes produced by the problems group and the solutions group matched! Even though they were derived independently, the same themes emerged from both sets of input signals.
Problems
- Mission alignment
- Voting & power distribution
- Reputation & trust
Solutions
- Setting a clear vision - The Why
  - Mission, Vision, Values
- Getting the right people to vote or participate - The Who
  - Talent Incentives
- Accountability
  - Data collection
  - Impact evaluation
  - Milestone payouts
Decision Variables Create Funding Models
Next, we focused our attention on the variable components of a grants round. We didn’t get to cover all of them, but each team of two thought deeply about how they might approach one specific decision type or mechanism.
Pooled funds are distributed using some type of allocation mechanism. You can think of a mechanism as a ruleset that could be coded to elicit predictable outcomes. Which mechanism fits best depends on a few other decisions that are critical components of any grants round.
Here are some examples of decisions that each grant round must grapple with:
Allocation Methods
- Proactive vs Retroactive
- Delegated Authority: To individuals or councils/multisigs
- Collective intelligence: Quadratic funding or voting / Other algorithmic allocation such as Decartography or Jokerace
Different allocation methods are good for different situations. We split into groups of two to work out which allocation methods are best for which outcomes (a sketch of one such mechanism, quadratic funding, follows below).
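To make “a ruleset that could be coded” concrete, here is a minimal, illustrative sketch of one collective-intelligence mechanism from the list above, quadratic funding. It is deliberately simplified (no Sybil defense, matching caps, or eligibility checks, and the function name is ours), but it shows why broad support from many small contributors can outweigh a single whale.

```python
import math

def quadratic_match(contributions, matching_pool):
    """Split a matching pool across projects using the quadratic funding
    rule: weight each project by the square of the sum of the square
    roots of its individual contributions, then share the pool
    proportionally.

    `contributions` maps project -> list of individual donations.
    """
    weights = {
        project: sum(math.sqrt(c) for c in donors) ** 2
        for project, donors in contributions.items()
    }
    total_weight = sum(weights.values())
    return {
        project: matching_pool * weight / total_weight
        for project, weight in weights.items()
    }

# Many small donors beat one whale giving the same total amount.
contributions = {
    "broad_support": [1.0] * 100,   # 100 donors giving 1 each
    "whale_backed": [100.0],        # 1 donor giving 100
}
print(quadratic_match(contributions, matching_pool=1000))
# broad_support receives ~990 of the match, whale_backed ~10
```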
Grant Eligibility & Regulation
- Inclusive or exclusive logic
- Criteria
- Sensemaking
- Amending
- Proposing
- Disputes & Appeals process
- Voter Discovery / Sortition
Voter Voice Weighting & Eligibility Regulation
- Gating systems
- Retroactive Sybil Defense by behavioral analysis
- Appeal process
Communications
- Sourcing grants
- Informing voters
- Policy education
- Supporting documentation
- Support requests
These decisions make up a funding model. When a time-based or amount-based allocation session is executed, we call it a “round”. The iterative improvement of rounds over time is called a “program”. The person or entity driving the process is a “program manager” or a “round operator”.
Funding models can drive a variety of outcomes. Here are some tradeoffs and considerations one might encounter.
- Deliver outcomes vs impact
- Representation of minority voices in allocation
- Public goods lie on a spectrum
- Opportunity costs
- Capital efficiency / measurability
- Scalability
- Precision of allocations
- Providing grantees steady funding vs one-off grants
- Milestone-based allocation
If different funding models drive different outcomes, how do you know which one to use?
This is the key issue most DAO grants programs need to address. There isn’t one funding model that will magically accomplish all of a DAO’s goals. A DAO needs to establish a “framework” for its grants program.
Frameworks, Programs, & Rounds
Frameworks
Impact networks composed of both competing and complementary participants require a grants framework to allocate resources optimally. These networks, often DAOs, need the constraints of an agreed-upon framework to maximize the autonomy and efficacy of their programs.
Here are some items to consider when discussing your grants framework:
- Vision & Mission
- Priorities - Short & Long term goals
- Values, Principles and Boundaries
- Total spending limits
- Outcome & Impact evaluation standards
- Composability requirements for software
When these components are set in DAO-native ways that maintain legitimacy, they open the door to composability of programs. Many programs, especially those built on a single piece of software, specialize in one funding model.
A good framework will discover the above components, communicate them effectively, and provide space for multiple programs to test their funding models.
Programs
Programs can be nested in larger programs, which makes the concept confusing. The difference between a program and a framework is that a program has a constant manager. A framework can be set and regularly reviewed for needed updates; a program holds, or has access to, funds which need to be allocated over time.
Evaluation and iteration of funding models happen after rounds. The program is what continues to exist and improve.
Rounds
A time-bound allocation is usually the sign of a grants “round”. A round generally involves only one funding model, but isn’t limited to one.
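One way to keep the vocabulary straight is to picture frameworks, programs, and rounds as nested structures: the framework sets DAO-level constraints, a program has a constant manager and a treasury, and each round executes one funding model inside those constraints. The sketch below is only an illustration of that hierarchy; the field names are ours, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class Round:
    """A time- or amount-bound allocation session using one funding model."""
    funding_model: str          # e.g. "quadratic funding", "council multisig"
    budget: float
    start: str
    end: str

@dataclass
class Program:
    """Has a constant manager, holds funds, and iterates over rounds."""
    manager: str
    treasury: float
    rounds: list[Round] = field(default_factory=list)

@dataclass
class Framework:
    """DAO-level constraints that every program must operate within."""
    mission: str
    priorities: list[str]
    total_spending_limit: float
    evaluation_standards: list[str]
    programs: list[Program] = field(default_factory=list)

# Hypothetical example of the nesting
framework = Framework(
    mission="Fund what grows the ecosystem",
    priorities=["developer tooling", "education"],
    total_spending_limit=1_000_000,
    evaluation_standards=["milestone reports", "impact reviews"],
    programs=[
        Program(
            manager="program_manager",
            treasury=250_000,
            rounds=[Round("quadratic funding", 50_000, "2023-07-01", "2023-07-31")],
        )
    ],
)
```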
A Few More Boards to Share
Big Takeaways
- We are all seeing the same problem patterns emerge
- We know what experimentation levers can be pulled, but need more testing to validate how they affect outcomes
- Legitimacy can be created through group sense-making
- Mission, Vision, and Values alignment needs a DAO-native process
- A clear framework can increase the efficacy of programs & rounds
- Sense-making and decision-making do not need to be “welded” together
- There is no one-size-fits-all funding model for a DAO