You can review everything we did in Milestone 1 here
Plurality Labs is excited to start 2024 strong. We need your opinions to help us craft a proposal for our second milestone. This post shares the ideas we are considering for Milestone 2 in order to gain a broader perspective. Please review and respond using the recommended format below.
- Questions for you
- Priorities for Milestone 2
- Work We Will Facilitate
- What We Will Fund
- Experiments We Will Conduct
- Reply format
Frameworks should aid our ability to find talent, double down on winners, and cut what isn’t working. There is a fundamental mismatch between the risk-taking innovation needed to build capture-resistance and the community-led governance of a DAO.
For example, watch 20 seconds of this discussion where they compare Elon Musk’s ability to blow up rockets vs. NASA’s.
To build capture-resistant governance we need the ability to run experiments. The DAO has made it known that it prefers frameworks for hiring service providers. This is one of the most likely areas for capture. The issue is that different work requires different types of funding mechanisms. The desire to bucket service providers as one type of funding may seem natural, but it is not the optimal variable to use to determine how we fund the work.
Here are just a few other variables that could influence which funding mechanism is best:
- Latency of work - How long does it take to see results (research vs discord post)
- Broad scope vs narrow scope
- Difficulty of sourcing talent or identifying excellence
- High expertise vs low expertise
There is little difference between resource allocation to service providers, DAO contributors or employees, Firestarters, and other forms of grants or payments. In DAOs, we tend to call any allocation of resources a “grant”. In the real world, the term grant generally refers to funding that discounts the requirement for impact; it is used to fund innovation that cannot be funded in milestone- or KPI-driven ways. The terms and altered definitions we use in web3 make the conversation noisy.
For example, RPGF is a great meme, but there isn’t much new about retroactive funding. Another type of retroactive funding evolved before web3 culture and is commonly used for milestone/KPI-driven work: paychecks. People do work, then retroactively get paid for it based on their employer’s high-context impression of that work. However, we haven’t yet solved how community-led governance might perform as well as an employer or manager.
We even have retroactive paycheck funding for public goods: government salaries. If you’ve dealt with a government, you know they aren’t known for hiring go-getters.
So how do we get frameworks in place which double down on winners and cut what isn’t working?
Tighten our iterative learning cycles while increasing allocation amounts. This needs to happen under the facilitation of a group with broad, long-term context so that experiments are iterative and learning is compounded.
We need to increase the amount available for experimental funding so there is enough to iterate through the cycles required to find the best solutions.
We need to take some big shots as well. Tightening iterative cycles of defining needs, funding work, evaluating results, and sanctioning or boosting based on results lets the process play out faster, meaning we find solutions faster. Our best bet to do this is to involve other experts in the space as grant program managers.
Plurality Labs has started and would love to continue that process. In order to truly test what mechanisms work, we will need to scale the amounts funded by an order of magnitude.
The systems don’t currently exist, or if they do, we haven’t found the right combination or stack yet. This is an evolutionary game. We cannot be expected to simply intelligently design the future; we must take shots on goal, trying new systems to see what sticks.
Disruption Joe had 17 one-hour calls with delegates last week. It seems Plurality Labs has broad support for moving on to Milestone 2. There has also been broad support for our idea to extend Milestone 2 to cover all of 2024 (as opposed to 6 months).
There isn’t yet clear support for Plurality Labs to allocate 30 million ARB, a figure we raised several times during conversations in Milestone 1. Some delegates thought that was too much, and some said it makes perfect sense.
For those who said they aren’t sure they would support PL allocating 30 million ARB in 2024, their reasoning can be categorized as:
- They need to review our Milestone 1 allocations first
They can review the work here
- They would prefer a smaller increase, maybe 1.5x or 3x the size of our first milestone.
People are asking us to fund clear needs right now and we are fully allocated. We are suggesting we could do much more AND that there are things we think should be done that we couldn’t with a significantly smaller amount.
- They would prefer us to have a few Tally votes throughout the year.
This does not allow us the same freedom in experimentation. That might be OK, but we don’t think it is optimal for Arbitrum. Remember that these funds go to the PL-ARB Grants Safety multisig, which has 4 delegates and 2 PL team members, and the DAO can claw back funds at any time.
We hope to find the solution to capture-resistant governance, but even if we don’t, Arbitrum would still end up with a well-funded ecosystem.
This conversation is not about Plurality Labs’ service fee (the eventual proposal will cover that). For this conversation, let’s discuss the allocation amount independently. We are here to solve a problem. A larger allocation gives us more shots on goal, and the experimentation happens in tandem with quality ecosystem funding.
With 30 million we would be able to:
- Increase allocation sizing to high performers
- Hire higher-quality people for workstreams, because candidates can trust that funding will be available to keep paying them if they do their jobs
- Fund the first steps of big bet projects
- Allocate funds to “competitor governance design frameworks” in a way that avoids redundancy and allows us to learn what conflicts will happen while being able to coordinate effective resolutions. (One of these could be the funding needed to complement the Grants DAO legal funding offer from the foundation.)
- 3 million to high-context workstreams aligning with delegates on priorities and doing the research to prioritize what we should fund
- 3 million to high-context specialists sharing skills across workstreams (marketing, data, etc.)
- 12 million in funding for the workstreams to allocate throughout the year. Each quarter, workstreams suggest “campaigns” that are narrow in scope and have clearly defined outcomes.
- 6 million for doubling down on successful Plurality Labs Milestone 1 programs
- 6 million for funding new competitive governance programs (imagine we select two programs to operate at the level we did in Milestone 1)
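The earmarked buckets above can be sanity-checked with simple arithmetic. A minimal sketch (the bucket labels are shorthand for the list items, not official program names):

```python
# Sanity check: the earmarked buckets in the proposed 30M ARB breakdown.
# Labels are shorthand for the bullet list above; amounts in millions of ARB.
allocation_m_arb = {
    "high-context workstreams": 3,
    "high-context specialists": 3,
    "workstream campaign funding": 12,
    "doubling down on milestone 1 programs": 6,
    "new competitive governance programs": 6,
}

total = sum(allocation_m_arb.values())
assert total == 30, f"buckets should total 30M ARB, got {total}M"
print(f"Total earmarked: {total}M ARB")
```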
Think of us as Dad holding the bike seat while you ride. The training wheels are still on. Do you want Dad to see you ride once and then take the training wheels off or do you want him to be confident that you are ready? More funds available means more opportunities for us to truly understand what works.
The DAO should fund multiple grants programs outside of what PL does. We loved seeing the community support STIP and would love to see a gaming STIP! However, tightening iteration cycles and conducting directed experimentation requires programs where PL is in the approver seat.
The more experimentation that is centralized under the PL program, the more learning we can do. Yes, this seems counter-intuitive, but there would still be billions in the treasury which is not centralized in our program.
And again, the experimentation happens at the same time as DAO needs are getting funded. If Arbitrum DAO wants to go heavy on winners, we hope you will allow us to do the most we can to drive this ecosystem forward. You still have our promise to work ourselves out of a job!
Which of the following scenarios would you rather see (assuming the PL fee is acceptable)?
- 5x increase & 1 year term - A proposal to send 30 million ARB to the PL-ARB Grants Safety multisig which would roll over any funds left at the end of the year. (Lowest PL fee)
- 5x increase & 6 month terms - A proposal to send 15 million ARB to the PL-ARB Grants Safety multisig with another Tally vote to ask for more after 6 months if it is smart to do so. (Slightly higher PL fee)
- 1.5x increase to start w/ critical amounts earmarked and ability to “refill” - A proposal to send 10 million ARB with an amount (maybe 3 million ARB) earmarked for workstreams to have a full year of salaries available, with PL simply running another Tally vote when more is needed. (Highest PL fee w/ bonus structure at refills)
- Which of these seems like the best option?
- Other - Supportive of PL
- Other - Not supportive of PL
All of our work will involve continuously improving our ability to sense and respond in order to enable growth. We’ve baselined our success metrics, but the number-one metric of our success is the value of Arbitrum governance. However, we need to connect that value to being a citizen.
Artwork is a custom design made by Shawn Grubb!
Problem: It is hard to maintain context and awareness of what is happening in the DAO and how to participate. DAO members don’t have a place to go to find out:
- What they need to know to maintain context
- What the DAO needs them to do
- What opportunities exist for them to offer value
Solution: Provide aggregated communications on ThankARB.com segmented to personas. Reward those who consistently do their “civic duty” with reputation. Use ARB rewards to incentivize weekly check-ins. Increase the number of people with context and ready to mobilize.
- Create Segmented User Personas (Delegate/Builder/Grantee/etc)
- Aggregate Communications
- Build a Weekly Communication Cadence
- Measure Weekly Active Users (WAU) & Monthly Active Users (MAU)
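The WAU/MAU measurement above can be sketched minimally, assuming check-ins arrive as (user, date) pairs. The data shape and user names here are illustrative, not an actual ThankARB API:

```python
from datetime import date, timedelta

# Hypothetical check-in log: (user_id, check-in date) pairs.
checkins = [
    ("alice", date(2024, 1, 2)),
    ("bob",   date(2024, 1, 3)),
    ("alice", date(2024, 1, 9)),
    ("carol", date(2024, 1, 20)),
]

def active_users(checkins, as_of, window_days):
    """Count distinct users with a check-in inside the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    return len({user for user, day in checkins if cutoff < day <= as_of})

as_of = date(2024, 1, 21)
wau = active_users(checkins, as_of, 7)    # trailing 7 days
mau = active_users(checkins, as_of, 30)   # trailing 30 days
print(wau, mau)
```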
Problem: The proper structures don’t exist to create positive-sum interactions and establish goals with long time horizons. The DAO does not currently have:
- A process for hiring
- A legal structure for efficient KYC/KYB
- A governance structure for driving alignment & prioritizing how we spend
- Known next steps or pathways for contributors
- A process for onboarding contributors
Solution: Expand on the success of Firestarters to take the next steps in going from 0 to 1. Plurality Labs’ role includes facilitating the design of a well-functioning DAO.
- Design participation pathways: Firestarters to Workstreams to Campaigns
- Design legal, technical & governance frameworks (in collaboration with the foundation and delegates) that incentivize long-term thinking, prevent circles of influence from forming, and provide checks & balances for accountability.
- Involve delegates in the formation and testing of structures
- By having Plurality Labs facilitate the emergent structure of how workstreams are formed and held accountable, we avoid a coalition taking power without spreading it.
- Ensure composability with other DAO grant and procurement structures
Problem: The DAO hasn’t found a way to identify and mobilize expertise when it is needed to ensure positive sum outcomes. Frameworks focus on allowing anyone to participate rather than identifying outliers and continually elevating the performance of the DAO. The DAO is not currently capable of:
- Identifying and validating expertise
- Providing checks and balances for grant program manager decisions
- Giving contributors who aren’t delegates reasons to participate
- Retaining collective knowledge about grantees & grant programs
- Assessing impact in credibly neutral ways
- Clearly defining Arbitrum citizens
Solution: Build user profiles over time via a combination of survey techniques, data analysis, and testing to enable the DAO to identify expertise, double down on winners, and cut off poor performance in a credibly neutral and community-led way.
- Building user profiles over time using implicit and explicit information
- Conduct community-led review experiments alongside control groups
- Identify engaged DAO members willing to provide adequate effort
- Identify expertise in reviewing high value contributions
- Make reputation data available on an open data substrate allowing open interpretation
- Continuously evaluate & iterate
DAOs are complex by nature, thus they require complexity aware solutions based on a continuous practice of sensing and responding in order to grow. This process is what we will facilitate to ensure the success of the Arbitrum ecosystem.
In January we will host sessions with delegates to determine our funding priorities for 2024. The GovMonth experiment gathered good information and the community appreciated it, but we learned that we need better validation from delegates as the deciders.
This does not mean that the Thank ARB priorities are the overall priorities of the DAO. The delegates will determine the overall priorities and Thank ARB will fund within these priorities as a subset of the total acceptable funding areas. It will allow us to execute experimental funding faster knowing we have alignment.
We think we can support many of these big bets! For some, we could directly fund the ideas, and we can also help guide big proposals to figure out how to get the votes needed.
- The big bets discovered in the Arbitrum working group here
- Making Arbitrum a home for builders with an Accelerator framework giving builders a mentor network, services, and a pathway to investment
- Figuring out how to continuously fund the DAO operations and maybe even grants on yield and/or revenue
Most DAO members we talk to want to be good Arbitrum citizens. They simply don’t know how. In addition to defining the personas and pathways, we can extend our understanding of users to help the intrinsically motivated remove starting friction.
- Onboarding services, dashboard & experiences
- Clear onchain structures for different allocation mechanisms
- tARB reputation for doing your “civic duty”
- Partner with quest providers hired by protocols building on Arbitrum to drive new citizen traffic
We aim to conduct experiments in these focus areas that we feel are most likely to yield results.
A DAO attempts to use technical & political decentralization to achieve a goal. That goal sits at the top of a hierarchy of outcomes which the DAO catalyzes. It turns incentives into action, and the DAO grows in the direction of those incentives. Capture-resistance happens when the pie gets bigger with more participants, which is only possible if the participants are incentive-aligned.
Pending the delegates’ discussions in January, the Vision, Mission and Strategic Priorities can serve as this high level alignment. This must include a positive feedback loop between the value of ARB governance and the number of active governance participants.
Our Firestarters program was an undeniable success, having a hand in sparking STIP, the Security Audit Providers RFP process, the ARDC, and the Treasury and Sustainability working group. Now what? We need to provide next steps and long-term pathways for contributors to go from zero to 100. And it all needs to be onchain.
Someone needs to maintain context over time. Workstreams can do this for us. There are known issues which need to be addressed such as:
- Tendency to bloat or hire more and more
- Inability to hold individuals accountable
- Lack of continuous feedback and communication
- Tendency to become silos of information
Even workstreams that don’t bloat will need some funds available to grant to the ecosystem for participating in the discovery and building of solutions. Plurality Labs will be there to push the needle toward participatory and community-led solutions when possible.
How might we involve the community in allocating how much should be available for each outcome campaign? How much should be available overall? How might we fund the entire DAO using yield from our treasury? How might the DAO agree on ways to allocate governance over the next year or two to intentionally rebalance power to provide truly neutral digital public infrastructure?
One of the best ways to spark results is competition. Could we allow a few service providers to construct governance frameworks that can be tested with smaller allocations?
When we conducted the #GovMonth experiment to derive a Vision, Mission, Values, and Strategic Priorities, we made a mistake: our process did not include special consideration of the largest delegates. This is a disservice to the voters they represent. In January we hope to rectify this oversight via a delegate campaign to surface the outcomes they would like to see.
We did use our #GovMonth strategic priorities to guide our funding during Milestone 1, along with consistent interaction with delegates during our weekly workshops (but not specific feedback about the derived priorities). These priorities should be regularly checked and updated.
In January, we will conduct a process to identify legitimate strategic priorities with these delegates. Once we are confident in the legitimacy of the strategic priorities, we can then begin the process of allocating funding to workstreams and outcome campaigns.
This campaign will include user research talking to delegates about what information they need and how they need it delivered. Delegates will have the opportunity to be rewarded for their participation in surveys, workshops, and other key value creating activities which will help us reconcile their interests with the #GovMonth results.
Every ARB that has been spent by the DAO will be reviewed by the community along with two independent control groups. As we identify those who are willing and capable of honestly reviewing grants, it will create more opportunities for the community to be involved. Imagine:
- Decentralized and automated milestone payouts
- Decentralized review of eligibility and eligibility criteria
- Massively increasing the DAO’s ability to process grant applications
- Dashboards allowing anyone to compare grants programs
- One place to send someone interested in a grant
- Automated process operations for delivering compliant grants
- Continuous evaluation of impact relative to intended outcomes
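The “decentralized and automated milestone payouts” idea above could work by releasing a payment only when enough community reviewers approve. A minimal sketch; the threshold, minimum review count, and data shape are assumptions for illustration, not a finalized mechanism:

```python
# Illustrative payout gate: release a milestone payment only when a
# quorum of community reviewers has approved it. Parameters are assumed.
def milestone_payable(reviews, min_reviews=3, approval_ratio=2 / 3):
    """reviews: list of booleans, True = reviewer approved the milestone."""
    if len(reviews) < min_reviews:
        return False  # not enough independent reviews yet
    return sum(reviews) / len(reviews) >= approval_ratio

print(milestone_payable([True, True, False]))   # 2 of 3 approved -> True
print(milestone_payable([True, False, False]))  # 1 of 3 approved -> False
print(milestone_payable([True, True]))          # below review quorum -> False
```

In practice the review tallies would live onchain and the payout would be a treasury transaction, but the gating logic is the same.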