Questions about Plurality Labs Milestone 2

You can review everything we did in Milestone 1 here

Plurality Labs is excited to start 2024 strong. We need your opinions to help us craft the proposal for our second milestone. The intention of this post is to share the ideas we are considering for Milestone 2 and gain a broader perspective. We ask that you review and respond using the recommended format.


  • Problem
  • Solution
  • Question to you
  • Priorities for Milestone 2
    • Work We Will Facilitate
    • What We Will Fund
    • Experiments We Will Conduct
  • Reply format


Problem

Frameworks should aid our ability to find talent, double down on winners, and cut what isn’t working. There is a fundamental mismatch between the risk-taking innovation needed to build capture-resistance and the community-led governance of a DAO.

For example, watch 20 seconds of this discussion of Elon Musk’s freedom to blow up rockets vs. NASA’s.

To build capture-resistant governance, we need the ability to run experiments. The DAO has made it known that it prefers frameworks for hiring service providers, and this is one of the areas most vulnerable to capture. The issue is that different work requires different types of funding mechanisms. Bucketing all service providers under one type of funding may seem natural, but it is not the optimal variable for determining how we fund the work.

Here are just a few other variables that could influence which funding mechanism is best:

  • Latency of work - how long it takes to see results (e.g., research vs. a Discord post)
  • Broad scope vs. narrow scope
  • Difficulty sourcing talent or identifying excellence
  • High expertise vs. low expertise

There is little difference between resource allocation to service providers, DAO contributors or employees, Firestarters, and other forms of grants or payments. In DAOs, we tend to call any allocation of resources a “grant”. In the real world, the term grant generally refers to funding that discounts the requirement to demonstrate impact; it is generally used to fund innovation that cannot be funded in milestone- or KPI-driven ways. The terms and altered definitions we use in web3 make the conversation noisy.

For example, RPGF is a great meme, but there isn’t much new about retroactive funding. Another type of retroactive funding evolved before web3 culture and is commonly used for milestone/KPI-driven work: paychecks. People do work, then retroactively get paid for it based on their employer’s high-context impression of that work. However, we haven’t yet solved how community-led governance can perform this role as well as an employer or manager.

We even have retroactive paycheck funding for public goods: government salaries. If you’ve dealt with a government, you know that it isn’t known for hiring go-getters.

Ghostbusters - The Private Sector

So how do we get frameworks in place which double down on winners and cut what isn’t working?


Solution

Tighten our iterative learning cycles while increasing allocation amounts. This needs to happen under the facilitation of a group with broad, long-term context so that experiments are iterative and learning compounds.

We need to increase the amount available for experimental funding so there is enough to iterate as many times as it takes to find the best solutions.

We need to take some big shots as well. Tightening the iterative cycle of defining needs, funding work, evaluating results, and sanctioning or boosting based on those results lets the process play out faster, meaning we find solutions faster. Our best bet here is to involve other experts in the space as grant program managers.

Plurality Labs has started and would love to continue that process. In order to truly test what mechanisms work, we will need to scale the amounts funded by an order of magnitude.


These systems either don’t currently exist or, if they do, we have not found the right combination or stack yet. This is an evolutionary game: we cannot be expected to simply intelligently design the future. We must take shots on goal, trying new systems to see what sticks.

Question: How much should we aim to allocate?

Disruption Joe had 17 one-hour calls with delegates last week. It seems Plurality Labs has broad support for moving on to Milestone 2. There has also been broad support for our idea to extend Milestone 2 to cover all of 2024 (as opposed to six months).

There isn’t clear support for Plurality Labs to allocate 30 million ARB, a figure we had raised several times during Milestone 1 conversations. Some delegates thought it was too much; others said it makes perfect sense.

For those that said they aren’t sure they would support PL allocating 30 million ARB in 2024, their reasoning can be categorized as:

  1. They need to review our Milestone 1 allocations first
    They can review the work here
  2. They would prefer us to have a smaller increase, maybe 1.5 or 3 times the size of our first milestone.
    People are asking us to fund clear needs right now, and we are fully allocated. We are suggesting we could do much more AND that there are things we think should be done that we couldn’t do with a significantly smaller amount.
  3. They would prefer us to have a few Tally votes throughout the year.
    This does not allow us the same freedom in experimentation. That might be OK, but we don’t think it is optimal for Arbitrum. Remember that these funds go to the PL-ARB Grants Safety multisig, which has 4 delegates and 2 PL team members, and the DAO can claw back funds at any time.

We hope to find the solution to capture-resistant governance, but even if we don’t, Arbitrum will still end up with a well-funded ecosystem.

This conversation is not about Plurality Labs’ service fee (the proposal itself will address that). For this conversation, let’s discuss the allocation amount independently. We are here to solve a problem. A larger allocation gives us more shots on goal, and the experimentation happens in tandem with quality ecosystem funding.

With 30 million we would be able to:

  • Increase allocation sizing to high performers
  • Hire higher-quality people for workstreams, because they can expect funding will be available to keep paying them if they do their jobs
  • Fund the first steps of big bet projects
  • Allocate funds to “competitor governance design frameworks” in a way that avoids redundancy and lets us learn what conflicts will arise while coordinating effective resolutions. (One of these could be the funding needed to complement the Grants DAO legal funding offer from the foundation.)

A Potential Split of Funding

  • 3 million to high context workstreams aligning with delegates on priorities and doing the research to prioritize what we should fund
  • 3 million to high context specialists sharing skills across workstreams (marketing, data, etc)
  • 12 million in funding for the workstreams to allocate throughout the year. Each quarter workstreams suggest “campaigns” that are narrow in scope and have clearly defined outcomes.
  • 6 million in doubling down on successful Plurality Labs milestone 1 programs
  • 6 million in funding new competitive governance (Imagine we select two programs to operate at the level we did in milestone 1)

Think of us as Dad holding the bike seat while you ride. The training wheels are still on. Do you want Dad to see you ride once and then take the training wheels off or do you want him to be confident that you are ready? More funds available means more opportunities for us to truly understand what works.

The DAO should fund multiple grants programs outside of what PL does. We loved seeing the community support STIP and would love to see a gaming STIP! However, tightening iteration cycles and conducting directed experimentation requires programs where PL is in the approver seat.

The more experimentation that is centralized under the PL program, the more learning we can do. Yes, this seems counter-intuitive, but there would still be billions in the treasury which is not centralized in our program.

And again, the experimentation happens at the same time as DAO needs are getting funded. If Arbitrum DAO wants to go heavy on winners, we hope you will allow us to do the most we can to drive this ecosystem forward. You still have our promise to work ourselves out of a job!

Which of the following scenarios would you rather see (assuming the PL fee is acceptable)?

  1. 5x increase & 1 year term - A proposal to send 30 million ARB to the PL-ARB Grants Safety multisig which would roll over any funds left at the end of the year. (Lowest PL fee)
  2. 5x increase & 6 month terms - A proposal to send 15 million ARB to the PL-ARB Grants Safety multisig with another Tally vote to ask for more after 6 months if it is smart to do so. (Slightly higher PL fee)
  3. 1.5x increase to start w/ critical amounts earmarked and the ability to “refill” - A proposal to send 10 million ARB with an amount (maybe 3 million ARB) earmarked so workstreams have a full year of salaries available, with PL simply running another Tally vote when more is needed. (Highest PL fee w/ bonus structure at refills)

Which of these seems like the best option?

  • 1
  • 2
  • 3
  • Other - Supportive of PL
  • Other - Not supportive of PL

Here are a few of our top priorities for Milestone 2

The Work We Will Facilitate

All of our work will involve continuously improving our ability to sense and respond in order to enable growth. We’ve baselined our success metrics, but the number one metric of our success is the value of Arbitrum governance. However, we need to connect that value to being a citizen.

Artwork is a custom design made by Shawn Grubb!

SENSE | Improving Communication for Understanding the DAO’s Needs

Problem: It is hard to maintain context and awareness of what is happening in the DAO and how to participate. DAO members don’t have a place to go to find out:

  • What they need to know to maintain context
  • What the DAO needs them to do
  • What opportunities exist for them to offer value

Solution: Provide aggregated communications segmented by persona. Reward those who consistently do their “civic duty” with reputation. Use ARB rewards to incentivize weekly check-ins. Increase the number of people who have context and are ready to mobilize.

  • Create Segmented User Personas (Delegate/Builder/Grantee/etc)
  • Aggregate Communications
  • Build a Weekly Communication Cadence
  • Measure Weekly Active Users (WAU) & Monthly Active Users (MAU)

RESPOND | Coordinating the DAO’s Response in Allocating Funds

Problem: The proper structures don’t exist to create positive-sum interactions and establish goals with long time horizons. The DAO does not currently have:

  • A process for hiring
  • A legal structure for efficient KYC/KYB
  • A governance structure for driving alignment & prioritizing how we spend
  • Known next steps or pathways for contributors
  • An onboarding process for contributors

Solution: Expand on the success of Firestarters to take the next steps in going from 0 to 1. Plurality Labs’ role includes being a facilitator for designing a well-functioning DAO.

  • Design participation pathways: Firestarters to Workstreams to Campaigns
  • Design legal, technical & governance frameworks (in collaboration with the foundation and delegates) that incentivize long-term thinking, prevent circles of influence from forming, and provide checks & balances for accountability
  • Involve delegates in the formation and testing of structures
  • Have Plurality Labs facilitate the emergent structure of how workstreams are formed and held accountable, so we avoid a coalition taking power without spreading it
  • Ensure the composability of other DAO grant and procurement structures

GROW | To Grow, We Need Effective Collaboration

Problem: The DAO hasn’t found a way to identify and mobilize expertise when it is needed to ensure positive-sum outcomes. Frameworks focus on allowing anyone to participate rather than identifying outliers and continually elevating the performance of the DAO. The DAO is not currently capable of:

  • Identifying and validating expertise
  • Providing checks and balances for grant program managers’ decisions
  • Giving contributors who aren’t delegates reasons to participate
  • Retaining collective knowledge about grantees & grant programs
  • Assessing impact in credibly neutral ways
  • Clearly defining who Arbitrum citizens are

Solution: Build user profiles over time via a combination of survey techniques, data analysis, and testing to enable the DAO to identify expertise, double down on winners, and cut off poor performance in a credibly neutral, community-led way.

  • Building user profiles over time using implicit and explicit information
  • Conduct community led review experiments alongside control groups
  • Identify engaged DAO members willing to provide adequate effort
  • Identify expertise in reviewing high value contributions
  • Make reputation data available on an open data substrate allowing open interpretation
  • Continuously evaluate & Iterate

DAOs are complex by nature, thus they require complexity aware solutions based on a continuous practice of sensing and responding in order to grow. This process is what we will facilitate to ensure the success of the Arbitrum ecosystem.

Priorities for Funding in 2024

In January we will host sessions with delegates to determine our funding priorities for 2024. The GovMonth experiment gathered good information and the community appreciated it, but we learned that we need better validation from the delegates as the deciders.

This does not mean that the Thank ARB priorities are the overall priorities of the DAO. The delegates will determine the overall priorities and Thank ARB will fund within these priorities as a subset of the total acceptable funding areas. It will allow us to execute experimental funding faster knowing we have alignment.

Potential Big Bets

We think that we can support many of these big bets! For some we could directly fund the ideas, and we can also help guide big proposals to figure out how to get the votes needed.

  • The big bets discovered in the Arbitrum working here
  • Making Arbitrum a home for builders with an Accelerator framework giving builders a mentor network, services, and a pathway to investment
  • Figuring out how to continuously fund the DAO operations and maybe even grants on yield and/or revenue

Better Contributor Pathways

Most DAO members we talk to want to be a good Arbitrum citizen. They simply don’t know how. In addition to defining the personas and pathways, we can extend our understanding of users to help the intrinsically motivated remove starting friction.

  • Onboarding services, dashboard & experiences
  • Clear onchain structures for different allocation mechanisms
  • tARB reputation for doing your “civic duty”
  • Partner with quest providers hired by protocols building on Arbitrum to drive new citizen traffic

Focus Areas for Experimentation in 2024

We aim to conduct experiments in these focus areas that we feel are most likely to yield results.

Hierarchy of Outcomes

A DAO attempts to use technical & political decentralization to achieve a goal. That goal sits at the top of a hierarchy of outcomes which the DAO catalyzes: it turns incentives into action, and the DAO grows in the direction of the incentives. Capture-resistance happens when the pie gets bigger as more participants join. This is only possible if the participants are incentive-aligned.

Pending the delegates’ discussions in January, the Vision, Mission and Strategic Priorities can serve as this high level alignment. This must include a positive feedback loop between the value of ARB governance and the number of active governance participants.

Contributor Pathways

Our Firestarters program was an undeniable success, having a hand in sparking STIP, the Security Audit Providers RFP process, the ARDC, and the Treasury and Sustainability working group. Now what? We need to provide the next steps and long-term pathways for contributors to go from zero to 100. And it all needs to be onchain.

High Context Workstreams

Someone needs to maintain context over time. Workstreams can do this for us. There are known issues which need to be addressed such as:

  • Tendency to bloat or hire more and more
  • Inability to hold individuals accountable
  • Lack of continuous feedback and communication
  • Tendency to become silos of information

Elected Campaigns

Workstreams that don’t bloat will need some funds available to grant to the ecosystem, so it can participate in discovering and building solutions. Plurality Labs will be there to push the needle toward participatory, community-led solutions where possible.

Meta-Allocation Governance

How might we involve the community in allocating how much should be available for each outcome campaign? How much should be available overall? How might we fund the entire DAO using yield from our treasury? How might the DAO agree on ways to allocate governance over the next year or two to intentionally rebalance power to provide truly neutral digital public infrastructure?

Competitive Governance

One of the best ways to spark results is competition. Could we allow a few service providers to construct governance frameworks that can be tested with smaller allocations?

Already Funded Work You Will See In January 2024

Strategic Priorities

When we conducted the #GovMonth experiment to derive a Vision, Mission, Values, and Strategic Priorities, we made a mistake: our process did not include special consideration of the largest delegates. This is a disservice to the voters they represent. In January we hope to rectify this oversight via a delegate campaign to surface the outcomes they would like to see.

We did use our #GovMonth strategic priorities to guide our funding during milestone 1 along with consistent interaction with delegates during our weekly workshops (but not specific feedback about the derived priorities). These priorities should be regularly checked and updated.

In January, we will conduct a process to identify legitimate strategic priorities with these delegates. Once we are confident in the legitimacy of those priorities, we can then begin the process of allocating funding to workstreams and outcome campaigns.

January Project One - Delegates Thank ARB

This campaign will include user research talking to delegates about what information they need and how they need it delivered. Delegates will have the opportunity to be rewarded for their participation in surveys, workshops, and other key value creating activities which will help us reconcile their interests with the #GovMonth results.

January Project Two - Reviewing All Funded Grants

Every ARB that has been spent by the DAO will be reviewed by the community along with two independent control groups. As we identify those who are willing and capable of honestly reviewing grants, it will create more opportunities for the community to be involved. Imagine:

  • Decentralized and automated milestone payouts
  • Decentralized review of eligibility and eligibility criteria
  • Massively increasing the DAO’s ability to process grant applications
  • Dashboards allowing anyone to compare grants programs
  • One place to send someone interested in a grant
  • Automated process operations for delivering compliant grants
  • Continuous evaluation of impact relative to intended outcomes

Recommended Reply Format

The Things I Liked

The Things I Loved

The Things It Lacks

The Things I Longed For


My most high-level feedback is that there is not enough discussion of how you are working yourselves out of a job and developing capture-resistant governance, which was one of the selling points when the PL program was proposed 6 months ago (3 milestones and we are done).

After the 1st milestone, what programs if any can be now delegated to the DAO without Plurality Labs managing them? If none, which programs do you foresee the DAO managing by itself at the end of milestone 2?

I’ve voted for option 3, a 1.5x increase from Milestone 1, on the grounds that more is not always better. As we now know which programs work better than others, I’d like details on which experiments will NOT be repeated in Milestone 2 because they were deemed low-ROI in Milestone 1. I’ve not seen enough reflection on failures from the first milestone and on how flab will be cut by doubling down on what worked at the expense of what was found wanting.


Interesting feedback. I had considered everything that we are discussing here to be part of that discussion. The opening statement to the problem we are discussing is:

Are you looking for the exact experiments we will conduct in decentralizing components of the system? We intentionally left this vague at this point. Here is what we intended, as stated in the discussion from AIP-3.

We had to work with the foundation to create systems for compliance, which meant we were not able to fund anything until November, and most things not until December. Programs intended to start in Oct/Nov therefore had delayed start dates, and therefore delayed finish dates.

I understand that it would be better if we had fully completed all programs and had a final review before asking for Milestone 2 funding, but we believe there is a sufficient corpus of work for people to review here to make an intelligent decision about the direction they would like us to go.

Have you ever had someone ask you how much it will cost to do something and they want you to do it for less? This conversation isn’t about the amount Plurality Labs is charging. I’m directly asking delegates to consider allowing us to allocate an amount that I think would best position us for success in finding solutions for capture-resistance.

What would you like to see?

I’d say none right now, which was the intention of Milestone 1. The programs I foresee being managed by the DAO by the end of Milestone 2 are almost all of them. For Milestone 3 we would have more of an “admin switch” type of relationship to how we manage the funds, though there may still be some unresolved experimentation.

We won’t know which programs we won’t do in milestone 2 until we:

  • Review grant success/impact. This is happening in January, then will be continuous thereafter.
  • Review program success. This has to happen after programs end. Any that resolve before Jan 15th will be in our Milestone 1 report which will be posted in February.
  • Align strategic priorities with delegates

The point of this post is to gauge whether we have earned enough trust from the DAO, through the work done so far, to ask for the amount that we feel would best empower us to provide capture-resistance and would offer the DAO the best results by unleashing us to make maximum impact.

We don’t want to post a proposal with an amount that the DAO doesn’t support. We do want to have this conversation to figure out what the best path forward is.


Thanks for the detailed comments, Joe!

I didn’t mean to cut costs for the sake of it, but to probe more deeply into whether we’ve done the public exercise of cutting the flab by removing projects and activities deemed to have had relatively less impact.

But this is important context I missed, as the late start affects the timelines for gauging success.

Post Jan 15th, I would be keen to dig more into the projects you deem “low impact” and see what course corrections can be made there, and how that might affect the overall budget before the onchain Tally vote.

It’s not just me; all the other delegates also appreciate how involved you are with the Arbitrum ecosystem. Your daily involvement with the DAO has put you in an irreplaceable position, connecting capable people in your network with tasks that need to be done for the DAO.

You have earned the trust of the DAO for sure; we are just asking for more insight into how the number of 30 million ARB came to be, or in your own words

Overall, it looks like we have to take a leap of faith with Plurality Labs for Milestone 2; you have earned that in my view, and I’m going to hold you to the commitment you are making for Milestone 3!


This timeline won’t work. The evaluations of all the grants happen throughout January. The Milestone 1 report will come in February. Many of the programs end in March or April. Any programs done before February will be evaluated in this way; however, the ones that finish after cannot be assessed, and our team runs out of paychecks at the end of January.


I see. In that case I do wonder whether option 2 (5x increase for a 6-month extension) might be the most prudent option?

I can see the advantage of taking a leap of faith and extending for a year, but I also wonder whether DAO-wide sensemaking & evaluation can happen more effectively when there’s an upcoming onchain vote with actual budgetary changes depending on the outcome.