Plurality Labs Milestone 1 Review

The post below reflects the views of L2BEAT’s governance team, composed of @krst and @Sinkas, and is based on the combined research, fact-checking and ideation of the two.

Intro

We want to start by clarifying that, in publishing our review of Milestone 1 of Plurality Labs’ Grants Program, our goal is to share our perspective on its outcome, which will hopefully help delegates in their assessment and enable them to make an informed decision when it comes to the Milestone 2 proposal. We do not in any way intend to criticise or offend Plurality Labs or any of the associated parties.

We did our assessment in good faith when the Milestone 2 proposal was published on the forum, based on the data available at that time. We shared our findings with Plurality Labs and also communicated both our concerns and some suggestions on how to proceed going forward. As the Milestone 2 vote went to the temperature check, we feel obliged to share our learnings with the broader community.

TL;DR

  • We spent more than a full week looking into Milestone 1 and we’re using this post to share our findings.
  • Plurality Labs has also produced their own Milestone 1 Review, which they recently updated to include additional information following discussion with and feedback from delegates.
  • In our opinion, Milestone 1 hasn’t been fully completed and its results are below our expectations.
  • There are still numerous initiatives in progress (e.g. RnDAO Co.Lab, MasonDAO, Biggest Minigrants Yet), and in our opinion those are some of the most meaningful ones from the point of view of the original proposal. We should probably wait for them to conclude, or at least show some meaningful progress, in order to be able to assess Milestone 1 better.
  • Most importantly, the deliverables outlined in the original proposal of Milestone 1 weren’t fully completed.
  • The biggest expected outcome of Milestone 1, which was to deliver a grants-framework, wasn’t met. As we understand it, a framework is a set of structures and procedures that can be followed without the involvement of the original team, and the process to do so as well as the expected results are repeatable and predictable.
  • We believe that right now, the best course of action would be to wrap up Milestone 1, compile the lessons learnt, and plot actionable next steps from there (ideally restarting Milestone 1 or running a follow-up program to continue working on the Milestone 1 vision, within a similar budget).
  • We do not think that we should be talking about scaling at this moment because we a) do not have a framework as mentioned above, and b) allocating in Milestone 1 proved to be challenging, and we feel that some of the amounts were greater than what we’d feel comfortable with, given the experimental nature.
  • As things stand, and even though we appreciate the work of Plurality Labs, we do not feel comfortable supporting Milestone 2 for the proposed amount.
  • If a framework were in place and had been ratified or somehow approved by the DAO via a vote, we’d be much more comfortable with supporting an increase in the budget available to Plurality Labs to continue the work on creating the grants framework for the ArbitrumDAO.
  • We will also be conducting and publishing a similar review for Questbook’s Grant Program in the coming weeks as well as for the STIP program.

Our Full Review

Approach

The way we approached the review of Milestone 1 was to go through the expected deliverables outlined in the original proposal and see whether or not they have been delivered, and to figure out how the envisioned grants framework works in light of the chosen programs and funding decisions made.

To do so, we needed to go through all the individual programs PL funded, and check their respective deliverables, their progress, the impact they had (expected and actual), and verify all accompanying data, including financials.

The review proved challenging, since a lot of the data wasn’t readily available in the public domain, was not easy to verify, or was simply outright missing.

Overview

Our current understanding and belief is that Milestone 1 hasn’t been fully completed yet, and the results of the programs that have been completed weren’t up to the expectations that were set when we voted to fund Plurality Labs to deliver a ‘best-in-class’ grants framework.

First of all, when thinking about a grants ‘framework’, we think it’s safe to assume that most people, including us, envision structures and processes that are repeatable in a predictable manner, regardless of the future involvement of the team creating them. As a reference, you can think of the different frameworks being tested on Optimism, or even in different grants programs in the European Union — specific processes to follow, structures that make accountability and responsibility obvious, and predictable expectations.

We cannot say that we saw a framework like that delivered in Milestone 1, and we’re unaware of any such frameworks existing even internally within Plurality Labs as some of the experiments and programs funded seem arbitrary. Also, despite information about which projects got funded being available, there’s no information on which programs did not get funded and why.

Furthermore, it’s not clear how the different programs funded fit together in a greater, coherent plan to create a grants framework. If the DAO were to stop funding Plurality Labs tomorrow, we do not think we’d be left with any sort of ‘grants framework’ that the DAO could pick up and replicate.

Having said that, we want to clarify that we do believe that the work Plurality Labs has done, and continues to do, within Arbitrum DAO is valuable. We simply think that moving forward with Milestone 2 when Milestone 1 didn’t exactly deliver the expected results isn’t the right way to go.

Our understanding and assumption is that somewhere down the line, Plurality Labs pivoted from trying to create a grants framework to trying to create an Ops & Facilitation Program that reacts to current DAO events. Maybe that’s why most delegates only associate Plurality Labs with the “Firestarters” grants. These initiatives are definitely valuable and something we should double down on, but not without first creating a structure that doesn’t rely on a single person making decisions solely at their discretion.

Lastly, when trying to review Milestone 1, we started by going through Plurality Labs’ own review. We were disappointed to see that the report was more of an overview of the different programs funded, with crucial details for each omitted.

Deliverables

Before jumping into the specifics of each program, let’s first look at the deliverables of Milestone 1 (as described in the original proposal). Since we first started our review, Plurality Labs has created and shared a Deliverables Tracker. At the time of writing this post, we’re going over our initial analysis and taking the new information into consideration.

Discover | Facilitate DAO native workshops

  • Conduct DAO native sense making to find the Arbitrum DAO Vision, Mission, Values

While we appreciate the efforts to find the DAO’s vision, mission and values, we’re not confident that we got any meaningful and DAO-native insights. Both the JokeRace and ThankARB initiatives (which are relevant here) seemed to be heavily botted, as we’ll see further down this post. Furthermore, it seems that most of the active contributors to the DAO did not participate in these campaigns, making it difficult to claim the results are DAO native. Lastly, we believe that any results should be ratified by the DAO through a vote before being labelled as ‘official’ and used as the foundation on which we base different initiatives.

  • Clearly define funding priorities including short & long term goals & boundaries

We couldn’t find any clearly defined priorities, except for some vague insights in the GovMonth Report.

  • Establish and confirm key success metrics for the Grants Program

While some information about why a program was funded or what the expected results were is available, there are no obvious key success metrics for each program.

  • Scope out requirements for a Gitcoin Grants round on Arbitrum

Given that there were three Gitcoin Rounds included in the different programs, we consider this to have been delivered. It is worth noting, though, that there is no written requirements specification for running Gitcoin Grants in the future.

  • Establish clear communications cadences & channels for all key stakeholders to engage with the program

As it became apparent to us after-the-fact while reviewing, there were no clear communication cadences & channels. While we appreciate Plurality Labs’ availability, engagement and proximity to the DAO, it doesn’t really constitute a communication channel through which delegates could regularly be updated in a comprehensible way regarding progress made.

Design | Construct best in class Pluralistic Grants Program

  • Identify suitable tools and technology to support a robust, secure and efficient grants program (i.e. Allo)

As far as we can tell, there is no mention of the process used to identify the aforementioned tools and technology, no clear mention of which tools were identified, nor a review of their pros and cons. We find it hard to agree that the mere fact that PL used some tools and technology for Milestone 1 means that this deliverable has been completed.

  • Design and process map the end to end grant funding flows
  • Design approach, process and channels for sourcing high impact grants ideas
  • Design Grantee Registration process and grant pipeline management structure
  • Design Grant Program manager application process and assessment criteria

For the above four bullet points, there have been some deliverables (like Miro boards), but we’re not entirely sure they reflect the points mentioned above. It’s not clear who was involved, what the process behind them was, or what the final structure should look like outside of Plurality Labs’ programs.

  • Work with Gitcoin to set up and launch a Gitcoin Grants round on Arbitrum

This was delivered.

  • Design credibly neutral grant funding evaluation criteria, reporting structure and cadence

This wasn’t delivered at all.

Execute | Facilitate the successful execution of Pluralist Grants Programs

  • Onboard and coach Pluralist Program Managers in grant program best practices

What are the best practices for grants programs? Who are the Pluralist Program Managers being onboarded? The document provided in PL’s Deliverables Tracker doesn’t include any of that information.

  • Deploy 2.6 million ARB in funding to programs selected via the Pluralist Program Managers

At the time of writing, the multisig still contains 1.9M ARB. Of that, 1.535M ARB seems to be allocated but not paid out yet (as found in a screenshot here). The same screenshot mentions that there’s 0.151M ARB unallocated. This raises the question: what is the remaining 0.214M ARB? There’s a report from R3gen Finance that is due in January, but other than that, there’s little insight into the financial data around Milestone 1.
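For reference, the arithmetic we’re working from, assuming the screenshot’s figures are all in millions of ARB: 1.9M − 1.535M (allocated but unpaid) − 0.151M (unallocated) = 0.214M ARB that we couldn’t account for.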

  • Deploy 300k ARB to Gitcoin Grants round recipients

Of the 300k ARB for Matching Pools, only 40k has been spent so far, which means this deliverable hasn’t concluded yet. Furthermore, if we also include the Questbook Support Round and Arbitrum Citizen Retro Funding, the total budget allocated to Gitcoin Grants rounds exceeds 500k ARB, significantly more than the initially planned amount.

  • Oversee grants rounds to ensure they are free from fraud or abuse

We understand that sybil protection is a difficult thing in crypto. However, when we went to check the ThankARB program, to which 350,000 ARB was allocated, it quickly became apparent that the vast majority of contributions came from bots. Why did we allocate such a significant amount (worth over $500,000 with the recent price increase of $ARB) to something that is difficult to protect from abuse? In contrast, Firestarters, which was arguably one of the most successful programs, was allocated 300,000 ARB.

Evaluate | Report back on grant funding outcomes

  • Publish financial & analytics reports on grant funding value, volume, outcomes and other relevant metrics requested by the community

We’re waiting for the financial report from R3gen Finance, but other than that, we haven’t seen a lot of information delivered on this front. Transparency and accountability are typically prerequisites for any grantee, let alone a grants program itself.

  • Share key learnings and grants program best practices with Arbitrum DAO and the wider web3 community

While there has been a plethora of insights shared with the DAO, we’re not sure any of it constitutes ‘key learnings’ or ‘grants program best practices’ with any actionable and material impact. We found most of the insights shared (e.g. in the GovMonth report) to be vague and not yet at a point where they can be used to inform the DAO’s decision-making.

  • Collate community feedback and input on grants programs efficacy and success
  • Evaluate, review and iterate based on this feedback to continually improve the overall impact of the Arbitrum DAO grants program

The last two items have not been delivered, in the sense that the community has not been asked to provide input on the programs, and therefore the programs have not been iterated based on that input. Furthermore, in the Milestone 1 review, PL stated that these deliverables will not be delivered at all, which is quite worrisome given that the DAO is being asked to fund a 10x follow-up in Milestone 2.

Our assessment of specific programs run by Plurality Labs

First of all, we need to note that the original proposal for Milestone 1 described a comprehensive process for sourcing and selecting program managers. Plurality Labs pitched the idea of validating different grant distribution mechanisms by running various grant programs in parallel and comparing their results to choose those that perform best. Looking into the results of Milestone 1, we believe that this has been delivered only to a very limited extent, with most of the actual grant programs starting only after Plurality Labs announced the completion of Milestone 1. The process of sourcing and selecting specific programs was barely communicated and is not documented.

The Milestone 1 review presented only the general assumptions behind each selected program, not the process behind the selection of those specific programs, and it did not present any funding details, outcomes or lessons learned from these programs. Moreover, in some cases, while we understand the overall value of the funded initiatives, we struggle to understand their relationship to the scope of the original proposal.

Below we present our own assessment and our point of view on each individual program in relation to the Milestone 1 goals, as we’ve come to understand them. We’ve divided them into three categories, according to our assessment of their overall alignment with the goals of Milestone 1 and their potential value in a grants framework.

1. Programs that we consider valuable experiments in terms of a future grants framework:

Questbook Support Rounds on Gitcoin

We find this program to be successful in the way it:

  1. Managed to distribute funds to builders, and
  2. Helped provide a more meaningful division of fund allocation between different domains for the Questbook program.

It’s interesting to note, though, that the final community allocation to projects in specific categories in the second round differed from the community’s original allocation by category. For example, “Developing Tooling on Nova” ranked 2nd in the category allocation but ranked last in the final allocation to specific projects. It’s also worth noting that the matching pool was significantly larger (6x higher) than the amount collected from the community.

What we found lacking is any kind of conclusion or lessons learned from this program, especially given that this program was completed quite a long time ago.

Allo on Arbitrum Hackathon

We found this initiative to be a valuable experiment in grants distribution. However, at the time of collecting the data for our review, the program was still in progress, so we weren’t able to assess its outcomes.

One thing we found confusing is that the program was officially communicated as sponsored by the Arbitrum Foundation and Allo Protocol. Moreover, we couldn’t properly match the funds allocated by PL to this program with the funds distributed in the hackathon, so it’s hard to assess its overall effectiveness.

Arbitrum’s “Biggest Small Grants Yet”

At the time of collecting the information for this assessment, this program had not started yet, so it’s hard to say anything concrete about the results. However, we find the overall structure of this program to be a valuable experiment that might provide insights for the future grants framework.

Arbitrum Co-Lab by RnDAO

Again, this program had just started while we were collecting the materials for the assessment, so we can’t tell much about its outcomes, but, similarly to the previous one, we find it promising within the scope of the Milestone 1 goals.

One thing we noticed, though, is that even though the incubated projects are funded by the ArbitrumDAO, it is RnDAO that gets equity in these projects through this program. We are not against this particular setup, but if this program is set to continue, we think it would be interesting to consider transferring some of the equity in accelerated projects to the ArbitrumDAO as well.

Grantships by DAO Masons

We can’t say much about this program as it is planned to start mid-February. However, from what we could learn, it seems likely to be a valuable addition to the lessons learnt within the scope of Milestone 1.

2. Programs that we consider valuable initiatives but whose connection to the scope of the original proposal is unclear

Firestarters

We see Firestarters as an answer to the DAO’s need for facilitation of workstreams that aim to solve specific problems. We believe it’s a very valuable initiative and we think many contributors perceive it as such. However, viewed against the goals of Milestone 1, it does not seem to be an organised program in itself. As far as we can tell, within this initiative Plurality Labs funds workstreams needed by the ArbitrumDAO based on their own judgement, not based on any organised framework.

Some of the remarks we had while reviewing this program:

  • There’s a lack of clarity in terms of what can and should be funded through a Firestarter grant. We couldn’t find any instructions on how to apply for one, or any criteria by which future grantees are selected.
  • We see that there is a big disparity between the amounts of grants, ranging from 8k ARB for the work on Arbitrum DAO Procurement Committee to almost 80k ARB for the Treasury Sustainability working group.
  • We couldn’t find any info on the scope of some Firestarters grants.
  • We think that the program overall lacks oversight of the grantees’ work and their deliverables. From what we can tell, these are rather “fire & forget” grants, which is understandable for an experiment, but not acceptable for long-term scaling.

Open Data Community (ODC) Intelligence

We couldn’t find any specific information on this program on the forum. From the review post, it seems that the outcomes of this program might be valuable, but it’s hard to say what they will be and how they add to the lessons learned for the grants framework.

Matching Match Pools on Gitcoin

While we find participation in Gitcoin matching pools valuable overall, it’s hard to frame this as an experiment in grants framework design - at least from what we can tell, these are just regular Gitcoin rounds that are well known in the crypto grants space, and there’s not much we can learn from this experiment in terms of grants framework design. On the forum, we didn’t find any lessons learned from this experiment, just the post announcing it 3 months ago.

3. Programs that we struggle to position within the scope of the original proposal

MEV (Miner Extractable Value) Research

While we understand the value that lies in MEV research, we don’t understand how a MEV-related conference fits into grants framework experiments, especially at an amount that represents more than 10% of the whole budget for the program. Moreover, while MEV research is important in general, its implications for Arbitrum as a rollup are quite specific in nature. Therefore, we feel that funding this conference from the ArbitrumDAO should be approved by the DAO itself, and we should come up with some expectations regarding the topics covered, so that the implications are relevant not just to the crypto industry in general but also specifically to Arbitrum.

Arbitrum Citizen Retro Funding

While we find rewarding active DAO participants important and valuable, we struggle to understand how this experiment, in the form and structure in which it was executed, fits into grants framework development. The eligibility criteria seemed quite arbitrary and we found the final results quite mixed - rewarding valuable contributors, rewarding some random projects, and not rewarding some contributors that definitely had a huge impact on the DAO.

Thank ARB (Sense and Respond)

This is probably the most problematic program for us. The results seem somewhat random, with a few hundred participants in some campaigns and tens of thousands in one of them. When looking at the contributor lists, we get a strong feeling that they are composed mostly of bots and farmers. We tried finding any contributor we know from the DAO on the list, but the only familiar names were those of Plurality Labs or Thrivecoin team members.

We also got the impression that the program was abandoned after the first few iterations (which was later more or less confirmed in discussions with Plurality Labs), but this hasn’t been communicated, and in all the review materials the whole budget is still described as allocated to this program.

We couldn’t find any lessons learned or outcomes from this program, and the original introductory post on the forum has not been updated.

Keeping in mind that a big portion of the overall budget was allocated to this program (350K ARB) and that it serves as a backbone of Milestone 2, we believe that there is a need for further discussion of the program’s outcome, or at least a publication of the lessons learned.

Plurality GovBoost

This program seems to be just a (from our perspective, random) set of open data platforms. While we do not deny that there is significant value in having those platforms and in having open data, we do not fully understand the rationale behind choosing these particular platforms or how they relate to this particular grant program (the biggest grant here relates to STIP, which is a separate proposal).

Conclusion

The original scope of Milestone 1 was to discover, design, execute and evaluate a best-in-class grants framework for the Arbitrum DAO. Based on our assessment above, we believe Milestone 1 has so far failed to deliver on its original scope, and the programs expected to finish in the following weeks are not in a position to change this view regardless of whether or not they’re successful.

Although we believe there were valuable experiments during the course of Milestone 1, we do not believe these justify a vote of confidence in Plurality Labs’ plan to move forward with a proposal for Milestone 2. If anything, we should take a step back and collectively reassess what the mandate for creating a grants framework means. We should seek to double down on initiatives that have proven to be successful in allocating capital to the right contributors who are able to deliver value to the DAO in an impactful way.

Also, going through the process of reviewing the progress of Milestone 1 has made it clear to us that we need to do a better job as a DAO to incorporate accountability, transparency and reporting practices within proposals, and not have them as after-the-fact add-ons. We found the lack of data disturbing, especially given the amounts involved.

In their Milestone 1 Review, Plurality Labs actually shared 3 dimensions with which delegates could evaluate Milestone 1. These dimensions are captured through 3 questions delegates can ask themselves while assessing the impact Milestone 1 had. We tried answering these questions based on all of the information above.

  • Did they [Plurality Labs] do what they said they would do in Milestone 1?

Even if some deliverables were completed, we believe the ‘headline’ of Milestone 1 was to deliver a best-in-class grants framework. As we explained throughout this post, we don’t believe that PL delivered the grants framework, so we can’t say that they did what they said they would in Milestone 1.

  • Did grant funding go to worthwhile programs & projects?

The response to this question is debatable. There were both worthwhile and questionable programs and projects funded, with the latter unfortunately being the majority.

  • Did they [Plurality Labs] prioritise experimentation & learning for the future?

While we can certainly say that experimentation was prioritized, we cannot confidently say that it was done in a way that equipped us with learnings for the future as a lot of the supposed learnings were not documented, or were not made publicly available, as mentioned above.

Milestone 2 Proposal

We will be commenting on Milestone 2 under its respective thread, but we want to take this opportunity to explain that we see Milestone 2 as an extension of Milestone 1. With all the points of concern raised through our assessment of Milestone 1, we do not feel comfortable voting for Milestone 2 to be funded with any amount. We’d much rather invite Plurality Labs to go back to the drawing board with the DAO and figure out how we can best leverage their experience and learnings so far to continue working on Milestone 1.

Invitation to Discussion

There’s a call being hosted to discuss Plurality Labs’ Milestone 2 proposal on Wednesday 31st of January at 4:30 pm UTC. We’d also like to invite all delegates and Arbitrum DAO participants to discuss Milestone 1 and our review during our Arbitrum Office Hours on Thursday 1st of February at 4 pm UTC.
