TL;DR Response
- It generalizes opinions that we don’t think the DAO broadly shares, then treats those opinions as fact throughout the document. They cited the Treasury & Sustainability working group as a failed grant. It was both an experiment and, in our view, a successful grant. I was at Gitcoin when the treasury wasn’t diversified as GTC fell from $15 to $3 before action was taken. With this grant we got something moving: there are three key research artifacts, a proposal, and next steps.
- The entire review is a cost analysis. There is no benefit analysis. Example: they don’t address that STIP likely wouldn’t have happened without us, or the value of the Open Block Labs grant.
- The review makes many assumptions, from treating the deliverables we originally described as likely work as boxes to be checked, to assuming we should have no flexibility to adjust deliverables whose importance, we determined, changed during the campaign.
- They review the experimental programs as though success or failure were the only outcomes. In experimentation there is nuance: some failures are successes because of what is learned. They claim there were no learnings from some of the programs, yet there is quite literally a database, in the comment above the one they posted, with a “Lessons Learned” section for each program!
- Much of this review critiques the update we posted in December. In some places they add a sentence saying “they have since updated this” without re-evaluating what was actually done. In most places the information is simply factually incorrect.
- They don’t seem to understand the core value proposition of our proposal: delivering capture-resistance over three milestones by experimenting with the governance of grant programs and how they operate. The misunderstanding can be seen here:
How are we supposed to deliver capture-resistant grants governance without experimenting with the operations that support the governance which decides how to deliver resources?
We are currently the only program with funding available for DAO operations. Please read my companion piece to the proposal, which assesses this need and how we are directly addressing it.
Funding governance operations experiments is not a pivot; it is the primary deliverable across three milestones.
Overview Issues
This was mostly because the deliverable date for building the database that consolidates this information was Jan 31st. We did make it available upon your request, and later in this document you acknowledge this.
We did take longer to get some aspects of the proposal moving. Compliance alignment was difficult to figure out, and we didn’t get payments out to grantees until late November. Therefore some programs started later than intended and will run past the milestone 1 end date.
To learn quickly, milestone 1 experimentation was designed to be decided by Plurality Labs. We would then assess the programs and how each worked in order to craft specific experiments around the needs of solving capture-resistance. The programs were selected using our professional discretion.
We needed an initial set of programs to develop the assessment framework. This is a principle of iterative design. We didn’t want to create a framework first and then be constrained by an arbitrary set of self-imposed limitations.
While the expectation was that a greenfield team could run the framework after 6 months, the reality is that we are designing the first-ever pluralist program. We’ve learned about considerations we didn’t know about at the beginning. Any entrepreneur will recognize this - we learn as we go. We sense and respond. In this review, our sensing and responding to the needs of the DAO is treated as a bad thing.
imho - there is no point in designing capture-resistant governance that doesn’t sense and respond to the actual needs of the DAO.
That said, we didn’t deliver a framework that can be handed off, but this is directly addressed in the milestone 2 proposal.
This was posted in December. We’ve discussed with you multiple times that we posted a review with what we had at the time in order to start the discussion. Multiple parts of your review refer back to this point, which is no longer true.
Deliverable Issues
As we explained, and can share the data for, bots submitting items on Jokerace does not mean they got paid OR that the data from their submissions was used. In the GovMonth report, we discussed the methodology used to remove these.
As for the farmers on Thank ARB: there is a big difference between “farmers” and sybils. We conducted a thorough review with TrustaLabs to remove sybils from the allowlist. This means they didn’t get paid, though not all applications are gated.
Now, for the engagement farmers: if it is one human with one account, who are we to say they are illegitimate? Here is an article about the methodology TrustaLabs used to identify 96,000 sybils that got past the sybil detection efforts done for the airdrop. Here is an older article showing the nuance between farmers and sybils.
We hired TrustaLabs to repeat this review work in August and created the allowlist for GovMonth with sybils removed and only ARB holders eligible.
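For anyone wondering what that filtering step amounts to, here is a minimal sketch. It is purely illustrative - the input sets and function name are hypothetical, not the actual TrustaLabs or GovMonth tooling - but it shows the shape of the process: remove flagged sybils, then keep only ARB holders.

```python
# Illustrative sketch only. The input sets are hypothetical; the real allowlist
# was produced from TrustaLabs' sybil flags and an ARB holder snapshot.

def build_allowlist(participants: set[str],
                    flagged_sybils: set[str],
                    arb_holders: set[str]) -> set[str]:
    """Keep addresses that are not flagged as sybils and that hold ARB."""
    return (participants - flagged_sybils) & arb_holders

# Example with placeholder addresses
participants = {"0xaaa", "0xbbb", "0xccc"}
flagged_sybils = {"0xbbb"}
arb_holders = {"0xaaa", "0xbbb"}
print(build_allowlist(participants, flagged_sybils, arb_holders))  # {'0xaaa'}
```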
We acknowledged the fact that many delegates and active contributors didn’t participate as a failure on our part. We also tried to explain that this is an iterative process: since the findings are high level, we wouldn’t need to redo them - we would continue to sense and respond through a variety of methods, including the Tuesday workshops and progress from IRL events like Istanbul.
We disagree that the findings need to be ratified by the whole DAO. In a pluralist model we can use these strategies to guide our program without ratification by the entire DAO. Like all the governance mechanisms we are discovering and using, if they work well enough the DAO will opt in to their usage.
We understand if you think these don’t count because they weren’t ratified. We do think they count.
We ran programs that we thought would be interesting, using professional discretion. Afterwards, we learn which things are truly successful. This is innovation work.
There are clear runbook guides available on the Gitcoin website. What our framework must figure out is which “settings” to use in which situation, based on how they complement each other. Running one experiment and delivering end-to-end how-to guides is unrealistic.
We agree. While we did have regular Twitter Spaces with the Foundation, tweet threads, Tuesday workshops, all of our programs posting updates on the forum, and our own updates on the forum - we did not find a method that achieved the outcome of delegates being informed. We gave ourselves a red on this deliverable and consider it a challenge for milestone 2.
We agree this is not yet in the desired documentation format. We have hired a team member to own the public-facing organization of this research.
Here we go:
What deliverable should there be other than a Mural board?
We realize this happens at a program level. We are observing and documenting what works. It turns out this is an ongoing task at the program provider level.
At the pluralist framework level, we realized this is infrastructure: a database being jointly designed with the Foundation. We shared this work with Krystof. There is a single link to “pre-apply” for grants, which routes applications to the applicable programs. It automates some of the compliance comms. It also provides community dashboards and a database for review.
We picked using our judgment this time. Once we see what works and what doesn’t, we can assess what criteria to use.
For the last 4 weeks we’ve been onboarding grantees onto Karma GAP. We also funded Open Source Observer for reliable data which is now integrated with Karma.
For the pluralist framework, we are interested in decentralized review. Capture-resistance depends on removing single points of failure.
This is another clear example of the review missing the forest for the trees. If you review us as a department in a corporation or a government entity, this review will seem like quality work. However, there is a reason those organizations can’t access innovation. If you evaluate our work like a startup innovating on highly valuable solutions, we think it is hard to argue we shouldn’t follow up with a seed investment.
This was a bad deliverable to include, and we learned that during the course of milestone 1. If we run an experiment, how can we know best practices before we clearly define the next stage with a hypothesis? A single instance cannot establish best practices. Once enough programs finish, we can begin documenting the consistencies across programs.
We are paying out a lot every day now. Would this explain it?
Yes, the matching program which brought the American Cancer Society’s first round to Arbitrum, along with MetaGov and TokenEngineering Commons, is still going! We couldn’t start it until December, and Gitcoin has postponed their GG20 round until April.
Including the Questbook support rounds and citizen retrofunding here is disingenuous. Gitcoin has a plurality of mechanisms available now. Using their ready-made smart contracts to deploy funds to Arbitrum community members is a totally different thing, especially since no fees were paid.
Again, this doesn’t account for the discussions we have had addressing this. We did run sybil protection, and you are mistaking many people airdrop farming for “bots” or bad actors. Some may get through, but I’ve also had people thank me for the campaigns at IRL events.
This says “with the recent price increase in ARB” - it was budgeted before the recent price increase in ARB. We’ve only spent 87k of it so far with 100k earmarked for grant reviews.
The new Thrive Protocol addresses this problem with human validations. While still at an MVP stage, we do think we can get meaningful crowd validations and only pay out the users who put in the effort to be aligned with the crowd. This is our problem to solve!
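To make “pay only the users aligned with the crowd” concrete, here is a minimal sketch of one way such a check could work. This is our illustration under stated assumptions, not Thrive Protocol’s actual implementation; the function names and the 70% agreement threshold are hypothetical.

```python
# Minimal sketch of crowd-aligned payouts. Names and threshold are hypothetical;
# this is not Thrive Protocol's actual logic.
from collections import Counter

def aligned_validators(votes: dict[str, dict[str, str]], threshold: float = 0.7) -> list[str]:
    """votes maps validator -> {item_id: label}. A validator qualifies for payout
    only if their labels match the per-item majority at least `threshold` of the time."""
    # Tally labels per item to find the crowd's majority answer
    tallies: dict[str, Counter] = {}
    for labels in votes.values():
        for item, label in labels.items():
            tallies.setdefault(item, Counter())[label] += 1
    majority = {item: counts.most_common(1)[0][0] for item, counts in tallies.items()}

    # Pay only validators who agree with the majority often enough
    paid = []
    for validator, labels in votes.items():
        agree = sum(1 for item, label in labels.items() if majority[item] == label)
        if labels and agree / len(labels) >= threshold:
            paid.append(validator)
    return paid

# Example: two validators agree with the crowd, one does not
votes = {
    "alice": {"grant1": "effort", "grant2": "low-effort"},
    "bob":   {"grant1": "effort", "grant2": "low-effort"},
    "carol": {"grant1": "low-effort", "grant2": "effort"},
}
print(aligned_validators(votes))  # ['alice', 'bob']
```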
The information we have, including finances, is in the comment literally above yours. Yes, R3gen is contracted to do a full DAO financials report. We will receive the one-year report of financials to date on the DAO’s birthday (in March), and monthly reports thereafter.
There is quite literally a lessons-learned field in the review this is responding to, and in the database with all the grant information.
We did conduct community reviews of what people thought about the selection of grant programs, and that report will be out soon.
We realized that understanding the success of the grant programs we funded depends on understanding the success of the grants each program funded. While the decentralized review is underway, this dependent step cannot be started. It is a different deliverable, which we would argue makes sense.
Program Assessment Review Issues
2/12 programs will have started during January or later. I don’t believe this constitutes “most”.
Because of the delays with compliance, many programs which we had hoped would end before our milestone now extend beyond it.
AGAIN, this is assessing the post we put up in December!
We put this up in an attempt to begin communicating what we had, knowing full well that our milestone wasn’t done and there should have been NO expectation of complete information. It was designed to be a start. All the info you mention is in the latest update.
Here is a review of the event which ended last week. Allo / Arbitrum Hackaton Hosted by BuidlBox, Allo & Arbitrum - 👋 News and Community - Gitcoin Governance
Here is the week 2 recap
Of course there was. We were defining it as we went. We have quite clearly stated that this was a response to DAO needs. In our milestone 2 proposal, we discuss needing to build accountability structures and pathways for success. We also discuss creating a role that has the power to initiate these grants.
We were figuring it out as we went. Yes, we need to add structure now.
Exactly. It’s an experiment, and we clearly discuss next steps in the milestone 2 proposal.
This was an explicit part of the milestone 1 proposal. Gitcoin expedited deployment and integration of their protocol on Arbitrum.
You won’t understand how we can improve our conversations without watching this. It is based on a paper by Puja, Vitalik, and Glen Weyl. It is the next step in protecting the information frontier.
MEV constitutes an existential crisis for Ethereum. Arbitrum is scaling Ethereum. Not to mention there are potential revenue opportunities.
We will not be able to find capture-resistance without being able to support these types of ideas from the best minds in the space.
The other programs
It seems like the criticism is undefined. Something like: “I have preset assumptions about the scope of your program which everyone else should consider.”
Please point out a better pluralist program. It’s been 6 months since approval and 2 months that we have been able to fund things. Remember that the DAO, partly driven by you, negotiated our milestone 1 fee down from 750k to 336k. Had we been properly staffed, don’t you think we would have buttoned up many of the issues you raise?
There seems to be zero appreciation for any of the value we did add. I’d advise others in the DAO to think critically about the effect we have had on the DAO. The opinion that the majority of grants were unworthy comes from a source which thinks the Treasury & Sustainability and Open Block Labs grants were unworthy. If you think these were worthy, then you would likely disagree with this assessment overall.
Additionally, the point of experimentation is that some experiments will not work out. This critique is missing any benefit analysis and focuses only on cost.
There is literally a database with every program and grant that has a section about what we learned!
It looks like this:
Then you click onto a program:
Then scroll down and there is LITERALLY a “lessons learned” section
Then you can see ALL the grants in that round.
Conclusion
L2Beat shared this information with us weeks ago. We asked them to share it publicly so we could address it then. For some reason, it was held until today.
I appreciate L2Beat. They dig in and find things no one else would. They’re like a coach who makes us all better and pushes us to perform at the highest standard. We will look back and reminisce about how real they were.
However, I see problems here that delegates must take into account.
- The amount of old info which doesn’t consider the work we’ve done in the past month.
- The clear misunderstanding about the purpose and intention of our work overall.
- The unrealistic expectation that we stick to deliverables stated 6 months ago despite changing circumstances, even when a deliverable was replaced or its change is easily explained.
- The lack of empathy for the role of a founder pushing new innovative solutions to actual problems.
These considerations suggest that we need to take L2Beat’s criticism to heart, but we should not stop this critical work or cut funding to people committed to Arbitrum who are going above and beyond to deliver a FIRST-in-class pluralist grants program. Of course there will be road bumps along the way. To expect there wouldn’t be, especially after negotiating the rate to less than half of what we initially suggested it would take, is simply unrealistic.