As a member of the Supervisory Council, Entropy Advisors worked closely with the Arbitrum Foundation and OCL to review service provider deliverables, help scope report ideas, and, most importantly, ensure that the work delivered aligned with the DAO’s current needs and was actionable. Members of our team were involved with the ARDC V1 and sought to avoid a repeat of the situation where significant resources were spent on research that ultimately lacked tangible next steps and/or received little attention from the wider DAO. To increase the visibility of the ARDC V2’s research, we have also collaborated closely with the comms lead, @JuanRah, to promote reports both internally, through bi-weekly DAO calls and forum updates, and externally, mainly via threads and Spaces on X.
From Entropy’s perspective, the quality of delivered reports varied widely, but the strongest outputs all shared the following traits:
- Emphasis on detailed scoping: A vast amount of time was spent not just on writing the reports, but on designing exactly what should be researched and, most importantly, whether the resulting work would deliver actionable recommendations.
- Early and regular screenings of reports: Entropy, along with the Arbitrum Foundation, actively reviewed research ideas & assisted in the outlining process. Weekly check-ins with the Service Providers also ensured that the Supervisory Council could provide feedback earlier and redirect work if necessary.
- Input from Arbitrum entities that have deep context: Creating relevant research requires a deep understanding of the current state of Arbitrum. As the ARDC V2’s term progressed, we found that the Service Providers made progress in acquiring sufficient context, but the most valuable work came after several rounds of feedback from relevant stakeholders.
The tight scoping process has been the main driver of ARDC V2’s budget efficiency. Many high-level ideas and topics appear worthwhile to research at first glance, but after digging deeper and scoping out concrete deliverables, a large share lead to dead ends. Examples include topics lacking sufficient onchain data, or ideas so general that existing research already covers them extensively and any Arbitrum-tailored takeaways are unlikely to surface. Rather than simply funding ideas that look good on paper, Entropy made a concerted effort to diligently screen potential deliverables and filter out research topics that were unlikely to deliver tangible value, avoiding a repeat of the ARDC V1 situation.

High-quality research is not cheap, even when it is fairly priced. To be clear, we do not feel the DAO has been overcharged, and the fees that service providers invoiced for reports were generally fair. However, to further illustrate why reports should be accompanied by tangible next steps to justify the expense, below are the costs of each report produced by the ARDC V2:
- Analysis of the Arbitrum DAO’s grants program (Questbook, DDA): $24,600
- Security Council Enhancement: $37,200
- Sustainability of Sequencer Revenue: $34,290
- Governance Attacks Part 1 & 2: $26,040
- Incentive Program Analysis: $54,350
- Analysis of Vote Buying Research: $16,200
- Suggestions To Improve the DAO’s Technical Decision-Making Process: $21,300
- Arbitrum Ecosystem Mapping: Estimated at $35,900
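For reference, these eight reports total approximately $249,880, with the ecosystem mapping figure still an estimate.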
In our opinion, the way the ARDC model currently functions faces two main frictions:
- Fixed-Term Research Commitments: With a specific number of service providers locked in for a set term and budget, there is some pressure to “create work” just to justify the pre-allocated funding. Moreover, the DAO can’t assign providers or price reports based on demand, the perceived importance of the research, or the competitive fit of individual researchers.
- Lack of Defined Expectations: With the research request process being so open-ended, requests often aren’t accompanied by a clear plan for who will turn potential research findings into deliverables, or how. Given the high cost of quality research, reports that produce no tangible recommendations, or that have no party attached to own the resulting initiative, are in our opinion an inefficient use of the DAO’s time & resources.
When we first contemplated the future of the ARDC a few months into the initial 6-month term, our team felt that there was value in having researchers available on demand, and that with the introduction of OpCo, a demand-based program could be internalized by that entity. In such a structure, delegates could make requests directly to OpCo, which would then draw on a vetted pool of service providers to facilitate the research.
However, in recent months, our thinking has evolved with the introduction of the new AAE model. As the DRIP and Treasury Management Consolidation proposals illustrate, it is the responsibility of each AAE to handle service provider engagements out of its requested budget. In a model where the needs and mechanisms of the DAO’s interactions with service providers are changing, we believe the entire concept of how research functions in the DAO should be reevaluated.
Entropy will be hosting a community call on Wednesday, June 18th, at 3 pm UTC to share our thoughts in more detail and allow for community input.
In closing, we’d like to state that overall, we’re happy with the output of ARDC V2 and believe the program meaningfully improved on its predecessor. However, given the frictions laid out above, we think it makes sense for the DAO to take some time to reevaluate its stance on the initiative before voting on the six-month extension. We’d like to thank the service providers for their contributions and the time they invested in creating meaningful research for the DAO.