Following up on my earlier post, which raised concerns about the proposed program, I have looked into the past performance of Questbook's DDA program and would like to share my findings with the community.
I’d like to begin by expressing my gratitude to @Saurabh and the Questbook team for supplying me with data and being willing to discuss the details. I also wish to clarify that my reason for delving into this program’s particulars is solely to foster potential improvements through dialogue. I harbor no ill will or hostility towards the Questbook team. In fact, I desire to see them administer this program to the highest possible standard.
I strongly believe that the opportunity cost of not running any grants program is higher than the actual cost of this program, so I genuinely want to vote in favor of this proposal. However, I have some concerns, which I am raising in this post so that we can address them early on and make the program more likely to succeed. I also raise them to show that we may not want to rely solely on this grant framework; this post should not be read as an argument for dismissing other grant program proposals submitted in the forum in the future.
I undertook an analysis of DDA’s performance across different ecosystems. Questbook highlighted two programs that they have been running, specifically in Polygon and Compound. From my conversations with Saurabh, it was evident that Polygon was only utilizing Questbook’s platform, without implementing DDA as a structured program. There is no available information on the efficacy of Polygon’s program, and it is no longer available for review on the Questbook platform.
That leaves us with Compound's program. Fortunately, its structure is very similar to the one proposed for Arbitrum, which allows us to draw some inferences.
Here is the link to the proposal introducing CGP 2.0: Compound | Proposal Detail #136
We can note that the structure mirrors the proposal for Arbitrum:
- Duration of 6 months, budget of $1M.
- $200k in management fees, divided as follows: Program Manager $60k; Domain Allocators $30k each (though one Domain Allocator, who happened to be Questbook's CEO, worked pro bono, so the actual distribution remains uncertain); $20k for miscellaneous expenses (presumably KYC and operational costs).
- $800k to be divided among 4 Domain Allocators.
- The $800k was allocated as follows:
- $410k for Compound DApps and protocol ideas.
- $179k for Compound Multichain & Cross-chain.
- $83k for Compound III Dev Tooling.
- $212k for Compound III Security Tooling.
- Each Domain Allocator published their unique vision for their domain and their assessment methodology for proposals.
There were a few differences from the Arbitrum proposal, though:
- Domain Allocators were appointed by Questbook, not by the DAO.
- A specific KYC procedure and operator were mentioned in the proposal.
The program concluded recently (it was a six-month program, approved and started last December), thus enabling an analysis of the results. Of the planned $800k, only $255k has been paid out so far, as per data from Saurabh (this information is not available on the site), and a total of $451k was allocated throughout the program with the following breakdown:
- Protocol ideas and DApps: $247K out of budgeted $410k (with $163k paid out).
- Multichain Strategy: $82K out of budgeted $179k (with $50k paid out).
- Dev Tooling by Madhavan: $22K out of budgeted $83k (with $16k paid out).
- Security tooling: $100K out of budgeted $212k (with $26k paid out).
It is worth mentioning that the program only received 75 proposals in total across all four domains (Saurabh’s data indicates this, although the website lists a higher number, 117 proposals, possibly due to duplicate counting). Of these 75 proposals, 29 were accepted and funded (again, as per Saurabh, although the website cites a higher number of 48 accepted proposals). Around 80% of these applications were submitted to Protocol ideas and DApps, with the remaining 20% spread amongst the other Domains.
Having reviewed several proposals, I question how Questbook arrives at a Total Turnaround Time (TAT) of less than 48 hours, as some applications suggest a TAT closer to a month. That said, I believe a TAT of 2-3 weeks is more than sufficient for both the proposer and the reviewer to ensure that a proposal merits funding.
Before we consent to fund the Arbitrum program, certain aspects may require clarification:
- I would appreciate a summary from the Questbook team of the lessons learned from the program. As they have the most intimate knowledge of it, it is crucial that we understand key points such as: what was funded and what was rejected; the minimum, average, median, and maximum grant amounts; how funded projects have performed; why some projects haven't performed as expected; why the budget wasn't fully distributed; and what measures were taken to increase the number of applications.
- I recognize that CGP faced challenges both with building an applications pipeline and with monitoring/managing the delivery of proposal outcomes. I want to understand Questbook's strategy for mitigating these issues in the Arbitrum program.
- Since the Domain Allocators' tenure is theoretically over, who will oversee the projects to ensure they meet their milestones in the Arbitrum program?
- I question the use of review metrics in the process, as many projects lack this assessment (especially those that were not funded). How are these scores factored into the grant approval process? How will they be used in the Arbitrum program?
- The method of calculating payouts is unclear: Questbook's platform counts everything in USD, but the program itself pays out in COMP (similar to how the Arbitrum program is intended to pay out in ARB). How is the conversion performed? How is price fluctuation accounted for?
- What happens with funds in the multisig wallet? Presumably, any unallocated tokens will revert to the treasury, but what about tokens that were allocated (grant accepted) but not distributed (milestones unfinished)? How will this work in the Arbitrum program?
- In the "Domain Allocator Roles & Responsibilities" section, a 'Grants program' is mentioned, alongside certain key responsibilities. Who is specifically accountable for these?
- The "What does success look like?" section contains items that I find confusing. For example, "Increase in the homegrown leadership to run grant programs (measured by the number of people running grant programs)": how is this related to the grant program, and how can it be a measure of its success? The other metrics also appear ambiguous and more subjective than objective.
- Why isn't legal compliance included in the proposal? It was explicitly mentioned in the Compound proposal, with specific contractors (BLG) chosen for this purpose, but there is no mention of legal coverage in the Arbitrum proposal.
In conclusion:
- I believe that the DDA program in Compound did not perform satisfactorily. Although the aim is not to fund indiscriminately, the ultimate goal is to distribute all funds and support many promising projects. If there is a risk that not all funds will be distributed, I would expect the team to take action.
- Consequently, the cost (management fees) of this program in Compound works out to somewhere between roughly 45% and 80% of the distributed funds: about 78% of what has been paid out so far, and about 44% if every allocated grant is eventually paid in full.
- It is not clear who will monitor the program's effectiveness and collaborate with the Program Manager to address any issues that might arise. The proposal mentions that the Program Manager will work "with the Arbitrum team", but it's not clear whether that means the Foundation (and whether the Foundation is willing to do this) or the Arbitrum DAO (in which case I'm a little concerned about diffusion of responsibility).
- Although I believe it is worthwhile to test this program, we should not rely solely on it, but instead pursue several different grant program approaches.
- This grant framework does not seem to fit recent proposals posted on the forum (such as Camelot, Rari Foundation, etc.), suggesting we need alternative solutions as well.
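As a rough check on the fee-ratio point in the conclusion, here is one way to bound it from the figures reported earlier in this post ($200k in fees, $255k paid out, $451k allocated). The exact bounds would shift if, for example, the pro bono allocator's fee is excluded:

```python
# Management fees relative to distributed funds, per the figures in this post (USD).
fees = 200_000          # management-fee budget from the Compound proposal
paid_so_far = 255_000   # distributed so far, per Saurabh's data
allocated = 451_000     # committed across all four domains

# Upper bound: nothing beyond the current payouts is ever distributed.
print(f"fees / paid so far: {fees / paid_so_far:.0%}")  # ~78%
# Lower bound: every allocated grant is eventually paid in full.
print(f"fees / allocated:   {fees / allocated:.0%}")    # ~44%
```

Wherever the final payout lands between $255k and $451k, the program's overhead ends up somewhere between these two figures, which is unusually high for a grants program of this size.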