Thanks for raising these concerns.
We could frame this decision as one of assessing the project. However, there are two reasons why that framing is problematic. I’ll unpack them below and then aim to answer your questions:
- The DAO being unable to pay service providers what was agreed sets a terrible precedent, and the PR impact is real. Already, we (RnDAO) are abstaining from making any additional funded proposals to the DAO until a better system is in place, as having to make this top-up proposal is a huge overhead that we didn’t budget for, so it’s affecting us a lot. Other service providers take notice, and this deters them from working with Arbitrum. Showing that the DAO can manage basic operations is critical, and approving this proposal demonstrates that there is some continuity in DAO decisions (the program was already approved by the DAO), rather than being at the mercy of the market and changes in sentiment.
- The program is halfway through: essentially, the chef has just added some butter to the pan and is about to put the steak in, but we’re still far from the customer being able to taste the steak and tell us whether it’s good. As such, there are no ‘results’ yet. The projects are deep in market research, and in Phase 2 they will build an MVP based on that research. Only after they have built and market-tested the MVP will we have solid data to assess the projects (i.e. we can only assess after Phase 2).
As per the original proposal, out of the 4 projects that started Phase 1, 2 will be selected for Phase 2. You can see more details about them here: Hackathon Continuation Program (HCP) - updates
I believe this question comes from a misunderstanding of the program: we’re talking about idea-stage projects that have contracted equity/tokens to the Arbitrum Foundation via a SAFE+Token Warrant. Arbitrum would fail to deliver on the totality of this contract if the program stops. The SAFE+TW can only be valuable if the projects build a product (Phase 2) and gain traction (after Phase 2), so stopping the program halfway through leads to a net loss of the investment already deployed.
The continuation program is a critical experiment for Arbitrum in doing investments instead of grants. In addition, the projects are likely to generate transaction fees, etc. But the big value is testing investments as a methodology for ecosystem development: investments could solve the critical sustainability/ROI challenge that Arbitrum faces with grants.
Having the program completed is the key result. As an outcome, the projects will be able to build an MVP and market-test it. The impact of that is that the projects are able to fundraise*.
If the projects can fundraise, we have proven that the program works and can put a value on Arbitrum’s SAFE contracts, which lets us run a full ROI calculation on the exercise.
It will likely take another 6-9 months from now before we know whether the projects succeed in fundraising (they first need to complete the program).
*Note: in some cases, the projects might generate revenue and hence not need to fundraise. This is also a desirable outcome, and one we can quantify to assess the program’s success.
The way we suggest assessing success here is in terms of operational execution and the viability of this type of program, much like startups first validate demand at the Seed stage and only validate unit economics at Series A.
Assessing ROI for venture investments requires a larger-scale experiment (new managers often raise $5-50mn for a first fund before performance can be assessed and LPs decide whether to invest in Fund 2; this can take 3-4 years). Such large experiments are required because startups are a power-law game (most fail, but the ones that succeed can deliver 1000x) and early-stage companies take years to mature. So it’s impossible to know, in a month and a half and with only 4 ventures, what the ROI will be. What we do know is whether all the components of the program are working (talent funnel, incubation methodology, etc.), so we’re able to validate the program operationally but not yet its outcomes.
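To illustrate the point about sample size (this is a toy sketch with hypothetical numbers, not the program’s actual methodology or outcome distribution), a quick simulation of a power-law-style portfolio shows why a 4-project cohort tells you almost nothing about expected ROI, while a fund-scale cohort does:

```python
import random

# Toy Monte Carlo: assumed (hypothetical) outcome distribution where ~90% of
# ventures return 0x, ~9% return a modest 3x, and ~1% return 100x.
def portfolio_multiple(n_ventures: int) -> float:
    """Average return multiple across a cohort of n_ventures."""
    outcomes = []
    for _ in range(n_ventures):
        r = random.random()
        if r < 0.90:
            outcomes.append(0.0)    # failure
        elif r < 0.99:
            outcomes.append(3.0)    # modest outcome
        else:
            outcomes.append(100.0)  # outlier that drives fund-level returns
    return sum(outcomes) / n_ventures

random.seed(7)
small = [portfolio_multiple(4) for _ in range(10_000)]    # a 4-project cohort
large = [portfolio_multiple(200) for _ in range(10_000)]  # a fund-scale cohort

# With only 4 ventures, most simulated cohorts return 0x even though the
# long-run expectation is positive; with 200 ventures the average stabilises.
print(f"4 ventures:   mean={sum(small)/len(small):.2f}x, "
      f"cohorts returning 0x={sum(s == 0 for s in small)/len(small):.0%}")
print(f"200 ventures: mean={sum(large)/len(large):.2f}x, "
      f"cohorts returning 0x={sum(s == 0 for s in large)/len(large):.0%}")
```

Under these assumed numbers, roughly two-thirds of 4-venture cohorts return nothing at all, even though the expected multiple per venture is positive; hence the need to judge the program operationally at this stage rather than on realised returns.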
Traction signals from the startups (fundraising, revenue, or significant user growth) are useful interim metrics. The program has been designed to take this into account and present them to the DAO (that’s the checkpoint after Phase 2).
I hope the above clarifies why we believe it’s essential that the program is continued: only then will we actually know whether the experiment succeeded. Stopping it halfway through would mean we simply don’t know.