Hackathon Continuation Program

Thanks @danielo for a very clear answer, so I voted “For” on Tally too.

1 Like

I’m curious, why is that? Do you also believe any proposal to support builders leads to the same outcomes? And if not, what about this proposal makes you think that?

The funds are not transferred all at once but bit by bit based on milestones, so this is a non-issue.

With every activity we do, we’re learning more about how to support builders in Arbitrum and develop sustainable ventures, so we keep refining and iterating on what we propose. As such, we’re unlikely to propose the same thing again, but we’re likely to propose something after this one.

Typically 9 out of 10 startups fail, and here we’re only funding 2, which is far too small a sample to judge ROI properly. We’re hoping we’ll get good ROI, but to really know we’d need at least another 8 projects funded, likely more (typical funds can invest in 50+). The objective of the pilot is primarily to validate execution and accumulate learnings. We can validate a lot about the methodology and use that to inform future iterations and other programs too.

So here’s how we’re thinking about it:

Framework (borrowing from @DisruptionJoe):

  1. Output - 1 to 6 months (program execution; the core of what can be evaluated with this pilot)
  2. Outcome - 6 to 12 months (short-term results; statistically insignificant, so to be taken with a pinch of salt)
  3. Impact - 1 to 3 years (medium-term results; statistically insignificant and with more variables outside our control, so to be taken with a big pinch of salt)

Output:

  • Number of ventures that complete program phases
  • Ventures achieving milestones and traditional traction metrics (user research, then product launched, first users, first revenue, etc.)

Outcome:

  • Survival and traction (fundraising, revenue, user growth) of ventures post-program

Impact:

  • Primarily, valuation of the portfolio of ventures
  • Additionally, other forms of value generation for Arbitrum over the medium term:
    • protocol fees
    • new users attracted to Arbitrum/onchain
    • services provided (i.e. facilitating Ops & Governance) to Arbitrum projects/DAO
3 Likes

@danielo any news on this program? Has Phase 1 already started, and are the founders already receiving the $2k a month as of January? If so, what teams are actually participating in this Hackathon Continuation Program?

1 Like

No. We haven’t received the funds yet.
Negotiations with the AF about the Term Sheet are ongoing, and due to delays in swapping the tokens there are some hurdles to work through.

We’re now projecting a start around mid-February, but this depends in good part on the AF, so I can’t speak for them. We’ll announce when ready.

1 Like

@danielo any news on this program? Two months have passed since the onchain proposal was executed.

2 Likes

We have successfully negotiated the Term Sheet with the Arb Foundation, have selected the projects, and are in the process of signing contracts with them.

We’ll soon make a formal update once KYC and Contract Signing are completed. This update will include the rationale for selecting each project.

The program has also started, although note we’re doing all this at our own risk, as we have yet to receive the Kick-off funding. This is in part due to a token price drop: we were hoping for a price rebound before the AF exchanged the funds, but that did not happen. It is, however, a widespread problem across the DAO.

5 Likes

The AF has converted the 251,930 ARB received from the MSS on January 24th into $86,600 USDC (at an average price of $0.34 per ARB). This amount is enough for Phase 1 of this proposal and has been sent back to the MSS.

We were unable to convert the ARB received into the full $187,980 USDC required for this proposal because, by the time the funds arrived from the MSS, market conditions meant the full amount could not be reached. At that time, there was hope that the market might soon recover enough to achieve the desired conversion. Unfortunately, this has not yet happened, and the Hackathon Continuation Program requires the Phase 1 funds ASAP.
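For reference, a minimal sketch of the conversion arithmetic implied by the figures above (illustrative only; the remaining Phase 2 amount is simply the difference between the total requirement and what was converted):

```python
# Illustrative arithmetic based on the figures quoted above; not an official accounting.
arb_sent = 251_930        # ARB transferred to the AF via the MSS
usdc_received = 86_600    # USDC returned after conversion (covers Phase 1)
total_required = 187_980  # full USDC requirement of the proposal

avg_price = usdc_received / arb_sent               # ~0.34 USD per ARB, matching the quoted average
phase2_remaining = total_required - usdc_received  # 101,380 USDC still to be sourced (e.g. via the TMC)

print(f"Average conversion price: ${avg_price:.3f} per ARB")
print(f"Remaining to source for Phase 2: ${phase2_remaining:,} USDC")
```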

We hope that the remaining portion of funds for Phase 2 of this proposal can be sourced from the TMC’s ‘checking account’. We are aware that @danielo, @entropy and @krst are working on how this can be made possible. However, it is worth noting that the TMC still needs to define how to withdraw stablecoins from that allocation once converted, which might take some time.

3 Likes

Copying here the record of the transaction of the funds sent to the MSS by the AF:

With $86,600 USDC

We are happy to confirm the first payment has been received by RnDAO and we have begun the program.

Further communication will happen through our updates thread in the DAO Programs & Initiatives - Arbitrum category (first post pending moderator approval).

Looking at this proposal more critically, I have concerns about the underlying value proposition.

While I understand the intention to support builders, ARB’s downward price movement appears correlated with sell pressure from our various funding initiatives. Before approving additional top-ups, we should address this fundamental issue.

What’s missing from this proposal is essential context - which specific projects from the previous round are actually bringing demonstrable value to the ecosystem? The proposal assumes delegates have all these details, but to make an informed vote, we need a comprehensive breakdown of:

  1. Which projects would receive continued funding
  2. What specific value they’ve delivered so far
  3. How their continued development aligns with ecosystem priorities
  4. What measurable outcomes we should expect from additional funding

Without this clear accounting of potential projects and expected returns, it’s difficult to justify additional treasury expenditure. I’d appreciate a detailed breakdown of these elements to facilitate more informed voting decisions.

As stewards of the treasury, we need to ensure funding flows to initiatives that create sustainable value.

2 Likes

Thanks for raising these concerns.

We could frame this decision as one of assessing the project. However, there are two reasons why that framing can be problematic. I’ll unpack them below and then aim to provide an answer to your questions:

  • The PR impact of the DAO being unable to pay service providers what was agreed upon sets a terrible precedent. Already, we (RnDAO) are abstaining from making any additional funded proposals to the DAO until a better system is in place, as having to do this top-up proposal is a large overhead that we didn’t budget for, and it’s affecting us significantly. Other service providers take notice, and this deters them from working with Arbitrum. Showing that the DAO can manage basic operations is critical, and approving this proposal demonstrates that there is some continuity in DAO decisions (the program was already approved by the DAO), as opposed to being at the mercy of the market and changes in sentiment.
  • The program is halfway through: basically, the chef has just added some butter to the pan and is about to put the steak in, but we’re still far from the customer being able to taste the steak and tell us whether it’s good. As such, there are no ‘results’ yet. The projects are deep in market research, and in Phase 2 they will build an MVP based on that research. After they have built and market-tested the MVP, we’ll have solid data to assess the projects (i.e. we can only assess after Phase 2).

As per the original proposal, out of the 4 projects that started Phase 1, 2 will be selected for Phase 2. You can see more details about them here: Hackathon Continuation Program (HCP) - updates

I believe this question comes from a misunderstanding of the program, as we’re talking about idea-stage projects that have contracted equity/tokens to the Arbitrum Foundation via a SAFE + Token Warrant. Arbitrum would fail to deliver on the totality of this contract if the program stops. The SAFE+TW can be valuable only if the projects build a product (Phase 2) and gain traction (after Phase 2), so stopping the program halfway through leads to a net loss of the investment already deployed.

The continuation program is a critical experiment for Arbitrum in doing investments instead of grants. In addition, the projects are likely to generate transaction fees, etc. But the big value is testing investments as a methodology for ecosystem development: investments could solve a critical sustainability/ROI challenge that Arbitrum faces with grants.

Having the program completed is the key result. As an outcome, the projects will be able to build an MVP and market-test it. The impact of that is that the projects are able to fundraise*.
If the projects can fundraise, we will have shown that the program works and will be able to value Arbitrum’s SAFE contracts, so we can do a full ROI calculation on the exercise.
It will likely take another 6-9 months from now before we know whether the projects succeed in fundraising (they first need to complete the program).

*Note: in some cases, the projects might generate revenue and hence not fundraise. This is also a desirable outcome and one we can quantify to assess the program’s success or not.

The way we suggest assessing success here is in terms of operational execution and the viability of this type of program. It’s a bit like how startups first validate demand at the Seed stage and can only validate unit economics at Series A.
Assessing ROI for venture investments requires a larger-scale experiment (new funds can sometimes raise $5-50mn for a first fund before performance can be assessed and LPs decide whether to invest in Fund 2; this can take 3-4 years). Such large experiments are required because startups are often a power-law game (most fail, but the ones that succeed can deliver 1000x) and early-stage companies take years to mature. So it’s impossible to know in a matter of a month and a half, and with only 4 ventures, what the ROI will be. What we do know is whether all the components of the program are working (talent funnel, incubation methodology, etc.), so we’re able to validate operationally but not yet the outcomes.
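To illustrate why 4 ventures are too few to estimate ROI, here is a small Monte Carlo sketch with a purely hypothetical power-law-style payoff distribution (the probabilities and multiples are assumptions for illustration, not data from this program):

```python
import random

# Hypothetical venture payoff distribution: most ventures return ~0x, some return a
# small multiple, and rare outliers return very large multiples. These probabilities
# and multiples are assumptions for illustration only.
def venture_multiple(rng: random.Random) -> float:
    r = rng.random()
    if r < 0.70:
        return 0.0                     # most ventures fail outright
    elif r < 0.95:
        return rng.uniform(0.5, 3.0)   # some return roughly their capital
    return rng.uniform(10.0, 100.0)    # rare outliers drive portfolio returns

def portfolio_multiple(n_ventures: int, rng: random.Random) -> float:
    # Average return multiple across a portfolio of n ventures
    return sum(venture_multiple(rng) for _ in range(n_ventures)) / n_ventures

rng = random.Random(42)
for n in (4, 50):
    outcomes = sorted(portfolio_multiple(n, rng) for _ in range(10_000))
    p10, p90 = outcomes[len(outcomes) // 10], outcomes[9 * len(outcomes) // 10]
    print(f"{n:>2} ventures: 10th-90th percentile portfolio multiple ≈ {p10:.2f}x to {p90:.2f}x")
```

With a distribution like this, a 4-venture portfolio’s measured multiple varies enormously from run to run, while a 50+ venture portfolio converges much closer to the underlying expectation, which is why the pilot can validate execution but not ROI.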

Startups generating traction (fundraising, revenue, or significant user growth) is a useful interim signal. The program has been designed to take this into account and present these metrics to the DAO (that’s the checkpoint after Phase 2).

I hope the above clarifies why we believe it’s essential that the program is continued, so we actually learn whether the experiment succeeded or not. Stopping the experiment halfway through would mean we simply don’t know.

3 Likes