Hackathon Continuation Program (HCP) - updates

Hackathon Continuation Program - Report with Outcomes, Learnings, and Recommendations

The Hackathon Continuation Program ran for 6 months, taking 4 hackathon teams through customer validation (Phase 1) and selecting 2 for MVP development (Phase 2). This report consolidates key learnings from the program operators to inform future iterations and provide actionable insights that can position Arbitrum as a leader in the space. A final report on impact will follow in 3 months, once we have more data on the ventures’ trajectories.

Venture is a long-term game, often taking 7+ years before ROI can be counted. Portfolios diversify across 50+ investments, and even the best allocators see high failure rates, as venture is a game where most good bets fail but a small percentage succeeds big. Our pilot program, with only 4 ventures at the start and only two slots for Phase 2, is too small a sample to properly assess the ROI a full program could yield. However, the pilot is highly useful for identifying challenges and collecting leading indicators.
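
As a rough illustration of the sample-size point, a minimal sketch (the 5% “big winner” rate below is an assumed rule of thumb for early-stage portfolios, not program data):

```python
# Illustrative only: hit_rate is an assumed figure, not program data.
# If roughly 1 in 20 early-stage ventures becomes a big winner, the chance
# that a portfolio of n ventures contains zero winners is (1 - hit_rate) ** n.

hit_rate = 0.05  # assumed probability that any single venture succeeds big

for n in (4, 50):
    p_no_winner = (1 - hit_rate) ** n
    print(f"{n:>2} ventures -> {p_no_winner:.0%} chance of zero big winners")

# Prints:
#  4 ventures -> 81% chance of zero big winners
# 50 ventures -> 8% chance of zero big winners
```

Under these assumptions, a 4-venture pilot would most often show zero big winners even if the underlying methodology were sound, which is why we treat the pilot as a source of leading indicators rather than an ROI measurement.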

Program Value

Multiple Web3 ecosystems suffer from deal-flow shortages for accelerators and VC investment (e.g. SAFE and Polygon closing down accelerators, Web3 VCs complaining of a lack of investable projects, our own experience running the Arbitrum Ecosystem Pitchday). With AI hype pulling talent out of Web3 and strong competition between blockchain ecosystems, there’s significant pressure to find higher-ROI methods of originating or attracting applications and protocols than providing larger grants.

This program tested a methodology for originating ventures on Arbitrum and securing ROI through apps/protocols that increase usage of the Arbitrum network (network fees), plus equity and tokens in the corresponding ventures. Finding a systematic way to originate ventures is a hard challenge, but very valuable for Arbitrum. We are committed to playing long-term games with long-term people.

Pilot’s Outcomes and Recommendations

The investment in this program resulted in the creation of one revenue-generating venture (FairAI) and one venture with users but no revenue yet (Contribo) on Arbitrum. The Arbitrum Foundation holds a SAFE and a Token Warrant in these ventures.

  • FairAI now has contracts worth $10k+ MRR and is advancing its second enterprise deal, showing satisfactory performance.

  • Contribo had to iterate on the original product hypothesis and lost a co-founder, but is now making progress on its first pilot.

In addition to the ventures, significant data on program performance was generated and concrete opportunities to address program limitations were identified.

Given the high value of developing a robust venture-building methodology in Arbitrum and the moderate but non-negligible signs of program success, our recommendation is NOT to scale the program, but to conduct another ‘pilot’ to implement the learnings at a small scale and re-assess.

Scaling the program to a statistically significant size could follow if the second pilot succeeds in addressing the first pilot’s talent bottleneck and program outcomes improve as a result.

Program Phase 1 Results (3 months):

We started with 4 teams; 1 was terminated early due to team capability issues, and a second was terminated at the end of Phase 1 due to lower fit with Arbitrum.

  • WeLivedIt completed validation but was not advanced due to a lack of Arbitrum alignment, saving the Phase 2 funds, which were under AF custody and are due to be returned to the DAO.

  • Nightly was terminated early, saving the earmarked funds for Phases 1 and 2, which were under AF custody and are due to be returned to the DAO.

  • Contribo and FairAI advanced to Phase 2 based on positive customer signals.

Phase 2 Results (6 months total):

  • FairAI: Secured its first enterprise client with a 2-year contract worth $120k+ annually, achieving ~$10k MRR within 6 months. Validated the MVP concept and architecture and is now in the implementation phase, targeting knowledge retention and optimisation in the manufacturing sector.

  • Contribo: Launched an MVP pilot (contribo.xyz/pilot) with two prototypes to validate the core hiring hypothesis and acquire initial users. Currently conducting outreach for additional design partners, and recruiting a business-side co-founder to accelerate go-to-market after the initial co-founder dropped out.

Improvement Opportunities for Future Programs

1. Talent Attraction Limitations

The key bottleneck to address in any second pilot is talent attraction. The current program had multiple severe limitations in this area:

  • VERY limited marketing budget to attract participants to the hackathon ($6,880; the originally requested budget was reduced at the request of the grant program manager).

  • Hackathon format attracting the wrong profiles (low commitment).

  • When the hackathon was run, we couldn’t yet advertise a continuation program (it wasn’t approved), reducing the program’s appeal to just the hackathon prize rather than the more significant continuation-program investment.

  • A relatively low budget for the program ($50k per venture, excluding hands-on support), compared to venture-building programs offering $100k+ plus hands-on support (e.g. Soonami Venturethon, which is now copying some of our approach with backing from Aptos and Solana’s Superteam Germany).

  • Additionally, in our interactions with builders, Arbitrum is considered to have less “community” (referring to entrepreneurial talent attraction) compared with e.g. Base and Solana.

Competition for program spots was lower than intended. While some applicants were strong, overall options were limited, and we couldn’t fill empty spots fast enough; when Nightly showed commitment issues, their capital was returned to the DAO instead of being reallocated to a replacement team.

Given these significant limitations, the program’s outcomes are quite good, and there’s a clear path to improvement. We recommend an iteration that includes funding for a variety of entrepreneurial talent-attraction experiments connected to a re-run of the program (with adjustments).

2. Cash Management and Arbitrum Coordination

It took a significant amount of time to align on the fund-management system and legal contracts with the Arbitrum Foundation and other parties. Market volatility then left the program underfunded. As a result, the program team was severely distracted from our role with the ventures, instead engaging with multiple internal parties to find a solution and ultimately having to campaign for a DAO proposal to top up the program funds.

The silver lining is that we now have a template (both contracts and fund management processes) as a foundation for future programs.

3. Hackathon Mindset Incompatible with Startup Mindset

Hackathon teams were focused on building but lacked defined customer problems or validated markets. This was expected to some degree, and the Phase 1 program design accounted for it. However, the challenges were deeper than expected, resulting in slower progress and additional demands on venture support.

The hackathon format attracts talent but primes them with the wrong mindset, ultimately proving counterproductive. We recommend reducing the role of hackathons in future venture programs.

4. Selecting a Problem Vs Building a Solution

We were forced to discard applicants who had the right profile but hadn’t yet identified a suitable idea. Several of the best founders were still in the research stage (instead of rushing to build the wrong idea). Conversely, many candidates were too attached to ideas with low potential. Grant programs experience a similar problem, which has led to the practice of publishing idea lists. However, most of these idea lists lack market assessments and a deeper understanding of the viability of the opportunities. As such, they’re of limited value and can even be counterproductive.

RnDAO has tested two approaches to address this challenge. Through the 2024 Research Fellowship program, we invalidated the concept of mentoring founders to do foundational research. And now with the Hackathon Continuation Program, we tested mentoring founders on validation skills after they have picked a problem. The results are improving, but we still see a gap in supporting founders to select the right problem. We recommend moving further towards the venture studio model, where the Studio research team can do foundational research and shortlist opportunities in collaboration with founders.

What Worked Well

1. Expert hands-on support:

As per the ventures’ feedback, the following were particularly valuable:

  • Customer development mentorship: Highlighted by participants as most beneficial

  • “Learning in public” approach: Created accountability and transparency. Also highly appreciated by participants.

  • Flexible, customised delivery of support: Essential for addressing deep, individual business issues, leading to tangible program impact rather than box-ticking workshop completion.

The expert hands-on support was more valuable than peer-to-peer learning, as per program participants’ feedback and behaviour. However, the peer-to-peer aspect was mentioned as an attractive feature pre-program, suggesting it positively impacts the talent-attraction funnel while consuming relatively few program resources. We recommend continuing with both peer-to-peer learning and hands-on support.

2. Stage gates and hands-on monitoring:

Data from the venture support team allowed for a deeper understanding of venture potential than what’s regularly available in an accelerator. These insights, combined with a legal contract based on staged deployment of capital, allowed us to cut funding early when needed and thus optimise capital allocation. One limitation was that startups can occasionally benefit from more time in their current stage, instead of a funding cut or graduation into the next stage; such flexibility would likely have improved outcomes for Contribo. We recommend keeping the stage gates and hands-on monitoring, and adding the flexibility to extend Phase 1 by an additional 3 months (continuing the basic stipend to support further validation work before more significant funding).

Major Recommendations

1. Abandon Hackathon-First Approach

  • Use short hackathons only for community building pre-program. Don’t enforce hackathon participation for eligibility.

  • Replace with research-driven “Opportunity Briefs” as RFPs for entrepreneurs.

    • Entrepreneurs can optionally collaborate with the program team to develop opportunity briefs.

  • Focus on business validation rather than product building from the start.


2. Restructure Talent Acquisition

  • Invest significantly more in marketing and community building focused on attracting aspiring entrepreneurs, e.g. speed-networking events, content programs for aspiring entrepreneurs, hacker houses, co-founder matchmaking programs, etc.

  • Implement higher investment amounts ($100-150K initial, $250-500K potential follow-on) to attract higher-calibre talent.

  • Refine screening for founder mindset vs. technical skills.

3. Program Design

  • Expand the candidate pool beyond hackathon participants.

  • Maintain expert hands-on support.

  • Maintain P2P support but only as a secondary system.

  • Implement AI automation training for customer outreach earlier in the program.

  • Improve program flexibility by enabling phase extensions while maintaining stage gates in the contract structure.

  • Add problem research and opportunities mapping pre-program.

Conclusion

The program revealed that hackathons are poorly aligned with venture-development needs. While the hands-on support model proved valuable, the talent pipeline requires a redesign, and the underinvestment in talent attraction must be addressed. This is a common challenge across Web3; addressing it would position Arbitrum in a leading role.

Despite important limitations, the program showed moderate success, making us hopeful that Arbitrum can become highly capable of originating ventures.

Our recommendation is NOT to scale the program, but to conduct another ‘pilot’ to implement the learnings at a small scale and re-assess before scaling.
