Kuiclub has decided to vote “Against” on this proposal on Tally.
Key Points:
The proposal is well-articulated with detailed viewpoints. If executed properly, this program could lay the groundwork for the ecosystem’s long-term development, delivering value beyond the initial investment. This aligns with Arbitrum’s overarching mission of fostering innovation and attracting developers.
However, I personally believe that in critical early stages such as customer validation and achieving product-market fit, relying solely on RnDAO may not yield the desired results. Additionally, concerns about cost management and the potential for fund misuse remain significant.
Blockworks Advisory will be voting AGAINST this proposal on Tally.
We acknowledge the considerable effort and thought that have gone into designing a program intended to support post-hackathon projects. Originally, we were under the impression that this proposal was to be iterated upon heavily post-Snapshot, but instead it seemed rushed into Tally. While we appreciate the intent to support ecosystem growth and align with DAO initiatives, this proposal lacks the necessary detail and refinement to give us confidence in its execution and ROI.
We were supportive of the original proposal under the idea that this would see more detail and planning. Specifically, we found the Fund Management and Investment Agreement sections to be especially underdeveloped. We’re left with many questions and criticisms:
What exactly is meant by customer validation? What does systematic customer validation look like for RnDAO? There is a statement to achieve customer validation, but no clear definition of validation. No explicit targets or metrics are given for Phase 1, nor a suggested number of user interviews, conversion metrics, milestone validation, etc. The methodology for Phase 1 also appears nondescript; the DAO should have insight into what the structured customer validation and thinking methods look like in order to critique them. The mention of weekly sessions and user research is helpful, but doesn’t clarify the frameworks, tools, etc. that a team should follow or expect (problem/solution interviews, survey criteria, etc.).
Also, the funding amounts at each stage need more justification. Why specifically $2,000 per month per cofounder?
There is a list of possible operators here, but who should we expect to lead what? And more importantly, where is the DAO’s own point of contact in this?
What’s the pipeline for these projects to gain support from the DAO, what does mentorship look like under this model?
There’s no mention of a pipeline for this, as hackathon winners are not necessarily good investments by default. For example, some of the projects highlighted as hackathon winners, while impressive from a hackathon standpoint (which focuses on developer and product-building skill), may not be the most investable. Some of the projects listed are also more developer/DAO tooling than products with a sizable TAM. The winning projects listed at the head of the proposal aren’t necessarily startups in sizable markets.
This proposal reads more like an outline than a complete plan. There is a “6-month, two-phase program” which you describe, but…
As many others have mentioned, what is the due diligence process for this Hackathon to investment pipeline? What criteria are we to expect them to be evaluated on?
Phase 1 doesn’t specify fund distribution, management, or give a detailed reporting framework/schedule for these projects.
What does this investment committee look like? Where is the DAO’s involvement here? What are the labor costs? (We are aware that your salaries are private, but some metric should be provided here.) It appears RnDAO does not currently have an investment committee and would need to form one. This could be acceptable with more time and elaboration; otherwise, it’s just a promise.
The transition phase between Phase 1 and Phase 2 needs significant elaboration. Is there to be a milestone here where RnDAO loops back in with the DAO?
Phase 2 starts for projects that “successfully complete Phase 1”. What are the terms of success here? What are the KPIs that allow the project to move forward? Is it just completing the customer validation research? If so, then there should be an outline here of what indicators should be measured and improved upon (and by what measure of improvement) prior to moving forward.
If advisors/mentors hold large roles at other startups, how much time can they realistically dedicate to the applicant pool?
While backgrounds are listed in the Docusend, we would like to see participation in prior hackathons and success stories. And given RnDAO’s small size and commitments elsewhere, is there sufficient bandwidth to execute effectively?
Importantly, how is this program to evolve once post-program reporting has been accomplished?
There does not seem to be a timeline either, so this appears to need much more planning before any DAO funds are transferred.
Supporting hackathon projects and fostering innovative ventures are admirable goals, but the scale of the proposed grant and the lack of definition raise concerns about the feasibility of this continuation. The lack of a demonstrated track record in venture support and investment success, and the push to Tally without further iterations, add further hesitation on our end. Building a robust venture program is a complex and resource-intensive endeavor, as evidenced by similar initiatives within the DAO. We appreciate the intent behind this proposal, as supporting early-stage teams and nurturing innovation within the Arbitrum ecosystem is a valuable goal; however, without stronger assurances of operations and a clearer roadmap with provided frameworks and definitions, we are unable to support this proposal at this time, but would like to see it return with more careful planning.
This is a nonsensical comparison. You’re arbitrarily taking away 50% of the investment we’re making into these companies, and you’re also valuing everything we do at cost instead of at value, which assumes that venture studios as a model don’t make sense, while the data shows the opposite.
Additionally, the YC thesis is constructed very differently. Their talent attraction funnel is #1 in the world by a large margin, and it’s not possible for Arbitrum to replicate that right now (maybe ever). Additionally, YC is based on a very mature support ecosystem in terms of networks, talent, etc. that knows how to operate all kinds of functions to deploy $500k fast and effectively.
So we can’t invest with the same thesis. If we put $500k into blockchain Hackathon projects, I don’t think that’s going to work as our ecosystem doesn’t have the level of clarity on methodologies, pathways, talent available, etc. So applying that would likely lead to $0 as projects burn money.
Finally, it’s about the stage of the venture. We’re starting here at a pre-pitchdeck level and working with the projects to refine that. So again, this is a misguided comparison.
@BlockworksResearch while we appreciate the extended comment, we have open channels of communication and can easily jump on a call or reply to your concerns in the forum, there’s no need to rush to a negative vote simply because you have open questions.
I invite any delegate with concerns to first ask us, as any complex program has a myriad of details that would be impractical to fully explain in a proposal. We’re available to address those quickly over a call, telegram DMs, or here in the forum.
Pre-purchase, letters of intent, or strong indications of interest (only the latter would require more significant numbers). This varies with the ticket size and nature of every product, hence why we don’t use a one-size-fits-all approach but instead rely on expertise. Our research lead has been doing this for 20+ years with products like Google Suite, Asana, as Innovation Catalyst at Intuit (Quickbooks), etc.
Customer Development methodology. In layman’s terms: user research followed by pitching. The implementation of this looks slightly different in every case, hence why we have experts teaching it. Reducing the flexibility to look at things case by case by mandating something like a number of interviews as a metric would lead to optimising for poor indicators. Early-stage startups are messy, so the value comes more from having people in the trenches who have been through this process before (like our COO, who has been head of product for 10+ teams and exited a couple, also as part of a venture builder).
We engage with builders daily from across the globe and often have finance-related conversations. $2,000 works for a majority of them to unlock the focus needed for the program. We already tested this with the CoLab and have continued to receive validation.
Drea leads on research, Cori leads on marketing, I lead on venture strategy, and Gokhan leads on product. We’re also onboarding a CTO with deep cryptography and VC due diligence expertise amongst other areas.
I’m available as a general point of contact.
The projects don’t necessarily require additional support from the DAO. If they are relevant for the DAO, I’ll be there to help them navigate it.
The point of a venture builder approach is not to take ready-made projects but to forge them internally. And that’s precisely why there’s a Phase 1, where, as mentioned, market research is done in addition to customer validation. When you consider the investment amounts, we’re in the early-stage angel range, so the projects shouldn’t be compared to those raising VC funding, which comes later.
Please see the previous answers we have provided:
We already specified that they get $2k per month per cofounder in Phase 1, and then the lump sum in Phase 2. We also specified that such funds are managed via a multisig exclusively dedicated to this purpose. Said multisig, as per the proposal, receives a payment for Phase 1 and a second payment for Phase 2.
We already explained what this looks like. The costs are included within the Program Costs in the budget, together with overhead costs. There’s already an investment committee in place in RnDAO, composed of the leadership listed on the proposal plus additional advisors as needed.
Again, this is already included in the proposal that outlines there will be a report to the DAO with the rationale for project selection.
This has already been explained and is not based on ‘indicators to improve’ but on an assessment of the viability of the project at that point in time. For added clarity, it’s akin to how angel investors evaluate a project and follows the standard template of the areas reflected in a pitch deck for the business thesis: clear problem, credible solution, market size, why now, USP/competition, team. Adding quantitative KPIs that are fixed for all projects would be very counterproductive.
Most of the value of our program doesn’t come from the mentorship but from the hands-on support as already explained. The team that works with the projects does not hold large roles anywhere else.
We additionally connect the startups to mentors for ad-hoc support, so consider the mentorship a bonus and no different from regular accelerators/incubators where mentors often have full-time roles somewhere else.
RnDAO has very limited commitment elsewhere and any commitment we undertake outside of this program will be done with the bulk of the work executed by other team members (e.g. the User Research proposal does not include any of the team members in this proposal). It’s important to know that although our core team is small, RnDAO specialises in collaboration and we have a community of 1,500+ people we have engaged with in multiple forms for a variety of projects. Those are precisely some of the capabilities that we’re trying to productise through our work building better collab tech tools.
We plan to continue engaging with Arbitrum delivering incubation programs as we have been for a year+ now. We’re continuously improving the methodology and hope to scale up as we accumulate data and grow the network effects of the CollabTech cluster.
We have already suggested that Phase 1 takes 3 months and Phase 2 another 3, and provided a timeline for the post-program assessments. The exact start date depends on alignment with the Foundation, hence it’s not provided, but we have already started coordinating with them and hope to start in January.
I’m confirming my Snapshot vote on Tally.
Would like to add a few things for @danielo:
First, thanks for posting a small table with the breakdown of costs. This is what we spoke about privately; it helps to better understand costs vs. investment.
A suggestion for the future: I understand the holiday pause can be annoying and somewhat disruptive to the flow, but in future you shouldn’t rush any proposal to Tally this much. I am voting in favour only because I think the model is worth experimenting with and there is a monetary commitment from RnDAO, and I don’t want to kill the initiative; if it weren’t for these two factors, I would likely have abstained or even voted against. A lot of other proposals are in this very same situation, with different flavours of course, so please let’s avoid this happening again.
Agreed. The reason for moving so fast is not the break itself but the impact the break would have on the hackathon projects (1.5 months of extra delay on top of what’s already an almost 2-month proposal cycle).
We’ll aim to avoid this in the future. And although we can’t edit the proposal, we’re still very open to feedback and ideas for improvement. The idea here is to pilot this approach, after all.
I don’t understand your insistence on the contrary. We’re investing into this program. It would be impossible to deliver it for the $60k that Arbitrum is putting towards program management and venture support (we have a team of 7 working on this for 6+ months).
Maybe you value institutional investors more than community-led investments? I don’t know, but it’s honestly bizarre.
when you say, “this is costing us $200k”, do you mean that there will be outgoing transactions worth $200,000 USD from RnDAO’s treasury to RnDAO contributors, mentors, advisors, and everybody involved in executing this program?
Of course not.
This proposal is looking for Arbitrum DAO to spend $187,900 USD of its treasury, and for RnDAO to spend $0 USD of its treasury.
As I mentioned before, I’m strongly aligned with investing in builders and supporting the promising projects that emerge from hackathons.
While I understand many of the raised concerns about the program, I believe the experiment is worthwhile. Viewing this venture as a pilot could pave the way for a larger investment mechanism in the future. I trust that RnDAO will fully capitalize on this opportunity, delivering value by supporting projects and generating key learnings that could help scale such initiatives down the line.
Additionally, I see this as a great opportunity for the DAO to signal that it invests in builders. This message will motivate more projects to build on Arbitrum and participate in the initiatives proposed by the DAO.
We have said from the beginning that we’re covering about half the cost, and that we’re essentially deploying the support program and part of the ops costs. We never said we’d deploy cash from RnDAO’s treasury to the ventures.
DAOplomats is voting in favor of this proposal on Tally.
We do agree the move to onchain voting seems rushed but we are confident in the team to execute on deliverables. Thus, we are maintaining our stance from the temp check.