Hello, one of our team members has experience in communication and strategic projects, so we think this proposal could be really useful for Arbitrum. Anyway, we’d like to share a few questions and observations:
We appreciate the clarity with which you have detailed the budget breakdown, but could you specify what happens if the Stakeholder Council does not approve the discretionary bonus? Would this have any impact on the deliverables?
Regarding the scoping options, have you considered whether it is possible to adjust the number of interviews per option (e.g., fewer external interviews in “Option 2”) to reduce costs while maintaining valuable benchmarking?
Good to see you have detailed profiles (we have been looking at the proposal since the first draft). How will you select and contact participants from external ecosystems? Do you have a process to ensure representative responses?
Do you have examples of how the “Jobs to be Done” method has generated useful insights in previous Web3 projects? To avoid bias in the interviews, have you considered including an anonymous approach for certain sensitive questions?
I think the Stakeholder Council is a great idea. But how will you deal with disagreements among the council members about research priorities or final conclusions? Is there a set process for dealing with these differences?
We think this study could help us find out where there’s a mismatch between what people think the Arbitrum ecosystem can do and what it can actually do. Would you be up for including this (or similar) as a sub-objective? We could also look at how branding and marketing affect developers’ decisions. This could go hand-in-hand with the technical and functional analysis.
The timeline seems well structured. Could you please confirm whether the preliminary deliverables from week 4 onwards will include key actionable insights, or whether they will just be internal progress reports? Thanks!
I think 10 projects per chain is enough to be honest… 30 definitely seems too much and will probably get a bunch of duplicated insights after the 15th interview.
I think that if there were only 2 additional chains, they should be Base (not Optimism) and Solana.
And if there were an option for more chains, it should be Arbitrum (30), Base (15), Polygon (15), Solana (15), Sui (15), or something along these lines.
We have revised the budget with the team. There will be a lot of coordination complexity, and the team doesn’t feel they can deliver great outcomes without the budget being fair (there’s still a discount being applied).
Voted AGAINST the proposal as I don’t see enough information on how the research would tie into larger initiatives in the DAO.
I’m not a believer in the DAO funding research for its own sake, but only as a means to some defined end. For example, firestarter research into treasury diversification led to the STEP and token swap proposals. My biggest fear with research for its own sake is that it just sits somewhere on the forum with only a few hundred or a few thousand people actually reading it.
I want research whose end output is a snapshot proposal. This gives it an actionable thrust and also ensures that people actually engage with what is researched, since they need to vote for it. I hope to be proven wrong but as it stands I see a high probability of low engagement with the research that is funded.
Thank you very much for the proposal @danielo . Honestly, I really like the idea. I am very supportive of funding research, as the more data we have on how users and builders perceive the technology and the ecosystem, the better we can focus the DAO’s efforts and resources to sustain its growth.
That said, I can’t help but draw a parallel between this proposal and the ETH Bucharest proposal, for which I recommended submitting such initiatives through the funding channels the DAO has made available.
The DAO is currently in a provider selection process where not only skills but also pricing are being evaluated. I see no reason why this research request shouldn’t go through the ARDC. It’s a shame you withdrew RnDAO’s candidacy, although I would like you to reconsider and compete for the opportunity.
Since you’ve moved this discussion to Snapshot (somewhat prematurely, as it could have waited for the ARDC’s establishment and the determination of priorities), I will take the opportunity to note the DAO’s opinions. Should I be fortunate enough to join the ARDC, I will consider the delegate’s opinion on this for the future.
The following reflects the views of the Lampros DAO (formerly ‘Lampros Labs DAO’) governance team, composed of Chain_L (@Blueweb), @Euphoria, and Hirangi Pandya (@Nyx), based on our combined research, analysis, and ideation.
Thank you for sharing the proposal. The comprehensive approach to understanding our developer ecosystem is great.
The direct interviews with builders and entrepreneurs are an excellent step forward. This approach will definitely provide actionable insights far beyond what surveys or desk research could achieve.
We agree with what @thedevanshmehta has also mentioned. While the research report promises valuable insights, there’s a possibility that the findings may not translate into actionable outcomes. How will the DAO ensure that the recommendations are implemented effectively and not just sit as a theoretical report? There should be additional steps beyond just submitting the research report; the process should not end with the submission.
While the success bonus incentivizes high-quality work, it introduces subjectivity. Could you clarify how the criteria for awarding the bonus will be structured to ensure fairness and transparency?
Apart from the questions, we would like to share some feedback about the timing of this proposal. It was posted on Friday, November 22, and today, November 29 (also a Friday), it has already been moved to a Snapshot vote for a temperature check. Given the current level of engagement on this proposal, this feels a bit rushed and doesn’t allow enough time for everyone to discuss and share their thoughts.
According to the Delegate Code of Conduct, it’s recommended that Snapshot votes should start on Thursdays at 12 p.m. UTC to give enough time for proper discussion and avoid rushing decisions. But this proposal went up on Snapshot on Friday. We hope this can be kept in mind for future proposals so that there’s enough time for everyone to review, provide feedback and then vote on proposals.
I think research initiatives like this are valuable because they give us practical insights into the Arbitrum ecosystem. The goals and deliverables in this proposal could really complement other research efforts within the DAO.
The budget seems reasonable to me, and we should welcome any information that adds value. I understand @thedevanshmehta’s point about wanting research to lead to concrete actions, but I believe that even if the research doesn’t result in immediate proposals, it still helps inform future decisions. This information could also complement other research deliverables.
If there’s a lack of engagement with the research results, I feel that’s more about how we communicate and share the findings rather than the research itself. With the right delivery, we can make sure the insights make an impact.
Although this initiative could have been submitted through the grants process, I see no issue with applying in this manner. In fact, presenting it here may provide greater visibility for the proposal.
For these reasons, I am VOTING FOR supporting Option 2 of this proposal.
A good attempt to find out what drives commitment to Arbitrum.
On the one hand, such studies will be useful in any case, regardless of the results. It may turn out that developers have no objective reasons for their choice of blockchain, only habits and past experience.
On the other hand, paying bonuses based on the results of someone’s vote is a very opaque idea. Therefore, I do not support this part of the proposal.
However, in general, the budget is quite small; if it were twice as large and had no bonuses, I would have no questions about it.
Thanks for this comment. We withdrew our candidacy because only one provider is selected, which forces a single generalist to be chosen as opposed to multiple specialists. We have concerns about the candidates’ user research capabilities, and RnDAO is not equipped to do quantitative blockchain data research.
The design of the ARDC V2 forced candidates to create alliances, but that carries high coordination costs: having to create a full partnership, share reputational risk, and agree on how to split budgets. We’re primarily focused on the Hackathon Continuation and didn’t understand the new shape of the ARDC until much later, so we didn’t have the bandwidth to broker a partnership with the other candidates.
Hence now suggesting here a separate proposal.
For context, the vote is to gather feedback quickly given the DAO end of year break.
I think it would be a good idea @Larva; it would help us identify critical points. Having a conversation with them would bring Arbitrum back onto their radar and encourage them to consider returning, in the future, to evaluate the improvements that have been implemented.
I see no reason not to support this proposal. I believe it is necessary to periodically conduct surveys or in-person focus groups with key figures who can provide valuable insights on improving Arbitrum.
The one-on-one interviews could help gather information, but bringing many people to the table could spark highly valuable conversations that provide better clarity on what needs improvement and what we might be doing wrong.
That said, my vote on Snapshot also includes SOL and OP, as this will offer a broader perspective, resulting in more comprehensive research.
Thank you for the proposal. We are leaning toward voting for option 2, which includes Arbitrum + 2 ecosystems. However, we would have preferred leaving the selection of the other ecosystems open to the community.
Since the proposal has already reached Snapshot, we would like to know what factors were considered in selecting Solana and Optimism. Additionally, would you be open to modifying these options before it moves to Tally?
Hi, I vote ‘for’ the proposal. I think it’s crucial we have a deep understanding of what is making users switch or stay in the chain. I also think the bonus is a bit high.
@danielo Do you have any non-subjective way of deciding the percentage granted after completion?
Hmmm, we structured the bonus around delivering quality research, rather than around the ultimate impact of using the research. The very nature of research, especially the kind proposed, is exploratory: we don’t know what we’ll uncover before doing it, and hence we can’t fully know the impact. We’re making an educated guess that these insights are critical for strategy across a large number of decisions, but we won’t know how much until months after the research has concluded. Note that this is not a new problem; it’s inherent to research and part of the reason why it’s often underfunded in organisations. It could be interesting to explore a retroactive mechanism, say 12 months after, to provide an additional retroactive payment, but at this stage of the DAO’s maturity in R&D, it would probably be prohibitively complex to design and operate.
We also don’t have any precedents of user research as a DAO so we put the bonus to create an incentive alignment to deliver high quality. And we’re hoping to set a strong precedent that future initiatives could be benchmarked against.
I don’t mind this research, on several fronts, so I am voting in favour with the 2 other ecosystems: it makes sense to do research that, even if scoped or preliminary, can give us some insight.
Few things tho:
agree with @pedrob, this could have stayed on the forum for a while more to be honest
also agree with @0x_ultra that the methodology for selecting builders will be key. You’re surely more expert in this topic than we are, but off the top of my mind: comparing exclusive builders vs cross-chain ones, and builders who started here at day 1 vs new builders just approaching, could give us not only info on the “why”, but also on how that “why” and the general perception of the chain have evolved over time
it obviously makes sense to coordinate with the ARDC, as you posted, or in general with any other entity that might arise in the DAO with connections to or interest in this topic. This is something I hope for from every initiative.
The proposal clearly outlines its objectives, including understanding why developers choose Arbitrum and identifying areas for future improvement. It offers a user research-based approach to better comprehend developer needs, supporting the optimization of the technical roadmap and the expansion of the ecosystem. Additionally, the proposal provides “user profiles” that will help target the right developer audience effectively.
However, the proposal lacks clarity on how the research findings will be directly translated into actionable steps, such as new technical features, project support programs, or incentive mechanisms.
We are voting against this proposal because while understanding builder preferences is valuable, the budget and scope seem excessive without clear assurance of actionable outcomes. Additionally, similar insights might be achieved through existing resources or more cost-effective methods.
Hi @danielo, thanks for putting forward this proposal.
We’re always in favor of initiatives that prioritize research, and this one feels like exactly what Arbitrum needs right now. Understanding why builders choose our ecosystem and what we can improve is critical for helping us focus our efforts and prioritize effectively.
There’s so much potential here, especially in terms of onboarding more developers and strengthening the ecosystem overall. We’re happy to support this proposal and plan to vote in favor of it.
That said, we do have a question regarding the bonus. Could you provide a bit more clarity on how it will be determined?