We were thinking of it as the median vote of the Stakeholder Council. So basically, every member of the Stakeholder Council states what % of the bonus should be awarded, and we take the median (a mechanism based on crowd intelligence).
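For illustration, a minimal sketch of that mechanism (the function name, the 0-100 range, and the example votes are assumptions for illustration, not part of the proposal):

```python
from statistics import median

def bonus_share(votes: list[float]) -> float:
    """Awarded bonus share (% of the max bonus) = median of council votes."""
    if not votes:
        raise ValueError("at least one council vote is required")
    if any(not 0 <= v <= 100 for v in votes):
        raise ValueError("each vote must be a percentage between 0 and 100")
    return median(votes)

# Example: five council members vote; a single extreme vote barely moves the result.
print(bonus_share([100, 80, 75, 70, 0]))  # -> 75
```

One reason to prefer the median over the mean here is robustness to outliers: a single member can’t unilaterally drag the awarded share up or down.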
Let me know if that provides enough clarity.
I see this is becoming a point of tension we didn’t anticipate. The rationale was that most top delegates usually only engage once a proposal is in Snapshot, so while forum feedback from smaller delegates is useful for refining proposals, it provides little insight into the DAO’s overall appetite for the proposal. The risk is spending a lot of time incorporating feedback on something that’s going to fail anyway, or where the most critical feedback is missing.
I see how the delegate incentive design clashes with this, and I’m not sure how to solve the issue, but I’ll be thinking about it. cc @SEEDGov
Agreed. The current plan is to start work as soon as we have a positive Snapshot (assuming at least 80-90mn in favour), as otherwise the initiative gets pushed back to mid-February for kickoff. One of the first steps will be defining the exact approach to source builders and we’ll make sure to share this publicly and get some feedback before we start actioning it (the stakeholder council is also expected to play a role in providing feedback on the research plan).
Thank you for the comment. I have added an FAQ section for this. Please let me know if any doubts remain.
cc @danielM
DanielM, I’m curious, what method do you see that would be more cost-effective?
I don’t think this is necessarily a point of tension. More like: the proposal has been in the forum for a while, so people can read it. Even if I agree that it’s often the smaller delegates (like me kek) who provide feedback, that doesn’t mean others don’t read it and don’t get acquainted with the idea. It’s more about giving people the opportunity to speak on the proposal if they want, rather than collecting specific feedback, just that. Also, this is a bit OT here, but I wanted to clarify the point.
Coming back from the holidays, so forgive us if this was already asked, but why wouldn’t Arbitrum just pay a professional organization that does this? User studies are a thing, and they don’t necessarily require any kind of council. Just have Entropy or someone already on the payroll be their liaison for questions.
That’s essentially what this proposal is. RnDAO was born a couple of years ago after seeing this gap, and we have world-class user researchers on staff who also understand web3.
The proposed Stakeholder Council is, as I see it, the liaison mechanism you suggest for Entropy (the proposed format includes Entropy plus a couple of others from AF, GCP, ARDC, etc.). Maybe the name “council” was confusing when the intention here was to have a couple of people to liaise with on the granular research questions and to assess the deliverable.
Some comments and thoughts:
- The emphasis on live, one-on-one interviews with builders ensures nuanced insights, which is good. The “Jobs to Be Done” framework is proven, so we’re in favor of something like this being implemented.
- Comparing Arbitrum with other ecosystems (like Solana and Optimism) is particularly interesting to contextualize Arbitrum’s position in the broader landscape.
- The proposed budget is reasonable for the scope of work, particularly considering the discounted rate and the inclusion of incentives for interviewees.
- We believe the feedback will be valuable, but we aren’t sold on how it would be translated directly into concrete actions.
- Could the final deliverables include specific recommendations tied to potential DAO proposals or program improvements?
- Perhaps a follow-up initiative could focus on translating insights into actionable strategies.
- This move to Snapshot feels slightly rushed, as some key details (like builder selection criteria) are still being refined.
- While the bonus incentivizes high-quality work, it’s still a bit too subjective imo. This stuff is hard to frame, but a more transparent framework for awarding the bonus (e.g., predefined deliverable quality benchmarks) would help ensure fairness and reduce ambiguity.
Thank you for your comments
Agreed. We plan to do both. As operators of a builder-support program ourselves, we need these insights to pitch builders “why Arbitrum” effectively. We also want to incorporate the insights to refine our program, and we’re creating the Stakeholder Council so we can tailor the research to enable the same for the other key organisations and programs in Arbitrum.
We are also likely to make a follow-up proposal (continuing our record of frequently proposing ways to advance Arbitrum).
This is by design actually, as the builder selection criteria will need to be refined with the Stakeholder Council. Also, defining these criteria is significant work on top of all the proposing, commenting, etc., so without some expression of interest from the DAO we can’t afford the extra risk.
I’m curious how you see this. Without a history of user research in Arbitrum, I’m not sure how we can provide such a framework. However, after this proposal, we’ll have an initial benchmark. I’d love to hear any suggestions for how to create such a framework in the absence of a benchmark.
I have voted For this proposal, as I believe it is crucial for Arbitrum to understand why builders choose one chain over others. In my view, now is the right time to invest in such a topic, and the budget is appropriate.
We have chosen Arbitrum as our home chain for Kleros 2.0, which will allow cross-chain dispute resolution, and we would be more than happy to provide feedback as a new builder in the Arbitrum ecosystem.
Thanks for the proposal. I think this is definitely an important topic to discuss to enhance Arbitrum. Well, a lot of developers choose blockchains based on grants rather than the potential of the ecosystem. Once they get the grant, they move on to other blockchains :). So I think this proposal is a good move to find projects that truly want to build on Arbitrum, not just developers looking for a quick fix.
I voted for Arbitrum + 2 others (SOL+OP) because comparing with Solana and Optimism will help us understand the differences in user needs, habits, and behaviors between L1 and L2 ecosystems. Arbitrum can learn from their strengths and weaknesses to stay competitive.
However, I feel the plan could be a bit more detailed. How will methods like personas and jobs-to-be-done ensure that the research findings are actually applied to improve features and support for the Arbitrum ecosystem?
The following reflects the views of the Lampros DAO (formerly ‘Lampros Labs DAO’) governance team, composed of Chain_L (@Blueweb), @Euphoria, and Hirangi Pandya (@Nyx), based on our combined research, analysis, and ideation.
We are voting FOR the “Arbitrum + 2 others (SOL + OP)” option in the Snapshot voting.
This can provide valuable insights to enhance Arbitrum’s ecosystem. Many developers often select blockchains based not just on the ecosystem’s capabilities, but also on the availability of grants and other short-term incentives. Understanding these underlying motivations and comparing ecosystems like Solana and Optimism with Arbitrum is a critical step toward strengthening our platform’s long-term appeal.
We agree with this and want to see what changes can be made to this proposal before it moves to Tally. It might also be helpful to consider including other ecosystems along with Solana and Optimism to give a broader comparison.
We’re voting for the option to research Arbitrum plus SOL and OP.
The proposal presents a strong case for conducting user research to understand builder motivations and preferences. The proposed methodology, team, and focus on actionable insights are all positive aspects.
- Rationale
- Need for Understanding Builder Motivations
- The proposal effectively highlights the lack of understanding regarding why builders choose Arbitrum over its competitors.
- This research aims to fill that gap and provide insights to leverage strengths and address weaknesses.
- Actionable Insights
- The proposal emphasizes that the research will be conducted in close collaboration with key stakeholders, ensuring the findings are relevant and can be used to improve Arbitrum and attract more builders.
- Experienced Research Team
- The team comprises experienced user researchers with a strong track record in conducting similar studies.
I voted “AGAINST” this proposal.
This is the reasoning:
- As mentioned by others previously, this is a good fit for Questbook, reducing the workload of the DAO structure (MSS, etc)
- One of the reasons stated for this proposal is that the ARDC only accepted one entity per category. This is not the case, as we saw joint applications by DefiLlama Research & Castle Capital, for instance.
- It is also mentioned that the applicants do not possess the necessary skills to conduct this type of research. This question was raised in the ARDC v2 TG group, and several applicants provided a different point of view.
- It was also mentioned by a few in this thread that this topic should have been discussed more before moving to Snapshot. I agree with this, especially in light of the two previous items.
Thanks for the comments. A couple of clarifications:
Which track would you suggest is a fit? Because to my understanding, the Events and Community track doesn’t fund research. Nor do the Gaming, Developer Tooling, or New Apps/Protocols tracks.
The DAO could vote for a new track to be added to Questbook, but that’s basically the ARDC’s role already.
To clarify, the DAO only accepts multiple entities if they share the budget, the reputational risk, and the accountability for execution, coordinate on a joint proposal, and propose a single scope of research initiatives. That’s effectively forcing a joint venture, which is a lot to ask of independent suppliers and isn’t correlated with having research capabilities, which is really the objective here.
A better design would have allowed, at least for the research position, whitelisting several suppliers, and then running both RFPs and supplier-led proposals for more granular research initiatives, with a few of these selected each cycle. Alas, that’s not the design, so several of us had to withdraw our applications.
I voted against the proposal, primarily due to concerns about the high budget and broad research scope, leaning towards a more conservative approach. However, I suggest breaking the proposal into two phases: initially focusing on developer research within the Arbitrum ecosystem to reduce the budget for the first phase, while gaining experience before expanding to comparative studies with other ecosystems.
After thoroughly reviewing the proposal, it is very detailed overall, but some questions remain:
1. What are the criteria for the discretionary bonus mentioned in the proposal? How is “success” defined?
2. What is the role and actual authority of the stakeholder committee? Can they influence the research direction?
3. If the research results identify issues with Arbitrum’s core technology or policies, will these findings be prioritized and acted upon?
I’m voting “FOR + two others” the proposal because we truly lack a deeper understanding of how developers see Arbitrum and building on it. While I am one of these developers, I’d like to see how others think about it and how their views differ from mine.
@danielo Does the “+ two others (SOL & OP)” include all chains within the Optimism Superchain, or just OP Mainnet? If the latter, I would suggest doing research on Base instead of OP Mainnet. My observation is that Base has much more developer activity than OP Mainnet, and it is also the largest chain by TVL within the Superchain.
Thanks for the comment. We might make the final ecosystems selection a Snapshot vote (accompanying the Tally vote), as there are a lot of different opinions here. TBC
We’re supportive of this proposal. General comments before posting our rationale:
We would like to signal our support for @PGov’s suggestion of including specific recommendations in the final completion phase, aiming to make it easier to utilize the outcomes of the research, including user personas and insights.
Regarding the chains selected, in case it was an option, we recommend focusing on specific chains (e.g., OP Mainnet, Base) rather than a broader ecosystem of chains (such as the Superchain) to ensure a clearer comparison.
I think we could still use a detailed survey of builders instead of expensive interviews. Surveys are faster and cheaper. After that, we could run a few focused interviews with only the key participants to explore deeper questions.
After consideration, the @SEEDgov delegation has decided to vote “AGAINST” this proposal at the Snapshot Vote.
Rationale
We would like to start by mentioning that the proposed research is highly interesting, and we believe it makes sense to establish comparison points with other ecosystems such as Solana and Optimism. If we were to vote in favor, it would be for the option that includes both, with the caveat that within Optimism, Base should also be covered at the very least.
That said, while reviewing this proposal, we asked ourselves, “Why shouldn’t the ARDC handle this?”
We came across this explanation from the proposer:
This explanation contains a claim (already mentioned by @JamesKBH in his rationale) justifying funding this initiative because “most of the candidates for the research position don’t have significant User Research expertise.” When we inquired about this in the ARDC’s channel within the Delegate Group, we found that several of the applicants provided a different perspective (e.g., @PYOR, Ryan from DL (joint application with @CastleCapital), and @Alice1123 from The Block).
Being 100% honest, if the applicants for the position believe they have the necessary expertise to carry out this initiative, we would prefer it to be one of the many initiatives that go on-demand under the new structure approved for ARDCv2. This preference can be justified for several reasons:
- We have already funded ARDCv2 to have a structure (both Service Providers and Human Resources) that facilitates both the research and the operational/management aspects involved. This way, the ARDC would be perfectly capable of conducting the research, avoiding double spending on tasks related to project management and coordination (again, ARDC will already have a structure for this). The point is to avoid creating repetitive structures that lead to unnecessary expenses.
- If it is an on-demand research requested by the DAO, subcontractors could be incorporated as needed. Since payments in this case are milestone-based, we see no issue as long as the DAO approves it in Snapshot.
- We prefer not to set a precedent of funding proposals that the ARDC could handle. Instead, the approach should be reversed: funding such proposals externally only when the ARDC explicitly declines to handle them due to a lack of expertise or resources.
- As a “minor” consideration, handling it through the ARDC implies one less multisig for the MSS to manage. While this should not dictate our decisions, it is important to manage resources efficiently and avoid overburdening them with unnecessary operational requirements.
There are other factors that do not convince us either:
- The proposal seems rushed. As other delegates have mentioned, while there is no strict minimum discussion period, seven days of discussion is on the edge of what is “socially accepted.” In this case, we see no need for such a short discussion period, despite the justification provided by the proposer.
- We struggle to understand why a Council is being incorporated. It has been assigned a series of tasks, but as we understand, no compensation has been planned for its members. Additionally, it is unclear who the members will be, as only a “preliminary” composition has been provided without specific names.
In summary, we would be happy to help push this research within the ARDC’s scope, following the process already established for on-demand work: