Open Discussion: AI Tooling for Governance via Tally

Hey guys, hope it’s going to be a productive and happy week for you all.

I’m part of the team at ORA. We’ve been chatting with Tally about how best to integrate AI tooling into their governance platform. Our priority is supercharging the individual voter experience for Arbitrum DAO in one location (Tally) as a way to drive greater voter participation and thus greater decentralization for the DAO.

I’d like to use this post to set up a discussion with you, the community, to explore questions like:

What are some aspects of the governance platform that still feel slow, or information-heavy?
Are there any specific AI features you wish were integrated?
Have you thought about what AI tooling for DAOs could look like in one year from now?

I ask these questions to get you thinking: so much tooling exists already, and much more is still to come as the space improves. Most importantly, the infra is here to ensure any integration of AI is secure, decentralized and verifiable.

I’d like to share some thoughts on AI tools that could immediately benefit the DAO.

  1. Proposal Summary

The addition of a new Summary tab to Tally’s existing proposal UX, providing a variety of concise information to keep voters informed without information fatigue.

What information would you love to see here?

  2. Arbitrum DAO Weekly

The addition of a Weekly Updates UX section on Tally’s Arbitrum DAO homepage. A combination of information from proposals submitted, discussion in the forum and AI-search results for any headline updates.

  3. Community Sentiment

A summary of forum discussions related to a proposal. Tally has a UX section to show forum posts already, but we propose adding a Summarize function to pick out information relevant to the proposal at hand and avoid voters having to sift through individual forum posts.


We’d also like to take the time to chat about how we envision the future. There is a rapidly growing ability for groups to have their own trained or fine-tuned models. Why shouldn’t every DAO have a personal model?

Let us know what you think about these possible tools or extensions of these!

  4. Domain Experts

Smaller models that observe DAO operations in specific domains for a period of time before becoming active. These ‘domain’ experts can be considered aggregates of all the information, action, knowledge and opinions of community members in ‘Arbitrum DAO Operations’ for example. Domain experts could provide a variety of functions, but primarily I see them adding value in informing voters in a way that is a neutral aggregation of all DAO contributors. Each proposal could have a rating or comments about the proposal’s possible impact, from each domain expert. This would allow individual voters to reduce their personal information gap in voting on diverse proposals relating to different areas of DAO operations.

More voters feeling more comfortable voting on more proposals is the goal here.

  5. Personal Delegation Agents

Finally, to reduce reliance on delegates (a good model, but one which tends toward power concentration), we at ORA can offer voters a second option: the personalized AI delegate agent. Each tokenholder would receive an agent, which they can personalize via prompt and data to vote on their behalf. Customizable to the tokenholder’s preferences, this agent could be set to “Vote yes on issues that don’t relate to the treasury”, for example.

This is a fascinating new option for diversifying voter power concentration and improving efficiency.
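To make the idea concrete, here is a minimal sketch of how such a rule-driven agent might work. Everything here (the `Proposal` and `DelegationAgent` classes, the tag-based rule format) is hypothetical illustration, not ORA's actual design:

```python
# Hypothetical sketch of a rule-driven personal delegation agent.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    tags: set  # e.g. {"treasury", "grants"} — assumed metadata

@dataclass
class DelegationAgent:
    # Voter-defined rules, checked in order; first matching rule decides.
    rules: list = field(default_factory=list)  # (predicate, vote) pairs
    default_vote: str = "abstain"

    def vote(self, proposal: Proposal) -> str:
        for predicate, decision in self.rules:
            if predicate(proposal):
                return decision
        return self.default_vote

# Encoding the example preference:
# "Vote yes on issues that don't relate to the treasury."
agent = DelegationAgent(rules=[
    (lambda p: "treasury" not in p.tags, "for"),
])

print(agent.vote(Proposal("Upgrade sequencer", {"technical"})))  # "for"
print(agent.vote(Proposal("Fund grants", {"treasury"})))         # "abstain"
```

A real deployment would of course need signed on-chain vote submission and auditable rule evaluation; the sketch only shows the preference-to-vote mapping.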


To wrap up, I’d just like to say I’m very excited to get a sense of DAO opinions on these solutions. Let’s refine these together and continue to work to improve the voting experience, driving greater involvement in DAO governance and therefore, greater security for Arbitrum DAO!

A lot will change over the coming years because of AI – it’s worth examining what the future could look like now.

3 Likes

Hey James! Hope you are doing well. I think multilingual proposal summaries are necessary, because there are quite a few delegates in the community who are not native English speakers.

1 Like

Nowadays, many voters give up because of information overload or unfamiliarity. If there were concise “Summary” and “Community Sentiment” features to help people quickly understand the core content of a proposal, the turnout rate should increase a lot.
For example, someone applying to be a delegate is faced with many governance proposals, which are in English on the one hand and involve a lot of specialized content on the other. From my personal perspective, I hope these AI tools can act as a “neutral summary advisor”, giving voters a clearer understanding of the impacts of each proposal without having to wade through a lot of information from scratch. Such a tool would allow voters to participate more confidently and efficiently.

2 Likes

Hey Larva! What a great opportunity you’ve identified. I think this is key to ensuring accessibility to Arbitrum DAO governance across the world. LLMs are definitely suited for this, and this could be implemented as part of a Summarize UX feature for proposals and community discussion.

Thank you for your thoughts! Super valuable for our thinking.

Definitely. In fact, you hit on an important point about the ‘neutral summary advisor’. Verifiable AI is a critical piece of infrastructure here to ensure any AI features are credibly neutral, transparent and auditable.

As Larva mentioned and you touch on, an AI feature seems necessary to bridge the language barrier for involvement in governance and related opportunities. Additionally, summary features can bridge knowledge gaps with regard to specialized content.

What did you think about my suggestion of setting up ‘Domain Expert’ AI agents that could provide ratings, summaries and evaluations of proposals based on their perspective of a DAO’s operations? I personally think this will give voters a broader perspective about how a proposal may impact different aspects of a DAO differently.

Thanks for creating this thread, Alec. Excited to see what folks are interested in.

1 Like

I think the idea of introducing “domain experts” is a good one, as it will enable delegates to be better informed when casting their votes, especially on proposals of a more specialized nature. In my opinion, the neutral evaluation and impact analysis of the proposals by domain experts can help community members better understand the content of the proposals, narrow the information gap, and improve the participation and quality of voting.

How will these domain experts be selected? Will there be objective selection criteria and qualification requirements to ensure their professionalism and neutrality? Will the domain experts’ evaluations and analyses be made public after each proposal? It is recommended that they publish regular reports summarizing the impact of voting and DAO operations to give delegates a more intuitive understanding of the experts’ work product.

Also, will there be support for non-English speakers from the domain experts? I personally come from a non-English-speaking country and have to rely on translations for spoken expressions and everyday working terminology, but we love the ARB ecosystem. This is just a personal note, and I hope to see more interesting feedback on this.

1 Like

Good thread! We at x23.ai would also be interested in this, and being involved / collaborating where possible. We already provide summaries (among other features) for Arbitrum delegates.

The concept of AI delegates is also interesting and an area of active research. If there were a small group of delegates that wanted to experiment with this, I believe there would be a number of potential providers for them to choose from (e.g. @zeugh from Blockful has done some experiments here). One potential concern is transparency in the decision making of these AI agents: if delegates rely too heavily on the agents, power and influence will consolidate with the agent providers rather than with the delegates themselves.

1 Like

I concur, especially regarding information overload. Information must be concise and precise: something like presenting KPIs in a “one-pager” that shows the most relevant key information, or the “core content” as you cleverly mention. I am also very interested in monitoring the turnout rate in order to analyze the rate of improvement. I just love that these AI tools focus on generating a better experience for voters, aiming for better understanding of each proposal to build efficiency and confidence within the community :star_struck:

1 Like

Thank you for starting this topic. I am from SimScore (an RnDAO unit). We’ve developed a POC for large communities (25+) to make big decisions.

Problem Statement

For large communities (25+), the group dynamics of open discussion lead to herding, groupthink and anchoring. Centralization, poor decisions or no decisions are the result.

Active discussions in the Arbitrum forum reflect these issues:

  1. The influence of who speaks first.
  2. Herding, or relying on others’ statements.
  3. Groupthink, where members avoid dissent to sidestep confrontation.

The primary reason decision-making in DAOs is challenging lies in the reliance on open discussion forums like Discourse. These settings are poorly suited to effective decision-making, as they amplify unproductive group dynamics.

Research confirms that accurate judgments require independent thought and unbiased aggregation, particularly in settings like DAOs. These methods ensure that each voice is heard without interference from others’ views.

Our approach uses NLP centroid and similarity scoring, together with token weighting, to identify priorities and groupings within the replies from community members. The output is:

  1. Priority list
  2. Relationship Graph
  3. Groupings List

The resulting unbiased aggregation is transparent, repeatable, auditable and trustless.
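For readers curious what centroid-and-similarity ranking looks like mechanically, here is a toy sketch. It uses bag-of-words vectors and cosine similarity to a summed centroid; SimScore's real implementation presumably uses proper sentence embeddings, so treat every function here as illustrative only:

```python
# Toy sketch (not SimScore's actual code): rank replies by cosine
# similarity to the centroid of all reply vectors, using simple
# bag-of-words counts instead of learned embeddings.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def priority_list(replies):
    vecs = [vectorize(r) for r in replies]
    centroid = Counter()
    for v in vecs:
        centroid.update(v)  # summed counts serve as an unnormalized centroid
    scored = [(cosine(v, centroid), r) for v, r in zip(vecs, replies)]
    return sorted(scored, reverse=True)  # closest to consensus first

replies = [
    "reduce the grant budget and audit spending",
    "audit spending before any new grants",
    "add a new logo",
]
for score, reply in priority_list(replies):
    print(f"{score:.2f}  {reply}")
```

The two overlapping replies about audits and grants score above the outlier, which is the intuition behind treating similarity-to-centroid as an unbiased priority signal.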

Each change to a proposal that is attributable to community feedback is logged and available for all to see.

This approach will lead to Better Proposals. It will also reduce cognitive load and free up communication bandwidth, so more time is available to act sooner and with higher quality. If interested, I can link a Loom.

Here’s an output from a proposal at ZK Nation - Ignite

Proposers can read replies in priority order by category and decide which replies to act on by choosing options from the Proposal Update Decision Column. Green for change, red for no change and yellow for under consideration.

Yes, I recommend voters check out their platform to get a better sense of what I’m suggesting for Tally. Let’s discuss this further after collecting all the input here.

For delegates, you hit on a super relevant point which highlights the need for decentralization AND verifiability, to ensure that any agents are acting in a way that isn’t manipulatable or dependent on one central provider. This is really where ORA can come in and shine, by providing this decentralization and verifiability to ensure security and transparency for any AI features/applications used by DAOs.

I would also add that it might be more relevant to test out this ‘AI Delegate’ concept with individual tokenholders. Current delegates already have the responsibility of voting on behalf of other tokenholders. If we recruit a few individual tokenholders, the ‘AI Delegate’ would have less impact potential and responsibility. I believe this would be a better test environment.

To finish, I think this would be an interesting proposal to make to the DAO: a pathway to testing, refining and integrating some of these ideas. That way we can set up the ideal environment.

Thanks for your comments!

Wow, a new approach! Love to see it. Thanks for sharing this. I think this could be interesting to further inform voters about the relationships/composition of forum activity for specific proposals, to help them weight their evaluation of that community information.

I think this would require a very specific UX implementation though, in the name of reducing information overload. From what you describe, this might be confusing/distracting to most voters. The goal of this discussion is to evaluate how best to get more regular voters involved, to complement the delegate voting model, which is why I raise this point.

This may be too much info for the average voter who doesn’t delegate but also doesn’t have the capacity to absorb all of this information. Again, something to test!

Hi Alecjames

The concept is more about structure. Post a new draft proposal and wait a few days, then run SimScore and post the results. The proposer now has a prioritized list of responses. The proposal can be revised based on that list without all the back-and-forth debates and discussions.

When the Rev 1 proposal is presented, run another cycle, except this time collect wallet IDs for token weighting. Run SimScore with token weight and post. Now the proposer has a list of top-ranked replies that is both aggregated without bias and token weighted.

Continue until the community is satisfied.

The final proposal can be sent to Snapshot or Tally.

This would be transparent to contributors, as all responses would have a priority number and consensus score. But more importantly, the proposer would be upgrading their proposal based on the most relevant and important feedback.
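To illustrate the token-weighted pass in that cycle, here is a hedged sketch. The square-root weighting function and the data shapes are assumptions for illustration, not SimScore's actual scheme:

```python
# Hypothetical sketch of token-weighted reply ranking: each reply's
# consensus (similarity) score is scaled by the square root of the
# replier's token balance. The sqrt dampening is an assumption chosen
# to limit whale dominance, not SimScore's documented formula.
import math

def token_weighted_ranking(scored_replies, balances):
    # scored_replies: list of (similarity_score, reply_text, wallet_id)
    ranked = [
        (score * math.sqrt(balances.get(wallet, 0)), reply, wallet)
        for score, reply, wallet in scored_replies
    ]
    return sorted(ranked, reverse=True)

scored = [(0.74, "audit spending first", "0xA"),
          (0.72, "reduce grants", "0xB")]
balances = {"0xA": 100, "0xB": 10_000}
print(token_weighted_ranking(scored, balances)[0][1])  # "reduce grants"
```

Note how a slightly lower-consensus reply from a larger tokenholder can outrank a higher-consensus one, which is exactly the trade-off a community would want to see made transparent.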