Agentic Governance Initiative [AGI]

Motivation

There is plenty of needless labor in Arbitrum governance. The process from ideation to proposal passage and assessment is full of inefficiencies and needless friction. Delegates are stretched thin in both mental bandwidth and time. New proposals face immense hurdles in grabbing the attention and feedback of these delegates and in crowdsourcing iterative improvements from the community. Communication takes weeks or even months to accumulate, not counting the process of consensus-building. It’s often unclear what a marker of success would look like for a proposal, and transparent assessments are all too rarely reported back to the DAO, making it difficult to know whether grant money has been used productively. Finally, once a proposal has attracted some traction, it’s hard to get one’s bearings in that discussion without wading through 50+ replies of mixed quality, several external links, and additional data points to consider.

With all these points of friction, current technological developments can act as a WD-40 for the entire governance process. Event Horizon aims to build upon existing infrastructure funded by the DAO to more effectively operationalize delegates. While there is still a long way to go before DAOs can be run fully autonomously, there are several places in which AI tooling can help improve the processes for human delegates today. The following proposal is a collaborative effort and contains several product requests sourced from various delegates and members of the AGI working group.

Event Horizon’s AI Solution Today

Event Horizon has already begun building for Arbitrum. Today, anyone can access the following features on our website.

Shipped Features

Automated Voting: Users can create their own AI agent, which receives a micro-delegation from the Event Horizon voting pool and votes on the user’s behalf. The agent votes in line with the user’s general preferences. Users can ask the agent to update its preferences or explain its reasoning, and of course, users can override the agent at any time.

Forum suggestions, summaries, and sentiment analysis: Event Horizon personal agents draw on forum data that is continuously updated in a RAG database. This additional context goes beyond voting: users can prompt their agent for summaries of the discussion and the general sentiment among other delegates.
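For readers who want a concrete picture of the retrieval step, here is a minimal sketch of the pattern. The embedding model name, the `ForumPost` structure, and the `top_k_context` helper are illustrative placeholders, not a description of our production stack.

```python
# Minimal sketch of retrieval-augmented prompting over forum posts.
# Model name and data structures are illustrative assumptions.
from dataclasses import dataclass

import numpy as np
from sentence_transformers import SentenceTransformer


@dataclass
class ForumPost:
    author: str
    body: str


def top_k_context(question: str, posts: list[ForumPost], k: int = 5) -> list[ForumPost]:
    """Return the k forum posts most similar to the user's question."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    post_vecs = model.encode([p.body for p in posts], normalize_embeddings=True)
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = post_vecs @ q_vec            # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:k]
    return [posts[i] for i in best]

# The retrieved posts would then be prepended to the agent's prompt, e.g.
# "Summarize delegate sentiment given these forum excerpts: ..."
```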

Multilingual support: Users can use their agents to better understand proposals and forum discussions in their own non-English native language. All agent communication can now be conducted in a range of other languages.

Proposal suggestions: Agents can offer helpful suggestions for how an actively discussed proposal can be improved. These suggestions take into account active forum discussions and delegate conversations.

Rationale generation: Agents can provide rationales for why they voted a certain way, shining light into the black box of traditional AI. Delegates can also debate these agents to help refine their own rationales.

Forum Comment Writing: Agents can also craft informed comments on active proposal discussions, enhancing pre-voting governance deliberation.

Many of these features required a non-trivial investment in underlying infrastructure. We want to build on the success of what we’ve shipped over the past few months to create more and better tools that help human delegates make decisions and track governance progress.

Requested features

After much discussion with the AGI working group, the following features were requested:

Delegate Modeling for Proposal Crafting (requested by L2beat)

Event Horizon will fine-tune models to provide voting patterns and rationales in the style of various top delegates. This provides several benefits to the DAO:

  1. Infinite Attention: Models don’t sleep. Every proposal, forum post, and amendment gets processed in real time without delay.
  2. Skill-Cloning at Scale: Synthetic agents capture the heuristics, red-flag detectors, and institutional memory of top-tier delegates once, then duplicate them across every sub-DAO or working group.
  3. Latent Knowledge Distillation: Fine-tuning allows us to formalize unwritten norms and decision criteria. The resulting model is a governance advisor with infinite patience.
  4. Proposal simulation/forecast modeling: Spin up 100 synthetic delegates with slightly different temperature settings to Monte-Carlo vote outcomes before shipping a proposal, yielding essentially free proposal risk analysis (see the sketch after this list).
  5. Wisdom preservation: Delegates often drop out. Synthetic delegates preserve a time capsule of reasoning style and governance wisdom, so that once a delegate does leave the space, these agents remain a source of wisdom for the DAO.
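As a rough illustration of the forecast modeling in item 4, the sketch below Monte-Carlos vote outcomes by repeatedly sampling synthetic delegates at slightly different temperatures. `query_delegate_model` is a hypothetical stand-in for a real fine-tuned delegate model, and the vote weights inside it are arbitrary.

```python
# Minimal sketch of Monte Carlo vote forecasting with synthetic delegates.
import random
from collections import Counter


def query_delegate_model(delegate: str, proposal: str, temperature: float) -> str:
    """Placeholder for a fine-tuned delegate model; returns a simulated vote."""
    return random.choices(["FOR", "AGAINST", "ABSTAIN"], weights=[0.6, 0.3, 0.1])[0]


def forecast(proposal: str, delegates: list[str], runs: int = 100) -> Counter:
    """Tally simulated outcomes across delegates and temperature jitter."""
    tally: Counter = Counter()
    for _ in range(runs):
        for delegate in delegates:
            temperature = random.uniform(0.5, 1.0)  # the "slightly different settings" above
            tally[query_delegate_model(delegate, proposal, temperature)] += 1
    return tally


print(forecast("AGI Phase 1 budget", ["delegate_a", "delegate_b", "delegate_c"]))
```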

One of the greatest limitations of agentic governance today is the lack of differentiation. Though prompt engineering alone can make subtle differences in agent reasoning, we’ve found that true fine-tuning is required to drive more meaningful differentiation from agent to agent. This differentiation is important for both personalization and emulation.
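To make the fine-tuning step concrete, here is a minimal sketch of how a delegate’s public voting history could be packaged into a supervised fine-tuning dataset. The chat-style JSONL layout is just one common example format; the delegate name, file name, and example records are hypothetical, and the actual training pipeline is not specified here.

```python
# Minimal sketch of building a per-delegate fine-tuning dataset (JSONL).
import json

history = [
    {
        "proposal": "Increase security council budget",
        "vote": "AGAINST",
        "rationale": "Budget lacks itemized costs and a sunset clause.",
    },
    # ...one record per historical vote...
]

with open("delegate_a_finetune.jsonl", "w") as f:
    for record in history:
        example = {
            "messages": [
                {"role": "system", "content": "You vote in the style of delegate_a."},
                {"role": "user", "content": f"Proposal: {record['proposal']}\nHow do you vote and why?"},
                {"role": "assistant", "content": f"{record['vote']}. {record['rationale']}"},
            ]
        }
        f.write(json.dumps(example) + "\n")
```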

Swarm Coordination and Sensemaking

Just as we’re able to use these synthetic agents as a source of instantaneous feedback and wisdom, we can also let these same delegates engage in synthetic debate. These simulated Socratic dialogues can help us find the source of fundamental disagreements, be they value-based or empirical, and can therefore yield intelligent and valuable suggestions for actively debated proposals. Various benefits include:

  1. Argument compression: Debates distill sprawling forum threads into easily digestible pro/con lists; human delegates can read the TL;DR.
  2. Sentiment topography: Measuring vector divergence during the debate maps faction clusters, giving you a heat-map of where consensus is achievable (see the sketch after this list).
  3. Bias counter-balancing: Diverse synthetic personas cancel out individual blind spots (public goods, legal, operations), yielding a more holistic recommendation than any single source of truth.
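The sketch below illustrates the sentiment-topography idea from item 2: embed each synthetic delegate’s debate statement and cluster the embeddings to surface factions. The model name, the two-cluster choice, and the sample statements are illustrative assumptions, not our production setup.

```python
# Minimal sketch of faction clustering over debate statements.
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

statements = {
    "delegate_a": "The grant lacks measurable KPIs and should be trimmed.",
    "delegate_b": "Funding experimentation is worth the risk at this size.",
    "delegate_c": "Cut the budget until KPIs are defined.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(statements)
vecs = model.encode([statements[n] for n in names], normalize_embeddings=True)

labels = KMeans(n_clusters=2, n_init=10).fit_predict(vecs)
for name, label in zip(names, labels):
    print(f"{name}: faction {label}")  # delegates in the same cluster broadly agree
```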

Select forum communication with SimScore

There was unanimous agreement among delegates during the AGI working group calls that AI forum comments ought to be handled with care. No one wants to read AI slop or summaries interjected randomly into live debates; it ruins the flow of conversation and is often unoriginal. Thankfully, we can leverage the SimScore API developed by @ma3ts23 to post only original, value-add comments to the forum. Our agents already have forum comments as context in their RAG databases. We can combine this information with the outputs of synthetic delegates and/or swarm debate summaries, filter for the most original comments via SimScore, and post those comments to the forum (a sketch of the filtering step follows the list below). This will be done sparingly so as not to spam the forums. It allows us to provide the following benefits to the DAO:

  1. Redundancy filter: High-overlap takes die in the SimScore filter, so threads only surface new insights that human delegates haven’t yet considered.
  2. Automated redundancy analytics: Rejected near-duplicates become a heat-map of over-indexed ideas, giving delegates and the DIP actionable data.
  3. Transparency: Thresholds and logs prove each AI post cleared an originality bar, which is useful for program assessment.
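As a sketch of the originality gate described above: a draft comment is only posted if its similarity to the existing thread stays under a threshold. `simscore_similarity` here is a crude word-overlap placeholder standing in for the actual SimScore API, whose real interface is not reproduced here; the threshold value is likewise an assumption.

```python
# Minimal sketch of an originality gate for AI forum comments.
def simscore_similarity(draft: str, existing_comments: list[str]) -> float:
    """Placeholder for the SimScore API: 0..1 similarity of draft vs. the thread."""
    draft_words = set(draft.lower().split())
    overlaps = [
        len(draft_words & set(c.lower().split())) / max(len(draft_words), 1)
        for c in existing_comments
    ]
    return max(overlaps, default=0.0)


def should_post(draft: str, existing_comments: list[str], threshold: float = 0.35) -> bool:
    """Only post drafts whose overlap with the thread stays under the bar."""
    score = simscore_similarity(draft, existing_comments)
    return score < threshold  # high-overlap takes are dropped, per benefit 1 above

# The (draft, score, decision) triple can be logged to support the transparency point above.
```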

Flexible Voting (ScopeLift) (suggested by Ben DiFrancesco):

The Event Horizon delegation currently votes in a winner-take-all model. With the help of ScopeLift’s tooling, we can break this into chunks that better represent the underlying preferences of the users who have their own AI delegate. This would be an upgrade to the governor contract, allowing users to fractionally delegate to multiple delegates (a simplified sketch of the vote-splitting logic follows the list below). This unlocks several capabilities:

  1. Beyond all-or-nothing: Delegate to more than one delegate
  2. Lower touch: should the DAO choose to delegate from the treasury to, say, underrepresented delegates, this can be done from a single wallet or multi-sig, rather than spinning up a new wallet or multi-sig for each delegation going out.
  3. Precision block voting: Voting blocks such as Event Horizon can split votes in a more fine-grained manner.
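For intuition only, the snippet below shows the off-chain bookkeeping of splitting one pool’s voting power across delegates in proportion to user preferences. The real Flexible Voting mechanism is a Solidity governor extension by ScopeLift; this Python sketch, the integer weights, and the delegate names are purely illustrative.

```python
# Minimal sketch of proportional vote splitting across delegates.
def split_voting_power(total_votes: int, weights: dict[str, int]) -> dict[str, int]:
    """Allocate whole votes to delegates proportionally to integer preference weights."""
    weight_sum = sum(weights.values())
    allocation = {d: total_votes * w // weight_sum for d, w in weights.items()}
    # Hand any rounding remainder to the highest-weighted delegate.
    remainder = total_votes - sum(allocation.values())
    allocation[max(weights, key=weights.get)] += remainder
    return allocation


print(split_voting_power(1_000_000, {"delegate_a": 50, "delegate_b": 30, "delegate_c": 20}))
# {'delegate_a': 500000, 'delegate_b': 300000, 'delegate_c': 200000}
```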

Full Bespoke DAO Events and News Feed:

A holistic, personalized digest of the latest DAO proposals, discussions, and relevant events, tailored to each user’s unique preferences. This helps delegates in various ways:

  1. Relevancy filter: Delegates can stay informed in a one-stop shop for any DAO-related chatter in the areas they’re specifically interested in.
  2. Lower barrier to entry: Having a single place where delegates can catch up on the latest DAO-related matters reduces the activation energy required to get back into voting, thereby helping reach quorum.
  3. New delegate onramp: This tool also lets newer delegates get up to speed faster rather than foraging through several page-long forum threads. Users can also ask for additional context via chat.
  4. Spam filter: Low value-add, DIP-gaming comments get filtered out.

KPI Suggestions for Proposals:

New DAO initiatives too often lack helpful markers of success.

Agents proactively recommend Key Performance Indicators (KPIs) tailored specifically to enhance clarity, accountability, and measurable outcomes within proposals. This allows delegates to:

  1. Consider options: Readily accessible KPI generation lowers the bar significantly to quantifying proposals. Rather than back and forth debate over qualitative assessments, delegates can easily summon KPI suggestions, which they can then offer as a way of measuring the future success of a given initiative.
  2. Iterate: With multiple sets of KPI suggestions, debate can be had around whether a given set is useful or not, rather than on generating KPIs from scratch.

Post-Proposal Data Collection, Analysis, and Reporting:

Key to the success of any grants program, DAO-related or not, is the ability to evaluate whether a given grant applicant successfully executed their task. Once KPIs are selected with the above tool, paired with human delegate discernment, autonomous data-collecting agents can automatically gather data and post monthly or even weekly reports to the DAO (a simplified sketch follows the list below). This enables various benefits for delegates:

  1. Tight feedback loops: Programs, such as DRIP, can get immediate feedback from data shared with the DAO. If something isn’t working, an AAE can cut the funding without having to wait through 6 months of emissions for a biannual report. If something is working, grantees can double down.
  2. Less operational overhead: Less time is spent by grantees compiling data for and writing large research reports. This infrastructure can run continuously in the background for the DAO.
  3. Greater transparency: Given data will be pulled from publicly available datasets, 0 trust is required on the part of the grantees to report accurately.
  4. Benchmark canonization: After several successful reports, shared standards for KPI reporting formats can be unified DAO-wide.
  5. Faster effort: Because reports will be posted weekly or monthly, grantees have a strong incentive to get their program up and running right away, rather than cramming their efforts in at the last minute before the usual biannual review.
  6. Meta-analysis: After several proposals are tracked using this system, the DAO can easily compile multi-grant data to find which categories of grants are successful and which aren’t.
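As a simplified illustration of the reporting loop, the sketch below pulls KPI values from a public data source and renders a short status report. `fetch_metric`, the sample metrics, the grantee name, and the report format are all hypothetical placeholders; scheduling (weekly or monthly) sits outside this snippet.

```python
# Minimal sketch of an automated KPI report compiled from public data.
from datetime import date


def fetch_metric(name: str) -> float:
    """Placeholder: pull a KPI value from a public dataset (on-chain data, Dune, etc.)."""
    sample = {"weekly_active_voters": 412, "grant_funds_deployed_pct": 63.0}
    return sample[name]


def build_report(grant: str, kpi_targets: dict[str, float]) -> str:
    """Render a short report comparing each KPI's current value to its target."""
    lines = [f"{grant} KPI report ({date.today().isoformat()})"]
    for name, target in kpi_targets.items():
        value = fetch_metric(name)
        status = "on track" if value >= target else "behind"
        lines.append(f"- {name}: {value} (target {target}, {status})")
    return "\n".join(lines)


print(build_report("Example DRIP grantee", {"weekly_active_voters": 500, "grant_funds_deployed_pct": 50}))
```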

Global Chat:

Introduce a real-time global communication channel alongside personal agents, facilitating user collaboration, information sharing, and creative interaction methods. This allows users to prompt the agent swarm at any time and to see what previous users have found useful to ask the swarm. Benefits include:

  1. Reduced redundancy: Users with similar queries don’t have to reprompt the models
  2. Practical teaching: New users can learn how to use these tools by seeing how other users are applying the AI tooling and swarm intelligence to be better delegates.

AI Scoring and Rationales for Grants:

Agents autonomously evaluate proposals and grant applications, assigning objective scores accompanied by detailed rationales, enhancing transparency and trust in funding decisions. This tool is:

  1. Transparent: No more subjective, black box assessment for grants.
  2. Efficient: Once built, this tool costs the DAO nothing. No need to pay committees. Turnaround time is instant.

Conclusion

Agentic governance is not speculative; it’s an emerging standard that will define DAO participation. Arbitrum has a unique chance to lead this transformation. Event Horizon is committed to co-creating this initiative, guided by community input. This proposal will be brought to vote via Snapshot as a validation of the DAO’s desire to see these initiatives built.

4 Likes

is this… going to a vote? if so, to decide what?
if not, why is this topic in the Proposals category?

3 Likes

This is a good initiative and I agree with the premise of agentic governance. However, it’s important that Arbitrum governance does not get ‘captured’ by one platform, but maintains its permissionless nature with regard to AI agents and platforms.

I believe Optimism have taken a good approach with their recent mission request, creating early experiments with AI delegates, where multiple parties are contributing to the same goal: Foundation Mission Request: AI Delegate Development · Issue #277 · ethereum-optimism/ecosystem-contributions · GitHub (We, x23.ai, were one of the selected teams).

It would be good to see this initiative mirror certain parts of Optimism’s mission request, namely allocating ARB voting power to multiple agentic governance providers, and evaluating performance over time in a competitive environment.

6 Likes

hey @daveytea how do you think this can be done, in practice? what is considered good performance for an agentic governor?

1 Like

Hey Davey, we appreciate your feedback as a fellow builder in the AI space. We want to clarify that it is quite common for various products to lead sectors within the ecosystem (Tally for Voting/Staking, Karma for metrics, Questbook for grants). That said, “capture” certainly is not in the cards any more than the above teams have “captured” their respective verticals, and we welcome broader investment into AI expansion. We would work with you on a separate plan for AI grant funding, but we do not want to blur the intention of this current one.

For context of this proposal, the Event Horizon team has been building agentic governance tooling for the Arbitrum ecosystem for several months: Event Horizon Updates - #6 by EventHorizonDAO.

This specific proposal is a call to better align how Event Horizon specifically can continue building value-added tooling. We are focused on our build path and are not endeavoring to champion a broader grants competition, as that isn’t our domain as a team.

Though, again, should the OP program yield material benefit for the OP ecosystem in the months to come, we would be open to being a collaborator with you in a similar, separate approach. And we’d gladly connect if you’re available in the coming weeks: Calendly

I think initially it would be measured similarly to human delegates: participation, contribution, and alignment with Arbitrum’s mission. It’d be very similar to how you measure the performance of the top ARB delegates.

Count me in. The future of governance isn’t manual, it’s agentic. The idea of co-creating a system where AI faithfully amplifies human intent, streamlines participation, and scales inclusion is exactly the kind of frontier thinking DAOs need. I’m excited to help shape the final AGI proposal and contribute to the long-term vision of intelligent, responsive governance in the Arbitrum ecosystem. Let’s build the blueprint for the next era together.

4 Likes

Thanks for the clarifications. My concern is more on the ‘capture’ of delegated ARB from the foundation. If a massive amount of delegated ARB goes to one ‘provider’, that makes it less likely for other experiments to receive a similar amount of delegated ARB, reducing competition and experimentation in the space.

I don’t want to seem too negative here, so I’ll make clear that I support the work your team does, as I think it is beneficial experimentation in general. It’s been great to see how it has evolved and the discussions it has catalysed!

More than happy to contribute (as a potential future delegate or just someone interested) to this initiative and other future experiments.

4 Likes

Can we please avoid the AGI acronym? AGI is already a widely used acronym in the context of AI, and Arbitrum choosing that for a working group/initiative is really going to confuse people.

Also agreeing with @daveytea that avoiding capture by a single provider is important here. What makes sense to me with this initiative is designing an RFP with multiple awarded submissions.

3 Likes

Hey @danielo we understand your propensity toward generalized grant programs and venture studios as your business model.

However, again, this is a call for how Event Horizon specifically can continue building the products it has already built and has in motion for Arbitrum — there is no need for EH to construct a broad grants program for this. We are not a grants studio and in fact it would be a large time and effort suck when compared to actually building product.

Rest assured, this by no means prevents another person from creating their own grants program. However, we do ask that the intent of this proposal be respected. This is a call for those who want to join the EH working group to continue what EH has been building, is building today, and will be building in the future. No programs, program managers, studios, etc. necessary (for this).

3 Likes

I think that AI-agent governance is a bad direction for DAOs.
Originally, the point of a DAO was decentralization, i.e. each vote had to make its own contribution.
This proposal is an attempt to replace the person in the fundamental procedure: voting.
The people who create the AI only shape the input data and the structure of the neurons’ interactions (if you can call them that); no one guarantees that the output will be the same choice the person would have reached from the same input data.

Yes, a person can make mistakes, but they are responsible for them.
I think AI is a very important assistant in many areas of activity. It can collect all the pros and cons for each proposal, but it is a person who must make the decision.

1 Like

One small comment. The Agentic Governance Initiative proposes the creation of a new AAE, but this falls squarely under the domain of governance, which is already a focus of OpCo.

As explained in my comment for the Strategic Objectives proposal, I think we should avoid the temptation to create new AAEs for each initiative, and instead aim to align new efforts within the scope of existing AAEs wherever possible. In this case, expanding the mandate of OpCo to include oversight of the Agentic Governance Initiative seems preferable to creating an entirely new AAE.

1 Like

Do all governance infrastructure improvements fall under the purview of OpCo? Even if it means building an entirely new tech stack? I may be misunderstanding the scope of OpCo but this seems either excessive or not the case. I’m open to being corrected though.

2 Likes

I see massive potential here for addressing Arbitrum’s core governance bottlenecks.

Our current governance is painfully bureaucratic and slow. Agents can support meaningful participation from key contributors while dramatically reducing governance overhead costs.

The infrastructure around AI governance could make our entire SOS process significantly smoother. There’s substantial value waiting to be captured through better coordination and decision-making tools.

I see two distinct paths here and both are very interesting to me:

  • AI Voting Agents: I agree with @daveytea that we should deploy multiple agents with lower voting power rather than single high-power agents. This approach allows us to better understand decision variance, prevents single points of failure, and creates a more robust governance signal by observing how different AI approaches converge or diverge on complex issues. Those agents shouldn’t be paid like humans from the delegate incentive programs, but there should be a similar process to reward the most outstanding ones.

  • AI Decision Support: This infrastructure should be developed as a service provider model to make participation radically easier for key stakeholders. The goal is delivering what busy contributors need - visually clear, digestible information that allows them to engage meaningfully in minimal time.

Our governance has such high barriers to entry that crucial stakeholders simply can’t participate. This is exactly why SOS is paused. We need infrastructure that lets busy, high-value contributors spend 10 minutes and deliver massive governance value rather than requiring hours of research just to understand basic proposals.

This seems like the kind of blocking that we don’t want. OpCo isn’t even set up yet, never mind its mandate clearly established with regards to proposals like this.

3 Likes

Hi cp0x,

Your reply is very interesting.

The bridge between decentralization and one person making a decision (vote).

Looking at DAOs, the collective makes the decision. Not the individual person. Together, as a group, the decision is made: yes/no, onchain.

That is decentralization.

However, that is not true of proposal revisions. Proposers are charged with that work. This is where I feel there is a gap.

The DAO members exerted effort to comment, meet, discuss and debate proposals. Yet the proposer decides which ideas to add and which ideas to ignore.

This is not decentralized.

Unfortunately I think it’s clear that AI sucks in some areas and is superhuman in the right niche.

What we need is a mechanism for aggregating all holders’ / delegates’ comments that is auditable and transparent, not a black box. That aggregation becomes the basis of proposal revisions.

This way holders / delegates can “vote” with their ideas as well as their yes/no decisions.

3 Likes

I agree that AI is very useful as an assistant, and I will support using it for the benefit of the community to sort through all the branches of voting and discussion. But as soon as AI makes any decisions, I will be against it.

Thank you Event Horizon for introducing this initiative.

Regarding Agentic Governance, we are really supportive of initiating discussions to create guidelines and boundaries on how the Arbitrum DAO would like to integrate agentic governance so that it serves a useful purpose in a collaborative way. We would be happy to join this discussion.

Regarding EH becoming an AAE to act on behalf of the DAO in the development of agentic governance initiatives, we believe further clarity is still needed to solidify the case for AAE designation:

  1. Mission-Critical Alignment: We encourage Event Horizon to articulate more explicitly how Arbitrum’s success is mission-critical to your organization. For example, aligning your growth strategy, sustainability model, or core KPIs with DAO adoption metrics would reinforce this commitment.
  2. Defined Scope of Work: The AGI post outlines an ambitious roadmap. As part of a potential AAE onboarding, we suggest scoping a focused mandate (e.g., “agentic governance infrastructure and retail participation”) to avoid overlaps and enable OpCo and the OAT to assess progress effectively.
  3. Operational Integration: If Event Horizon pursues AAE status, we recommend working closely with OpCo and other AAEs to define how your infrastructure can plug into Arbitrum’s governance lifecycle (e.g., interface with proposal development, accountability pipelines, or ecosystem education).

2 Likes

Not sure if you have looked at this proposal: An Integration to convert Discourse Forum Discussions into Clear Proposal Revisions with Community-Sourced Justifications - Technical Discussion - Arbitrum. I wonder if this is something you would consider?

1 Like

Some interesting points here. Going to weigh in, having trained and deployed LLMs from scratch circa the GPT-2 era.

One thing I would like to point out should Arbitrum pursue this further: I would hope to make sure that they do so in the spirit of creating a level playing field amongst vendors. This ultimately comes down to the OP’s point on 12. Data Collection and Model Improvement.

Getting this right is non-trivial and will ultimately be the difference between how the various agents/vendor teams perform.

Proposal data, voting data, forum data — while this is all in the public domain, scraping it will most likely be prohibitive to even the most sophisticated ML/AI data scientists.

I propose a key idea to support an open agentic ecosystem:

  • Support multiple data vendors to provide clean datasets of:
    • proposal data, voting data, forum data

This immediately levels the playing field for anyone who would like to build their own personal agent. No paying for expensive indexers, no cleaning data, no knowledge of Solidity required, etc.

Most importantly public data is not a moat!

This would allow nearly anyone with a basic computer science understanding to “drop and unpack” insights through whatever means they wish.

As tooling matures, in theory anyone can build a personal agent that is aligned with their views.
Most of all we are all working from verified, scrutinised data sets.

This also leads into a big concern as we start relying more on AI systems for information. Being able to source references for the information that is presented will be critical.

2 Likes