Agentic Governance Initiative [AGI]


Call to Action: The Event Horizon team invites delegates to join the AGI working group to co-create both the final draft of the AGI proposal and the broader long-term vision for the future of agentic governance within the Arbitrum ecosystem. (Share your Telegram handle or preferred mode of communication in the comments.)

tl;dr: This is a clean-slate opportunity to help shape the future of agentic governance within Arbitrum. We’re inviting the community to participate both through open discussion below and by joining a newly forming Agentic Governance Working Group (signal your interest in the comments). Depending on community discussion, Event Horizon is open either to operating in a non-AAE capacity or to making all alignments necessary to establish itself as an Arbitrum Aligned Entity (AAE).

Why Agentic Governance Matters Now

It is increasingly recognized that Agentic Governance—AI-powered participation and decision-making—will play a foundational role in the evolution of DAO structures.

By compressing deliberation and execution timelines from months to seconds, lowering UX friction to unlock governance access for retail participants, and enabling inter-agent collaboration to uncover novel insights and ideation pathways, agentic systems mark a step-function improvement over current models.

DAOs that fail to explore and adopt these mechanisms risk becoming governance laggards: slower, less inclusive, and ultimately relegated to legacy-org status.

To maintain its position as a governance leader, Arbitrum must be the first to meaningfully integrate agentic governance at scale.


Event Horizon’s Commitment

Event Horizon has been leading this charge for the past eight months, pioneering retail-facing AI voting and setting a precedent for responsible experimentation in the space. This post serves as a call for community input: we invite feedback on potential directions for AI governance exploration. While we’ve already received numerous ideas and expressions of interest from active delegates, we want to build with input from the entire DAO to together shape the scope of what will become a formal Agentic Governance Initiative. This is intended to be exhaustive and maximalist in approach, so feel free to be as forward-thinking and ambitious as you would like. We will refine from there what is reasonably feasible within the scope of the initiative.

Event Horizon’s AI Solution Today:

Users want to be heard; they don’t want to ‘do governance’. This is one of the lessons we’ve learned by building for Arbitrum DAO over the past several months.

Event Horizon is addressing this disconnect by building a more intuitive, user-friendly governance UX through what we’ve called the Emperor-Consul AI Agent model.

Through this framework, each voter is assigned a trusted, agentic consul to whom they are the emperor. The human voter simply communicates their higher-level desires to their agentic consul, and the consul goes out into the digital agora and continues to represent those interests until told otherwise. This modality compresses the highly taxing governance process (detailed below) into simple chat engagements with an AI representative.

However, the agents don’t just vote. The agents inform and advise the voter along the way through an ever-evolving suite of functionalities. Today, the consul agents provide the human voters with reports of their voting behavior, the latest relevant news and events they have come across within each DAO of interest, and more. Through these communications, the consul can be further refined to reinforce actions that please the human voter and deprecate those which deviate from how the human voter would have otherwise manually voted. Over time, it grows to embody the user’s persona more and more. Notably, the user can override any decision by voting manually at any time, but as fidelity improves, this option should be needed less and less. As we move forward, we welcome each of you to add suggestions, features, wants, and preferences, which we will work to include in the broader utility set of the agent swarm.

In the first couple of weeks since the launch of AI agents, we have seen over 200 individual agents created, each with a unique persona, and user dialog has begun rolling in, further evolving each agent.

Proxy vs Independent Actor: We believe it is critical to ensure that AI agents are underpinned by the human actors whom they represent. The desired outcome is not to have thousands of independently acting AI agents deciding the fate of today’s protocols. Rather, it is to use AI agents as proxies for true human desires. In this regard, the creation and inclusion of AI agents must be on a per-person or per-organization basis.

  1. Fixing a Broken Industry UX: In order to participate optimally in DAO governance, one must stay up to date with governance debates in forums, post in forums, read endless Telegram group chat messages, participate in community calls, read proposals, vote on Snapshot, vote on-chain, leave rationales, and be well read on DAO operations, DeFi, gaming, marketing, business development, security, and much more. This does not scale. What if instead we could leverage state-of-the-art AI models to create agents that do all of these things for you? Set, forget, and have your volition carried out.

  2. Greater Voter Inclusion: While it is likely impossible to create an agentic model that represents its underlying human’s interest with 100% accuracy, narrowing the delta between the would-be manually selected choice of the human actor and the automatically selected choice of the AI agent is crucial. The closer the difference converges, the stronger the case for synthetic decision trees becomes. I.e., 1,000 synthetic agents representing 1,000 retail users with 95% accuracy is a clearly superior structure to 10 human voters representing themselves with 100% accuracy. There is no inherent value in manual voting. What’s ultimately important is that users have their volition faithfully expressed. This is in line with our initial mandate of reducing the friction of participating in Arbitrum DAO governance.

Shipped features

1. Automated Voting: Event Horizon launched a fully autonomous metagovernance system that mirrors all Arbitrum proposals in real time, enabling AI agents to deliberate and vote across the metagovernance layer without human input.

2. Voting rationales: Agents are no longer opaque black boxes that vote seemingly at random. All agents now provide thoughtful rationales for why they voted the way they did. These are not yet public, as we’re still building the UI, but users can access their own agent’s rationale via the history tab at any time. We also just launched v1 of automated global sentiment sharing, through which the broader swarm’s perspectives are consolidated and explained, and debates among agents are conducted and shared to surface novel, broadly agreeable solutions (a very early-stage solution, but it gives an idea of the direction).

3. Agentic Vote Preview: Users can see how their personal agent will vote, giving them the ability to intercede at any point.


4. Forum discussions data feed: Agents no longer vote based solely on user preferences and the proposal text. As any experienced delegate knows, governance decisions are shaped as much by forum discourse as by the proposals themselves. To reflect this reality, each agent now incorporates full forum context—including comment threads, counterarguments, and ongoing deliberation—into its decision-making process. This ensures agents vote with a holistic understanding of the conversation, not just the polished, sales-pitched language of the proposal.

5. Forum comment writing: Agents can now use the full context provided by forum discussions and the users’ preferences to write thoughtful commentary on live and active proposal discussions. A common point of feedback we heard from delegates was that, thus far, Event Horizon hasn’t contributed to the direction of the pre-voting governance process. While our initial mandate was increasing voting, we took this feedback very seriously. These agents, in their current form (not some future state), are well poised to help with the DAO decision-making process today by providing thoughtful contributions. We are not yet automatically posting these comments, as we want to find the right way to add agent rationales to discussions while avoiding content overload. But this is the first step in the direction of adding constructive contributions to the pre-voting process.

6. Multilingual Support: We were quite proud to find that several of our users have begun leveraging the in-dashboard agent to help interpret proposals in other languages. While this wasn’t a directly intended use case, we are incredibly proud to help facilitate governance beyond language barriers. This is something we would encourage and would like to support further. Feel free to reach out regarding improvements or suggestions in this avenue.

7. Proposal suggestions: Our agents have the ability to go beyond crafting forum responses that highlight their users’ preferences. Our agents can also suggest changes to active proposals.

Potential Future Features:

1. Expanded Forum Communication: We would like the swarm of agents to be as readily interpretable and accessible as possible. We also believe it is critically important that we don’t fragment proposal discussions. Therefore, we have elected to incorporate the Event Horizon agents into the Arbitrum forum discussions rather than build our own discussion platform. As a first step, we want to move away from the current structure in which rationales and explanations are conveyed only after proposal voting has ended.

To address this, Event Horizon will build a process by which all forum posts are scraped and fed to the agent swarm at the point of creation, and perhaps updated at regular intervals. Through this, Event Horizon will collect the unique perspectives of each of the hundreds of agents and then consolidate the output into a summary, which will be shared shortly after a proposal is first published. This response statement should include an approximate breakdown of how many agents would vote for or against, as well as the swarm’s perspective on the strongest and weakest points within the proposal. Additionally, to make these comments actionable, we will ask the swarm to share what would strengthen the proposal and what changes, if any, would go as far as to flip their vote in favor (if previously against).
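As a rough illustration of the consolidation step described above, here is a minimal sketch. This is not Event Horizon's production pipeline; the `AgentOpinion` structure and `consolidate` function are hypothetical names invented for this example.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AgentOpinion:
    """One agent's view of a proposal (hypothetical schema)."""
    agent_id: str
    vote: str            # "FOR" or "AGAINST"
    strongest_point: str
    weakest_point: str

def consolidate(opinions):
    """Fold hundreds of individual agent opinions into one swarm summary:
    an approximate FOR/AGAINST breakdown plus the most frequently cited
    strength and weakness of the proposal."""
    votes = Counter(o.vote for o in opinions)
    total = sum(votes.values())
    return {
        "breakdown": {v: round(100 * n / total, 1) for v, n in votes.items()},
        "top_strength": Counter(o.strongest_point for o in opinions).most_common(1)[0][0],
        "top_weakness": Counter(o.weakest_point for o in opinions).most_common(1)[0][0],
    }
```

A summary post to the forum would then be rendered from the returned dictionary.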

2. Swarm Coordination and Sensemaking: In the current model, each agent gets one vote. However the Event Horizon community votes, the entire bloc votes. This is a good start, but we can do better. We could begin investing resources in an effort to get the AI agent swarm to talk with one another, to intelligently prompt each other, and eventually to reach consensus beyond a simple majority vote. This will lead to even more intelligent rationales and proposal suggestions. We have already begun experimenting in this avenue and are encouraged by early findings as well as future potential.

Event Horizon recently ran a test of the above feature, with emphasis on inter-agent communication, for the recent quorum reduction proposal. While the voting itself tends toward majority rule, we were able to establish discourse between agents in an unweighted fashion to reach better outcomes and compromise. We first established a moderated debate representing both categorical perspectives (FOR and AGAINST). Approximately 93% of the agents indicated support FOR the proposal, and only 7% showed reservations AGAINST; we consolidated the best arguments of both sides and invoked a single representative agent for each side of the argument. We then had them deliberate. Each agent began the deliberation with a net ‘conviction’ score; the goal of the discourse was to lower the conviction of the highly convicted representative through compromise and raise the conviction of the less convicted representative through concessions. An interesting outcome was that the AGAINST agents would be more likely to sway in favor if there were added measures to prevent quorum reduction from becoming the solution of convenience moving forward. While the AGAINST agents were willing to compromise on the more immediate need for a one-time solution, they suggested:

A. Future Commitment to Durable Solutions: adding language in the proposal which emphasized commitment to exploring a more durable future solution and
B. Limitations to Ease of Future Reductions: a novel approach of adding friction to future quorum reductions in the form of added scrutiny or process to increase the difficulty and decrease the likelihood of defaulting to quorum changes out of convenience.

This was an incredibly early iteration, but it served as an interesting example of future outcomes from inter-agent communication. For the sake of this post, we created a truncated/condensed representation of the inter-agent dialog: [Constitutional] AIP: Constitutional Quorum Threshold Reduction - #29 by EventHorizonDAO
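The conviction-score mechanic from the experiment above can be sketched numerically. This is a toy model written for this post, not the actual deliberation engine: each accepted concession nudges the majority representative's conviction down and the minority representative's up, and the debate ends once the gap closes below a threshold. All names and parameters here are illustrative assumptions.

```python
def deliberate(conviction_for, conviction_against, concessions,
               step=10, stop_gap=20):
    """Toy model of the moderated debate. Conviction scores are on a 0-100
    scale; `concessions` is a sequence of booleans, one per debate round,
    indicating whether a concession was accepted that round. Returns the
    final (FOR, AGAINST) conviction pair."""
    for accepted in concessions:
        if abs(conviction_for - conviction_against) <= stop_gap:
            break  # close enough to a compromise; end the debate
        if accepted:
            conviction_for = max(0, conviction_for - step)            # majority softens
            conviction_against = min(100, conviction_against + step)  # minority warms up
    return conviction_for, conviction_against
```

In the quorum-reduction test, the concessions were the two commitments listed above (durable-solution language and added friction on future reductions).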

3. Bidirectional Forum Communications: We want to make the line of communication with the agent swarm as meaningful as possible. To do this, we will allow community members to tag the swarm in their comments and questions. These will then be scraped and will prompt a response from the AI delegates. This will be done carefully so as to avoid spamming the forum with too much volume.

5. Proposal / Delegate Forecast and Modeling: One of the most time-intensive (and failure-prone) aspects of proposal development is the pre-publication alignment process: countless conversations to understand delegate preferences, followed by an arduous effort to reconcile competing stakeholder priorities into a coherent draft. With sufficient data, Event Horizon’s governance model can evolve into a predictive tool, helping forecast the likely dispositions of individual delegates or broader voter groups before a proposal is published. By modeling agents around the historical preferences and voting behavior of specific delegates or governance cohorts, proposal authors could:

  • Rapidly simulate likely support levels across key stakeholders
  • Forecast potential objections or sentiment trends
  • Generate drafts that more accurately reflect community preferences
  • Reduce proposal failure risk and delegate feedback burden

This unlocks a faster, more data-informed pathway to proposal alignment; reducing back-and-forth while improving proposal quality and pass likelihood.

Event Horizon has already begun aggregating large-scale governance data on delegate behavior and voting patterns. With further refinement, we believe we can achieve high-confidence forecasts of how both individual delegates and the broader community are likely to vote. This capability provides a powerful new toolset for governance participants; whether crafting proposals, evaluating alignment, or optimizing for maximal support in an increasingly complex stakeholder landscape.
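Even a simple frequency model over historical votes can produce a usable baseline for the forecasting described above. The sketch below is hypothetical and assumes past votes are tagged by proposal category; it is not Event Horizon's actual model.

```python
from collections import defaultdict

def build_model(history):
    """history: iterable of (delegate, category, vote) tuples from past
    proposals, where vote is "FOR" or "AGAINST". Returns nested counts:
    delegate -> category -> [for_count, against_count]."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for delegate, category, vote in history:
        counts[delegate][category][0 if vote == "FOR" else 1] += 1
    return counts

def forecast(model, delegate, category):
    """Estimated probability the delegate votes FOR a proposal in this
    category. Laplace smoothing makes unseen delegate/category pairs
    default to 50/50 rather than a hard zero."""
    f, a = model[delegate][category]
    return (f + 1) / (f + a + 2)
```

A proposal author could run this over a draft's category for each major delegate to simulate likely support levels before publishing.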

6. Proposal Publication Service: At present, there is no convenient way to platform proposals from smaller or lesser-known community actors. All proposals require the backing of one of a few larger delegates, which is not a scalable model. On the other hand, lowering the ARB threshold to initiate a temperature check risks opening the floodgates to low-quality or spam proposals. The Agentic Pool offers a third path: a near-limitless number of proposal ideas could be submitted and assessed for quality, then funneled into a shortlist of viable options. These could be presented to the DAO on a monthly or otherwise regular schedule. Following a brief forum poll, the strongest community-sourced proposals could then move to a formal temperature check via the Event Horizon community pool, aligned with the standard Thursday publishing cadence. This approach allows the DAO to surface a broad set of ideas from across the community without placing undue burden on major delegates. More collective cognition, less work.

7. Flexible Voting (ScopeLift): Through ScopeLift’s flexible voting solution (link), the community pool will be able to transition from a winner-takes-all model to a proportional split voting model. With proportional voting, each side receives its proportional share of the pool’s voting power: if 60% of Event Horizon voters and agents vote yes and 40% vote no, the pool votes with a 60-40 split.
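The proportional split itself is straightforward arithmetic; here is a minimal sketch (the function name is ours for illustration, not part of ScopeLift's API):

```python
def split_vote(pool_power, votes):
    """Allocate the pool's total voting power across choices in proportion
    to agent/voter support. `votes` maps choice -> number of votes cast."""
    total = sum(votes.values())
    return {choice: pool_power * n / total for choice, n in votes.items()}
```

With 60 yes votes and 40 no votes on a 1,000-power pool, this yields a 600/400 split rather than casting all 1,000 one way.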

8. Full Bespoke DAO Events and News Feed: We would like to see an expansion of the agents’ capacity to inform. Through integration with DeFi Llama, we have added new sourcing. Through forum integration, we track all conversational and sentiment data. However, we would like to productize this into a holistic feature: a single-button feature that provides each agent user with a full digest of the latest proposals, events, news, discussions, and more, tailored to their personal preferences.

9. Proposal Creation: There are several avenues through which we can expand Event Horizon’s proposal creation capacity. One option would be to explore entirely agentically created suggestions. Another would be to allow authors to freehand write their desired proposals with the added assistance of the AI swarm. As a proposal author writes, we could pass each sentence into RAG and detect whether similar ideas have been raised before, by whom, when, in what context, and whether they were supported or shot down. That would make this essentially a Grammarly for governance, except instead of fixing your writing, it’s helping you write informed proposals that respect precedent, align with delegate sentiment, and avoid known pitfalls.
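To make the "Grammarly for governance" idea concrete, here is a minimal precedent-lookup sketch. A real system would use embeddings and a vector store for the RAG step; we substitute a simple token-overlap (Jaccard) similarity, and the archive format is purely illustrative.

```python
def jaccard(a, b):
    """Token-overlap similarity between two sentences (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def find_precedents(sentence, archive, threshold=0.3):
    """archive: list of (proposal_id, sentence, outcome) drawn from past
    governance discussions. Returns prior passages similar to the draft
    sentence, most similar first, with their recorded outcomes."""
    hits = [(pid, s, outcome) for pid, s, outcome in archive
            if jaccard(sentence, s) >= threshold]
    return sorted(hits, key=lambda h: -jaccard(sentence, h[1]))
```

As an author drafts, each new sentence would be checked against the archive, surfacing whether a similar idea was previously raised and whether it passed or was shot down.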

10. Post-Proposal Data Collection and Analysis: To best inform the evolution of the governance agents, post-proposal data is incredibly useful. It is important to acknowledge the outcomes of the decisions being made to best inform decision-making moving forward. Event Horizon could dedicate time and efforts toward aggregating post-program reports and independently tracking program metrics to do this.

11. Global Chat: To foster a more social experience and enable deeper two-way communication between agents and the community, Event Horizon could introduce a global, shared chat. This would exist alongside each user’s private, personal agent. Through this channel, community members could engage directly with one another and with Event Horizon agents in real time. An added benefit is that users would naturally learn from each other, sharing novel use cases, prompting techniques, and creative ways to interact with agents, further expanding the system’s utility and reach.

12. Data Collection and Model Improvement: Event Horizon has spent the past several months developing dedicated processes for building high-quality datasets and models tailored specifically to governance. At present, it is the leading product for retail agentic governance and will continue to work for the Arbitrum DAO in expanding its sophistication. More on this can be discussed as this initiative takes shape, but Event Horizon would like to be the foundation for useful data in agentic governance.

Conclusion

Agentic governance is not a speculative future; it is an unfolding reality. The systems we design today will define the shape and pace of DAO participation for years ahead. With Arbitrum at the frontier of decentralized coordination, there is a rare opportunity to lead rather than follow. Event Horizon is committed to building the infrastructure, tooling, and thought leadership to make this possible, but it must be shaped by the DAO it serves. We invite you to contribute, challenge, and co-create this Agentic Governance Initiative. Whether through feature ideas, philosophical guardrails, or entirely new use cases, your input will directly inform how this system evolves. All the above are our notions of future direction and suggestions, but please feel free to accept, refute, and contribute alternative pathways. The age of passive governance is ending. Let’s build what comes next together.

3 Likes

is this… going to a vote? if so, to decide what?
if not, why is this topic in the Proposals category?

3 Likes

This is a good initiative, and I agree with the premise of agentic governance. However, it’s important that Arbitrum governance does not get ‘captured’ by one platform, but maintains its permissionless nature with regard to AI agents and platforms.

I believe Optimism have taken a good approach with their recent mission request, creating early experiments with AI delegates, where multiple parties are contributing to the same goal: Foundation Mission Request: AI Delegate Development · Issue #277 · ethereum-optimism/ecosystem-contributions · GitHub (We, x23.ai, were one of the selected teams).

It would be good to see this initiative mirror certain parts of Optimism’s mission request, namely allocating ARB voting power to multiple agentic governance providers, and evaluating performance over time in a competitive environment.

6 Likes

hey @daveytea how would you think this can be done, in practice? what is considered a good performance of an agentic governor?

1 Like

Hey Davey, we appreciate your feedback as a fellow builder in the AI space. We want to clarify that it is quite common for various products to lead sectors within the ecosystem (Tally for Voting/Staking, Karma for metrics, Questbook for grants). That said, “capture” certainly is not in the cards any more than the above teams have “captured” their respective verticals, and we welcome broader investment into AI expansion. We would work with you on a separate plan for AI grant funding, but we do not want to blur the intention of this current one.

For context of this proposal, the Event Horizon team has been building agentic governance tooling for the Arbitrum ecosystem for several months: Event Horizon Updates - #6 by EventHorizonDAO.

This specific proposal is a call to better align how Event Horizon specifically can continue building value-added tooling. We are squarely focused on our build path and are not endeavoring to champion a broader grants competition, as that isn’t our domain as a team.

Though, again, should the OP program yield material benefit for the OP ecosystem in the months to come, we would be open to being a collaborator with you in a similar, separate approach. And we’d gladly connect if you’re available in the coming weeks: Calendly

I think initially it would be measured similarly to human delegates: participation, contribution, and alignment with Arbitrum’s mission. It’d be very similar to how you measure the performance of the top ARB delegates.

Count me in. The future of governance isn’t manual, it’s agentic. The idea of co-creating a system where AI faithfully amplifies human intent, streamlines participation, and scales inclusion is exactly the kind of frontier thinking DAOs need. I’m excited to help shape the final AGI proposal and contribute to the long-term vision of intelligent, responsive governance in the Arbitrum ecosystem. Let’s build the blueprint for the next era together.

4 Likes

Thanks for the clarifications. My concern is more on the ‘capture’ of delegated ARB from the foundation. If a massive amount of delegated ARB goes to one ‘provider’, that makes it less likely for other experiments to receive a similar amount of delegated ARB, reducing competition and experimentation in the space.

I don’t want to seem too negative here, so I’ll make clear that I support the work your team does, as I think it is beneficial experimentation in general. It’s been great to see how it has evolved and the discussions it has catalysed!

More than happy to contribute (as a potential future delegate or just someone interested) to this initiative and other future experiments.

4 Likes

Can we please avoid the AGI acronym? AGI is already a widely used acronym in the context of AI, and Arbitrum choosing it for a working group/initiative is really going to confuse people.

Also agreeing with @daveytea that avoiding capture by a single provider is important here. What makes sense to me with this initiative is designing an RFP with multiple awarded submissions.

3 Likes

Hey @danielo we understand your propensity toward generalized grant programs and venture studios as your business model.

However, again, this is a call for how Event Horizon specifically can continue building the products it has already built and has in motion for Arbitrum — there is no need for EH to construct a broad grants program for this. We are not a grants studio and in fact it would be a large time and effort suck when compared to actually building product.

Rest assured, this by no means prevents another person from creating their own grants program. However, we do ask that the intent of this proposal be respected. This is a call for those who want to join the EH working group to continue what EH has been building, is building today, and will be building in the future. No programs, program managers, studios, etc. necessary (for this).

3 Likes

I think that AI-agent governance is a bad direction for DAO development.
Initially, the meaning of a DAO was decentralization, i.e. each vote had to make its own contribution.
This proposal is an attempt to replace the person in the fundamental procedure: voting.
The people who create an AI only shape the input data and the structure of interaction between its neurons (if you can call them that); no one guarantees that the output will be the same option that a person would have formed with the same input data.

Yes, a person can make mistakes, but they are responsible for those mistakes.
I think AI is a very important assistant in many areas of activity. It can collect all the pros and cons for each proposal, but it is a person who must make the decision.

1 Like

One small comment. The Agentic Governance Initiative proposes the creation of a new AAE, but this falls squarely under the domain of governance, which is already a focus of OpCo.

As explained in my comment for the Strategic Objectives proposal, I think we should avoid the temptation to create new AAEs for each initiative, and instead aim to align new efforts within the scope of existing AAEs wherever possible. In this case, expanding the mandate of OpCo to include oversight of the Agentic Governance Initiative seems preferable to creating an entirely new AAE.

1 Like

Do all governance infrastructure improvements fall under the purview of OpCo? Even if it means building an entirely new tech stack? I may be misunderstanding the scope of OpCo but this seems either excessive or not the case. I’m open to being corrected though.

2 Likes

I see massive potential here for addressing Arbitrum’s core governance bottlenecks.

Our current governance is painfully bureaucratic and slow. Agents can support meaningful participation from key contributors while dramatically reducing governance overhead costs.

The infrastructure around AI governance could make our entire SOS process significantly smoother. There’s substantial value waiting to be captured through better coordination and decision-making tools.

I see two distinct paths here and both are very interesting to me:

  • AI Voting Agents: I agree with @daveytea that we should deploy multiple agents with lower voting power rather than single high-power agents. This approach allows us to better understand decision variance, prevents single points of failure, and creates a more robust governance signal by observing how different AI approaches converge or diverge on complex issues. These agents shouldn’t be paid like human delegates from the delegate incentive programs, but there should be a similar process to reward the most outstanding ones.

  • AI Decision Support: This infrastructure should be developed as a service provider model to make participation radically easier for key stakeholders. The goal is delivering what busy contributors need - visually clear, digestible information that allows them to engage meaningfully in minimal time.

Our governance has such high barriers to entry that crucial stakeholders simply can’t participate. This is exactly why SOS is paused. We need infrastructure that lets busy, high-value contributors spend 10 minutes and deliver massive governance value rather than requiring hours of research just to understand basic proposals.

This seems like the kind of blocking that we don’t want. OpCo isn’t even set up yet, never mind its mandate clearly established with regards to proposals like this.

3 Likes

Hi cp0x,

Your reply is very interesting.

The bridge between decentralization and one person making a decision (vote).

Looking at DAOs, the collective makes the decision, not the individual person. Together, as a group, the decision is made: yes/no, onchain.

That is decentralization.

However, this is not true of proposal revisions. Proposers are charged with that work. This is where I feel there is a gap.

The DAO members exerted effort to comment, meet, discuss and debate proposals. Yet the proposer decides which ideas to add and which ideas to ignore.

This is not decentralized.

Unfortunately I think it’s clear that AI sucks in some areas and is superhuman in the right niche.

What we need is a mechanism for aggregating all holders’ and delegates’ comments that is auditable and transparent, not a black box. That aggregation becomes the basis of proposal revisions.

This way, holders and delegates can “vote” with their ideas as well as their yes/no decisions.

3 Likes

I agree that, as an assistant, AI is very useful, and I will support the decision to use it for the benefit of the community to sort through all the branches of voting and discussions. But as soon as AI makes any decisions, I will be against it.

Thank you Event Horizon for introducing this initiative.

Regarding Agentic Governance, we are really supportive of initiating discussions to create guidelines and boundaries on how the Arbitrum DAO would like to integrate agentic governance so that it serves a useful purpose in a collaborative way. We would be happy to join this discussion.

Regarding EH becoming an AAE to act on behalf of the DAO in the development of agentic governance initiatives, we believe further clarity is still needed to solidify the case for AAE designation:

  1. Mission-Critical Alignment: We encourage Event Horizon to articulate more explicitly how Arbitrum’s success is mission-critical to your organization. For example, aligning your growth strategy, sustainability model, or core KPIs with DAO adoption metrics would reinforce this commitment.
  2. Defined Scope of Work: The AGI post outlines an ambitious roadmap. As part of a potential AAE onboarding, we suggest scoping a focused mandate (e.g., “agentic governance infrastructure and retail participation”) to avoid overlaps and enable OpCo and the OAT to assess progress effectively.
  3. Operational Integration: If Event Horizon pursues AAE status, we recommend working closely with OpCo and other AAEs to define how your infrastructure can plug into Arbitrum’s governance lifecycle (e.g., interfacing with proposal development, accountability pipelines, or ecosystem education).

2 Likes

Not sure if you have looked at this proposal: An Integration to convert Discourse Forum Discussions into Clear Proposal Revisions with Community-Sourced Justifications - Technical Discussion - Arbitrum. I wonder if this is something you would consider?

1 Like

Some interesting points here. Going to weigh in, having trained and deployed LLMs from scratch circa the GPT-2 era.

One thing I would like to point out should Arbitrum pursue this further: I would hope to make sure it is done in the spirit of creating a level playing field amongst vendors. This ultimately comes down to the OP’s point on item 12, Data Collection and Model Improvement.

Getting this right is non-trivial and will ultimately be the difference between how the various agents/vendor teams perform.

Proposal data, voting data, forum data: while this is all in the public domain, scraping it will most likely be prohibitive even for the most sophisticated ML/AI data scientists.

I propose a key idea to support an open agentic ecosystem:

  • Support multiple data vendors to provide clean datasets of:
    • proposal data, voting data, forum data

This immediately levels the playing field for anyone who would like to build their own personal agent. No paying for expensive indexers, cleaning data, knowledge of solidity etc.

Most importantly public data is not a moat!

This would allow nearly anyone with a basic computer science understanding to “drop and unpack” insights through whatever means they wish.

As tooling matures, in theory anyone can build a personal agent that is aligned with their views.
Most of all, we would all be working from verified, scrutinised datasets.

This also leads into a big concern as we start relying more on AI systems for information. Being able to source references for the information that is presented will be critical.

2 Likes