Agentic Governance Initiative [AGI]

In the interest of moving forward with the initiative, the DAO can consider creating a Working Group to manage the initial phases. The WG can then be either moved under OpCo or evolved into an independent AAE, depending on the initial outcomes. Immediately creating a new AAE for the initiative is counterproductive in my opinion.
Alternatively, the initiative could be overseen by another of the existing AAEs and then moved under OpCo once it is ready.

We’re working on spinning up a WG right now. Will post more info once details are set.

2 Likes

Hey Event Horizon team, really appreciate the forward-thinking vision behind the AGI—super exciting stuff! I’m all for pushing the boundaries of what DAOs can do with AI, but I gotta say, it feels a bit early to go full-on autonomous AI governance in Arbitrum. The ecosystem is still mostly driven by human decisions—voters, delegates, devs, you name it—and expecting AI agents to run the show when the economy isn’t AI-native yet seems like a stretch. Without AI agents dominating the economic side (like trading, DeFi, or treasury management), governance agents might struggle to sync up properly and could end up misaligned with the messy, human-driven reality.

That said, I’m totally sold on using AI as a tool right now to tackle voter and delegate fatigue. The UX issues you highlighted—endless forum threads, Telegram chats, and proposal slog—are so real. AI agents helping with automated voting, summarizing discussions, or drafting comments could be a game-changer for retail users and burned-out delegates. It’s like giving people a governance superpower without the time suck.
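To make the "summarizing discussions" part concrete, here's a rough sketch of the kind of helper I'm picturing. Everything here is a placeholder on my end (the OpenAI client, model name, and prompt), not anything the AGI team has actually committed to:

```python
# Rough sketch: condense a forum thread into a delegate-facing brief.
# Assumes the OpenAI Python client; model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_thread(posts: list[str], model: str = "gpt-4o-mini") -> str:
    """Return a short summary of key arguments, disagreements, and open questions."""
    thread_text = "\n\n---\n\n".join(posts)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": ("Summarize this DAO forum thread for a busy delegate: "
                         "key arguments, points of disagreement, open questions.")},
            {"role": "user", "content": thread_text},
        ],
    )
    return response.choices[0].message.content
```

Even something this thin, run against a long proposal thread, would save delegates real time compared with reading every reply.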

I see a few parts of this initiative—like setting up the working group, becoming an AAE, or rolling out new voting mechanisms—that’ll need governance votes to move forward. I’d love for Entropy to step up and formally publish separate proposals for each of these, so we can get clear community feedback and keep things transparent. Keep building those tools, and maybe focus on getting more AI-driven economic activity in Arbitrum first to set the stage for autonomous governance later. Loving the direction, just think we need that hybrid approach for now!

1 Like

Just wanted to follow up—I really think a feature like an “Arb-Brain” chat tool could be a game-changer for tracking comments, sentiment, proposals, and Fdn/OpCo/grantee reports in one place. It would help us keep tabs on funding balances and hold initiatives accountable without having to dig through obscure spreadsheets. I’d encourage Entropy to propose this as part of AGI’s next steps. I still think full AI autonomy is a ways off, but tools like this are perfect for now to encourage more participation on the human side of the governance process.
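To be clear about what I’m imagining, even something as simple as the sketch below would already help: pull forum posts and Fdn/OpCo/grantee reports into one store and let people query them. All names here are hypothetical; a real tool would presumably sit behind a chat interface with embeddings and an LLM rather than keyword matching.

```python
# Hypothetical sketch of an "Arb-Brain"-style index: forum posts and
# Fdn/OpCo/grantee reports pulled into one store with simple keyword queries.
from dataclasses import dataclass, field

@dataclass
class GovDoc:
    source: str          # e.g. "forum", "opco-report", "grantee-update"
    title: str
    body: str
    tags: list[str] = field(default_factory=list)

class ArbBrainIndex:
    def __init__(self) -> None:
        self.docs: list[GovDoc] = []

    def add(self, doc: GovDoc) -> None:
        self.docs.append(doc)

    def search(self, query: str) -> list[GovDoc]:
        """Naive keyword match; a real tool would use embeddings plus an LLM."""
        q = query.lower()
        return [d for d in self.docs if q in d.title.lower() or q in d.body.lower()]

# Example: find everything mentioning "funding balance" across sources.
index = ArbBrainIndex()
index.add(GovDoc("opco-report", "Q2 update", "Remaining funding balance: 1.2M ARB"))
hits = index.search("funding balance")
```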

Thanks for the link to the other post.
I think I already wrote an opinion on exactly the point Event Horizon makes, which goes beyond your main proposal.
I’m saying that an agent can consolidate opinions and present them in a convenient form, but deciding which of these proposals is better based on how people react to them seems wrong to me. We need to let people make decisions without cutting off options that agents find unpopular; the majority can also be wrong.

Hey everyone,

We’re spinning up a working group to guide this AI governance tooling push. We welcome any and all voices to contribute to the direction we take together as a DAO. Champions and skeptics alike, all are encouraged to join.

Please like this comment to receive a DM with an invite to the working group’s Telegram chat. We will also be hosting community calls once schedules are aligned.

The working group thread is live here.

9 Likes

Looking back at the work Event Horizon has done, they pivoted significantly after being awarded 7 million ARB from the treasury. One example was the DDA election, where their actions affected us even after we had reached out to the community. That said, they made the right decision in choosing not to participate in future election votes. The agentic voting preview and forum parsing already feel like solid UX wins, especially for busy delegates or newer participants. We are also working with Plutus on a similar solution for GMX governance. As agents evolve and adapt, will there be a way to audit or rate agent behavior, some kind of trust or quality scoring? Can we get more data and dashboards to verify agent behavior?
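On the trust/quality scoring question, even something as simple as the hypothetical sketch below would be a start: on proposals where a delegator also voted directly, measure how often the agent’s choice matched the delegator’s, weighted toward recent proposals. This is purely illustrative and not anything Event Horizon has shipped.

```python
# Hypothetical agent trust score: agreement between an agent's vote and its
# delegator's own vote, on proposals where the delegator voted directly,
# with newer proposals weighted more heavily. Illustrative only.
from dataclasses import dataclass

@dataclass
class VotePair:
    proposal_id: str
    agent_choice: str             # e.g. "FOR", "AGAINST", "ABSTAIN"
    delegator_choice: str | None  # None if the delegator did not vote directly

def agent_trust_score(history: list[VotePair], decay: float = 0.9) -> float:
    """Exponentially decayed agreement rate; `history` is ordered oldest to newest."""
    weight, total, agreed = 1.0, 0.0, 0.0
    for pair in reversed(history):         # newest scored proposals weigh the most
        if pair.delegator_choice is None:  # no ground truth for this proposal
            continue
        total += weight
        if pair.agent_choice == pair.delegator_choice:
            agreed += weight
        weight *= decay
    return agreed / total if total else 0.0
```

A dashboard showing scores like this per agent, alongside the raw vote history, would go a long way toward the auditability we’re asking about.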

Danielo and Daveytea brought up a valid point about turning this into an RFP or opening it up to multiple providers. The downside is that going the RFP route could lead to higher costs and more complexity rather than making use of an already capable team. New teams likely won’t build this unless their developer costs are covered. A more practical approach might be to start with a small delegation to Event Horizon; if it proves successful, we can then scale it with more teams.

2 Likes

My point is not about a grant program, and this has nothing to do with RnDAO. Rather, it’s about Arbitrum avoiding single-vendor lock-in for what is an important initiative. There are many others in the ecosystem who should be allowed to participate in developing Arbitrum governance with AI.
Having a single vendor own this creates conflicts of interest and is detrimental to Arbitrum.

Again, this sounds like a point for a separate thread. Anyone is welcome to participate (RnDAO included), just as Event Horizon is here. That hasn’t changed. We are a build team working to create the best possible product; that doesn’t stop anyone else. We won’t stop collecting community input and building just because other teams aren’t doing so right now.

Continuing to repeat, in a conversation dedicated to building this initiative, that there should be more generalized initiatives for these hypothetical products is simply irrelevant and not productive to advancing what is available to the Arbitrum community today, Event Horizon.

2 Likes
  1. While this is intriguing, does it really belong in “Proposals”? There doesn’t appear to be any component that requires voting by the DAO. I’m not seeing any funding requests, and it appears the system (as I envision it) can be built to completion and launched live entirely without DAO approval.
    That is not to say that this is of no interest to the DAO, don’t get me wrong, just that there appears to be no part that requires us to say “Yes” nor any part that would be feasibly blocked by us saying “No”.

  2. Here’s how I’m envisioning the overall system (rough sketch at the end of this post); is this accurate, or am I misunderstanding?
    a) An end-user would in some manner delegate ARB to AGI as part of “the admission”
    b) The user’s agent would then work within the AGI-agent-crowd to serve the user’s interest, swaying the overall AGI-crowd vote
    c) The AGI-crowd would then reach a consensus and apply all of the ARB delegated to the AGI to vote accordingly

  1. The link appears to be missing; I’m assuming it is meant to reference GitHub - ScopeLift/flexible-voting: Flexible Voting – A Powerful Building Block for DAO Governance.
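If my reading under point 2 is right, the whole mechanism reduces to something like the sketch below. All names are hypothetical and this is only meant to check my understanding: individual delegations pool into the AGI, agent preferences are tallied, and the pooled ARB votes as one block.

```python
# Minimal sketch of the flow described in point 2 (all names hypothetical):
# users delegate ARB to the AGI pool, each is represented by an agent, and
# the pooled ARB is cast behind the stake-weighted consensus choice.
from collections import Counter

def agi_crowd_vote(delegations: dict[str, float],
                   agent_choices: dict[str, str]) -> tuple[str, float]:
    """delegations: user -> ARB delegated to the AGI pool.
    agent_choices: user -> the option that user's agent argues for.
    Returns the consensus option and the total ARB cast behind it."""
    tally = Counter()
    for user, amount in delegations.items():
        tally[agent_choices.get(user, "ABSTAIN")] += amount
    consensus = tally.most_common(1)[0][0]       # stake-weighted plurality
    return consensus, sum(delegations.values())  # entire pool votes as one block

# Example: three users; the pooled 1,500 ARB is cast for the plurality choice.
choice, power = agi_crowd_vote(
    {"alice": 1000, "bob": 300, "carol": 200},
    {"alice": "FOR", "bob": "AGAINST", "carol": "FOR"},
)
```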

Arbitrum is growing fast: more voices, more ideas, and ideally, more decentralization. But how exactly will AGI help Arbitrum become even more decentralized?

When AI is involved, we tend to rely on it too much. We start giving it commands, and little by little we lose the human touch and lean too heavily on the swarm. Then there are no diverse voices anymore :slight_smile:

Also, IBM reports that 62% of supply chain leaders already see agentic AI as a critical accelerator for operational speed. However, this speed comes with complexity, and that requires stronger oversight, transparency, and risk management.

That’s why, if the AI doesn’t learn from good data, it might create noise and confusion within the DAO. So overall, I’m not fully convinced by this proposal yet, but I’d support using AGI as a tool to assist DAO governance, such as filtering comments and votes or spotting AI-generated inputs.

P.S.: The DIP program has anti-AI rules. Does bringing AGI into Arbitrum go against those?

1 Like

Thanks to the @EventHorizonDAO team for taking charge of experimentation into AI governance. While we think there are certainly areas for AI involvement in governance workflows (translation is a great example), there are some worrisome long-tail issues here that are worth considering.
A few areas of concern:

  • It’s unclear what the attack vectors for LLMs are, and giving them significant voting power could introduce an unwarranted risk.

  • Noise is already an issue in the forum, so it’s unclear if adding more voices (especially multiple agents) would be of value directly in the forum. However, perhaps there is a way to integrate the feedback of LLMs and AI swarms in a way that still provides insights or alternative views delegates have overlooked.

  • Lastly, we believe that governance in its most resilient form is immutable and self-executing contracts that receive their robustness from aligning the incentives of independent actors in a decentralized manner. We should strive to leverage the characteristics of blockchains in governance by streamlining menial operations with onchain tooling that allows delegates and stakeholders to focus on fewer, more strategic goals, rather than relying on outsourcing the decision-making process.

  • Meaningful governance occurs off the forum, and between stakeholders who eventually bring mature ideas to the forum. We think there’s value in smaller stakeholders participating, but if they passively defer their power to agents to do so, the value-add remains unclear.

This is likely more appropriately positioned as a grant request to the AF or Questbook to explore agentic governance rather than an AAE. That said, we’re open to experimentation on how Event Horizon approaches the introduction of agentic governance.

1 Like