Proposal: Not missing the AI train

Agree heavily with JoJo here. I don’t think it’s the right time or team for this proposal. It needs to be a lot more structured, organized, and led by the right orgs to have the desired effect.

I do think Arbitrum can do a much better job here, but I’d rather see a more organized plan first.

The proposal’s overall title is literally jumping on the bandwagon. As others have said, there are no KPIs nor clear objectives, and the only justification for the lack of concrete goals appears to be that it will all be done under an Agile methodology.

I am not an AI buff, and though I understand the FOMO for Arbitrum as a whole, I am sure there are far better ways to pitch this to garner support.

2 Likes

Not yet commenting on the theme of this proposal, but on its shape: I think it is paternalistic to pass a proposal that enforces a methodology (Agile) on a team that will be elected afterwards and might not agree with a methodology that, at that point, was already approved by a DAO vote.
So I would recommend that this proposal be much, much less opinionated about the methodology to be followed, and focus solely on approving the budget for a team to form around it and drive it in the way they see fit.

So we discussed this a bit on the call, as @AlexLumley had proposed precisely that. However, waiting for the SOS creates a big risk of missing this train, so we ended up converging on having this setup sooner rather than later.
The agile setup of the team can still allow them to adapt to a good degree to whatever strategy is decided.

@Bobbay and @dragonawr the call is precisely for delegates to collaborate in building up this proposal together. I’d love to hear what you’d add/improve so it’s a go rather than just suggesting it needs more work. We’re just one of multiple delegates and this is an ecosystem need, not something that only benefits us…

1 Like

This proposal is about providing support for AI builders (dev tooling, documentation, points of contact, etc.). A hackathon is simply some prizes (a bit of money for builders) but doesn’t address the support gap, so this is complementary to the hackathon and grant program.

The extra devs won’t be hired unless the team needs them. And indeed there’s a PM to coordinate with the other orgs (OCL, Questbook, Foundation, etc.). So basically we’re setting up the resources to be able to do the research+coordination you suggest and then act on the findings.

AI is evolving very fast and will continue to do so. That’s precisely why I’m suggesting an Agile team that can explore the space and adapt as needed.
Otherwise we’re paralysed by all the questions, but no one is funded or committed to answering them. This proposal fills that gap.

What we mean by agile is not Scrum but simply the idea of the team having autonomy and operating on short sprints that they prioritise.
So basically the team can decide how they work as you suggest.

1 Like

Reading this proposal, I’m unsure what is actually being proposed. It carries a high cost and a large team given the unclear or undefined output. It seems to rely on the premise that AI agents are not being built on Arbitrum because there is a lack of support, yet it presents no argument or evidence for this and no definition of what support is needed. If you’ve identified this as a critical need, as stated in the introduction, could you expand on what exactly was identified and how? Was there research or data analysis carried out that supports this claim?

3 Likes

Hi @danielo, this looks like an interesting proposal but has some gaps.

While the intent behind the proposal is laudable, we found its overall goal unclear, though we’ve seen that you mentioned the proposed team would work on the items below.

We believe the proposal needs more alignment with AF and OCL for better clarity: e.g. how does it fit within the Trailblazer AI Grant program? What other resources are already available to support teams? Are there other plans for future grant programs?

The goal of this proposal needs to be made clearer: whether it’s to support and accelerate available tooling or to provide DevRel support for AI teams building on Arbitrum. At the moment it’s not very clear what the goal is, but we agree it’s an area of opportunity for Arbitrum. @JoJo’s feedback was very on point, especially his assessment below.

Last but not least, we believe it would be better to avoid sharing ideas like these in proposal format, and instead post them as temperature checks to gauge interest beforehand.

1 Like

The following reflects the views of L2BEAT’s governance team, composed of @krst and @Sinkas. It’s based on their combined research, fact-checking, and ideation.

While this proposal is more of an idea than a fully developed plan, we appreciate the initiative to explore how AI could be more systematically integrated into the Arbitrum ecosystem and supported by the DAO. We understand where the need for agility comes from, and even though we support creating an agile team, we need to ensure that we are not just throwing money at chasing rabbits.

We do not see a plan for how this initiative will move from an idea to an executable strategy. Funding a team is a step, but the first step should be a high-level plan of what we want to achieve. Although we could discuss KPIs and details for months, this goes against the very spirit of the proposal. In our mind, a better approach to ensuring that we will not be spending money without seeing any meaningful impact is to have clear-set red flags that urge us to wind down the initiative if needed. In that way, even if we are to fund a team without any specific goal ahead of time, and even if there are no KPIs set given the ever-changing landscape of AI x Blockchain, we will have some concrete signal that things aren’t working, and we can pull the plug.

1 Like

I like the idea of making Arbitrum a home for AI agents.

However, as other delegates have rightly written, this proposal is too abstract.
I will not repeat their arguments (I fully agree with them), but I would not like to see this proposal stall completely due to high resistance.
It seems to me that, in order to make this proposal concrete, we first need to formulate the tasks that Arbitrum will perform.

Actually, the proposal should be divided into two parts, and you have outlined the second part (which will require adjustment when/if we implement the first).
The first part is to formulate the tasks that users need. How to do this is an open question, but I assume there are many AI-agent specialists who can formulate such tasks and themselves participate in the second phase as coordinators or managers.

After all, we want to build what users need, and to understand what competitive advantage our product will have compared to those already on the market.

1 Like

Thank you, @danielo, for sharing this proposal. While we support the idea of fostering on-chain AI innovation on Arbitrum, we have some concerns regarding the proposed implementation, similar to those expressed by other delegates.

The proposal lacks clarity on how the team will achieve its goals. While the context and team roles are explained, there is little detail on what the team will actually do in practice. For instance, how will DevRel identify and engage AI builders? Will they rely on hackathons, partnerships, or direct outreach? Without specifying clear strategies, there is a risk of overlapping efforts with the Arbitrum Foundation, which already has dedicated teams for ecosystem growth. Similarly, the Technical Lead’s responsibilities—such as scoping tasks and RFPs—are vague, without specifying concrete deliverables or priorities.

The absence of a defined roadmap or KPIs further complicates assessing the feasibility of this initiative. Without these critical elements, it is difficult to see how this program will deliver meaningful impact or avoid resource duplication.

We believe this proposal could be valuable if refined to include a more structured implementation plan with clear objectives, milestones, and differentiation from existing initiatives.

Great to see this proposal @danielo! I think that Stylus certainly offers some really unique possibilities for on-chain AI agents. I am no longer at the Arbitrum Foundation, but recently developed a demo and workshop specifically on this topic, which got a lot of great response and is also a very good route to bring devs over to Arbitrum from other ecosystems.

I was actually coming to the forum to propose something very similar to what you have proposed: some kind of AI-agent-focussed team that can move more rapidly than the Foundation can. It would be great to join efforts. I understand the concerns people have put forth here on the details of the proposal but, as you have pointed out, part of the reason for that is that it is such a rapidly developing topic.

@karpatkey I understand your concerns about risk of overlapping with the Foundation. Whilst they have dedicated teams for ecosystem growth and BD, they don’t really have much/any DevRel capacity and certainly no-one focussing on this specific topic of AI.

4 Likes

Apologies for the late response. So my number 1 gripe with the proposal is the “Motivation” part. I particularly don’t feel too inclined to commit funds on AI for the sake of AI, and even if I try to do away with those reservations I can see why others wouldn’t.

I don’t think it’s clear enough to users and delegates that we NEED an AI track, and the proposal doesn’t seem to have convinced many others that this is the case or how it translates into a win. Thinking neutrally, that should probably be your main concern before putting it up for a vote.

We find the underlying premise of this proposal to be strategically compelling: positioning Arbitrum as a key infrastructure layer for AI agent coordination represents a meaningful opportunity that warrants focused attention and resources.

However, while the proposal effectively identifies the strategic importance of this initiative, in our view, its execution framework requires further refinement and solidification.

The core challenge lies in the proposal’s lack of concrete implementation specifics:

  • The team’s operational approach and specific deliverables remain largely undefined
  • The interaction model with existing Arbitrum entities (Foundation, OCL) needs clearer delineation to avoid duplicative efforts
  • The absence of defined KPIs and success metrics makes it difficult to evaluate effectiveness

The agile approach to development makes sense given the rapidly evolving nature of AI technology. However, this shouldn’t preclude establishing clear initial priorities and concrete success metrics.

We believe this proposal addresses a crucial strategic opportunity but would benefit significantly from a more detailed implementation roadmap, specific initial focus areas and deliverables, clear success metrics and KPIs and explicit coordination mechanisms with existing Arbitrum entities.

We would be happy to support a refined version of this proposal that addresses these elements, as ensuring Arbitrum’s positioning in the AI infrastructure space represents a meaningful opportunity.

After a discussion with OCL, we’re pausing this proposal.

OCL is already working on enabling an AI framework and has some capacity to serve as a point of contact (PoC) for AI builders. Our view is that this PoC service is poorly advertised across Arbitrum digital properties, and we invite OCL and the Foundation to coordinate on improving access to OCL customer support services by making them more prominent and better known across the DAO too. Some ideas:

  • a grantee info pack or builder info pack with all the available resources could be created/maintained (similar documents already exist but for some reason are not quite meeting the mark).
  • OCL and AF could coordinate to organise recurring (e.g. monthly) AMAs and roundtables to map and address builder needs and signal a welcoming ecosystem.

Generally, we see that similar activities have been tried already but are, in our view, not enough. Our hunch is that a more relational component is needed (it’s not about having the documentation, infra, and resources, but about having someone to talk to and a sense of being hand-held). We share this simply as feedback, with the caveat that we haven’t researched the merits/gaps of the customer support offered (such an initiative could be carried out by the ARDC).

We have also realised that communication between the delegates, @Arbitrum Foundation, and @offchainlabs about ecosystem development initiatives could be improved, and have proposed to the AF and OCL that they organise a monthly call to share with delegates what they’re working on in this area.

cc @raam

8 Likes

The proposal makes a good point—AI is moving fast, and Arbitrum has a real shot at becoming a major player. The idea of a public interface for AI builders and a flexible team to fill infrastructure gaps sounds solid, especially since current grant programs focus more on funding than actual support. Plus, I believe the team has the experience and technical skills to pull this off.

That said, there are some concerns. The proposal is pretty vague on what the team would actually do and how success would be measured, especially with a $500K budget. Picking a Program Manager through a DAO vote might not guarantee the right expertise, and it’s unclear how this fits in with existing AI initiatives like Trailblazer and ETHGlobal’s hackathon. Some people also feel it’s too early to form a team without a clearer roadmap, and without a solid business plan, justifying the budget is tricky.

Overall, I think this has potential, but it needs more details. A clearer plan, specific goals, and a way to avoid overlapping with other programs would make it way more convincing.

The following reflects the views of GMX’s Governance Committee, and is based on the combined research, evaluation, consensus, and ideation of various committee members.


We support exploring AI within Arbitrum but believe this proposal needs a clearer framework before moving forward.

  • Objectives & KPIs: The proposal lacks defined goals and success metrics. A structured plan with measurable outcomes is essential.
  • Methodology: Flexibility is fine, but success must be clearly defined—what are the objectives, and how will progress be measured?
  • Integration with Existing Initiatives: How will this align with Questbook and SOS? Who will manage collaboration, and do PMs/DAs have the capacity for this? We recommend waiting for SOS to help set funding priorities.
  • Strategic Approach vs. Hype: Reacting late to trends isn’t ideal. If AI is a real priority, should ARDC conduct exploratory research first, or should we wait for SOS to guide the DAO’s investment strategy?

We encourage further refinement to ensure this initiative is well-structured, strategically aligned, and properly resourced.

For proposals with such high spend, there need to be much clearer goals and success criteria. How will we determine whether this program is a success? How will the return on investment for such a high-spend proposal be evaluated?

We think there is benefit in promoting activity like AI agents on the network, but without clear goals and success criteria it doesn’t make sense to spend such large amounts.

The proposal should include at least some KPIs. Options could focus on AI-agent market share, AI deployments that generate revenue for the DAO through various methods, or similar metrics tied to the DAO’s ROI.

Even with these additions, we think the $500,000 budget is extremely high. There is little reason for the DAO to approve a service-provider budget that far exceeds the budgets of other DAO service providers that clearly deliver value.

As it stands we would be voting AGAINST this proposal.