Learnings from STIP: Community Interview Summaries and Notes

Introduction

Over the past five months since the Arbitrum DAO introduced the Short-Term Incentive Program (STIP), significant progress has been made, with over 71.4M ARB granted to 56 projects. New proposals and programs have emerged, including the LTIPP and ARDC, alongside key infrastructure grants in areas like open data, KPIs, and analytics.

This post consolidates feedback from a series of interviews conducted by @tnorm and aims to synthesize lessons on the DAO’s status post-STIP. It reflects on the progress, lessons, and collective views of the Arbitrum community. These insights offer many actionable learnings for future endeavors.

The following summary combines the opinions and responses of these Arbitrum DAO community members and is not reflective of @tnorm’s personal opinion, nor does it constitute advice or recommendations for the Arbitrum community.

A special thanks to all contributors who shared their perspectives on STIP, incentives, and the future of the DAO.

Lessons From STIP

What Worked Well

  • Decentralization: Respondents felt that the formative discussions and program design were open, transparent, collaborative, and relatively decentralized. They felt feedback was incorporated and the program was adjusted appropriately throughout its lifecycle.
  • Community: Feedback consistently pointed to an increase in DAO growth and participation following STIP. Increased engagement from delegates, protocols, service providers, and prospective builders was widely seen as a success.
  • Visibility: STIP raised the profile of Arbitrum and the willingness of the DAO to support builders. While STIP focused on awarding existing projects, the program proved the DAO was able to design, approve, and execute large-scale grants for the ecosystem.
  • Accountability: Respondents generally felt that the streaming mechanism, the clawbacks, and the multisig were effective in keeping grantees accountable. Teams felt OpenBlock was respectful but thorough in asking for details during its data aggregation.
  • Ecosystem Support: Respondents felt the program was essential for the Arbitrum DAO’s positioning and reputation as an entity capable of supporting its ecosystem.

What Worked Poorly

  • Communications: The decentralized nature of the DAO meant the program was only known to those directly active in the community. Despite nearly 100 applications, the program’s rapid pace left many projects scrambling to submit an application or missing out on the program entirely.
  • Application and Review Process: Feedback on the process was mixed: some felt the application was a success given its status as a community initiative, while others found it chaotic and hard to navigate due to numerous requirements. Most agreed the aggressive timeline hindered thorough proposal review, and delegates’ lack of expertise in incentive design meant that even closely reviewed proposals received insufficient critical evaluation.
  • Consensus: Direct delegate voting was infeasible given the number of applications submitted. Many delegates were traumatized by the experience, and the use of Snapshot for similar votes in the future was questioned. In general, most felt councils would be a more appropriate format.
  • Operations: A lack of structure between the DAO and Foundation strained operations, from legal structure to KYC to fund disbursement. A silver lining is that the program created important foundational workflows and laid infrastructure for future programs.
  • Strategic Direction: The lack of strategic direction led to inefficiencies in grantee evaluation, confusion, and difficulties in evaluating STIP’s success, especially given the size of the spending approved by the DAO.

Biggest Lessons for the DAO

STIP’s journey was challenging for all involved, yet it resulted in some remarkable achievements. However, many lessons from STIP are still revealing themselves and may not be openly shared within the community. A few I think are worth highlighting:

1. Hesitance to Speak Up

Learning: Many folks are hesitant to openly speak their mind. The power dynamics of DAOs mean delegates, service providers, contributors, projects, and stakeholders are often afraid to speak honestly and risk stepping on someone’s toes.

Lesson: Don’t be afraid to speak your mind; welcome constructive criticism. The DAO improves only when the culture is allowed to respectfully challenge itself and build better solutions.

2. Objectives are Important

Learning: The main goal of STIP was to design a framework acceptable enough that the DAO would approve short-term grants. Beyond experimentation, the program lacked any larger specific objective, which made analyzing its onchain effects for the DAO difficult.

Thus, it’s unclear whether the validity and success of any STIP grant should be evaluated on its effect on the network (as no clear objective was defined) or rather based on the evaluation of each experiment’s individual goals.

Lesson: To effectively evaluate strategic alignment with Arbitrum DAO, future programs must define clear objectives. The mistake of STIP shouldn’t be repeated; if it is, the expectation must be set that a project should not be punished for the faults of the program that awarded the grant.

3. Intention Before Inclusivity

Learning: Facing the reality that no incentive grant had yet passed, STIP was designed to be as vague and inclusive as possible. To compensate, grant requirements were made extremely stringent to enforce alignment, at the expense of experimentation.

Lesson: A more intentional program allows for greater experimentation. STIP purposefully separated strategy from tactics in order to include as many projects as possible. As a result, the program’s goal was largely unclear, a point many hammered home and one that has created many difficulties in measuring impact (1, 2).

4. Bad Habits

Learning: Seemingly dozens of ideas, proposals, and working groups have spun out in the months following STIP. Many respondents felt that while the activity is great, the DAO has mistakenly used STIP as a model for future programs, interpreting it as a framework to build on, in contrast to its purpose as a one-off program to learn from. Many felt STIP’s precedent hindered future program design.

Lesson: Original thinking from first principles is necessary to design better grant programs. STIP should be learned from, with new programs designed from the ground up, rather than treated as a guiding example.

Looking Forward

In addition to lessons from STIP, respondents were asked to provide feedback on their perception of the DAO in its current state and their opinions and desires for future incentive programs. A few themes arose that I think are worth covering briefly:

Themes of the DAO

1. Bigger and (probably) Better

The overwhelming consensus among respondents was that Arbitrum DAO was the most interesting and involved DAO in the ecosystem and had grown considerably following the launch of STIP. Many found the momentum encouraging but the number of proposals, programs, and activities overwhelming.

2. Lack of Trust

Many respondents shared a mutual frustration with a perceived rise in lobbying, politics, and distrustful aspects of Arbitrum governance. Respondent verbiage ranged from corruption to favoritism to ignorance. Many protocol teams are frustrated that they feel it’s a necessity to hire a DAO political representative to maintain influence in the DAO, both giving larger protocols an advantage and creating complicated power structures (1, 2).

Many felt that the open conversations and program design during STIP had regressed to conversations by private parties. This made providing feedback more difficult and affected respondents’ trust in programs and elected councils.

3. Lack of Leadership

Several felt the DAO needed clarity from its founding fathers (Offchain Labs/Arbitrum Foundation) to help guide its initiatives. There’s a general desire for these entities to play a larger role in providing strategic direction and guidance for the ecosystem. Many respondents expressed confusion about the greater Arbitrum Orbit strategy and how the DAO should prioritize and support the initiative.

Several respondents felt that the AIP 1 controversy left both the Foundation and Offchain Labs alienated from the DAO, leading to an estranged relationship under which the ecosystem would continue to suffer without greater strategic clarity or realignment.

4. No One Size Fits All

A consensus was that a one-size-fits-all program was likely not feasible given the diversity of use cases and variability in grant size for DAO incentives. Most respondents felt that programs should have a specific focus, whether that be the size of grants, a certain KPI/metric, or a strategic objective.

5. Moving Quickly

Several respondents expressed significant concern over the velocity of programs following STIP. Some expressed frustration with the DAO’s velocity to launch new programs before understanding and learning from STIP’s results. Many harped on a need for data-driven analysis to inform programs and ingest takeaways.

Respondents expressed similar concern over substantial spending without clear objectives or learnings from previous outcomes. This suggests a gap in understanding how best to allocate resources.

How Would You Distribute Incentives on Arbitrum?

A key component of each interview was an opportunity for the respondent to describe how, given full control, they would distribute incentives on Arbitrum. While independent respondents differed, a few patterns and ideas emerged that are worth sharing below. In an attempt to simplify these learnings and increase digestibility for the community, these can broadly be organized as:

  1. Project-centric Strategies
  2. Objective-centric Strategies
  3. Metric-centric Strategies

Project-centric Strategies

Tiered Program Strategy

Most respondents brought up a need for project-based incentives focused on the stages of growth. A recurring trend was a vision that comprised three programs with specific objectives across multiple tiers.

  • Small Projects:

    • Objective: Support innovative early-stage projects and encourage them to become native Arbitrum projects.
    • Target Audience: Small projects with niche, unique, or innovative use cases that may not have product-market fit but have high growth potential. They may lack the resources to get off the ground or have less than a year of runway.
    • Funding Level: Recommendations ranged from 100K to 1M ARB depending on the respondent, but the common comparison likened these grants to seed funding.
    • Governance: Council focused on weekly grants, low admin & high throughput.
    • Velocity: High cadence, high churn.
  • Medium Projects:

    • Objective: Scale up projects that have traction and achieved growth in the sector.
    • Target: Projects with a proven product that can point to a track record of metrics, usage, and market fit, plus a proven ability to manage incentives over a 3-6 month period.
    • Funding Level: Around 1M ARB grants, drawn from a set budget per quarter.
    • Criteria: A clear ability to hit pre-defined KPIs and objectives, and large enough to be self-supporting. Potential to reach a higher tier of demand and growth.
    • Governance: Larger council than small, focused on quality of project and grant purpose over throughput.
  • Large Projects:

    • Objective: Provide large investment grants to support sophisticated and strategic objectives.
    • Target: Top protocols on Arbitrum with a clear track record of success and a clear strategic roadmap.
    • Funding Level: No limit; the most wiggle room, but the highest level of oversight, strategy, and requirements.
    • Governance: A council or advisor with a partnership-style approach (potential oversight/collaboration with the Arbitrum Foundation/OCL), providing white-glove service through advisory support and strategic investments when needed.

Objective-centric Strategies

Sticky Users

Consistent and high-quality wallets were a common priority for many of the protocol teams interviewed. DeFi and onchain activity are inherently financial, and thus users tend toward mercenary behavior. Many respondents felt incentive programs were well positioned to analyze the habits of these users and design programs that retain quality users. Respondents emphasized establishing a definition of the users most valuable to the ecosystem and targeting them specifically.

As one respondent aptly put it: “A weird thing with this space is there’s more infra than protocols and more protocols than users. Users are the kingmakers”.
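To make the idea of measuring "stickiness" concrete, here is a minimal sketch of a wallet-retention calculation across incentive epochs. All data, field names, and the epoch length are hypothetical; real analyses (e.g., OpenBlock's) would work from indexed onchain data rather than a simple list:

```python
from collections import defaultdict

def retention_by_epoch(txs, epoch_len=30):
    """Group hypothetical (wallet, day) activity records into epochs and
    report what fraction of each epoch's wallets return in the next epoch."""
    active = defaultdict(set)  # epoch index -> set of active wallets
    for wallet, day in txs:
        active[day // epoch_len].add(wallet)
    rates = {}
    for epoch in sorted(active):
        if epoch + 1 in active and active[epoch]:
            returned = active[epoch] & active[epoch + 1]
            rates[epoch] = len(returned) / len(active[epoch])
    return rates

# Example: wallet "a" is active in epochs 0 and 1, "b" only in epoch 0,
# so epoch 0's retention is 0.5.
print(retention_by_epoch([("a", 1), ("b", 5), ("a", 35)]))  # {0: 0.5}
```

A retention curve like this is one way to separate sticky users from mercenary ones: wallets that remain active after incentives taper off are the "kingmakers" respondents describe.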

Native Projects and Tokens

Multiple respondents brought up strategies to increase native token distribution on Arbitrum. These included:

  • Native Strategic Assets:

    • Incentive programs to support strategic bluechip assets minted on Arbitrum. This includes supporting integrations with key strategic partners to help incentivize capital deployment on Arbitrum for native Arbitrum minting (one respondent noted native minting is essential for institutional adoption for security/legal/operational reasons).
    • This includes supporting partners such as:
      • BitGo (wBTC)
      • Circle (USDC)
      • Lido (stETH/wstETH)
      • Tether (USDT)
    • Native Arbitrum minting of yield-bearing assets:
      • LSTs
      • RWAs
  • Native Tokens:
    • Providing incentives for projects that agree to launch tokens on Arbitrum both creates long-term alignment with native Arbitrum projects and supports outsized value creation, helping metrics such as TVL.

Arbitrum Orbit

The aforementioned lack of communication around the Foundation and OCL’s strategy colored respondents’ general desire to support deployments on the Arbitrum Orbit stack. While some pointed to grants as an essential priority, others were uncertain of the importance of these chains given the DAO’s lack of a cohesive strategic goal.

Metric-centric Strategies

Sequencer Revenue

Multiple respondents noted that programs focused on the immediate optimization of sequencer revenue would be part of their strategy. However, optimizing revenue was often presented as complementary, run in tandem with additional programs to support long-term projects on Arbitrum. Most felt sequencer revenue was an appropriate long-term objective for the DAO.

Liquidity

Multiple respondents noted liquidity as an important metric for Arbitrum’s dominance in the marketplace. Those calling for TVL as a metric often expressed a preference for native tokens over bridged assets, due to their stickiness to the ecosystem.


Well done on putting this together, @tnorm!

I personally think that given this post-mortem assessment, it’s well worth hosting a couple of discussions to tackle each section so that this valuable feedback does not die out.

Some points from my end that I earmarked and that I feel are very relevant and valuable moving forward:

“Many respondents felt that while the activity is great, the DAO has mistakenly used STIP as a model for future programs, interpreting it as a framework to build on, in contrast to its purpose as a one-off program to learn from. Many felt STIP’s precedent hindered future program design”

“Some expressed frustration with the DAO’s velocity to launch new programs before understanding and learning from STIP’s results. Many harped on a need for data-driven analysis to inform programs and ingest takeaways”

“Respondents expressed similar concern over substantial spending without clear objectives or learnings from previous outcomes. This suggests a gap in understanding how best to allocate resources”

“The overwhelming consensus from respondents was that Arbitrum DAO was the most interesting and involved DAO in the ecosystem and had grown considerably following the launch of STIP. Many found the momentum encouraging but the amount of proposals, programs, and activities overwhelming”

"Learning: Many folks are hesitant to openly speak their mind. The power dynamics of DAOs mean delegates, service providers, contributors, projects, and stakeholders are often afraid to speak honestly and risk stepping on someone’s toes.

Lesson: Don’t be afraid to speak your mind, welcome constructive criticism. The DAO improves only when the culture is allowed to respectfully challenge itself and build better solutions"


This is great. Thank you very much.

I believe that with the programs already implemented, along with the ongoing approved programs, we have a solid foundation for the DAO to enter a sort of “period of reflection” (a concept from Optimism). In Optimism, Reflection Periods are a time for delegates to reflect on what’s been going well and what can be improved.

It’s time for the DAO to reach a wide consensus on ‘what the DAO wants to do’ with its funds. In my opinion, there’s a need for a debate to align short-, mid-, and long-term objectives, and based on that, develop suitable grant/incentives frameworks.

This post is a great starting point for discussion. The approved research bounty in LTIPP will aid in discussing and developing frameworks with a better understanding of what has worked, what hasn’t, and what needs to be done more efficiently.


Personally, I believe that the core component is data and its fair assessment by the community.

I know we, the Arbitrum DAO, have a sporadic way of working together, such as congregating in Telegram groups like that of the KPI working group.

But there should be a way to track and actively monitor grants data transparently.

For example, STIP Monitoring - Objective 1 & 2 - OpenBlock Labs - [FINAL] provides easily exportable data that can be used for calculating KPIs.

However, what is missing, in my opinion, is transparent communication between grant recipients and the Arbitrum DAO community at large, for example on current challenges and grant completion. Not necessarily an overly involved, dictatorial manager, just a friendly reminder of what is going on.

I believe these learnings are extremely valuable, so thanks for putting these together @tnorm. I like extensive reports and access to raw data as much as anyone, but I think we need more of these “digestible” reflections on different initiatives that the average participant can understand and discuss.

I’m curious how many people you interviewed before compiling this information, and whether it was only participants who actively contributed to the STIP working group?


I talked with ~15 people individually; these interviews were specifically focused on the Working Group and folks involved in the original STIP (with some variance).

It would be well worth extending this interview process to the greater community, though I’m not sure what that looks like practically. To the point of a few of the Biggest Lessons takeaways, I think the best thing folks can do to bring this type of reflection forward is to 1) bring the discussion back to the forums for the more technical/serious conversations around design, budgets, etc. and 2) just speak their minds. Feedback needs to be active and honest.

I would definitely encourage anyone who feels they have more to add to either complete the survey, comment below, or reach out!
