Reflections on the Short-Term Incentive Program

The response below reflects the views of L2BEAT’s governance team, composed of @krst and @Sinkas, and is based on the combined research, fact-checking, and ideation of the two of us.

With the STIP voting now behind us, we wanted to write a post to share our view of the entire process, our reasoning for voting the way we did, and the lessons we learnt along the way. We also want to take the opportunity to congratulate all protocols for submitting great applications, all the parties involved in bringing the STIP to life (and especially @tnorm and @Matt_StableLab for spearheading it), as well as all the delegates who put in the effort of reviewing and voting on all the applications in such a short time frame.

Process

First off, we believe we speak for everyone when we say that we didn’t anticipate the volume of applications the STIP received. We were overwhelmed by countless requests to review applications and offer our feedback, to the point where it became impossible to do so. As a result, we adopted an “everyone or no one” policy, which basically meant we’d only do something if we had the capacity to do it for everyone; otherwise, we wouldn’t do it at all.

After going through the applications that made it to the final stage, it became apparent that it wouldn’t be easy to objectively assess all the proposals against a shared set of evaluation points. Given that, we decided to create a basic rubric that took a more general approach, and then to assess each application subjectively after collectively reviewing and discussing it internally.

However, having a rubric introduced a new problem: it quickly became overwhelming to actually score each application. So what we settled on was using the rubric we created as a reference point when assessing applications, without restricting ourselves to just the score it produced.

We won’t go into details about the score of each application as we feel it’s outside the scope of this post, but we will explain our rubric and how we used it.

Our Rubric

We separated our rubric into two parts: 3 core aspects and 6 secondary aspects that we used as discretionary modifiers.

Core Aspects

  1. New Capital
  • The effectiveness of the proposed incentives in bringing new capital to Arbitrum. We also tried to determine how sticky that capital would be after the incentives ended, based on the structure of the incentives.
  2. New Users
  • The effectiveness of the proposed use of funds in bringing new users to Arbitrum, and how effective the structure of the incentives would be in user retention.
  3. On-chain Activity
  • Last of the 3 core aspects was whether the project applying would create on-chain activity, and ideally recurring activity that would last beyond the timeframe of the incentives.

Secondary Aspects

  1. Novelty
  • How innovative the project was, or how it differentiated itself from other projects in the same category.
  2. Size of project
  • The size of the project itself, in terms of TVL, DAU, volume, and even subjective brand recognition.
  3. Composability with other projects
  • Whether the project applying was collaborating with other projects, with how many, and to what extent.
  4. Size of grant request
  • The size of the grant request in comparison with those of other projects, relative to the size of the project (as per above), and the justification given for it.
  5. Grant breakdown
  • How the requested funds would be used, and what percentage would go towards what.
  6. Alignment with Arbitrum
  • Protocols that were exclusive to Arbitrum scored higher than multi-chain protocols simply because they are better aligned with Arbitrum. However, we did keep in mind that some protocols (e.g. bridges) are inherently multi-chain, and we scored them accordingly.

In the future, it might benefit the entire process for the program to have a rubric or assessment framework built into it, so that both applicants and delegates have a reference point for what to focus on. That would make the whole process much smoother for everyone involved.
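
To illustrate what such a built-in framework could look like, here is a minimal sketch in Python. The aspect names mirror our rubric above, but the scoring scale, the modifier weight, and the example numbers are purely hypothetical placeholders, not the scores we actually used.

```python
from dataclasses import dataclass

# Aspect names follow the rubric described above.
CORE_ASPECTS = ["New Capital", "New Users", "On-chain Activity"]
SECONDARY_ASPECTS = [
    "Novelty",
    "Size of project",
    "Composability with other projects",
    "Size of grant request",
    "Grant breakdown",
    "Alignment with Arbitrum",
]

@dataclass
class Assessment:
    application: str
    core: dict[str, int]       # each core aspect scored 0-5 (hypothetical scale)
    secondary: dict[str, int]  # each secondary aspect used as a -1 / 0 / +1 modifier

    def score(self) -> float:
        """Base score from core aspects, nudged up or down by secondary modifiers."""
        base = sum(self.core.get(a, 0) for a in CORE_ASPECTS)
        modifier = sum(self.secondary.get(a, 0) for a in SECONDARY_ASPECTS)
        return base + 0.5 * modifier  # each modifier weighs half a core point

# Example with made-up numbers:
example = Assessment(
    application="Example Protocol",
    core={"New Capital": 4, "New Users": 3, "On-chain Activity": 5},
    secondary={"Novelty": 1, "Size of grant request": -1, "Alignment with Arbitrum": 1},
)
print(example.score())  # 12.5
```

Even a simple shared structure like this would let applicants know what they will be judged on and let delegates compare applications on a common footing, while still leaving room for subjective judgment on top of the score.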

Voting

There has been some discussion around the use of “Abstain” and what it means versus what it’s supposed to mean. We wanted to address this and clarify why we voted the way we did.

  • For - We voted in favour of applications which not only scored high on our rubric, but which we also subjectively believed were properly balanced.

  • Against - We voted against applications which scored low on our rubric, or applications which we subjectively believed were not balanced.

  • Abstain - We abstained on applications that we were on the fence about: we had no strong reason to vote against them, but we weren’t compelled to vote in favour of them either. So in practice we treated it as a “soft against”.

We tried our best to be as objective as possible when assessing applications, even though the volume of applications received, as well as their diversity in nature, context, and grant size requested, made it a difficult and often necessarily subjective task.

Looking Forward

In our view, the STIP initiative was a great experiment that yielded lots of valuable information which can be used in and adapted for future iterations or similar programs. It’s very important that each application receives enough time, effort, and ultimately feedback from delegates, something which didn’t really happen this time around, at least not to the extent we’d have wanted, given the time constraints.

Some thoughts for the future:

  • Perhaps introducing a cap on the number of applications for each round of STIP could help mitigate the overwhelming task of reviewing and providing feedback to every applicant.
  • We need some sort of system in place to ensure that each proposal gets reviewed and receives feedback before the voting process begins.
  • A thorough analysis of the applications received in Round 1 of the STIP could help surface patterns (e.g. common components across applications in a specific category) which could be used as building blocks or an application template in the future.

We are looking forward to other delegates’ feedback!


Thank you for the transparency and effort @krst :handshake:


Some thoughts looking forward as well. We should probably establish some rules on communications between projects and delegates. I don’t think lobbying is necessarily a bad thing; it helped a lot of projects come together in ways they hadn’t before, but we should think about some ground rules on what is not tolerated and eventually work them into ratification.

Rules

  1. No bribing or buying votes.
  2. No unwanted pestering of delegates to promote your proposal. Maybe a public status indicating whether a delegate is willing to be contacted.
  3. No lying or false claims about contributions or past impact (including metrics).
  4. Voting on proposals in which the delegate has a direct or indirect undisclosed financial interest? (Self-voting/partner-voting is okay.)
  5. Delegates to be transparent and publicly disclose any relevant information that may influence their vote.
  6. Delegates to not use their position for their own private gain, or for that of persons or organizations with which they are personally associated.
  7. No retaliation, e.g. spreading misinfo about a project that doesn’t vote for you; this one needs more thought.

Considerations

  1. Review the proposal and any supporting materials before casting your vote. (We were able to get through all proposals; we used abstain as a soft-against to still add to their quorum total while keeping the total allocation under 50mm.)
  2. Consider the proposal’s potential impact on the community and the ecosystem as a whole: does it encourage sticky users/transactions/volume?
  3. Consider the long-term sustainability of the proposal and its potential to contribute to the growth and development of the ecosystem, e.g. exclusivity deals, projects that can generate a lot of transaction throughput, etc. Small volume today, big volume in the future.
  4. Consider the potential benefits of the proposal for ecosystem users: quality-of-life improvements, privacy-protecting projects, etc.
  5. Consider the diversity of the project and the potential impact of the proposal on all community members or projects: dev tooling, WASM/Stylus formal verification, wallet tagging, order flow, ecosystem market makers, large ecosystem LPs, retainer auditing services, etc.
  6. Maybe some more forum mods to shut down inappropriate conjecture/noise.

Thanks @krst and @dk3 for sharing insights. I align with your views and I’d like to share some key takeaways.

  1. Let’s acknowledge and remember we’re in a bear market. Projects are keen, if not desperate, to draw TVL, gain traction, and attract more buyers/investors. As a builder, I empathize with their grant-seeking efforts, yet there’s a need for clear boundaries.
    Lobbying is okay, and voting for your partners is okay, but we need to find ways for delegates to be more impartial, and for newcomers to have a shot at kickstarting and bootstrapping their projects - otherwise we’ll end up with an ossified set of players and low innovation.

  2. The crypto realm is unique: no other sector would have disbursed $50m with merely a few weeks for due diligence on proposals and projects. I found myself somewhat overwhelmed and under-equipped to review all proposals in that timeframe. This model suits team delegates, but individuals will invariably need to prioritize based on familiarity or dedicate time to researching a select few projects.

  3. My objective was to review the smaller projects first, giving them a shot at grants. This approach had mixed outcomes: per feedback, it did help some projects gain visibility and secure their STIP. However, toward the end, as clear winners emerged, I hadn’t delved into the mid-tier projects that could have benefited from my votes to surpass the threshold.

  4. Casting a ‘no’ vote requires courage and merits more respect. Many delegates felt pressure to vote only ‘yes’, and I faced backlash for my decisions. I feel we need to devise better mechanisms for individuals to vote freely and without pressure.

  5. I have been criticized for focusing on projects that made Arbitrum their primary home (“how can you say that when you work for a bridge?”). I stand by this position, and I believe we should design retention mechanisms to overcome the “chase-the-incentives” mindset common among crypto builders.
    The Blue Pledge is a fascinating idea, and I think more experimentation in this area will determine the success or failure of Arbitrum.

  6. We now need data to measure the outcomes and impact of each grant, evaluate which ones were most successful, and use these insights to drive new rounds.
