The response below reflects the views of L2BEAT's governance team, composed of @krst and @Sinkas, and is based on the combined research, fact-checking, and ideation of the two of us.
With the STIP voting now behind us, we wanted to write a post to share our view of the entire process, our reasoning for voting the way we did, and the lessons we learnt along the way. We also want to take the opportunity to congratulate all protocols on submitting great applications, all the parties involved in bringing the STIP to life (and especially @tnorm and @Matt_StableLab for spearheading it), as well as all the delegates who put in the effort to review and vote on all the applications in such a short time frame.
Process
First off, we believe we speak for everyone when we say that we didn't anticipate the volume of applications the STIP received. We were overwhelmed by requests to review applications and offer feedback, to the point where it became impossible to keep up. As a result, we adopted an "everyone or no one" policy: we would only do something if we had the capacity to do it for every applicant; otherwise we wouldn't do it at all.
After going through the applications that made it to the final stage, it became apparent that it wouldn't be easy to objectively assess all the proposals against a shared set of evaluation points. Given that, we decided to create a basic rubric that took a more general approach, and then to assess each application subjectively after collectively reviewing and discussing it internally.
However, having a rubric introduced a new problem: it quickly became overwhelming to actually score each application. What we settled on was using the rubric as a reference point when assessing applications, rather than restricting ourselves to the score it produced.
We won't go into detail about the scores of individual applications, as we feel that's outside the scope of this post, but we will explain our rubric and how we used it.
Our Rubric
We separated our rubric into two parts: 3 core aspects and 6 secondary aspects that we used as discretionary modifiers (an illustrative sketch of how the two could be combined follows the lists below).
Core Aspects
- New Capital
- The effectiveness of the proposed use of funds in bringing new capital to Arbitrum. We also tried to determine how sticky that capital would be after the incentives ended, based on the structure of the incentives.
- New Users
- The effectiveness of the proposed use of funds in bringing new users to Arbitrum, and how effective the structure of the incentives would be in user retention.
- On-chain Activity
- Whether the applying project would create on-chain activity, and ideally recurring activity that would last beyond the timeframe of the incentives.
Secondary Aspects
- Novelty
- How innovative the project is, and how it differentiates itself from other projects in the same category.
- Size of project
- The size of the project itself, in terms of TVL, DAU, Volume and even subjective brand recognition.
- Composability with other projects
- Whether the applying project collaborates with other projects, and if so, with how many and to what extent.
- Size of grant request
- The size of the grant request compared to that of other projects and to the size of the project itself (as above), along with the justification for the amount requested.
- Grant breakdown
- How the requested funds would be used, and what percentage would go towards each purpose.
- Alignment with Arbitrum
- Protocols that were exclusive to Arbitrum scored higher than multi-chain protocols, simply because they are better aligned with Arbitrum. However, we kept in mind that some protocols (e.g. bridges) are inherently multi-chain, and we scored them accordingly.
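To illustrate how the two parts fit together, here is a minimal, purely hypothetical sketch in Python. The scales, weights, and names are made up for illustration and are not the actual values we used; as noted above, the rubric served only as a reference point, not a formula we followed.

```python
from dataclasses import dataclass, field

# Hypothetical scales: core aspects scored 0-5, secondary aspects
# applied as small discretionary adjustments in the range -1..+1.

@dataclass
class Application:
    name: str
    core: dict                                      # e.g. {"new_capital": 4, "new_users": 3, "onchain_activity": 4}
    modifiers: dict = field(default_factory=dict)   # e.g. {"novelty": 0.5, ...}

def reference_score(app: Application) -> float:
    """Average of the three core aspects, nudged by discretionary modifiers.

    This is only a reference point; the final vote was a subjective,
    collective decision, not the output of this number.
    """
    core_avg = sum(app.core.values()) / len(app.core)
    adjustment = sum(app.modifiers.values())
    return core_avg + adjustment

# Example with made-up numbers:
example = Application(
    name="Example Protocol",
    core={"new_capital": 4, "new_users": 3, "onchain_activity": 4},
    modifiers={"novelty": 0.5, "alignment_with_arbitrum": 0.5, "size_of_grant_request": -0.5},
)
print(round(reference_score(example), 2))  # 4.17
```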
In the future, it might be beneficial for the program to have a rubric or assessment framework built into it, so that both applicants and delegates have a reference point for what to focus on. That would make the whole process much smoother for everyone involved.
Voting
There has been some discussion around the use of "Abstain" and what it means versus what it's supposed to mean. We wanted to address this and clarify why we voted the way we did.
- For - We voted in favor of applications which not only scored high in our rubric, but which we also subjectively believed were properly balanced.
- Against - We voted against applications which scored low on our rubric, or applications which we subjectively believed were not balanced.
- Abstain - We voted abstain on applications that we were on the fence about: applications we had no strong reason to vote against, but which we also weren't compelled to vote in favour of. In practice, we treated it as a "soft against".
We tried our best to be as objective as possible when assessing applications, even though the volume of applications received, as well as their diversity in nature, context, and grant size requested, made it a difficult and often necessarily subjective task.
Looking Forward
In our view, the STIP initiative was a great experiment that yielded a lot of valuable information which can be used in and adapted to future iterations of the program, or to similar programs. It's very important that each application receives enough time, effort, and ultimately feedback from delegates, something that didn't really happen this time around, at least to the extent we would have wanted, given the time constraints.
Some thoughts for the future:
- Perhaps introducing a cap on the number of applications for each round of the STIP could help mitigate the overwhelming task of reviewing and providing feedback to every applicant.
- We need some sort of system in place to ensure that each proposal is reviewed and receives feedback before the voting process begins.
- A thorough analysis of the applications received in Round 1 of the STIP could help surface patterns (e.g. common components across applications in a specific category) that could serve as building blocks or an application template in the future.
We are looking forward to other delegates’ feedback!