Authors
@SeriousIan @SeriousTaylor @SeriousKeith
Proposed KPIs for Arbitrum Grant Programs
Abstract
The purpose of this post is to address some common concerns we have been hearing over the past few months about the STIP/LTIPP initiatives. The Serious People team has had a large number of conversations in that time, and we feel we are well positioned to summarize our findings and make a set of recommendations to move the ball forward, focusing on better cohesion between all the moving pieces currently at play.
Our goal is to streamline the application, the advisor work, and the council's ultimate decision making so that they all line up with predetermined KPIs set forth and agreed upon by the DAO. This will give the DAO a standard KPI rubric to use for this initiative and to build on for future initiatives. We also need a framework that lets us compare projects on an apples-to-apples basis regardless of which program they are in.
So with that, the first step is aligning on the KPIs the DAO believes we should be aiming to achieve with the incentive programs. To do that, we must first identify the kinds of actions we want to incentivize, then create a way to convert each of those into a quantifiable number we can plug into a formula, allowing us to compare different programs by normalizing values and returns.
We also believe the current system incentivizes competition to get into the programs, but not competition over who can provide the most value once a program is running. If we can agree on the basic goals of the program, we can evaluate and weight each achievement, compare that to the value of the tokens emitted for the program, and find out what the return is for each program as a percentage.
Please note - this is a starting point for everyone to build off of. We are not saying this is the exact answer or that we hold all the solutions. We welcome collaboration and constructive feedback!
Serious People's Recommended KPI Formula: Return on Emissions (ROE)
We believe the best way to define a new standard and normalize returns is with our Return on Emissions (ROE) metric. Simply put:
TVE = Total Value Emitted (the value of each token at the time of emission)
TVR = Total Value Returned (the value brought back to the Arbitrum DAO)
ROE = [(TVR - TVE) / TVE] + 1, which simplifies to TVR / TVE
This formula tells us what is returned for every dollar spent from emissions. If you spend $100k of $ARB and bring back $0 of value, you have provided a return of 0%, meaning that 100% of your emissions were harmful to the ecosystem. On the other hand, depending on how we evaluate each of the factors that bring the ecosystem value, there is no upper limit on how high returns can go. Arbitrum and the protocols participating in the program should aim to bring more value to the ecosystem than they spent on incentives, for example spending $100k of $ARB and bringing $110k of value to the ecosystem.
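To make the formula concrete, here is a minimal Python sketch of the calculation; the function name and the example calls are ours, purely illustrative:

```python
def roe(tve: float, tvr: float) -> float:
    """Return on Emissions.

    tve: Total Value Emitted - USD value of the $ARB at the time of emission.
    tvr: Total Value Returned - USD value brought back to the Arbitrum DAO.
    """
    # [(TVR - TVE) / TVE] + 1 simplifies to TVR / TVE
    return tvr / tve


# Spend $100k of $ARB and bring back $0 of value -> 0% return
print(f"{roe(100_000, 0):.2%}")        # 0.00%
# Spend $100k of $ARB and bring back $110k of value -> 110% return
print(f"{roe(100_000, 110_000):.2%}")  # 110.00%
```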
So with that, what do we believe brings Arbitrum value? If we do a good job of capturing everything that brings value, this should be relevant for every program moving forward. We took a first cut, but would love to know what others think we are missing!
TVR Buckets
- Users from other blockchains
- New DeFi users
- Permanent users
- Liquidity acquisition
- Liquidity retention
- New volume (for your respective protocol or what you are incentivizing)
- Retained Volume
- Sequencer fees generated by your protocol
- Reasons to hold/buy $ARB
- Innovation
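To show how these buckets could eventually feed a single TVR number, here is a rough, illustrative structure. The units are our assumptions and the per-unit dollar values are deliberately left open, since agreeing on them is exactly the work we are proposing:

```python
# Illustrative only: units are our assumptions; the USD value per unit is the
# part the DAO would need to agree on (None = not yet defined).
TVR_BUCKETS = {
    "users_from_other_blockchains": {"unit": "user",              "usd_per_unit": None},
    "new_defi_users":               {"unit": "user",              "usd_per_unit": None},
    "permanent_users":              {"unit": "user",              "usd_per_unit": None},
    "liquidity_acquisition":        {"unit": "$1k-liquidity-day", "usd_per_unit": None},
    "liquidity_retention":          {"unit": "$1k-liquidity-day", "usd_per_unit": None},
    "new_volume":                   {"unit": "USD of volume",     "usd_per_unit": None},
    "retained_volume":              {"unit": "USD of volume",     "usd_per_unit": None},
    "sequencer_fees":               {"unit": "USD of fees",       "usd_per_unit": 1.0},
    "reasons_to_hold_buy_arb":      {"unit": "qualitative",       "usd_per_unit": None},
    "innovation":                   {"unit": "qualitative",       "usd_per_unit": None},
}

def total_tvr(measurements: dict[str, float]) -> float:
    """Sum USD TVR across the buckets that already have an agreed per-unit value."""
    return sum(
        qty * TVR_BUCKETS[bucket]["usd_per_unit"]
        for bucket, qty in measurements.items()
        if TVR_BUCKETS[bucket]["usd_per_unit"] is not None
    )
```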
As we are sure you can tell, some of these factors are objective and some are subjective. It will be difficult to have a perfect formula for evaluating each category, but the more complete our analysis is, the more accurately we can compare projects and incentive programs. Having these goals laid out in advance gives participating protocols direction on how to use the emissions Arbitrum gives them, with the target they are shooting for well defined up front. It also gives the advisors metrics to aim for when giving feedback and gives the council a clearer way to judge each proposal.
This will also ensure that the application advisors are advising against the same rubric the council is voting on. Without this standardization, participants could easily get different answers depending on when they ask and whom they speak to, which would only cause headaches down the line and make projects feel shorted.
How do we calculate TVR?
User Metrics - New DeFi users, Users from other blockchains, and Permanent users
- Identify a value for each type of user. For example, if we decide every new user to Arbitrum is worth $10 and a project brings 100 new users, they net $1,000 of TVR.
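A minimal sketch of that calculation, assuming a hypothetical table of per-user values (only the $10 "new user" figure comes from the example above; the rest are placeholders the DAO would need to set):

```python
# Hypothetical per-user values; only the $10 figure is taken from the example above.
USD_PER_USER = {
    "new_defi_user": 10.0,
    "user_from_other_blockchain": 10.0,
    "permanent_user": 10.0,
}

def user_tvr(user_counts: dict[str, int]) -> float:
    """USD of TVR credited for the users a project brought to Arbitrum."""
    return sum(USD_PER_USER[kind] * count for kind, count in user_counts.items())

# 100 new users at $10 each -> $1,000 of TVR, matching the example above
print(user_tvr({"new_defi_user": 100}))  # 1000.0
```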
Liquidity - Acquisition and Retention
- Identify how much the liquidity is worth and whether it stays on Arbitrum after the program ends. We can break this into how much liquidity was acquired and how sticky it is, to account for retention.
- For example, a project brings an average of $500k of liquidity for 90 days and an average of $100k of that liquidity remains for the 90 days after the program. We could say $1k of liquidity for 1 day = $1 of TVR. The TVR would then be calculated as follows: (500 × $1 × 90) + (100 × $1 × 90) = $54,000.
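Under the "$1k of liquidity for 1 day = $1 of TVR" assumption above, the calculation could look like this sketch (the rate is the assumption, not a final value):

```python
USD_TVR_PER_1K_LIQUIDITY_DAY = 1.0  # the "$1k for 1 day = $1 of TVR" assumption above

def liquidity_tvr(avg_liquidity_usd: float, days: int) -> float:
    """TVR credited for liquidity, measured in $1k-liquidity-days."""
    return (avg_liquidity_usd / 1_000) * days * USD_TVR_PER_1K_LIQUIDITY_DAY

# $500k average liquidity during the 90-day program,
# plus $100k average liquidity retained for the 90 days after it:
print(liquidity_tvr(500_000, 90) + liquidity_tvr(100_000, 90))  # 54000.0
```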
Volume - New Volume and Retained Volume
- We establish a baseline volume number for any project looking to optimize for this KPI, ideally using a 30-90 day average. We then measure the increase in volume over the life of the program, and the retained volume once the program closes.
- Using the baseline across all participants and looking at DAO-wide goals, we set dollar values for increases in volume and for retained volume, similar to Liquidity and Users.
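A sketch of how the baseline, the uplift during the program, and the retention afterwards could be combined; the per-dollar rates below are placeholders, not proposed values:

```python
# Placeholder rates: how much TVR a dollar of new or retained volume is worth
# would be set by the DAO (e.g. via the research bounties discussed below).
TVR_PER_USD_NEW_VOLUME = 0.001
TVR_PER_USD_RETAINED_VOLUME = 0.002

def volume_tvr(baseline_daily: float,
               program_daily: float, program_days: int,
               post_daily: float, post_days: int) -> float:
    """TVR from volume uplift during the program plus volume retained after it."""
    new_volume = max(program_daily - baseline_daily, 0) * program_days
    retained_volume = max(post_daily - baseline_daily, 0) * post_days
    return (new_volume * TVR_PER_USD_NEW_VOLUME
            + retained_volume * TVR_PER_USD_RETAINED_VOLUME)

# $1M/day baseline (30-90 day average), $1.5M/day during a 90-day program,
# $1.2M/day for the 90 days after it closes:
print(f"{volume_tvr(1_000_000, 1_500_000, 90, 1_200_000, 90):,.0f}")  # 81,000
```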
Sequencer Fees generated by your protocol
- We believe this one is straightforward, but please correct us if we missed something: we can measure the fees generated by the protocol's smart contracts.
- This would be a great spot to pull in the Treasury working group, as they are already tasked with figuring out the most effective way to spend sequencer fees and should have a good starting point for valuing these fees that we can work into ROE.
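As a sketch of the measurement only (we are not prescribing any particular data source or fee breakdown), once you have the L2 transactions that touched the protocol's contracts, the fee side is a filtered sum:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    to_address: str
    fee_eth: float  # total L2 fee paid for the transaction, in ETH

def sequencer_fee_tvr(txs: list[Tx],
                      protocol_contracts: set[str],
                      eth_usd: float) -> float:
    """USD value of fees generated by transactions sent to the protocol's contracts."""
    contracts = {addr.lower() for addr in protocol_contracts}
    return sum(tx.fee_eth for tx in txs if tx.to_address.lower() in contracts) * eth_usd
```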
Holders
- OpenBlock has some beautiful dashboards that were used for past incentives; we would recommend working with them on the following items:
- Reason to buy/hold ARB
- % of claimed rewards sold
- Average duration held
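If it helps, here is a rough sketch of how the last two items could be computed from claim-level data; the record shape is entirely hypothetical, and OpenBlock will have their own schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    arb_claimed: float
    arb_sold: float                         # portion of the claimed $ARB later sold
    days_held_before_sale: Optional[float]  # None if still held

def pct_claimed_rewards_sold(claims: list[Claim]) -> float:
    total = sum(c.arb_claimed for c in claims)
    return sum(c.arb_sold for c in claims) / total if total else 0.0

def avg_days_held(claims: list[Claim]) -> float:
    held = [c.days_held_before_sale for c in claims if c.days_held_before_sale is not None]
    return sum(held) / len(held) if held else 0.0
```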
Innovation
- We would likely recommend keeping this separate from ROE unless the DAO can agree on a fair voting system for the council to use. We may be able to incentivize certain innovation that the DAO needs more highly than others to make this less subjective.
How do we gather all of this information?
- We believe we can gather all of this information using the research bounty program that is already accounted for in Matt's LTIPP outline, which has a budget of 200k ARB.
- Serious People has already taken a first cut at how we would set the research bounties into buckets and what questions we would propose for each. You can see them here: Research Bounties for LTIPP
- Different teams can solve different buckets and define the TVR calculation.
- The more questions that are answered, the more we can add to our return analysis and the stronger this rubric can get over time.
- Hopefully other DAOs start to see our work and this rubric can help the larger DeFi space!
It is important to note - even if we can only start off with a couple of these categories, it is still a massive step in the right direction in terms of normalizing and standardizing our process, and then we can continue to build and refine from there.
When should we be running these calculations on incentive programs?
- We should ensure that each project's dashboard at least captures TVE and the simple, more objective TVR metrics.
- A team or working group should be put in place to analyze the program while it runs and monthly after it ends. It is imperative to look at multiple months past the program's end to see how well value was retained; a spike in users is only helpful if we actually keep them active on chain post-program.
- Some of the more subjective pieces, like innovation, will have to wait until the end to be analyzed once.
Example project (EXP) KPI Evaluation
Inputs:
- EXP is accepted into the Arbitrum LTIPP with a 100k allocation of $ARB tokens.
- EXP decides to take 20k of their $ARB to set up a referral program to bring new users from other blockchains onto their platform. This program is very successful and brings 3,000 new users to Arbitrum through their platform.
- EXP decides to use the other 80k $ARB to bond into liquidity for their token. This raises $70k of liquidity and attracts an additional 75 users.
Assumptions:
- Their $ARB was worth $1 each when they emitted it, meaning they spent $100k of ARB, i.e. their TVE was $100,000.
- Each new user is worth $10
How would we calculate their Return to the DAO?
ROE = [(TVR - TVE) / TVE] + 1
ROE = [(($70,000 + (3,075 × $10)) - $100,000) / $100,000] + 1
ROE = [(($70,000 + $30,750) - $100,000) / $100,000] + 1
ROE = [($100,750 - $100,000) / $100,000] + 1
ROE = [$750 / $100,000] + 1
ROE = 0.0075 + 1
ROE = 1.0075 or 100.75%
We would consider this program a success: $100k was spent and over $100k was returned, meaning a positive return of 0.75% to the DAO.
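For completeness, here is the same arithmetic as a few lines of Python, using only the figures from the example above:

```python
# All figures come from the EXP example above.
TVE = 100_000                         # $100k of $ARB emitted at $1 each
users_value = 3_075 * 10              # 3,000 + 75 new users, valued at $10 each
liquidity_value = 70_000              # $70k of liquidity raised
TVR = users_value + liquidity_value   # $100,750

roe = ((TVR - TVE) / TVE) + 1         # equivalently TVR / TVE
print(f"ROE = {roe:.4f} ({roe:.2%})") # ROE = 1.0075 (100.75%)
```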
Conclusion
We believe that taking this approach will perfectly complement the new changes to the program, namely the addition of a council and advisors.
Aligning the goals of the program and how we compare project outcomes should have the following effects:
- Easy alignment between participants in the program, the council, the advisors, and the DAO.
- A framework that we can carry into the future and improve on, while quickly establishing a baseline KPI very similar to commonly used financial metrics like ROI or IRR.
- Less collusion or favoritism
- Better outcomes, as projects have a clear target to shoot for, rather than the current application setup where we ask projects to come up with their own KPIs.
- The ability to simplify the application so that we get only the information we need, with less fluff / irrelevant information.
With that, we already took a cut at what we believe the updated application template should look like and have posted it here: Application Template Suggested updates
This will allow us to prioritize two things that line up with this overarching post:
- Projects providing all of the relevant data needed to analyze KPIs and showing it on their dashboard during the program.
- Projects not wasting time creating their own KPIs, since these will be set by the DAO and we should all be moving toward the same goals anyway.
We welcome all feedback! We want to build this together and collaborate. By no means are we saying we have the complete solution; we just wanted to share tangible outputs and recommendations from our conversations so we can keep pushing forward with the DAO!
Additionally, we want to thank everyone who has been jumping on the DAO-wide calls through Q3 and Q4 of last year; your thoughts and feedback were imperative to our refinement process here. We would also like to specifically thank @Matt_StableLab and @AlexLumley for all their hard work, hopping on calls and teeing us up for success with these posts. Thanks as well to @DisruptionJoe, the Treasury and Sustainability Working Group, and the Incentive Working Group - all have been incredible resources!
Let's make Arbitrum the strongest functioning DAO in DeFi!