We believe that the research bounty section in the LTIPP can be used to help create and calculate KPIs, allowing various programs to be compared on an even playing field. There are hundreds of questions we are sure everyone would love answered; however, the ones we decided to prioritize relate specifically to how we are looking at the STIP and LTIPP that the Arbitrum DAO is running.
Looking at the example questions that Matt posted, a few things became apparent to us:
- These questions can be categorized into a few buckets based on their primary subject matter. If a team can easily pull a certain type of data, it should be able to answer most of the questions in a category.
- Some of these questions need to be answered before others can be; in other words, there is a forced prioritization of the questions. For example, to answer a question about the appropriate size for a grant program, one would first need to understand how well the program worked initially.
The buckets that we have created are:
- User Acquisition
- Liquidity Acquisition
- Volume / Sequencer fees created
- Innovation of protocols
- Liquidity for $ARB (not to be confused with liquidity acquisition, this refers to how much liquidity there should be for the actual $ARB token and not just TVL on the chain)
Below we have compiled a list of questions that we believe fall into each bucket.
User Acquisition
- What type of users was this program attracting?
- Mercenary miners?
- What info can we pull on wallets to start classifying these users into segments, to better understand the user base and their on-chain actions?
- How new is the wallet?
- How was money transferred to this wallet?
- Are they doxxed?
- Were they already on Arbitrum?
- Are they new to DeFi?
- How much money do they have?
- Did they hold ARB before the program started?
- What did they do with the ARB that they received/bought/acquired?
- How much ARB are they still holding 1 month after the program? 3 months?
- How much did they interact with Arbitrum and what type of interactions did they have?
- How much overlap was there with other protocols that received STIP funding?
- What is the lifetime value of a new DeFi user and a new Arbitrum user?
- How do we start to assign values to these users to better inform ROE?
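The segmentation questions above could be answered by a simple wallet classifier. The sketch below is purely illustrative: the `Wallet` fields, the 30-day "new wallet" threshold, and the segment names are our assumptions about what a data provider could pull on-chain, not a real schema or an agreed methodology.

```python
from dataclasses import dataclass

# Hypothetical per-wallet snapshot; every field name here is an assumption.
@dataclass
class Wallet:
    age_days: int               # days since the wallet's first on-chain transaction
    funded_via_cex: bool        # was the first deposit from a centralized exchange?
    was_on_arbitrum: bool       # active on Arbitrum before the program started?
    held_arb_before: bool       # held ARB before incentives began?
    arb_held_after_90d: float   # ARB still held 3 months after the program ended

def classify(w: Wallet) -> str:
    """Toy heuristic mapping the questions above onto coarse user segments."""
    if not w.was_on_arbitrum and w.age_days < 30:
        return "new-to-defi"          # fresh wallet with no Arbitrum history
    if w.held_arb_before or w.arb_held_after_90d > 0:
        return "aligned-holder"       # kept (or already had) ARB exposure
    return "mercenary"                # sold all incentives, no prior ARB
```

Once wallets are bucketed this way, assigning a lifetime value per segment becomes a matter of averaging fees and retention within each segment.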
Liquidity Acquisition
- How much liquidity was acquired during the program?
- On average
- At the peak
- How was it dispersed? (how many wallets and concentration in each)
- How efficiently was the liquidity acquired?
- How much liquidity remained after the incentives ended?
- 1 day, 1 month, 6 months, etc.
- How efficiently was that liquidity used?
- What type of tokens
- Trade volume through those pairs
- What won out as the best strategy?
- Why do we think that?
- What was the worst strategy?
- What led to this performing poorly?
- What outcomes can we derive from the strategy’s success?
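The retention and efficiency questions in this bucket reduce to two simple ratios. This is a minimal sketch under assumed inputs (a daily TVL series in USD and a total ARB spend); the function names and the "retained share of end-of-program TVL" definition are our own framing, not an established standard.

```python
def retention(tvl: list[float], program_end: int, horizon: int) -> float:
    """Share of end-of-program liquidity still present `horizon` days later.

    tvl: daily TVL samples (USD) for one incentivised pool.
    program_end: index of the last day of the incentive program.
    """
    if tvl[program_end] == 0:
        return 0.0
    later = min(program_end + horizon, len(tvl) - 1)
    return tvl[later] / tvl[program_end]

def acquisition_efficiency(avg_tvl_gain_usd: float, arb_spent: float) -> float:
    """USD of average TVL gained per ARB of incentives spent."""
    return avg_tvl_gain_usd / arb_spent
```

Computing these at 1 day, 1 month, and 6 months for every funded pool would give directly comparable "stickiness" numbers across programs.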
Volume / Sequencer Fees
- How much volume did this program create?
- How much volume did the protocols see on what they were incentivising before, during, and after the program?
- How much money was generated via sequencer fees?
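One crude way to answer the before/during/after question is to treat pre-program average volume as a baseline and credit anything above it to the incentives. The sketch below assumes that framing; the flat-baseline attribution and the average-fee estimate are simplifying assumptions, not a proven methodology.

```python
def incremental_volume(before: list[float], during: list[float]) -> float:
    """Total volume above the pre-program daily baseline.

    Baseline = average daily volume before the program; anything above it
    during the program is (crudely) attributed to the incentives.
    """
    baseline = sum(before) / len(before)
    return sum(v - baseline for v in during)

def sequencer_fee_estimate(incremental_txs: float, avg_fee_usd: float) -> float:
    """Rough sequencer revenue attributable to the extra activity."""
    return incremental_txs * avg_fee_usd
```

A more careful version would control for market-wide volume trends rather than assume a flat baseline.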
Innovation / Protocols
- What do we believe that the chain is looking for in terms of innovation?
- Is this new to the chain?
- Is this new to DeFi?
- Who is this innovation for? (a specific niche of users?)
- Is your product battle tested?
- What is the risk of this type of innovation?
- What makes their program unique?
- Are protocols more competitive or complementary?
- How long has the protocol been around?
- How large is the protocol?
- What category/categories does the protocol fall into?
Liquidity for ARB
- Does Arbitrum have enough liquidity?
- Should Arbitrum focus more on DeFi liquidity?
- How many emissions should go toward incentivising liquidity for the $ARB token?
- Is the ARB treasury diversified enough?
- What should Arbitrum hold?
When all of these questions are answered, the outputs can be used to create a formula that determines the value of each unique user, of liquidity, and of innovation. This will allow the DAO to answer many more questions. For instance:
- Based on our KPIs, how successful was this program compared to others?
- How can we replicate this success for a more ongoing incentive strategy?
- Based on our metrics, what do we consider to be success?
- Was this program a success?
- How large was your grant compared to the size of your protocol and could that have affected the result?
- Can we avoid overlap between certain protocols?
- Should each STIP or LTIPP be aimed at a different market?
- Which protocols should continue to be funded?
- Does it make sense to run incentives in strict groups?
- Should different programs be run for each class of business that the DAO needs?
- Should there be a cap on each class of business to avoid overlap / internal competition?
- What is the correlation between size of protocol and effectiveness of the grant?
- What is the correlation between size of the grant and effectiveness of the grant?
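The cross-program comparisons above require collapsing the five buckets into one score. A minimal sketch of such a formula is below; the weights and the assumption that each bucket can be normalised to a 0-1 score are placeholders the DAO would need to debate, not a recommendation.

```python
# Illustrative bucket weights -- actual weights would come from DAO consensus.
WEIGHTS = {
    "user_acquisition": 0.30,
    "liquidity_acquisition": 0.30,
    "volume_fees": 0.20,
    "innovation": 0.10,
    "arb_liquidity": 0.10,
}

def program_score(bucket_scores: dict[str, float]) -> float:
    """Weighted sum of normalised (0-1) bucket scores.

    Collapsing every program to one number lets differently shaped
    programs be ranked on a single scale.
    """
    return sum(WEIGHTS[k] * bucket_scores.get(k, 0.0) for k in WEIGHTS)
```

With a score per program, correlating it against grant size or protocol size becomes a straightforward regression.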
Given how in-depth each of these questions is, it is our opinion that answering them could cost the whole 200k $ARB, or more. We would recommend allocating $ARB amounts to each of the buckets; different service providers could then come in and solve different buckets, focusing on their areas of expertise.
For example, Serious People would be well equipped to answer questions regarding liquidity, but not so much regarding user tracking. This framework allows us to put the best players in each section.
We also will need to continue funding research as a DAO; this data is imperative to measure success but is only the first step. We should be thinking about how we can refill this research bounty bucket, or carve out a larger bucket for funding continuous research.
We also understand that some of these questions will be difficult, or perhaps impossible, to answer given the tools the space has access to at this time. We will want a review process in place to ensure a service provider feels set up for success and has a clear path forward on how to capture, analyze, and present the data.
What did we miss and how can we improve this?
We have outlined many of the questions that are top of mind for us as we look at proposals, along with what we believe is needed from the DAO. We are looking for feedback on which questions or categories are missing, and on whether there is a better way to organize the buckets we have.