Serious People: Research Bounties for LTIPP


@SeriousIan @SeriousTaylor @SeriousKeith

Research Bounties for LTIPP

We believe that the research bounty section of the LTIPP can be used to help create and calculate KPIs, allowing the various programs to be compared on an even playing field. There are hundreds of questions that we are sure everyone would love answered; the ones we decided to prioritize relate specifically to how we are looking at the STIP and LTIPP that the Arbitrum DAO is running.

Looking at the example questions that Matt posted, a few things became apparent to us:

  1. These questions can be categorized into a few buckets based on their primary subject matter. If a team can easily pull a certain type of data, it should be able to answer most of the questions in that category.
  2. Some of these questions need to be answered first in order for the others to be answered later. In other words, there is a forced prioritization of the questions. For example, in order to answer a question regarding the appropriate size for a grant program, one would first need to understand how well the program worked initially.

The buckets that we have created are:

  1. User Acquisition
  2. Liquidity Acquisition
  3. Volume / Sequencer fees created
  4. Innovation of protocols
  5. Liquidity for $ARB (not to be confused with liquidity acquisition; this refers to how much liquidity there should be for the actual $ARB token, not just TVL on the chain)

Below we have compiled a list of questions that we believe would fall into each bucket.

User Acquisition

  • What type of users was this program attracting?
    • Mercenary miners?
  • What info can we pull on wallets to start classifying these users into different segments, to better understand the user base and their on-chain actions?
    • How new is the wallet?
    • How was money transferred to this wallet?
    • Are they DOXed?
    • Were they already on Arbitrum?
    • Are they new to DeFi?
    • How much money do they have?
    • Did they hold ARB before the program started?
    • What did they do with the ARB that they received/bought/acquired?
    • How much ARB are they still holding 1 month after the program? 3 months?
    • How much did they interact with Arbitrum and what type of interactions did they have?
    • How much overlap was there with other protocols that received STIP funding?
  • What is the lifetime value of a new DeFi user and a new Arbitrum user?
  • How do we start to assign values to these users to better inform ROE?
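To make the wallet-classification questions above concrete, here is a rough Python sketch of how wallets could be bucketed into segments. The fields and thresholds (wallet age, funding source, prior Arbitrum activity, balance) are illustrative assumptions, not a real indexer schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Wallet:
    first_tx_date: date     # when the wallet first transacted anywhere (assumed field)
    funding_source: str     # "cex", "bridge", or "transfer" (assumed field)
    prior_arbitrum_tx: int  # transactions on Arbitrum before the program
    balance_usd: float      # approximate portfolio value

def classify(w: Wallet, program_start: date) -> str:
    """Assign a coarse segment label for KPI aggregation (illustrative rules)."""
    age_days = (program_start - w.first_tx_date).days
    if age_days < 30 and w.funding_source == "cex":
        return "new_to_defi"       # fresh wallet funded straight from an exchange
    if w.prior_arbitrum_tx == 0:
        return "new_to_arbitrum"   # existing DeFi user, first time on Arbitrum
    if w.balance_usd > 1_000_000:
        return "whale"
    return "existing_user"
```

In practice each rule would need much more signal (bridged history across chains, clustering of related addresses, etc.), but even a coarse bucketing like this would let the answers be aggregated per segment.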

Liquidity Acquisition

  • How much liquidity was acquired during the program?
    • On average
    • At the peak
    • How was it dispersed? (how many wallets and concentration in each)
  • How efficiently was the liquidity acquired?
  • How much liquidity remained after the incentives ended?
    • 1 day, 1 month, 6 months, etc.
  • How efficiently was that liquidity used?
    • What type of tokens
    • Trade volume through those pairs
  • What won out as the best strategy?
    • Why do we think that?
  • What was the worst strategy?
    • What led to this performing poorly?
  • What outcomes can we derive from the strategy’s success?
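As one example of how the retention questions above could be answered, here is a minimal sketch that measures what share of end-of-program TVL remains 1 day, 1 month, and 6 months later. The daily TVL series and the day offsets are assumed inputs:

```python
def retention(tvl_by_day: list[float], program_end: int,
              offsets: tuple[int, ...] = (1, 30, 180)) -> dict[int, float]:
    """Share of end-of-program TVL remaining N days after incentives stop.

    tvl_by_day: daily TVL in USD, indexed from program start (assumed input).
    program_end: index of the last incentivised day.
    """
    baseline = tvl_by_day[program_end]
    out = {}
    for n in offsets:
        idx = program_end + n
        # skip offsets we don't have data for yet
        if idx < len(tvl_by_day) and baseline > 0:
            out[n] = tvl_by_day[idx] / baseline
    return out
```

A retention value near 1.0 would suggest sticky liquidity; a rapid fall toward 0 would point at mercenary capital.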

Volume / Sequencer Fees

  • How much volume did this program create?
    • How much volume did the incentivised activity have before, during, and after the program?
  • How much money was generated via sequencer fees?
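A minimal sketch of the before/during/after volume comparison, assuming a daily volume series and the program's start/end indices are available:

```python
def _avg(xs: list[float]) -> float:
    return sum(xs) / len(xs) if xs else 0.0

def volume_lift(daily_volume: list[float], start: int, end: int) -> dict[str, float]:
    """Average daily volume in each phase of an incentive program.

    daily_volume: assumed daily series (USD); start/end bound the program days.
    """
    return {
        "before": _avg(daily_volume[:start]),
        "during": _avg(daily_volume[start:end]),
        "after": _avg(daily_volume[end:]),
    }
```

The same three-phase split could be applied to sequencer fee revenue to estimate how much of the incentive spend flowed back to the chain.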

Innovation / Protocols

  • What do we believe that the chain is looking for in terms of innovation?
  • Is this new to the chain?
  • Is this new to DeFi?
  • Who is this innovation for? (specific niche of users)
  • Is your product battle tested?
  • What is the risk of this type of innovation?
  • What makes their program unique?
  • Are protocols more competitive or complementary?
  • How long has the protocol been around?
  • How large is the protocol?
  • What category/categories does the protocol fall into?

Liquidity for ARB

  • Does Arbitrum have enough liquidity?
  • Should Arbitrum focus more on DeFi liquidity?
  • How many emissions should go toward incentivising liquidity for the $ARB token?
  • Is the Arbitrum treasury diversified enough?
  • What should Arbitrum hold?

When all of these questions are answered, the outputs can be used to create a formula that determines the value of each unique user, liquidity, and innovation. This will allow the DAO to find answers to many more questions. For instance:

  • Based on our KPIs, how successful was this program compared to others?
  • How can we replicate this success for a more ongoing incentive strategy?
  • Based on our metrics, what do we consider to be success?
  • Was this program a success?
  • How large was your grant compared to the size of your protocol and could that have affected the result?
  • Can we avoid overlap between certain protocols?
  • Should each STIP or LTIPP be aimed at a different market?
  • Which protocols should continue to be funded?
  • Does it make sense to run incentives in strict groups?
  • Should different programs be run for each class of business that the DAO needs?
  • Should there be a cap on each class of business to avoid overlap / internal competition?
  • What is the correlation between size of protocol and effectiveness of the grant?
  • What is the correlation between size of the grant and effectiveness of the grant?
  • ETC…

Given how in-depth each of these questions is, it is our opinion that answering them could cost the full 200k $ARB, or more. We would recommend allocating $ARB to each of the buckets, so that different service providers can come in and solve different buckets, focusing on their areas of expertise.

For example, Serious People would be well equipped to answer questions regarding liquidity, but not so much regarding user tracking. This framework allows us to put the best players in each section.

We also will need to continue funding research as a DAO; this data is imperative to measure success but is only the first step. We should be thinking about how we can refill this research bounty bucket, or carve out a larger bucket for funding continuous research.

We also understand that some of these questions will be difficult, or maybe impossible, to answer given the tools the space has access to at this time. We will want a review process in place to ensure a service provider feels set up for success and has a clear path forward on how to capture, analyze, and present the data.

What did we miss and how can we improve this?

We have outlined many of the questions that are top of mind for us as we look at proposals and what we believe is needed from the DAO. We are looking for feedback on what questions or categories are missing or if someone believes that there is a better way to organize the buckets that we have.


Hello, serious people!
Interesting questions, but I have my own questions for them.

  • How do we understand how new a wallet is? Most users have several addresses.
  • How do we understand whether they were already on Arbitrum, if they have many addresses?
  • How do we understand whether they are new to DeFi, how much money they have, and so on?

There are many questions which we probably won’t be able to answer.
My second set of questions is:
How do we apply these answers to determining effectiveness, and how do we calculate KPIs? What is better: receiving funds from another address, via a bridge, or from a CEX?


Hey @cp0x thank you for the feedback!

When it comes to user data we honestly have a lot of the same questions. Wanted to be thorough as the tools for finding info on users are getting better every day. These lists were meant to be optimistic as we can always remove fields that end up being too difficult to find. We are hoping we get experts in different verticals to help build on and refine the starting buckets that we presented.

We have a post about how we would approach calculating KPIs that might answer your last question: Serious People: Proposed KPIs for Arbitrum Grant Programs

Happy to answer any other questions here or on one of the group calls in the future!


Hey Serious People,

This is a great expansion of the research bounties for LTIPP. I would agree, 200K ARB is likely not enough to cover these questions, and it makes me think this should be expanded outside of the LTIPP. I also second the idea of funding service providers for buckets.

Do we have a research working group atm? This should likely be an ongoing workstream, given that new questions will continue to pop up as we gather and analyze more data.


I have asked ChatGPT to expand on these questions:

In the context of Innovation/Protocols, the questions related to the LTIPP and research bounties can be effectively addressed as follows:

  1. What do we believe that the chain is looking for in terms of innovation? The chain is likely seeking innovations that enhance scalability, interoperability, security, and usability within its protocol. These innovations may also aim to address specific pain points or limitations within the existing protocol architecture.
  2. Is this new to the chain? The degree of novelty of the innovation to the chain would depend on the specific details of the proposed innovation. While the chain may have previously implemented various innovations, the specific nature and novelty of the current innovation would need to be evaluated to determine its level of newness to the chain.
  3. Is this new to DeFi? The novelty of the innovation within the broader DeFi ecosystem would depend on its specific features and functionalities. It would be important to assess whether similar solutions have been previously implemented within the DeFi space and to what extent the proposed innovation offers unique capabilities.
  4. Who is this innovation for? (specific niche of users) The innovation may be targeted towards specific user segments within the DeFi ecosystem, such as liquidity providers, traders, developers, or decentralized application (dApp) users. Understanding the specific target audience for the innovation is crucial for effective implementation and adoption.
  5. Is your product battle tested? The level of testing and validation of the product or innovation would determine its readiness for deployment. Conducting thorough testing, including stress testing and security audits, is essential to ensure the robustness and reliability of the innovation.
  6. What is the risk of this type of innovation? Assessing the potential risks associated with the innovation, such as security vulnerabilities, network disruptions, or unintended consequences on existing protocols, is essential for informed decision-making. Understanding and mitigating these risks is crucial for successful implementation.
  7. What makes their program unique? The uniqueness of the program may stem from its differentiated features, technological advancements, novel consensus mechanisms, or its strategic positioning within the DeFi landscape. Articulating its unique value proposition would be essential to distinguish it from existing protocols.
  8. Are protocols more competitive or complementary? Understanding the competitive landscape and potential synergies with existing protocols is critical for evaluating the positioning of the protocol within the broader DeFi ecosystem. Assessing whether the protocol competes directly with existing solutions or offers complementary functionalities is crucial for strategic planning and partnership opportunities.
  9. How long has the protocol been around? Evaluating the maturity and history of the protocol is essential for understanding its evolution, track record, and resilience in the face of market dynamics and technological advancements.
  10. How large is the protocol? Assessing the scale and adoption of the protocol within the DeFi ecosystem would provide insights into its market presence, user base, and potential impact on the broader ecosystem.
  11. What category/categories does the protocol fall into? Categorizing the protocol in terms of its core functionalities (e.g., lending, decentralized exchange, asset management) and identifying its specific use cases and applications within the DeFi ecosystem would provide clarity on its role and positioning within the market.

Personally, I believe it’s very important to consider the team and the team’s experience.

The experience of the team is still a crucial factor in influencing the outcome. The collective expertise, track record, and complementary skills of the team can greatly impact the success of the program, which in turn can influence the decisions made and the overall performance of the LTIPP.

A team with a strong background in their respective fields may be more likely to make sound decisions, which can contribute to the overall success of the LTIPP.

Additionally, the experience and success history of the team may make the LTIPP more attractive to potential participants and investors, thus influencing their interest in the program and potentially affecting the value of the LTIPP. Therefore, the experience of the team plays a significant role in shaping the performance and perception of the LTIPP, ultimately impacting its success.

Mentioned this in KPI / innovation research into grant recipients / programs · Issue #18 · OpenDataforWeb3/DataGrantsforARB · GitHub

Please provide feedback 🙂

Research can be done answering the questions you outline here, but to be useful for the future, a key consideration needs to be made:

Every research bounty ought to be scientific. Scientific research can be validated and replicated, and, within the limitations defined by the research, it can also be improved or scaled further by another researcher.

If answers to those questions are a priority, waiting for tools to get better is not ideal. The research happens in a specific time period with specific tools and approaches. If the chosen tool improves along the way, that’s great, but the research methodology should have defined which tool is used and the reasoning behind it.

Following a scientific approach will allow the DAO to take those initial findings to set an initial context that is evidence-supported and which then can be improved as needed.