[RFC] Incentives Detox Proposal

Below is the recording of the third call, the call’s transcript, and the chat log. You will also find notes on the topics/questions we discussed and a summary of the views expressed. In the third call, we had Matt from StableLab present the LTIPP Council’s perspective on incentives.

Links

Recording - Liquidity Incentives Call #3 (4.9.2024)
Transcript - Liquidity Incentives Call #3 (4.9.2024)
Chat Log - Liquidity Incentives Call #3 (4.9.2024)

Summary

The things that were discussed were:

  • On operational challenges faced by the advisors and the council, the feasibility of advising 180 protocols with three people was questioned, highlighting the time-consuming nature of the process.
  • In response, the variability in the time required to advise protocols, depending on their experience and preparedness, was also discussed.
  • The need for a coordination layer within the DAO to support multiple grant programs and
    initiatives.
  • The need for a more holistic approach to support protocols and ensure alignment with DAO objectives.
  • The need for a smaller, dedicated group to research and propose the new incentives program.

From the presentation of Matt from StableLab, we discussed:

  • The DAO voted on 95 proposals, leading to chaos and poor incentive design choices.
  • Operational challenges included KYC delays, lack of clear goals, and ineffective
    decision-making by the multi-sig team.
  • The PM was overwhelmed with managing 75 protocols and 70 million ARB.
  • Advisors were added to help protocols design incentives, but protocols missed deadlines
    and couldn’t alter proposals.
  • KYC delays and fluctuating ARB prices led to misjudged goals and confusion for
    delegates.
  • The lack of data funding and a council made it difficult to monitor and approve proposals.
  • The STIP Bridge program was created to address the disadvantages faced by protocols applying to LTIPP.
  • Advisors helped with STIP Bridge addendums, but the program had a short incentive period and no
    guidelines for ARB use.
  • Delegates were hesitant to post challenges due to unclear past performance metrics.
  • The lack of objectives, operational structure, marketing, and enough people to manage the programs was highlighted.
  • Timelines were unclear and too short, leading to delays and shortened incentives periods.
  • Protocols were unsure about future incentives and faced competition dynamics like a race-to-zero on protocol fees.
  • Having a single bucket of incentives for all protocols made it hard to create a standardized rubric and guardrails.

In terms of ideas for future programs:

  • A new program should have specific goals derived from the DAO’s objectives, with different
    programs for different protocol categories.
  • A marketing team and watchdog should be introduced to improve visibility and monitor for misuse of funds.
  • The program should be iterative, allowing for changes based on feedback and performance.
  • It was suggested that some tasks, like gathering delegate feedback and defining DAO goals, should be handled by a dedicated incentive team.
  • The need for a one-year program to reduce the rush and improve the process.
  • The idea of having verified experts as advisors was proposed to improve the quality of advice and reduce the workload for DAO contributors and delegates.
  • The importance of aligning the program with DAO goals and ensuring clear operational
    costs and structures.

For next call:

As with previous calls, we encourage participants to prepare for the next one and share their visions and lessons learned with the rest of the working group. If you have an analysis, a report, or other findings to present during one of the upcoming calls, please contact Sinkas on Telegram.

5 Likes

@Matt_StableLab would you be okay sharing the presentation file that you presented on this call last week?

3 Likes

@paulofonseca Yeah, here is the presentation Arbitrum Incentives - Google Slides

4 Likes

Below is the recording of the fourth call, the call’s transcript, and the chat log. You will also find notes on the topics/questions we discussed and a summary of the views expressed. In the fourth call, we had Tnorm, the man who spearheaded STIP, present his views and learnings.

Recording - Liquidity Incentives Call #4 (11.9.2024)
Transcript - Liquidity Incentives Call #4 (11.9.2024)
Chat Log - Liquidity Incentives Call #4 (11.9.2024)

Summary

The things that were discussed were:

  • The importance of having protocol representatives share their perspectives during the calls.
  • The feasibility of a small grants program, noting the challenges of scaling and the high volume of applications, was questioned.
  • The classification of projects requires better definition, and the goals for different types of projects should be clearly stated.
  • The need for a better definition of “good projects” and the importance of metrics in evaluating project success.

From the presentation of Tnorm, we discussed:

  • STIP was designed as an emergency program to unlock capital and support the ecosystem. It was set up quickly with stakeholder input and prioritized speed over practicality.
  • The need to return to first principles and strategic thinking around incentives and incentive programs.
  • Onchain incentives aim to create actions onchain by making transactions cheaper and/or more attractive. There’s a spectrum of actions that can be incentivized.
  • Objective-based grants, which can target specific actions like buying, selling, depositing, liquidity provision, and so on.
  • Direct-to-contract incentives, which could offer granular, targeted incentives at the contract level. (more on this here).
  • The differences between strategy and tactics in incentive programs, emphasizing the
    importance of clear goals, KPIs, and objectives.
  • The three main stakeholder groups that Tnorm identifies: network stakeholders, application stakeholders, and user stakeholders.
  • The traditional program structure, where incentives flow from a treasury to decision-makers, who then distribute them to projects.
  • Insights from Tnorm’s interviews with delegates, highlighting the need for clear processes to serve different project buckets.
  • Three main buckets for project-based incentive grants: small and early-stage projects, scaling projects, and large, established projects.
  • The challenges of defining small projects and the need for a council to handle high-volume, low-admin grants.
  • The need for higher oversight and clear KPIs for scaling projects, which aim to support projects with proven product-market fit.
  • A partnership budget approach for large projects, where the DAO provides strategic support and high-level services.
  • The importance of objective-based incentive programs, which focus on specific metrics like user incentives and native project support.
  • The need for sophisticated user segmentation and the potential for targeted campaigns to attract valuable users.
  • The importance of native token minting and strategic asset support as key objectives for incentive programs.
  • The concept of network-wide campaigns, which could incentivize actions across multiple protocols and verticals.
  • The use of service providers to support sophisticated analytics and distribution of incentives, at least for specific verticals.
  • The need for tooling improvements to support innovative approaches, such as direct-to-contract incentives, which could eliminate the need for protocol-specific grants and support market dynamics.

In terms of ideas for future programs:

  • The need for a layer of accountability and authoritative power to make decisions on project funding was highlighted.

For the next call:

As with previous calls, we encourage participants to prepare for the next one and share their visions and lessons learned with the rest of the working group. If you have an analysis, a report, or other findings to present during one of the upcoming calls, please contact Sinkas on Telegram.

Please note that the next call will be on Wednesday 25/9 as we’ll skip this week’s call due to Token2049.

4 Likes

Below is the recording of the fifth call, the call’s transcript, and the chat log. In the fifth call, we had OpenBlock Labs present the dashboards they have created to track incentives.

Recording - Liquidity Incentives Call #5 (25.9.2024)
Transcript - Liquidity Incentives Call #5 (25.9.2024)
Chat Log - Liquidity Incentives Call #5 (25.9.2024)

For the next call:

As with previous calls, we encourage participants to prepare for the next one and share their visions and lessons learned with the rest of the working group. If you have an analysis, a report, or other findings to present during one of the upcoming calls, please contact Sinkas on Telegram.

1 Like

Apologies for the delay in posting the last 2 recordings. Publishing them in a batch in this comment.

Below are the recordings of the sixth and seventh calls, the calls’ transcripts, and the chat logs. You will also find notes on the topics/questions we discussed in each call and a summary of the views expressed.

6th Call

For the 6th call, we had Darius from Vertex (perp DEX) offer their perspective on incentives as a protocol that received incentives. We also had Coinflip from GMX briefly share their perspective.

Recording - Liquidity Incentives Call #6 (2.10.2024)
Transcript - Liquidity Incentives Call #6 (2.10.2024)
Chat Log - Liquidity Incentives Call #6 (2.10.2024)

Summary

The things that were discussed were:

  • Vertex’s initial proposal aimed to cut fees and use ARB tokens as collateral for market
    makers to build liquidity. The proposal was modified due to structural constraints, leading to a less effective version that focused on volume-based incentives.
  • It was suggested that protocols should have more flexibility in structuring their proposals
    to better align with DAO goals.
  • The initial STIP allocation was influenced by the timing and strategy of other protocols,
    leading to a more adversarial environment.
  • The execution of the STIP program was generally fair, but the lack of clear DAO
    objectives made it difficult to compare results across different protocols.
  • It was suggested that a top-down prioritization from the DAO level could have provided
    clearer objectives and made the process more efficient.
  • In Vertex’s case, Darius explained that trading firm integrations were key to long-term stickiness, as these firms tend to trade frequently and pay trading fees. Retail users were less sticky, but incentives contributed significantly to Vertex’s TVL growth.
  • The value of STIP rewards declined over time due to the decrease in ARB token value,
    affecting the overall impact of the incentives.
  • GMX’s program included a budget for supporting other protocols, helping them present
    their case to the DAO and become part of the Arbitrum ecosystem.
  • The success of GMX’s program was partly due to the reduction in available funds for subsequent rounds, indicating a maturing ecosystem.
  • The integration of large traders and the multiplier effect of their activities were crucial for growth and liquidity.
  • Less developed ecosystems face significant challenges in attracting capital and liquidity.
  • The issue of aligning incentives with DAO objectives and the complexity of the current process was raised. There is a challenge in matching the DAO’s goals with the specific needs of protocols, making it difficult to design effective incentive programs.
  • There’s a need for better coordination and signaling from the DAO to guide market demand and support infrastructure development.

In terms of ideas for future programs:

  • The importance of clear DAO objectives and more autonomy for protocols to design effective strategies was emphasised.
  • A blended approach with both clear DAO directives and open competitions could balance the need for alignment with the flexibility for innovative strategies.
  • Infrastructure improvements, such as support for native assets and better custodial
    services, are essential for attracting and retaining liquidity.

7th Call

For the 7th call, we had Kamil from TokenGuard, which focuses on analyzing user behavior and conversion in blockchain systems, present their insights from LTIPP.

Recording - Liquidity Incentives Call #7 (9.10.2024)
Transcript - Liquidity Incentives Call #7 (9.10.2024)
Chat Log - Liquidity Incentives Call #7 (9.10.2024)

Summary

The things that were discussed were:

  • Kamil shared his screen and presented an overview of the metrics from two protocols (Across and Delta Prime) that received incentives during LTIPP.
  • There was a 15% increase in stablecoin inflow to Arbitrum One through Across during LTIPP.
  • The number of users using Across to move funds from other blockchains to Arbitrum
    grew eightfold, with interactions increasing threefold.
  • The average deposit value in Across was lower during LTIPP than pre-LTIPP: the number of users increased significantly, but each user deposited less on average.
  • Users acquired through Across were extremely active within the Arbitrum ecosystem, using multiple applications.
  • The number of unique users and interactions on Delta Prime increased dramatically by 400%, but returned to baseline after the incentive period ended.
  • The number of deposits in Delta Prime increased threefold, and the average value of a deposit grew by
    39%.
  • Delta Prime’s TVL increased 300% during LTIPP but has since dropped to 150% of its pre-program baseline.
  • Kamil summarized that Across had a better impact on the Arbitrum ecosystem compared to Delta Prime. Across spent 200k ARB, resulting in a direct ROI of 0.1, while Delta Prime spent 400k ARB, resulting in a direct ROI of 0.01. ROI is calculated based on the amount of money invested and the fees generated.
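
The call did not spell out the exact formula behind these ROI figures. As a minimal sketch, assuming “direct ROI” simply means fees generated divided by the dollar value of the ARB distributed (the ARB price and fee totals below are hypothetical, chosen only to reproduce the quoted ratios):

```python
# Minimal sketch of the "direct ROI" figure quoted above.
# Assumption (not stated in the call): direct ROI = fees generated / value of incentives spent,
# with both sides denominated in the same currency (e.g. USD).

def direct_roi(fees_generated_usd: float, arb_spent: float, arb_price_usd: float) -> float:
    """Fees generated per dollar of incentives distributed."""
    return fees_generated_usd / (arb_spent * arb_price_usd)

# Hypothetical illustration: an assumed ARB price of $0.50 and fee totals chosen
# to reproduce the ROI figures mentioned in the call.
print(direct_roi(fees_generated_usd=10_000, arb_spent=200_000, arb_price_usd=0.50))  # Across: 0.1
print(direct_roi(fees_generated_usd=2_000, arb_spent=400_000, arb_price_usd=0.50))   # Delta Prime: 0.01
```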

In terms of ideas for future programs:

  • The importance of measuring claimable rewards to understand which users know about the program was brought up.
  • Setting long-term goals and segmenting products to design specific rewarding mechanisms for different user groups was discussed.
  • Brainstorming sample KPIs and goals for future incentive programs was encouraged.
  • The importance of setting North Stars to define KPIs and judge progress was also mentioned.

For the next call:

As with previous calls, we encourage participants to prepare for the next one and share their visions and lessons learned with the rest of the working group. If you have an analysis, a report, or other findings to present during one of the upcoming calls, please contact Sinkas on Telegram.

2 Likes

8th Call

For the 8th call, we had Powerhouse, GMX, and Camelot talk about their experience with and perspective on incentives.

Recording - Liquidity Incentives Call #8 (16.10.2024)
Transcript - Liquidity Incentives Call #8 (16.10.2024)
Chat Log - Liquidity Incentives Call #8 (16.10.2024)

Summary

The things that were discussed were:

  • The complexity of prior reporting workflows, with LTIPP having 85 grantees and 510 bi-weekly reports to monitor.
  • Using the forum to provide updates led to issues with scalability and discoverability, as well as poor UX and data standardization problems.
  • Powerhouse introduced software workflows to standardize data, using ARB signers for permission and providing a structured data view for program managers. The system allowed for real-time monitoring and compliance, significantly improving efficiency and accuracy.
  • Key outcomes included 95% of projects creating a document and 86% submitting at least one report, with improved efficiency and accuracy for program managers. User-submitted data was available for export to the ARDC.
  • The importance of simulating operational workflows in advance and considering software design development margins.
  • The challenges, including overlapping QA periods with live operations and difficulties with communications and compliance.
  • For GMX, key highlights of STIP and STIP Bridge included a surge in TVL from $80 million to $400 million, significant liquidity growth, and increased trading volume and market depth.
  • Key challenges included yield hopping, higher rebates attracting traders dependent on larger budgets, and the impact of declining ARB prices on rebate benchmarks.
  • Both programs (STIP and STIP Bridge) effectively supported liquidity growth, trader retention, and innovation on v2, despite challenges.
  • The importance of aligning incentives with core objectives rather than just metrics.
  • Camelot saw the incentives as more damaging due to the race-to-zero price wars among protocols, leading to unsustainable practices.
  • Incentives also facilitated collaboration and creativity among protocols, highlighting the potential positive impacts.

In terms of ideas for future programs:

  • Recommendations include utilizing software workflows or a consolidated homepage for user experience, automating data sources, and forming an operational team to support large numbers of grantees.
  • Future programs should be longer and include more time for grantees to complete milestones.
  • The focus should be on bringing new protocols to the chain and supporting new partnerships and integrations, rather than maximizing metrics.
  • The importance of involving protocols early in the program design process to ensure alignment with their needs and objectives.
  • Announcing future programs to increase engagement and participation from protocols.
  • The need for better communication and collaboration between protocols, the DAO, OCL, and AF to ensure effective program design and implementation.
  • The importance of setting clear goals and aligning incentives with long-term value rather than just short-term metrics.

For the next call

As with previous calls, we encourage participants to prepare for the next one and share their visions, lessons learned and ideas with the rest of the working group. If you have an analysis, a report, or other findings or ideas to present during one of the upcoming calls, please contact Sinkas on Telegram.

3 Likes

Retrospective Analysis: Importance of developing Incentive Goals

STIP (~ Nov 2023 - Feb 2024), STIP.b (~ Dec 2023 - March 2024), and LTIPP (~June 3 - Aug 1 2024) have all now come to a close, distributing on the order of ~100M ARB to participants. A flurry of discussion and data analysis around these programs has been ongoing throughout the forum and key working group calls, including before, during, and after the campaigns.

Vending Machine has been reviewing the various discussions around the campaigns both here on the forum and during the weekly calls, and has gathered that one of the key goals of the DAO going forward for future campaigns is to define clear campaign objectives before the campaign begins, and outline the relevant KPIs to measure them. There has already been great work done in this vein, both during and just after STIP (for example here, here, here, here and here), as well as in and around the LTIPP (for example here and here).

The overall goal of this brief report is to summarise key findings across the comprehensive list of conducted research, and focus on the following:

  1. Give a high-level outline of how the setup, organization, design process, parameterization process, operational management, and ongoing monitoring and optimization were conducted throughout the campaigns.
  2. Outline, based on the already completed analyses by OpenBlock (such as here, here, here and here), Blockworks, Team Lampros Labs DAO, and others, the objectives we believe were achieved with these campaigns, and what KPIs were used to measure them. Note that the campaigns’ objectives in some cases lacked a detailed, research-informed, iterative process to arrive at a clear definition, specificity, and clarity (though this improved somewhat for LTIPP, where goals such as user engagement, value generation, and protocol growth across various sectors were explicitly outlined).

Setup and Organization

Though it is not perfectly clear how long discussions were ongoing within the DAO surrounding a launch of an incentive campaign, a call for an Arbitrum Incentives Program Working Group was made in August 2023, stating the need for one because “…of the difficult challenge of balancing the urgent desire to fuel network growth via incentives with the equally important need for responsible delegate oversight and community consensus.” A general framework for what a campaign proposal should entail was given.

The associated AIP, the main launchpoint for STIP, was posted to the forum a month later. In this proposal, the goals of the campaign were outlined as follows:

  1. Support Network Growth
  2. Experiment with Incentive Grants
  3. Find new models for grants and developer support that generate maximum activity on the Arbitrum network
  4. Create Incentive Data

A number of constraints on grantees were outlined, covering data requirements for grantees, contract disclosures, no DAO voting with grants, and general good behavior guidelines.

Additionally, it appears that there were some competing opinions within the DAO. Urgency for incentives was also stressed.

Design Process

It seems the early working groups surrounding STIP were formed by Arbitrum DAO members around June-August of 2023, and with the finalized AIP posted in September, it is reasonable to conclude that roughly 1-2 months of deliberation went into the design of STIP.

There seems to be general consensus that both STIP.b and LTIPP were created quite quickly, due to pressure from various competing interests. Basing this also on forum discussion, it seems likely that around 1 month of deliberation went into the design of these programs, perhaps more for LTIPP, though both were heavily influenced by STIP’s design.

LTIPP had roughly 1-2 months of public filtering/iteration on the design (with perhaps more privately), with many more months of deliberation and protocol discussion/voting in the forum. A number of temperature check snapshot votes happened during this period as designs were iterated on.

In LTIPP applications, clear grant objectives and KPIs were required from protocols, which is closer to a bottom-up approach. Many of the outlined objectives coincided with LTIPP’s higher-level goals, which included:

  1. User engagement
  2. Value generation
  3. Protocol growth

Parameterization Process

The total rewards for the campaigns were outlined in forum posts and discussed between participants. They were then approved and iterated on through snapshot votes. The consensus among forum discussion participants originally was that 75M ARB was too much for STIP.

Here it is argued that 75M was necessary, and protocols were involved in this recommendation, as were delegates. A comparison to Optimism’s spend was given. It is unclear whether there was a strong cap or not, but it was stated that “delegates will decide whether an amount is excessive given the scope.”

Eventually, after much discussion in the forum, a Snapshot temperature check included three budget options: 25M, 50M, and 75M ARB. Ultimately, 50M was decided on through Snapshot. A similar vote on incentive size was conducted for LTIPP.

Operational Management

The operational management and difficulties faced throughout these campaigns have been discussed and outlined extensively, as pointed out in the introduction. LTIPP made large strides in a positive direction here, with the introduction of smaller, more focused delegate committees, and there have also already been large strides in defining how the organizational structure should look in the future. As a rough high-level summary, the timeline of the operational flow and the various working groups and their responsibilities were as follows:

  1. The Arbitrum Incentives Working Group kickstarted discussion around the launch of STIP.
  2. Program Management responsibilities were transferred over to StableLab in September 2023.
  3. Conversations between Arbitrum delegates, protocols, and the DAO were ongoing throughout both campaigns.
  4. ARDC (Blockworks) had some data reporting responsibilities.
  5. OpenBlock also shared much of the data tracking and reporting responsibilities.
  6. Arbgrants in conjunction with OpenBlock seemed to manage the data reporting and tracking responsibilities for LTIPP.
  7. Instead of delegates having to manage many concurrent protocol applications and relationships as they did during STIP, more focused committees were developed for LTIPP, which seemed to result in significant operational improvement.

Monitoring and Optimization

Data monitoring, and ensuring funds were being used appropriately, was an explicit priority across all campaigns. Based on discussions in the forum, it was clear many delegates felt overwhelmed by all the responsibility here. Reviewing hundreds of applications and keeping track of nearly as many protocols quickly became unfeasible.

OpenBlock proposed the framework for STIP data monitoring and offered to take on responsibility for it. Throughout STIP, dashboards and bi-weekly updates were provided by OpenBlock. Bi-weekly updates from protocols and final reports from some could be found as well. The same was true for STIP.b.

For LTIPP, protocols were given OBL’s Onboarding Checklist and were required to comply for the entire life of the program and for three months following. Arbgrants was spun up to facilitate this process, to lighten the operational burden on extremely overworked delegates, and to de-clutter the forums. Powerhouse seemed to be the main company behind this development.

Were the Objectives Achieved?

Given the objectives outlined in the launch of the STIP and LTIPP campaigns, we can compare the analysis generated by OpenBlock, Blockworks (ARDC), Lampros, and others against these objectives to see if and how well they were achieved.

Recall, the stated objectives for STIP were:

  1. Support Network Growth
  2. Experiment with Incentive Grants
  3. Find new models for grants and developer support that generate maximum activity on the Arbitrum network
  4. Create Incentive Data

and the stated objectives for LTIPP were:

  1. User engagement
  2. Value generation
  3. Protocol growth

STIP

1. Support Network Growth (not achieved)

According to ARDC’s “STIP Analysis of Operations and Incentive Mechanisms” report:

“Overall, during the STIP, Arbitrum’s market share growth across major blockchains peaked at ~0% for TVL, ~5% for spot volume, ~12% for perp volume, and ~0% for loans outstanding. The market shares are currently at around September 2023 values, except for TVL, which is down from ~6% to ~4%.”

2. Experiment with Incentive Grants (achieved)

From OpenBlock’s STIP Incentive Efficacy Analysis: 30 protocols were allocated a share of 50 million ARB tokens.

3. Find new models for grants and developer support that generate maximum activity on the Arbitrum network (not achieved)

Though experimental data was generated, based on the variety of post-incentive campaign discussion from Tnorm and others posted on the forum, it is clear that the models used for the STIP campaign were in need of an upgrade.

It can be argued, however, that in moving from STIP to LTIPP, new models for grants and developer support were experimented with. Still, STIP, STIP.b, and LTIPP all employed roughly the same template of handing out incentives to protocols.

4. Create Incentive Data (achieved)

This goal was clearly achieved, given the wealth of post-campaign analysis, including dashboards, more formal research reports, and community/forum discussion.

LTIPP

The findings here are drawn from the Team Lampros Labs DAO LTIPP Report and OpenBlock’s LTIPP Efficacy Analysis reports.

1. User Engagement (mixed)

Summarizing the report from Team Lampros Labs: The Long-Term Incentive Pilot Program (LTIPP) on Arbitrum had varying impacts on user engagement across sectors, with the “Quests” sector showing the highest growth in Daily Active Users (DAU). Proprietary incentives were the most successful in driving TVL and engagement, particularly in “Options” and “Oracles.” However, other sectors like “Perpetual” and “Stables/Synthetics” saw limited growth or declines, highlighting the need for more targeted strategies. A significant portion of ARB rewards (47.8%) was used for selling tokens, with other actions like liquidity provision and lending following behind. Unintended behaviors, such as circular transactions and holding rewards without further action, also undermined the program’s effectiveness. Furthermore, users who participated across multiple protocols for short-term rewards had low governance involvement, which poses challenges for long-term engagement.

Though user engagement post-campaign was not an explicit goal, looking at many of the graphs in OpenBlock’s LTIPP Efficacy Analysis, it is clear most metrics have fallen since program completion, including significant capital outflows (except, interestingly, for stablecoins).

2. Value Generation (mixed)

From the start, this objective seems poorly defined. Assuming value generation corresponds to sequencer fees, we can say that the program increased this revenue during the campaign, but post-campaign it has fallen to levels below where it was at the start of the campaign. It is also unclear whether this additional revenue was simply due to other market factors or whether it can be attributed to LTIPP directly.

Many of the data reports did not explicitly analyze “value generation”, potentially because it lacked a clear definition. Interpreting it as sequencer fee generation, reports tended not to include data in this regard; the sequencer fee data referenced here is from Artemis and Token Terminal.

3. Protocol Growth (mixed)

Incentivized DEXs showed superior growth in TVL and fees during LTIPP, exceeding expectations based on TVL and fees on other networks. Yield protocols also saw boosts during the campaign.

However, after the campaign finished, the majority of all relevant protocol growth metrics have fallen.

There were some protocols that seemed to maintain their growth, such as Compound. However, this is mainly because these protocols were onboarded through the LTIPP campaign and would have had zero growth without it (because they would never have been onboarded).

Therefore, the network objectives and whether they were achieved or not can be summarized in the following table:

Objective | Achieved/Not Achieved
Support Network Growth (STIP) | :x:
Experiment with Incentive Grants (STIP) | :heavy_check_mark:
Find new models for grants and developer support that generate maximum activity on the Arbitrum network (STIP) | :x:
Create Incentive Data (STIP) | :heavy_check_mark:
User engagement (LTIPP) | Mixed
Value generation (LTIPP) | Mixed
Protocol growth (LTIPP) | Mixed

Moving Forward

(from Arbitrum Incentives Retrospective Presentation)

It is clear from forum discussion, the mixed results of the stated campaign objectives, the lack of standardization around post campaign analyses, and the drop off of KPIs post-campaign, that the stated network objectives may have been misguided.

Going forward, it is clear that more time and effort spent on the development of campaign incentive goals will be needed. As has been observed, the campaign incentive goals shape the campaign design, which shapes user behavior and, ultimately, the impact of these campaigns on the ecosystem as a whole.

Additionally, with more time and effort spent on developing incentive campaign goals, analyzing their success both during and after the campaign will be easier for the associated research labs. This will allow the Arbitrum DAO, key working groups, and delegates to analyze the success of the campaign more effectively.

Disclosure

Vending Machine was not directly involved in STIP or LTIPP. The analysis presented is based on the current public reports; if something in this summary report is inaccurate or misrepresented, we invite the community to reach out so we can update this post and improve its accuracy.

Resources

STIP original program outline

STIP.b original program outline

LTIPP original program outline

Proposal to Improve Future Incentive Programs - teddy-notional

Discussion: Reframing Incentives on Arbitrum

Serious People: Proposed KPIs for Arbitrum Grant Programs (LTIPP)

A New Thesis for Network-Level Incentives Programs

Learnings from STIP: Community Interview Summaries and Notes

[RFC] Thoughts on the End-Game Perpetual Incentives Program

Memories of a (grantor) cow: thoughts about incentives program and what could be next

GMX Bi-weekly STIP.b Reports

LTIPP Application Template

OpenBlock Labs STIP Efficacy + Sybil Analysis (2/24)

OpenBlock’s STIP Incentive Efficacy Analysis

OpenBlock realtime dashboards

OpenBlock Arbitrum LTIPP Efficacy Analysis

ARDC Research Deliverables

STIP Analysis of Operations and Incentive Mechanisms

Team Lampros Labs DAO - LTIPP Research Bounty Reports

Arbitrum Incentives Program - Working Group

Tnorm reply on original STIP proposal

Tnorm on STIP ARB Allocation

Comparison to Optimism’s Incentive Spend

50M ARB STIP spend snapshot vote result

StableLab Engagement

ArbGrants

Arbitrum Incentives Retrospective Presentation

4 Likes

Dear friend, I totally agree with you :+1:

Let’s praise Arbitrum for creating a powerful mission/vision!

“Making blockchain technology more inclusive and sustainable for everyone, in a secure way.”

I also enjoyed the following post; it’s one of my favorites, just in case you missed it :smiley:

1 Like

This is excellent analysis and I highly appreciate your input to the working group call this week! I hope to see you all lead a workstream as suggested here. Please do reach out for proposal review or feedback early in the process.

1 Like

9th Call

For the 9th call, we had Vending Machine and Boost offer their insights on incentives programs.

Recording - Liquidity Incentives Call #9 (30.10.2024)
Transcript - Liquidity Incentives Call #9 (30.10.2024)
Chat Log - Liquidity Incentives Call #9 (30.10.2024)

Last week’s call took place in a more informal setting, as nothing was planned due to people returning from Devcon.

For this week, we’re going to have Kamil from Patterns.build present their views.

2 Likes

10th Call

For the 10th call, we had Kamil from patterns.build talk about how they envision future incentive programs utilizing their platform to measure impact.

Recording - Liquidity Incentives Call #10 (27.11.2024)
Transcript - Liquidity Incentives Call #10 (27.11.2024)
Chat Log - Liquidity Incentives Call #10 (27.11.2024)

1 Like

11th Call

For the 11th call, we had Momir from IOSG discuss their recent proposal, and we also had Sov from OpenSourceObserver…

Recording - Liquidity Incentives Call #11 (11.12.2024)
Transcript - Liquidity Incentives Call #11 (11.12.2024)
Chat Log - Liquidity Incentives Call #11 (11.12.2024)

1 Like

The feedback and communication on this proposal have been excellent and eye-opening. Thank you for your proposal; I have learned a lot from it, and I appreciate your attention to these matters.

The proposal is very comprehensive, with a clear direction. I believe there is no need to establish a new working group. Instead, efforts should focus on leveraging ARDC’s research results to optimize the existing framework. At the same time, innovative incentive methods and a shared funding mechanism should be introduced to maintain a competitive edge.

ARDC has already conducted extensive research and discussions, and directly utilizing their existing outcomes would be more efficient. Creating a new working group could lead to resource fragmentation and reduced efficiency. It would be better to concentrate efforts on refining ARDC’s existing work.

The previous incentive model, where the DAO entirely subsidized projects, posed high risks and led to resource wastage. Future incentive plans should incorporate a “shared funding mechanism,” requiring protocols to contribute part of the capital to increase their responsibility for project success.

Arbitrum should consider diversifying its incentive approaches, not limiting itself to liquidity incentives, but also exploring innovative methods such as account abstraction and point systems to attract users. Permanent incentive plans should not only focus on user acquisition but also on the long-term value of ArbitrumDAO, including protocol revenue, ecosystem user retention, and the genuine contributions of ecosystem participants to the network.

Future incentive plans should require protocols to contribute a portion of the capital, such as in STIP’s funding ratio, to avoid having the DAO fully bear the costs and to enhance the sustainability of the incentive programs.

Would this be the call on the 18th? Was that recorded?

1 Like

12th Call

For the 12th call, we had Matt from Entropy Advisors present their DeFi Renaissance Incentive Program (DRIP).

Recording - Liquidity Incentives Call #12 (18.12.2024)
Transcript - Liquidity Incentives Call #12 (18.12.2024)
Chat Log - Liquidity Incentives Call #12 (18.12.2024)

2 Likes

13th Call

For the 13th call, we had Jumper Exchange and Merkl present their proposal for an incentive program.

Recording - Liquidity Incentives Call #13 (15.1.2025)
Transcript - Liquidity Incentives Call #13 (15.1.2025)
Chat Log - Liquidity Incentives Call #13 (15.1.2025)

14th Call

The 14th call marked the official end of the detox period and the incentives working group we’ve been facilitating over the past few weeks. Going forward, anyone is welcome (encouraged, even) to propose a new incentive program for Arbitrum, taking into account all the things we’ve learned over the past 13 calls.

Recording - Liquidity Incentives Call #14 (23.1.2025)
Transcript - Liquidity Incentives Call #14 (23.1.2025)
Chat Log - Liquidity Incentives Call #14 (23.1.2025)

1 Like

TL;DR

Incentives detox is officially over. We now invite people who have been working on proposals for incentive programs to come forward and propose them to the DAO. We have compiled a list of useful input from the past 14 calls of the ‘Liquidity Incentives Working Group’ that proposal authors are encouraged to incorporate into their design.

This post officially marks the end of the ‘Incentives Detox’ proposal and the period of holding back from trying to implement an incentives program in Arbitrum. From today onward, we encourage individuals, teams, or organizations that want to propose an incentive program design to do so in the forum.

There are already a few proposals being worked on that have been presented and discussed in past calls of the incentives working group. Moving forward, we think that ETH Denver is a good deadline to aim for in terms of being in a position as a DAO to announce the new program. To do so, we need to utilize the next few weeks to make headway in narrowing down the program we want to implement.

To help facilitate the process of incorporating feedback from all participants of the working group over the past 14 calls, we fed the transcripts of the calls into ChatGPT and created a list of the suggestions that were brought up. We then filtered out duplicate suggestions and grouped the rest under common themes.

Here’s what we came up with:

KPIs and Monitoring

  • Design incentive programs with clear goals, such as boosting TVL, attracting protocols, or increasing capital stickiness.
  • Measure incentive program success using specific KPIs, such as unique users, TVL, or sequencer fees.
  • Design incentives based on vertical-specific KPIs, such as DEX volume or lending protocol utilization.
  • Use predictive analytics to adjust reward distributions dynamically based on market conditions.
  • Monitor incentive programs live to allow for mid-course corrections rather than postmortem adjustments.
  • Leverage data-driven dashboards to monitor metrics in real time.
  • Collaborate with analytics firms for real-time compliance monitoring during programs.
  • Introduce short-term incentive seasons (e.g., three months) to enable rapid iteration and targeted objectives.

Direct incentive distribution

  • Use retrospective rewards rather than upfront incentives to ensure actual outcomes.
  • Make use of direct-to-contract incentives that validate user activity on-chain.
  • Reward liquidity providers directly instead of funding protocols.

Long-term user/capital retention

  • Explore mechanisms to retain capital, such as rewarding only after users maintain capital within the ecosystem for a defined period.
  • Incentivize long-term participation through staking tiers, with higher rewards for longer commitments.
  • Introduce “progressive incentives”, where rewards scale with sustained activity over time (see the sketch after this list).
  • Incentivize user retention through reward vesting mechanisms.
  • Incentivize long-term staking programs to build user loyalty.
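
None of the calls prescribed a concrete formula for these retention mechanisms. As a purely illustrative sketch (all parameter names and values below are hypothetical), a “progressive incentive” could scale a user’s reward with how long their capital has stayed in the ecosystem:

```python
# Purely illustrative sketch of a "progressive incentive" multiplier; all parameters are hypothetical.
# Rewards grow with the time capital has remained in the ecosystem, up to a capped boost.

def progressive_multiplier(days_retained: int, base: float = 1.0,
                           step: float = 0.05, cap: float = 2.0) -> float:
    """Multiplier that grows by 5% per completed 30-day period of sustained activity, capped at 2x."""
    completed_periods = days_retained // 30
    return min(base + step * completed_periods, cap)

def reward(base_reward_arb: float, days_retained: int) -> float:
    """Final reward for a user, scaled by how long their capital has been retained."""
    return base_reward_arb * progressive_multiplier(days_retained)

print(reward(100, days_retained=15))   # 100.0 -> no boost yet
print(reward(100, days_retained=180))  # 130.0 -> six 30-day periods, 1.3x
print(reward(100, days_retained=900))  # 200.0 -> capped at 2x
```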

Asset / Vertical / Protocol Selection

  • Incentivize usage of wrapped assets, like wrapped ETH, to grow Arbitrum’s market share.
  • Prioritize the growth of spot markets as a foundational layer for DeFi.
  • Limit the scope of rewards to well-established assets or leading categories, avoiding speculative or low-TVL projects.
  • Target new protocols and users on other blockchains, such as Tron or Binance Smart Chain, to expand the user base.
  • Focus on protocol-specific incentives for bridges, lending platforms, and DEXs.
  • Shift toward vertical prioritization, such as focusing on DEX and lending over gaming.
  • Reward protocols based on their innovative incentive mechanisms, such as gamified rewards.
  • Incentivize behavior-based actions, such as liquidity provision or user engagement, rather than blanket rewards.
  • Use gas fee reimbursements as a form of user incentive.
  • Encourage protocols to adopt modular incentive structures that can scale with user activity.
  • Reward high-value users (e.g., whales) by targeting their preferences, such as deeper liquidity.
  • Introduce programs focusing on building the deepest liquidity pools for specific assets.
  • Implement granular incentives targeting specific user actions, such as deposits into liquidity pools.

Protocol Collaboration

  • Incorporate cross-protocol collaboration in incentive design to maximize synergies.
  • Reward cross-chain integrations that funnel activity into Arbitrum.
  • Develop multi-chain incentives to bridge activity between Arbitrum and other chains.
  • Prioritize protocol partnerships to co-sponsor incentive programs.

Marketing & Operations

  • Encourage off-chain marketing campaigns targeting non-crypto users, such as forex traders or fintech users.
1 Like

What were the 4 service providers that L2Beat is working with to help craft proposals, which @krst mentioned in the call?

Even rewatching the recording, I can only identify patterns.build by @kamilgorski and, I think, DRIP by @Entropy, but not the other 2.

Could you clarify?

1 Like

Hey @paulofonseca, the other two are IOSG and Merkl/Jumper (they’ve both been presenting their ideas during our calls).

3 Likes