Following feedback on the proposal to establish the STIP Bridge, it was agreed to involve the LTIPP Advisors in this process with the mission to “help applicants gain insights into their proposals. This not only guides applicants through the process but also ensures that the DAO will review better proposals.”
Despite the inclusion of the Advisors, this process does not involve the Council, which leads us to believe that this addendum places a significant burden on delegates, who must review all the proposals. One of the reasons for the LTIPP was precisely to avoid this excessive burden. Moreover, the optimistic model adopted in this phase raises concerns about how much real control the DAO will have over these proposals, since reviewing six months of data for each applicant is time-consuming.
For this reason, we decided to accompany each application we reviewed with a brief report. We ask delegates not to take these reports as an in-depth or definitive basis for deciding their vote, but rather as a high-level overview that may raise questions for their own analysis.
Regarding Jones DAO, their KPIs were:
The metrics obtained from both the OBL report and the dashboard created by the Jones team show that TVL, the number of users, and the number of transactions all rose during STIP.
That said, we have noticed some issues with TVL stickiness and transaction count (user activity) after the incentives ended: both metrics are down from November, when STIP started.
There has also been some concentration in the distribution of incentives toward strategies such as wjAURA (as can be seen in the third image, from the team’s dashboard); as a result, the applicant agreed to remove the incentives in this case.
All of this has been noted in the addendum, in the bi-weekly reports made by the team, and in the public STIP Bridge Discord. The team’s reflections show deep learning and sufficient research, which has allowed them to modify their strategy for the STIP Bridge.
Notably, the team has collected enough information to draw accurate conclusions about the performance of the incentives introduced and to act accordingly. This indicates that, for this type of incentive program, we should allow protocols to change strategy when they detect errors or identify improvements, as happened in this case.
Conclusions
While the results during STIP were positive, we believe that some objectives were only partially met (for example, “attract and onboard new users to Arbitrum”: although TVL has fallen, it has partly migrated to GMX and other apps in the Arbitrum ecosystem), while others were not met (such as increasing participation in Jones Vaults).
We are confident that the learnings from STIP, together with the modifications made by the applicants for the STIP Bridge, can lead to better results.