The following reflects the views of the Lampros DAO governance team, composed of Chain_L (@Blueweb), @Euphoria, and Hirangi Pandya (@Nyx), based on our combined research, analysis, and ideation.
Thank you for putting forward this detailed proposal.
This proposal takes an interesting approach to tackling user acquisition by combining off-chain marketing with measurable on-chain incentives. The performance-based funding model ensures that funds are used efficiently, and the focus on tracking CAC and LTV is a necessary shift for long-term ecosystem growth.
Similar problems were highlighted in our LTIPP Research Bounty reports, which found that reward increases produce only short-term user boosts, indicating that rewards alone are insufficient to sustain long-term engagement.
There are a few areas where further clarification would help us understand how this will be implemented effectively.
Since funding is directly linked to KPI achievements, how will the Patterns team verify the accuracy of these metrics? Will there be measures in place to prevent projects from inflating their numbers artificially? For instance, certain metrics like DAU/MAU ratios or on-chain interactions could be gamed through non-organic activity.
If selection is based purely on KPI-to-budget efficiency, larger protocols with existing user bases may have a significant advantage over newer projects. We suggest incorporating percentage-based growth metrics instead of absolute numbers to ensure fair participation for smaller dApps, as illustrated in the sketch below. Including a mix of both well-established and emerging protocols could provide deeper insights into what works best for different segments of the ecosystem.
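As a rough illustration, the sketch below (with purely hypothetical figures, not drawn from the proposal) shows how percentage-based growth makes an emerging dApp's progress comparable to a larger protocol's:

```python
# Minimal sketch with hypothetical numbers: why percentage-based growth
# levels the field between large and small protocols.

def absolute_growth(before: int, after: int) -> int:
    """New users gained over the campaign period."""
    return after - before

def percentage_growth(before: int, after: int) -> float:
    """Relative growth over the campaign period, in percent."""
    return (after - before) / before * 100

# Hypothetical example: an established protocol vs. an emerging one.
large = {"before": 100_000, "after": 110_000}  # +10,000 users, +10%
small = {"before": 500, "after": 1_500}        # +1,000 users, +200%

for name, p in (("large protocol", large), ("small protocol", small)):
    print(
        f"{name}: +{absolute_growth(p['before'], p['after'])} users, "
        f"{percentage_growth(p['before'], p['after']):.0f}% growth"
    )

# On absolute numbers the large protocol "wins"; on percentage growth
# the emerging protocol's progress becomes visible.
```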
We echo other delegates' concern that many early-stage projects lack dedicated marketing teams: will this program offer any guidance or resources to help them structure their campaigns effectively? While providing funding is crucial, teams with limited marketing experience may struggle to execute high-ROI campaigns.
In LTIPP, application advisors assisted projects in refining their proposals. Will there be a similar support mechanism here to ensure that protocols make the most of the funding they receive?
Who from the Patterns team will be assessing these applications? Can you please share the evaluation process that will be used?
Overall, this proposal presents a structured and results-oriented framework for funding user acquisition on Arbitrum, which is a much-needed evolution from previous broad-based incentive models. However, ensuring that KPIs are not manipulated, tracking long-term retention, and supporting smaller teams will be critical factors in making this a truly impactful program.
Looking forward to hearing more details on these aspects.