As explored in AVI’s ecosystem thesis, Arbitrum needs to invest in its ecosystem and builders. We have a number of proposals moving us in that direction, but how do we make sense of all of them and create metrics for comparison and continuous improvement?
With 15 years of experience in building startup ecosystems and advancing public goods funding, Farstar has gained deep insights into the challenges and opportunities of fostering innovation. Through these experiences, we contributed to a research project with the Behavioural Insights Team (BIT), reviewing the academic literature on enabling entrepreneurial mindsets and developing a set of measurements. Now, we want to integrate these learnings into the Arbitrum ecosystem.
This article shares insights from this collaboration, including key considerations for determining metrics for builder ecosystem support proposals, such as:
- Theories of change
- Logic Models
- Measurement Frameworks
- Incentives for Managing the Programs
Approach
An impact assessment framework is generally created by a group of stakeholders who want to provide clarity and accountability on what they set out to achieve. The framework is built through a participatory process in which the stakeholders involved in implementing or delivering a program align and agree on their approach to systems change and how they will measure it.
On the one hand, the impact assessment framework is a method for self-auditing progress and claims of impact. On the other hand, by making the framework explicit, other parties can follow the logic by which delivered impact is attributed to interventions, the indicators by which it is measured, and the data that feeds those indicators. This way, an impact assessment framework becomes a tool that supports conversation for continuous improvement between stakeholders.
Building an impact assessment framework consists of the following steps.
Theory of Change
A general description of the causal logic you attribute to the interventions. It involves a detailed problem description, which introduces the theory of how the interventions address that problem.
Logic Model
Describes how you think your resource inputs translate into outputs, then outcomes, and ultimately impact. Essentially, this is a breakdown of the mechanics of the Theory of Change, describing how you attribute impact to the effort put in.
For example, the Logic Model for AGC is as follows:
Inputs (Resources Needed)
- Expertise, plus some social proof from a few known people in the industry, to attract strong participants
- Community & peer networks
- Virtual & in-person engagement spaces
- Facilitators, mentors & program organizers
- Digital platforms (Discord, forum, online clinics)
- Technical and product experts, market makers, investors, etc.
Activities (What We Will Do)
- Live office hours, and possibly social happy hours as side events to major conferences where our team and key participants may already be attending
- Online clinics for technical & strategic guidance
- Regular 1:1 calls with program organizers & facilitators to calibrate participant needs and challenges, and to identify useful knowledge they could share if given a platform
- Program team reviewing online resources, decks and data rooms of participants as part of the calibration effort
- Forum writeups & documentation of key learnings (ideally used as a source of engaging online content marketed on the back of stories and heroes from the program, though as a secondary objective at this stage)
- Structured peer-to-peer learning sessions
- Final unconference to maximise learning, sharing, and connectivity between participants
- Some public promotion of outputs (with the objective of generating FOMO among outsiders and a sense of significance for participants, not yet optimised as genuinely useful content)
Mediating Mechanisms (How We Gauge Reactions)
- Peer Learning Mindset → Are participants actively learning from each other?
- Acquiring Knowledge & Skills → Are they improving in Orbit, Stylus, and liquidity strategies, or whatever gets prioritised during responsive calibration of participant needs?
- Developing Deep Ecosystem Relationships → Are high trust networks forming?
- Adoption of Digital Tools → Are participants using Arbitrum’s platforms effectively?
- Better Leverage on OCL Time → Are key personnel reaching more builders efficiently?
Secondary Outcomes (Behavioral Changes)
- Mindset Shift Toward Arbitrum → Arbitrum becomes the preferred place for more builders / Orbit is seen as the leading AppChain solution / Positive sentiment change from before & after engagement
- Adoption of Key Practices & Skills → More builders using Orbit / More developers leveraging Stylus / Improved pitching strategies to market makers
- Stronger Professional & Community Networks → More long-term collaborations between protocols & liquidity providers / Increase in peer-to-peer learning & best practices sharing
- Enhanced Collaboration Channels → More high quality participation in Discord, forum discussions, and community-led initiatives
Primary Outcomes (Long-Term Impact)
- More Tokens → Increased ecosystem token issuance & adoption
- More Liquidity → Strengthened relationships with market makers / LPs & deeper liquidity pools
- More Trading on Arbitrum → Higher transaction volume & protocol activity
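To make logic models comparable across programs, the structure above can be captured as a simple data container. The sketch below is purely illustrative and not part of the AGC proposal; the class and field names are our own, and the example entries are abbreviated from the lists above.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Illustrative container for a program's logic model (hypothetical schema)."""
    inputs: list = field(default_factory=list)                # resources needed
    activities: list = field(default_factory=list)            # what the program does
    mediating_mechanisms: list = field(default_factory=list)  # how reactions are gauged
    secondary_outcomes: list = field(default_factory=list)    # behavioral changes
    primary_outcomes: list = field(default_factory=list)      # long-term impact

# Abbreviated example, drawn from the AGC lists above
agc = LogicModel(
    inputs=["Community & peer networks", "Digital platforms"],
    activities=["Online clinics", "Peer-to-peer learning sessions"],
    mediating_mechanisms=["Peer learning mindset"],
    secondary_outcomes=["Adoption of key practices & skills"],
    primary_outcomes=["More liquidity", "More trading on Arbitrum"],
)
print(agc.primary_outcomes)
```

Holding every program to the same schema makes gaps visible: a proposal that lists activities but no mediating mechanisms has not yet explained how it will gauge whether the activities are working.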
Measurement Frameworks
These are designed to be used consistently across programs for comparable results. Key metrics might include:
- Net Promoter Score (NPS)
- Return on time invested from the recipient’s perspective
- Quality and relevance of mentorship received
- Growth and usefulness of networks
- Key learnings and capital raised
- Key contracts or pilots enabled with relevant market players
- Educational benefits (general entrepreneurial mindset and enablement)
- Educational benefits (Arbitrum strategic objectives related)
- E.g., aligned with strategic objectives like the adoption of Stylus and Orbit Chains
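Of the metrics above, NPS has a standard, well-defined formula: the percentage of promoters (scores 9–10 on a 0–10 scale) minus the percentage of detractors (scores 0–6). A minimal sketch of how a program could compute it from survey responses (the helper name is our own):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 'how likely are you to recommend' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 2 detractors out of 8 responses -> (4-2)/8 = 25
print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 25
```

Using one shared definition like this is what makes scores comparable across programs; the other metrics in the list (return on time invested, network growth, etc.) would similarly need agreed definitions before cross-program comparison is meaningful.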
The first two steps, creating a theory of change and defining a logic model, are done in consultation with the key stakeholders in delivering the impact, particularly the targeted beneficiaries. The measurement framework, which describes "how we make impact apparent", is completed through expert consultation workshops, which provide technical expertise on (meaningful) indicators and measurement methods, along with advice on keeping implementation of the methodology lightweight.
Benefit to Stakeholders Managing Programs
Programs like the EVM accelerator benefit from access to public goods capital for grassroots mobilization and value creation, which is difficult to fund with private capital unaligned with the ecosystem. To receive such capital, an ecosystem impact thesis needs to be formulated with accountability designed in.
Program managers often struggle to justify the resources needed to design and develop this framework. Even when well-designed, concerns of bias and lack of benchmarking arise. Decision-makers within ecosystems may lack the understanding or sophistication to design such accountability measures, leading to poorly thought-out actions that deter relevant talent and projects.
Engaging in a joint impact accountability framework benefits funders by reducing overheads in further fundraising and promoting a more aligned impact strategy. This makes it easier to allocate larger funding amounts responsive to the strategic direction of the funder.