We recently received a grant from Plurality Labs to build a decentralized grant evaluation platform. We will use this thread to keep the community updated on our progress.
Provide a summary of your intended program & the impact
Karma GAP is a protocol for tracking and ensuring accountability of community-issued grants. It enables grantees to set and update milestones, allowing communities to monitor progress and assess the impact of projects.
Our mission is to create a neutral system for evaluating funding, giving communities and grant programs the data needed to discern effective projects and decide on funding allocations.
Below are the specific problems we aim to solve for the Arbitrum grants ecosystem:
- Insufficient visibility into grant issuance, progress, and outcomes.
- Lack of data for community-driven decisions on grant category funding adjustments.
- No mechanism for grantees to establish a track record.
- Inadequate information to gauge the impact of specific projects or categories.
- The community is willing to engage but lacks a structured way to contribute.
How does your project meet one or more of our strategic priorities outlined above?
Our project aligns with the following strategic priorities:
- Achieve GOVERNANCE OPTIMIZATION by identifying and iteratively improving key capabilities that increase DAO performance and accountability. Capital allocation is crucial for DAOs, and implementing GAP yields the insights and transparency needed for better capital distribution decisions.
- GROW THE COMMUNITY through awareness, participation, efficient inquiry handling, and bias reduction. A reviewer system that engages the community aims to boost DAO awareness and involvement, potentially enhancing other activities such as governance voting and delegation.
How do you plan to execute this program (Specification & Implementation)?
We intend to implement Karma GAP for Arbitrum DAO’s grantees to report updates, allowing community evaluation of the projects. All data will be stored on-chain on Arbitrum One.
Our solution addresses the outlined problems as follows:
Visibility: With Arbitrum One integration, grantees can create projects, outline milestones for received grants, and post regular updates, giving the Arbitrum community the ability to monitor all grants and their progress.
Data-Driven Decisions: We’re developing a feature for grantees and community members to assess grants on various metrics. This data, stored on-chain, will be available for public analysis.
Grantee Reputation: A profile page will showcase each grantee’s grants and their statuses, providing a track record that is useful for future grant applications within the Arbitrum ecosystem or elsewhere.
Impact Measurement: Projects will be categorized so the community can view evaluation outcomes by category.
Community Contribution: A reviewer system will enable community members to assess grants matching their expertise. We will collaborate with Plurality Labs to establish measurable reviewing standards.
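To make the evaluation data above concrete, here is a minimal TypeScript sketch of the shapes a project record and its reviews might take. All type and field names here are illustrative assumptions, not GAP’s actual on-chain schema:

```typescript
// Illustrative shapes only — the real GAP attestation schema may differ.
interface Milestone {
  title: string;
  completed: boolean;
}

interface Review {
  reviewer: string; // reviewer's address
  score: number;    // e.g. 1–5 on an agreed metric
  comment: string;
}

interface GrantProject {
  grantee: string;        // grantee's address
  category: string;       // category defined with Plurality Labs
  milestones: Milestone[];
  reviews: Review[];
}

// Average review score for a project (undefined if unreviewed).
function averageScore(project: GrantProject): number | undefined {
  if (project.reviews.length === 0) return undefined;
  const total = project.reviews.reduce((sum, r) => sum + r.score, 0);
  return total / project.reviews.length;
}
```

Because every review is attributed to a reviewer address, aggregations like this can be recomputed by anyone from the on-chain data.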
What are your Milestones?
Below are the specifications broken down by milestone.
Milestone 1: Add support for Arbitrum One (2 weeks) and categorize grants for better evaluation (1 week) - Due Dec 8, 2023
Deploy smart contracts on Arbitrum One and update the front- and back-end systems for Arbitrum One compatibility. After Milestone 1, grantees can create projects and update grants, with all data stored on Arbitrum One.
In collaboration with Plurality Labs, we will define project categories and update our system to classify projects accordingly. Post-implementation, community members can browse grants by category.
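For illustration, assuming a Hardhat-style deployment setup (the exact tooling is not specified here), adding Arbitrum One support largely amounts to registering the network before redeploying the contracts:

```typescript
// hardhat.config.ts fragment (illustrative; the team's actual tooling is an assumption)
const arbitrumOne = {
  url: "https://arb1.arbitrum.io/rpc", // public Arbitrum One RPC endpoint
  chainId: 42161,                      // Arbitrum One chain ID
};

export default {
  networks: { arbitrumOne },
};
```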
Milestone 2: Let grantees describe how their project should be evaluated (1 week), let anyone evaluate projects (3 weeks), and integrate Open Source Observer (1 week) - Due Jan 12, 2024
To assess impact, projects must be evaluated using criteria set by those who know them best—the grantees. We will build a feature allowing grantees to specify their evaluation metrics, ensuring transparency as these will be publicly viewable. This information will be on-chain and retrievable via our SDK or an indexer for quick access.
Community members can declare their expertise, self-nominate as reviewers, and evaluate grants. A dedicated page will display each reviewer’s grant assessments, aiding in building their reputation. We will introduce measures to filter out Sybil (fraudulent) reviews. Recognizing the complexity of this issue, we will refine the reviewer system continually, guided by community input.
We will integrate with Open Source Observer to include their analytics in the GAP application for projects linked to GitHub repositories, offering funders insight into the impact of open source contributions on ecosystem health.
How will the community validate impact?
The community can apply GAP’s evaluation framework to assess GAP itself using these metrics:
- The proportion of grantees regularly updating their grant progress.
- The count of reviewers analyzing different grants.
- The community’s ability to discern impactful grant categories and inform future funding decisions.
Success is marked by the community’s capacity to use these indicators to guide funding strategies.
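As an illustration, the first metric could be computed along these lines; the field names and the 30-day activity window are illustrative assumptions, not settled parameters:

```typescript
// Proportion of grantees that have posted an update within the last N days.
interface GranteeRecord {
  grantee: string;
  lastUpdate: number; // Unix timestamp (seconds) of the most recent update
}

function activeUpdateRate(
  records: GranteeRecord[],
  now: number,
  windowDays = 30
): number {
  if (records.length === 0) return 0;
  const cutoff = now - windowDays * 86_400; // 86,400 seconds per day
  const active = records.filter((r) => r.lastUpdate >= cutoff).length;
  return active / records.length;
}
```

Because every update is timestamped on-chain, anyone in the community can recompute this rate independently rather than trusting a reported figure.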