RFP - Arbitrum Short-Term Incentive Program (STIP) Data Monitoring and Reporting

(Credits to @tnorm @Burns & @epowell101 for drafting this RFP)

Project Overview

The Arbitrum Short-Term Incentives Program (STIP) is due to begin after Round 1 voting concludes on October 12, 2023, followed by distributions to successful teams. The intended goals of this program are to:

  • accelerate ecosystem growth;
  • experiment with grant distribution models;
  • generate data to inform future programs; and,
  • uncover new strategies to drive activity on the network.

The size of this program (50M ARB) justifies significant monitoring of grant funds and collection of data, both to detect fraud or unethical behavior and to measure performance. This Request for Proposals (RFP) seeks one or two specialized service providers (SPs) capable of delivering services that address the following priorities:

Objective 1 - Reporting and Monitoring

  • Tracking funds and incentivized pools for misallocation, unethical behavior, etc.

The primary goal is to track grantee funds (and wallets) to ensure grants are used as specified, and to allow identification of any grantee fraud or unethical behavior that violates the grantee eligibility requirements. To assist with this, applications must disclose contract addresses and other information that helps streamline monitoring. Deliverables for this objective include transaction monitoring, wash trade detection, claimant analysis, and any other signals an expert SP determines to violate the General Eligibility Requirements.
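As a rough illustration of the wallet-tracking deliverable, the sketch below flags transfers that leave a grantee wallet for an address the grantee did not disclose in its application. All wallet addresses, the transfer records, and the function name are hypothetical; a real SP would source transfer data from an indexer or node RPC and apply far richer heuristics.

```python
# Hypothetical sketch: flag grantee-wallet transfers to undisclosed addresses.
# Addresses and transfer records below are illustrative placeholders only.

def flag_undisclosed_transfers(transfers, grantee_wallets, disclosed_contracts):
    """Return transfers leaving a grantee wallet for an address the
    grantee did not disclose in its STIP application."""
    flagged = []
    for tx in transfers:
        if tx["from"] in grantee_wallets and tx["to"] not in disclosed_contracts:
            flagged.append(tx)
    return flagged

transfers = [
    {"from": "0xGrantee1", "to": "0xIncentivePool", "amount_arb": 10_000},
    {"from": "0xGrantee1", "to": "0xUnknownEOA", "amount_arb": 5_000},
]
alerts = flag_undisclosed_transfers(
    transfers,
    grantee_wallets={"0xGrantee1"},
    disclosed_contracts={"0xIncentivePool"},
)
```

In this toy example, only the transfer to the undisclosed address is surfaced for human review; flagging is a signal for investigation, not a determination of wrongdoing.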

Objective 2 - Data Aggregation and Impact Analysis

  • Measuring the efficacy of each ARB spent on a per-grant, per-pool, and per-asset basis.

In its current form, data will be created by each successful grantee providing analytics, dashboards, and bi-weekly status reports to inform the Arbitrum Foundation and DAO of their progress. This will generate a large amount of information that risks being significantly fragmented across the successful projects, making assessment and analysis difficult for the community.

At a high level, the program needs to track overall impact in terms of short-term activity. This data is to be collected in a way that can inform which strategies proved most effective at providing value to the Arbitrum ecosystem. Baseline metrics will be the standard ones: for most of these incentives this means TVL, volume, unique addresses, fee generation, number of transactions, progress milestones, KPI tracking, ARB spending, etc., in addition to the claimant analysis mentioned above.
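One minimal way to express the "efficacy per ARB spent" metric described above is net TVL gained over the incentive window divided by ARB distributed. The sketch below is an assumption of how such a baseline could be computed; the numbers are illustrative, not real STIP data, and a real analysis would need to control for market-wide TVL drift.

```python
# Hypothetical sketch of an "impact per ARB" baseline metric: change in a
# pool's TVL over the incentive window divided by ARB spent. Inputs below
# are illustrative placeholders, not real program data.

def impact_per_arb(tvl_before, tvl_after, arb_spent):
    """Naive efficacy measure: net TVL (in USD) gained per ARB distributed.
    Does not control for market-wide TVL drift."""
    if arb_spent == 0:
        raise ValueError("no ARB spent in window")
    return (tvl_after - tvl_before) / arb_spent

# Example: a pool grows from $2.0M to $2.6M TVL after 100k ARB in incentives.
print(impact_per_arb(2_000_000, 2_600_000, 100_000))  # 6.0 USD of TVL per ARB
```

Computed per grant, per pool, and per asset, a measure like this would let the community compare incentive strategies on a common footing, which is exactly the standardization this objective asks the SP to propose.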

Note: Concurrent with this SP engagement, the Arbitrum DAO will be leading initiatives dedicated to improving community data infrastructure and analytics. This effort is expected to fund open source software for Grant analysis that may also be helpful to the SP.

Scope of Work

STIP comprises a collection of individual grants across XX grantees (estimated: up to 50). The chosen SP will be responsible for constructing and maintaining a data pipeline capable of delivering dashboards and regular reporting for each individual grant, in service of the following objectives:

Realtime Fraud Detection / Misuse of Funds (Priority 1)

The Arbitrum DAO seeks a monitoring and reporting solution to promptly alert the STIP-ARB Multisig of any activities that breach STIP eligibility criteria. These activities encompass fund misuse and actions deviating from individual grantee applications. Service providers are invited to showcase their proficiency in delivering monitoring and reporting solutions that focus on the following key metrics, among others.

Grant/Protocol Data and Impact Analysis (Priority 2)

The Arbitrum DAO seeks data analytics and dashboards capable of demonstrating grant efficacy on a per-grant basis, across a wide diversity of sectors and protocols. Given that diversity of strategies and protocols, the Arbitrum DAO expects the SP to present methods of standardization and benchmarking across protocols. Service providers are invited to showcase their proficiency in delivering precise analyses that focus on the following key metrics, among others.


Projects will be asked to produce three deliverables: 1) open-source dashboard and analytics tooling capable of delivering real-time data on the objectives across both priorities; 2) regular reporting to the Arbitrum community; and 3) a Final Impact Report.

  1. Open Source Dashboard and Analytics Tooling

Realtime Fraud Detection / Misuse of Funds

  • Wallet and Fund Tracking
  • Protocol/Core Team Farming
  • Transaction monitoring
  • Wash Trading
  • Claimant Analysis

Grant/Protocol Data and Impact Analysis

  • Ecosystem Analysis

    • Ecosystem Metrics
      • TVL
      • Transactions
      • Users
      • Fees Generated
      • Volume
    • Sector Analysis
      • DEX
      • Lend/Borrow
      • Yield Aggregators
      • Perpetuals
      • Etc.
    • Protocol Analysis
      • Pool Level Analysis
      • Impact per ARB
  • ARB spending by teams (planned vs actual), ARB distributed/remaining

  • Claimant Analysis

    • Distribution of claimants by protocol
    • Distribution of claimants by asset
    • Percentage of claimants selling ARB
    • Percentage of claimants delegating ARB
  • Grant matching by project
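The claimant-analysis metrics in the list above (e.g. percentage of claimants selling or delegating ARB) reduce to simple aggregations once per-claimant behavior has been labeled. The sketch below is a hypothetical illustration of that final step; the claimant records are placeholders, and in practice the `sold_arb`/`delegated_arb` labels would come from classifying claim-contract, DEX, and delegation events on-chain.

```python
# Hypothetical sketch of claimant-analysis aggregates: share of claimants
# who sold vs delegated their claimed ARB. Records below are placeholders.

def claimant_shares(claimants):
    """Return (pct_selling, pct_delegating) across claimant records."""
    n = len(claimants)
    if n == 0:
        return 0.0, 0.0
    sold = sum(1 for c in claimants if c["sold_arb"])
    delegated = sum(1 for c in claimants if c["delegated_arb"])
    return sold / n, delegated / n

claimants = [
    {"address": "0xA", "sold_arb": True,  "delegated_arb": False},
    {"address": "0xB", "sold_arb": False, "delegated_arb": True},
    {"address": "0xC", "sold_arb": False, "delegated_arb": False},
    {"address": "0xD", "sold_arb": True,  "delegated_arb": True},
]
pct_selling, pct_delegating = claimant_shares(claimants)
```

The same grouping logic extends naturally to the distribution-by-protocol and distribution-by-asset breakdowns requested above.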

  2. Reporting to the Arbitrum Community

The successful SP will report to community stakeholders (Disruption Joe, tnorm, Burns, and program managers of the Data Intelligence Grants Program) via weekly to bi-weekly reports with the following deliverables:

  • Report of suspicious activity regarding grant mismanagement or misallocation
  • Report of wash trading, grant manipulation, or worrisome claimant behaviour
  • Individual project update summaries for all successful STIP projects
  • Project KPI tracking
  • Links to project reports and dashboards/analytics

The SP will hold bi-weekly touch points with each project for status updates, communicating back to the community stakeholders/STIP-ARB Multisig signers to flag any issues and inform any potential decision to halt funding streams. As such, we expect the service provider to intake requests communicated from the STIP-ARB multisig and conduct investigations raised by the community.

  3. Final Impact Report

On April 01, 2024, the Arbitrum community will receive a comprehensive report demonstrating the impact of STIP across macro-level network trends, sector trends, protocol trends, pool trends, and asset trends.

This report will be published in collaboration with appointed community stakeholders (Disruption Joe, tnorm, Burns, and program managers of the Data Grants Program): after an initial draft, stakeholders will have 1 week to review, followed by 1 week for updates and final submission.

Monitoring Duration

Monitoring is to be continuous for the duration of the program, through April 01, 2024, to allow for extended distributions (if projects retain extra funds and run slightly over) and extra time to collect granular retention data.

The STIP-ARB Multisig will determine the preferred cadence of its updates (currently every two weeks) once the SP has been formally engaged. This preference will be communicated by community stakeholders (Disruption Joe, tnorm, Burns), who will serve as points of contact for the service provider throughout the engagement.

Award Timeline

RFP Live - Oct 8

RFP Submissions Open: Oct 8 to Oct 15, 2023.

RFP Interview and Submission Reviews: Oct 15 to Oct 22, 2023.

RFP Decision: Oct 22 to Oct 27, 2023.

Grant awarded - Oct 27

Foreseen Potential Difficulties

| Potential Blocker | Mitigation Strategy |
| --- | --- |
| Varied timelines for each project | Set global program schedule for projects’ bi-weekly reporting as part of STIP application approval. Proposed to begin 1 week after distribution of first funding tranches, giving protocols sufficient time to set up incentive streams |
| Untimely submission of bi-weekly reports by projects | KPI requirement for STIP approval and ongoing funding; communication channels set up with each project (Telegram or otherwise); bi-weekly touch points |
| Insufficient data available from projects | KPI requirement for STIP approval and ongoing funding; SP to provide stand-alone on-chain analytics |
| Number of projects and submissions | Ingestion of proposal data from forum/multisig streams |

Applications to Include:

Understanding of Scope

[Project to define final scope including exhaustive list of metrics and signals to be monitored.]

Key Signals and Metrics to Monitor

[SPs to present their plan to meet aforementioned signals and metrics and present any other relevant signals and metrics]

Value Definition (Priority 2 only)

[For useful recommendations to be drawn from the data collected, a unified understanding of how the Arbitrum community defines value is needed. This section should describe how the applicant SP defines what results should be attributed to a successful distribution of funds, and therefore what data is to be collected and presented for all projects in the program.]

Project Timeline

[Expected timeline for operations]

Milestone/Tranche amounts

[Total funding requested, individual tranches and timing]

Prior experience and Proposed Team

[Provide evidence of suitable prior experience for each priority the application applies to. Note that teams will need to agree to KYC requirements.]

Information/resources required

[What information and resources are required of the STIP Committee, Plurality Labs, program teams, or the Arbitrum Foundation generally.]


Definitely needed. Given the short-term nature of getting this up and running, the only group I can think of that could coordinate the resourcing is @CastleCapital, since they have around 50 analysts to tap into who know most if not all of the Arbitrum projects.


This proposal ensures robust oversight of the STIP, with specialized service providers addressing priorities in monitoring, data analysis, and impact assessment; hence, I am in favor of this proposal.


This initiative holds promise, and starting with this approach would be advantageous.

I suggest the committee divide the task into monitoring individual projects or a select few. By doing so, we can open the doors for greater community involvement, potentially leveraging platforms like Dework.

Without this segmentation, the participation may remain skewed towards a single company with an excess of 50 analysts, rather than engaging a wider spectrum of contributors.

We are open to applicants applying for one objective or all. The work of coordinating a community effort for data infrastructure may not prove viable, however, you are welcome to submit a plan for review! I could be wrong and seeing how it would operationalize could help others understand too.


Edited to add: Apparently this will be funded from a grants program run by Plurality Labs. Our critiques below are still relevant, though it does not require approval from governance to be offered.

Does this not already fall under the 94k ARB being spent on administration of this grants program?

While detection of misuse of funds is probably not reasonable to expect of the msig signers, given the short timeframe between now and the conclusion of the grants spending (Jan 31), it doesn’t seem reasonable to spend additional funds for this RFP.

If the goal is to have reporting tools more robust than a Dune dashboard, it’s unlikely a quality vendor can design and create a solution before the grant period is mostly concluded. It’s also difficult to assign fraud detection to a vendor building a dashboard. Fraud is a serious crime in most jurisdictions, and can often carry jail time. False accusations of fraud can also bring legal liability.

It’s also difficult to ask for a solution to measure impact per ARB in any kind of meaningful way without a specific set of criteria already established – impact can be measured many ways and invites being measured subjectively if definition and goal isn’t nailed down. Ideally what impact was desired would already be clear to those requesting grant funds, so they know what they’re supposed to deliver.

Based on our experience at other grants programs, we strongly recommend requiring grant recipients to self-report whatever metrics are required. This 1) provides documentation of the recipient making assertions that can be disputed or checked now or in the future, 2) avoids someone unfamiliar with a protocol having to educate themselves about where everything is and how to measure it, and 3) provides clear evidence of ghosting/fraud/abandonment, since the recipient themselves is responsible for sourcing and reporting data.

Ideally, this reporting would be a requirement for keeping a recipient’s stream open past a specified date, so that the full grant is not on autopilot should a project disappear or go silent.

We do not support this RFP as written. Even if you do not want to offload reporting duties to the grant recipients, there’s no reason to spend any additional funds to purchase a reporting solution that wouldn’t likely be ready and reliable until its usefulness was already degraded.


Thanks for your engagement. This is currently an RFP and was drafted with the working group that created STIP.

Isn’t the admin cost covered by the 94k ARB shown?

Some are covered, but not this.

  • tnorm stepped up to facilitate us to a solution
  • Multisig Signers - Hopefully people understand (if not, we can elaborate)
  • Stablelab has their hands full with the directives listed. If anything, they are well underpaid for this service

It’s unlikely a vendor can design and implement a quality solution in time

Fraud detection is just one objective here and will likely be split out from the other objectives through a “bug bounty”-like process. I ran fraud detection at Gitcoin for a couple of years, so I understand both how difficult the process is and what IS realistically possible.

For the monitoring and reporting, we are looking for the highest quality possible for the cost. This RFP will be won (or cancelled) based on the unique value prop brought by the applicants. In our early discussions, we have found multiple service providers who already have the capabilities and would take 2-4 weeks for full deployment.

Calling fraud could create liability

Fraud detection is the skillset the boomers would have searched for on LinkedIn. We won’t be calling anyone a fraud.

Value is hard to measure

Bingo. That doesn’t mean it isn’t worth trying. We will work with the vendor to learn what we can and provide learnings to other STIP recipients. The reality is that most of the benefits of this learning will accrue to future programs. Why is the future “Long Term Incentive Framework” important?

Because Arbitrum is showing builders that we are serious about supporting them

Self-reporting works better

I agree! It is also already a current requirement in the STIP proposal.
A total administrative cost

“disputed now or in the future”

These are BIG differences! A 1% improvement in allocation is 500k ARB. Even if this is paid out (which is not guaranteed at this point), it would constitute a 0.4% administrative cost on deploying 50 million ARB.

Auto turn-off at specified date if self-reporting doesn’t happen

This is a great idea! Sharing to the working group now!


The reason Plurality Labs is considering this grant is that the math makes sense, and including the pay for this service in the STIP proposal would likely create a conflict of interest. It probably isn’t a big deal, but it is best to avoid having the regulators of a proposal be paid by that same proposal.


First and foremost, I’d like to commend the working group for drafting a comprehensive RFP to ensure robust monitoring and data reporting for the STIP.

The scale of the Arbitrum STIP is ambitious, and the necessity for meticulous monitoring and reporting is self-evident. As someone deeply interested in the Arbitrum ecosystem, I find this initiative incredibly reassuring.

Strength points from my point of view –

  1. With the Arbitrum ecosystem growing at a good rate, the 50M ARB fund requires robust oversight, and this RFP addresses that need head-on.
  2. The objectives are laid out with good clarity. The multi-faceted approach, which includes not only fraud detection but also impact analysis, sets the stage for a comprehensive review of the program’s effectiveness.
  3. In @DisruptionJoe’s response to GFX Labs’ concerns, the claim that a capable service provider could be fully operational within 2-4 weeks is promising. It shows a well-thought-out plan backed by preliminary discussions with potential vendors.
  4. The explicit mention that the program won’t label anyone as fraudulent is an excellent way to navigate the legal complexities around fraud accusations. I believe Joe’s experience will be invaluable as this initiative moves forward.
  5. The integration of self-reporting mechanisms is commendable. It not only streamlines the reporting process but also places accountability squarely on the grant recipients.

Yet, there are points for further consideration:

  1. While the intent to measure impact is laudable, the RFP could benefit from a well-defined set of KPIs to ensure objectivity in evaluations.
  2. The proposal could outline the qualifications and criteria that will be used for selecting service providers. This would give everyone a better understanding of what to expect from the chosen vendor.
  3. Given the tight timelines and significant responsibilities, contingency plans should be in place if the chosen vendor falls short of expectations.
  4. While the working group suggests that a high-quality solution could be deployed within 2-4 weeks, including a risk assessment for this timeline would be beneficial. What are the contingencies if this timeline is not met?
  5. Given the RFP’s intent to provide insights for future programs, ensuring that the deployed tools/frameworks/tactics are scalable and adaptable for long-term use would be a wise investment.
  6. An “auto turn-off” feature or other mechanisms for halting funding to projects that do not meet expectations (such as delayed self-reporting) could be explicitly stated. This would act as a fail-safe to protect the program’s integrity.
  7. Given the complexities associated with fraud detection, including a formal legal review in the proposal could mitigate potential liabilities and add a layer of protection for all parties involved. OR @DisruptionJoe, can the working group clearly outline the limits of fraud detection in the RFP? Make it explicit that the system is designed to flag potential issues for further investigation, not to make legal determinations.
  8. The RFP does not specify the kind of contractual relationships that will be established with the service providers. Terms of service, responsibilities, and consequences of non-compliance need to be legally sound. Standardized contracts with service providers to lay out responsibilities, deliverables, and legal repercussions in case of non-compliance would be good. This ensures that both parties are legally protected.

In closing, this RFP, in my opinion, is a significant stride in the right direction for Arbitrum. It shows a responsible approach to fund management, which can only strengthen community trust. While there are always risks and challenges, the proactive and transparent handling of this initiative so far gives me confidence that they will be adeptly managed.

Note: I appreciate the feedback, but do share any relevant documentation to back it. I’ll be making edits as I get more informed about each component of this RFP.


This Short-Term Incentive Program (STIP) is definitely needed.

I think the RFP @DisruptionJoe brings up is something that would set the tone in a positive way for the entire direction of Arbitrum grants.

This would be quite useful for the ecosystem and IMO would be best managed by individuals with significant experience observing grants programs as opposed to larger organizations with established frameworks for this RFP here - especially in light of the short term nature of all of this.

It would stand to reason a group of 2-3 people would be sufficient to achieve the desired goal here.


Hey everyone, this is an initiative by Plurality Labs to provide data and reporting services for the STIP multisig to allow everyone to best execute their responsibilities as the provider of grant streams AND the Arbitrum community to learn from the execution of any grants recommended under the STIP consensus framework.

This is NOT requesting extra funding from the treasury, NOR is it part of the STIP proposal. The budget, provider, and scope are at the discretion of Plurality Labs, as this initiative will be generously funded under their program. Grantees have also been asked to provide self-reporting via bi-weekly updates and Dune dashboard reporting as part of the application template.

I am obviously in support of an initiative to provide additional monitoring and data on the behavior of grantees, especially efforts communicating and highlighting violations of a grantees agreed upon use of ARB per the eligibility requirements and application requirements of the STIP framework, in addition to any insights the community can gain on efficacy and impact.

I support this Plurality Labs initiative.


The Sixdegree team is very interested in this opportunity.

Would it be possible to have confirmation on where to apply?

Shall we post on the public forum or is there an email address to send the application to?

Thank you


For applications, please post proposals on the forum as a new post with the title format:

[STIP Monitoring - Objective 1/2/1&2] - Project Name - [DRAFT / FINAL]


At DefiLlama, we could handle the Grant/Protocol Data and Impact Analysis section; we’re already tracking almost all of the data requested for all protocols on Arbitrum and have been doing so since Arbitrum launched.

Would it be possible to split this proposal into two where one is simply tracking protocol metrics and the other one is about tracking fraud and everything else in the request?


The RFP is already split into 1) Reporting/Monitoring for unethical behaviour and 2) Data Aggregation and Impact Analysis. My preference would be not to fragment it further, but if a proposal is compelling, it can be judged on a partial offer. A separate entity could potentially cover impact reporting/analysis, but the ideal case is one where those recording the data and those responsible for judging value to the ecosystem are either the same or very well aligned; otherwise the data collected may remain raw data without becoming a worthwhile measure of value.


Excited to submit a proposal for this later today.
At AlphaGrowth, we have been working on Ecosystem Attribution and Grants Effectiveness for the last couple of months, so this is perfect timing :slight_smile:


AlphaGrowth has posted a proposal for STIP monitoring and reporting that is currently pending.



We are moving the interview process to happen this week, but still plan on selecting this Friday. Thanks for your patience. If you have a proposal in and have not scheduled an interview, please feel free to dm me!


Allocating more resources to data monitoring for incentives is critical to ensuring the community understands what is working and can adjust. Fully in support of the proposal.