Reporting Period: August - October 2025
Executive Summary
The Arbitrum Audit Program (AAP) completed its first operational quarter in October 2025. Following approval by ArbitrumDAO, the programme launched on 1 August with a clear purpose: make professional security audits accessible to early-stage projects building on Arbitrum.
During the first three months, the programme validated its core operating model through an intentionally conservative rollout. This approach protected DAO interests while confirming that application evaluation, auditor matching and quality assurance can be run on a consistent, repeatable basis.
The committee approved 11 projects from 81 applications - a 13% approval rate that reflects the programme’s commitment to quality over volume. Five audits have been completed and findings are being addressed ahead of mainnet deployment. Six additional audits are currently underway with leading security firms. Total commitments stand at $548,000 across approved projects, approximately 5.5% of the annual $10 million budget.
This first transparency report sets out the programme’s operating framework and initial learnings in detail. Future quarterly reports will be shorter, focusing on performance updates and material developments as the programme matures.
Operational delivery has also surfaced a recurring constraint. The exclusivity requirement supports ecosystem alignment, but it is creating substantial friction in several applications and may warrant review to maintain efficiency and impact.
With the operating infrastructure in place and potential policy refinements now clear, the programme will transition into a controlled scaling phase from November 2025 to January 2026. The committee is completing reviews of the 13 applications currently in the pipeline and aims to secure 20 additional approvals during this period. A marketing campaign will launch in January 2026 to close the awareness gap identified during the programme’s initial phase, and the process improvements already being implemented should materially reduce approval-to-audit timelines.
Key metrics from the first quarter:
- 81 applications received from diverse categories and regions
- 11 projects approved through multi-criteria committee evaluation (13% approval rate)
- 5 audits completed with findings currently being addressed before mainnet launch
- 6 audits in progress with leading security firms
- $548,000 committed across approved projects (5.5% of annual budget)
- 12 vetted auditors maintaining committed pricing within benchmark ranges
1. Programme Performance
1.1 Application Pipeline Analysis
The programme received 81 applications during its soft launch phase, indicating meaningful ecosystem awareness. This included applications from both new builders discovering Arbitrum and existing ecosystem participants seeking security support. This dual pipeline suggests the programme is successfully serving its intended audience.
- 81 total applications received
- 11 applications approved
- 57 applications rejected following committee review
- 13 applications under active review
- 6 applications received per week on average during the soft launch period
Application quality showed a clear pattern: projects with pre-existing relationships within the Arbitrum ecosystem (prior grants, ADPC participation, documented activity) represented only 16% of applications but 55% of approvals. This suggests that established ecosystem familiarity tends to result in stronger submissions. However, 45% of approvals came from projects without prior formal Arbitrum engagement, confirming the programme successfully attracts new ecosystem participants.
The 57 rejected applications were primarily declined for:
- Insufficient audit readiness - lack of code maturity or unclear scope definition, revealing an education gap about what ‘audit-ready’ means
- Timeline incompatibility - urgent requests preventing proper due diligence
- Limited or unclear Arbitrum alignment
- Incomplete submissions - very low-effort applications with anonymous teams or minimal documentation
To address this, the committee now conducts early outreach to confirm readiness prior to formal evaluation. Together with expanded guidance materials (including practical readiness checklists) and clear lead-time requirements, these measures should reduce the operational friction that previously delayed or prevented otherwise viable applications from advancing through the evaluation process.
1.2 Financial Deployment and Pacing
- $548,000 committed across 11 approved projects
- 5.5% of the $10 million annual programme budget committed in the first three months
- Average subsidy coverage: 70% of total audit costs
- Subsidy range: 50-100% depending on project need and alignment
Across the first three months, the programme committed $548,000 to 11 approved projects, equating to 5.5% of the $10M annual budget. Approved audits ranged from $5,000 to $158,000, with 70% average subsidy coverage (within a 50–100% range based on project need and alignment). This spread is consistent with the programme’s assumptions about the profiles and budgets of the projects it is designed to support.
The 5.5% deployment rate requires context: the programme launch on 1 August meant that initial applications required screening, due diligence and auditor matching before audits could commence. Payment to auditors occurs upon audit completion and confirmation by the audited project, giving the reporting period limited time for substantial deployment.
1.3 Auditor Performance and Capacity
The committee completed comprehensive due diligence on 31 interested firms between May and July 2025, ultimately approving 11 auditors based on technical capabilities, track record, pricing and programme alignment:
- OpenZeppelin
- Certora
- Nethermind
- Ackee Blockchain Security
- Oak Security
- Hexens
- Decurity
- Pashov Audit Group
- OXORIO
- Cyfrin
- Guardian
Six auditors have engaged with current-cycle projects (roughly half the approved pool), reflecting expected variation in project preferences, technical specialisations and availability. The committee introduces each approved project to 4 auditors and includes at least one preferred firm where a preference is stated, while still spreading opportunities across the pool. Pricing remained within committed ranges, with no drift or unexpected increases, supporting the reliability of the programme’s benchmarks and the value of competitive quoting.
Current capacity is sufficient for the projected November to January volume. Decisions on expanding the pool will follow demonstrated demand, rather than being driven by pre-emptive growth.
1.4 Audit Outcomes and Security Impact
As of 31 October 2025, five audits have been completed. All audited projects are addressing findings before mainnet launch. Six audits are underway or in the contracting phase. These span multiple categories including DeFi protocols, infrastructure tools and emerging verticals.
All audit reports undergo Foundation review to confirm they meet industry standards for thoroughness, technical accuracy and actionable recommendations. Payment to auditors occurs only after:
- Audit report is submitted through the Foundation portal
- Audited project confirms completion and quality
- Foundation validates report meets quality standards
This payment-on-completion model safeguards programme integrity while ensuring auditors are compensated fairly for the work they deliver.
2. Strategic Insights
2.1 Application Readiness and Education Gaps
A significant share of rejected applications reflected readiness issues, rather than a lack of alignment or underlying merit. This highlights a practical education gap: many builders lack a clear understanding of what “audit-ready” means or the necessary preparation required before engaging professional security firms. Common readiness issues included:
- Undefined Scope - submissions that did not clearly specify which contracts required audit, resulting in repeated clarification cycles that slowed evaluation
- Incomplete Documentation - technical documentation insufficient for auditors to assess complexity or provide accurate quotes
- Premature Applications - code still in active development, making scope definition unrealistic and requiring teams to reapply later
To reduce this friction, the committee has implemented early outreach and strengthened guidance materials, including readiness checklists and clearer lead time requirements. This should limit the time spent on applications that are not yet ready for evaluation and keep committee attention on viable projects. It should also help teams prepare earlier, lifting overall application quality across the programme.
2.2 Timeline Pressures and Planning Horizons
A second driver of rejections was timing: projects requesting audits on urgent schedules that don’t allow for proper due diligence, high-quality auditor matching, price competition, or full committee deliberation. In practice, these requests often stem from funding-timing mismatches, early launch commitments, or teams discovering the programme late in their build cycle.
This highlights a practical tension between “on-demand availability” and responsible allocation. Addressing it requires firmer lead-time expectations, incentives for earlier applications and a defined fast-track route for exceptional cases (notably existing ecosystem projects with demonstrated alignment). Earlier awareness in the builder journey should also reduce the number of crisis-moment applications.
2.3 Ecosystem Relationship Value
Projects with pre-existing Arbitrum relationships contributed 16% of applications but 55% of approvals. That skew likely reflects stronger information quality, greater technical maturity, clearer alignment signals and more realistic expectations. It supports the view that the programme is reaching its intended audience. It also suggests the intake could improve if promising teams were engaged earlier in their development cycle.
While ecosystem relationships correlate with success, 45% of approvals came from projects without prior formal engagement with Arbitrum. This indicates the programme is not simply subsidising teams that would have built on Arbitrum regardless, but is also attracting and supporting new ecosystem participants.
The marketing effort launching in January 2026 is intended to meet builders earlier in their planning phase, tie more closely into other ecosystem programmes that naturally feed the pipeline and publish clearer educational content on requirements.
3. Recommendations and Decision Points
3.1 Programme Scope and Eligibility
Early operations demonstrated the tension between “on-demand availability” and responsible due diligence. The committee advises teams to apply at least six weeks before their target deployment, with fast-track provisions for existing ecosystem projects demonstrating clear alignment. This addresses rejections stemming from urgent timeline requests while maintaining flexibility for exceptional circumstances.
Rejections linked to insufficient readiness have already pushed changes in both guidance and screening. The committee now completes early verification outreach to confirm scope definition, code maturity and timeline expectations before formal evaluation begins. This protects committee capacity and gives applicants a clearer view of what’s expected before they enter a full review cycle. Updated application materials now include readiness checklists and scope definition templates.
The 50–100% coverage range (with an average of 70%) remains a case-by-case decision, assessed against project financial need, the strength of ecosystem alignment and potential impact. Where financial constraints would otherwise block an audit engagement, higher coverage may be appropriate. This approach preserves committee discretion while applying consistent evaluation principles across applications.
3.2 Exclusivity Requirement
The original proposal’s exclusivity requirement - audited code remains Arbitrum-exclusive for a defined period - had clear strategic intent: ensuring DAO subsidies benefit Arbitrum, preventing subsidy arbitrage, demonstrating a genuine commitment and creating competitive differentiation. In practice, the first three months have shown material friction: several applications required extended negotiation, needed modifications or withdrew entirely due to exclusivity concerns.
Modern blockchain applications increasingly employ multi-chain architectures for legitimate reasons: cross-chain messaging infrastructure, by definition, operates across chains; stablecoin issuers require a multi-network presence for liquidity; and DeFi aggregators depend on multi-chain integration. For these projects, exclusivity effectively excludes them, regardless of their technical quality or potential contribution to Arbitrum.
The exclusivity requirement is valuable for ensuring ecosystem alignment, but may be best revisited to allow flexible alternatives. In practice, it introduces layers of operational complexity: each application must be assessed not only for audit readiness, but also for how its architecture, product roadmap or tokenomics align with exclusivity. While the requirement ensures genuine Arbitrum alignment, it has also slowed approval timelines for product designs or business models that require cross-chain deployment.
Revising this clause to allow flexible alternatives would not result in indiscriminate access to the programme. Instead, it would enable the committee to consider promising teams whose products can bring value to Arbitrum while maintaining their broader ecosystem strategies. Examples include cross-chain messaging providers, stablecoin issuers, or DeFi aggregators whose deployment on Arbitrum could increase on-chain liquidity, user volume, or integrations.
The objective of revisiting the exclusivity requirement is not only to simplify operations but to enhance strategic impact. The revised approach would still require each applicant to demonstrate alignment with Arbitrum and the committee would still engage teams in tailored discussions to evaluate alternate forms of alignment other than exclusivity, for example, deploying core infrastructure on Arbitrum first, routing liquidity primarily through Arbitrum pools, prioritising Arbitrum for feature launches, or concentrating token incentives within the Arbitrum ecosystem.
4. Ecosystem Impact
4.1 Security Outcomes
As of 31 October 2025, five audits are complete and six are in progress. Across this first cohort, third-party reviews surfaced vulnerabilities that required remediation ahead of mainnet deployment, and all approved projects are working through fixes in line with the auditors’ recommendations.
Auditor payment remains gated by Foundation validation that each report meets accepted industry standards. This step is the programme’s quality control mechanism: it protects integrity while still recognising completed work once it has been delivered and verified.
Individual project details will be shared in subsequent quarterly reports as audited projects launch on mainnet and achieve operational milestones. The committee is developing spotlight formats that feature project overviews, security findings addressed, ecosystem contributions and team perspectives. These narratives will demonstrate programme impact beyond aggregate statistics.
4.2 Category Distribution and Strategic Positioning
The 81 applications span a broad set of categories: DeFi (53), Infrastructure & Tools (10), AI (5), Gaming (6), Other/Experimental (6) and NFT (1). The weighting towards DeFi and Infrastructure aligns with Arbitrum’s current market position and the security demands typical of those categories. At the same time, the presence of emerging verticals (AI, Gaming and other experimental projects) points to growing interest from next-generation builders.
4.3 Geographic Reach
Applications were received from North America, Europe, Asia, Latin America and the MENA region. This spread reflects Arbitrum’s status as a preferred L2 for international teams, suggesting that awareness is expanding across regions. It also reduces ecosystem concentration risk and brings a wider mix of development perspectives into the programme.
5. Forward Programme
5.1 November 2025 - January 2026 Objectives
Building on the lessons learned from the first three months, the committee has established the following priorities for the next period:
Application Pipeline
The committee is finalising its review of the 13 applications currently in the pipeline and targeting 20 additional approvals during the period. This target reflects the programme’s operational capacity while maintaining the evaluation standards that produced a 13% approval rate.
Marketing Campaign
A comprehensive marketing and awareness campaign is scheduled for launch in January 2026 and is expected to attract more high-quality applicants in the next quarter. Campaign objectives include increasing visibility for the programme, educating builders on audit readiness requirements and eligibility criteria, and encouraging early applications.
Auditor Performance
The committee will launch an auditor feedback initiative to evaluate quoting efficiency and improve the matching process. As during the ADPC subsidy programme, an imbalance has emerged, with certain auditors consistently solicited or declared preferred by projects. The feedback initiative aims to understand and address these dynamics.
Success Stories
The programme will publish two public success stories highlighting early audit outcomes. These narratives will demonstrate programme impact as audited projects launch on mainnet.
Growth Reporting
The committee will launch a Notion workspace to collect growth reports from approved projects at 2, 4 and 6 months post-launch. This infrastructure will streamline compliance with reporting requirements and create a centralized repository for demonstrating programme impact.
Policy Implementation
The exclusivity framework requires resolution so projects can plan deployments with confidence. The current timeline leaves room for DAO forum discussion and, if required, a vote, with any revised criteria implemented following community input.
5.2 Marketing Campaign
The marketing and awareness campaign scheduled for launch in January 2026 is intended to increase programme visibility, clarify audit readiness expectations and eligibility criteria, and prompt earlier applications. The committee is working with the Arbitrum Foundation marketing team to deliver the campaign. For the next quarter, the committee is also preparing AMA sessions and project spotlights with auditors and grantee teams, starting with teams that recently completed their audits.
5.3 Process Improvements
The committee has implemented several operational improvements based on first-quarter learnings:
Audit Readiness Verification
Introduced an early outreach practice to confirm applicants’ alignment with the current exclusivity rule and ensure they are audit-ready with a defined scope. This allows for the prioritisation of ready teams and the earlier identification of preparation gaps.
Response Time Standards
New standard response time targets have been established for all parties - Arbitrum, projects and auditors. These standards will be communicated in onboarding materials and monitored internally to ensure faster matching and approval/rejection cycles.
Enhanced Workflow Tracking
The programme is enhancing workflow tools to record and aggregate individual committee member scores for each applicant (both approved and rejected) to improve traceability, consistency and enable future programme analytics.
Pre-Contractual Verification
Developed a systematic practice of scope reconfirmation with teams and auditors to prevent contentious rescoping situations after contracts are signed.
Growth Reporting Infrastructure
Establishing a Notion workspace dedicated to performance reporting where each approved project receives a dedicated page pre-formatted with standard reporting fields (TVL, protocol fees, integrations, user activity and other relevant KPIs) for their required 2, 4 and 6-month post-launch reports.
5.4 Success Stories
AMA sessions and project spotlights are in preparation with auditors and grantee teams, starting with those that have recently completed their audits. The next quarterly report will include success stories as audited projects reach mainnet and record operational milestones.
6. Governance and Transparency
6.1 Committee Composition and Decision-Making
The Arbitrum Audit Program operates under a three-member committee structure established by the DAO-approved framework. The committee comprises Gustavo Grieco, an independent security expert elected by the DAO, as well as security experts from the Arbitrum Foundation and Offchain Labs. This structure brings together security expertise, ecosystem context and direct DAO representation.
Evaluation Process
Every application receives an individual review from all three committee members before any collective discussion takes place. Each member provides an independent assessment covering technical maturity, audit readiness, team composition, ecosystem impact and alignment, and funding needs. The committee then consolidates scores and qualitative comments to reach consensus decisions, ensuring consistency and fairness across evaluations.
When applications require clarification, the committee contacts applicants to refine audit scope, confirm deployment commitments, or obtain additional documentation. This consultative approach enables a thorough assessment while helping promising projects refine their submissions.
6.2 Privacy Framework
The programme strikes a balance between transparency to the DAO and operational realities that require selective confidentiality. This framework emerged from the original DAO proposal, which acknowledged that certain information - if publicly disclosed - would undermine programme effectiveness while providing limited additional accountability value.
Quarterly reports provide comprehensive aggregate data on programme operations, including total applications received, approval and rejection counts, budget deployment pacing, completed audit statistics and auditor engagement patterns. When projects consent, the committee publishes success stories highlighting audit outcomes and security impact. The roster of approved auditors and their general engagement patterns remain public to ensure ecosystem visibility into the programme’s security partnerships.
Individual project subsidy amounts and percentages stay confidential to preserve competitive dynamics. During the first three months, the committee confirmed that public disclosure of specific rates would create leverage for future applicants to demand the lowest published rate, while auditors are highly sensitive to competitors learning their pricing. This dynamic would undermine the competitive quoting process that benefits the programme.
For the same reason, auditor pricing and rates are not disclosed publicly. Detailed rejection reasons for individual projects are also kept private, giving teams space to reapply or seek other support without a permanent public record of earlier shortcomings. Audit reports are typically kept confidential between the project and the auditor, unless the project chooses to publish them, reflecting standard confidentiality practices for security work.
In line with the original proposal, the committee may report “funds committed” and “funds deployed” on different timelines. This provides the DAO with a clear view of pacing while maintaining some privacy regarding the timing and amounts of individual subsidies. Aggregate financial reporting continues to demonstrate responsible budget stewardship without revealing project-level arrangements.
This balanced approach received DAO support during the original proposal process, recognising that operational effectiveness sometimes requires selective confidentiality within an otherwise transparent reporting framework.
6.3 Reporting and Accountability
The programme reports to the DAO through quarterly transparency reports, with four reports issued across the 12-month programme period.
After the programme concludes, the committee will publish a final summary report covering overall programme impact, success stories from audited projects and operational learnings. That report will also assess whether programme objectives were met and will set out recommendations on potential continuation.
The committee actively seeks DAO feedback through multiple channels. Forum discussions following each quarterly report allow community members to raise questions or concerns. The committee can also be contacted directly for specific enquiries. Where substantial community disagreement arises about programme direction, formal modification proposals can be brought through established governance processes.
6.4 Feedback Mechanisms
The programme has enhanced application materials with specific readiness checklists and scope definition templates to help teams better understand preparation requirements. The committee and the Arbitrum Foundation verify that growth reports are delivered as required at 2, 4 and 6 months post-launch.
Regular check-ins with auditing firms are used to assess programme experience and identify operational improvements. The feedback initiative launching next quarter will gather input on matching efficiency, project quality trends and pricing dynamics. Auditors are treated as key stakeholders and their input is taken seriously because it improves how the programme runs.
Beyond quarterly reporting, the committee maintains continuous DAO engagement through forum participation, responsiveness to governance proposals that reflect community sentiment and clear communication about both challenges and successes. Accountability to the DAO remains central to programme legitimacy.
7. Conclusion
The Arbitrum Audit Program’s first three months established operational foundations that enable controlled scaling while maintaining the quality standards essential to the programme’s mission. The 13% approval rate demonstrates appropriate rigour, protecting DAO investment while supporting projects with genuine potential to contribute to the Arbitrum ecosystem.
Early delivery has also surfaced clear insights into programme design. The exclusivity requirement supports ecosystem alignment, but in practice it introduces substantial friction in a significant share of applications.
Over the quarter, the committee has validated its evaluation processes, auditor relationships and quality assurance standards under real operating conditions. These repeatable frameworks position the programme to increase throughput in the coming period. The security outcomes remain the clearest indicator of value: completed audits have already surfaced vulnerabilities that require remediation before production deployment.
Pacing in the validation phase was intentionally conservative. Deploying 5.5% of the annual budget during the first three months was strategically appropriate, rather than a sign of underperformance. The target of 20 additional approvals for November through January reflects increased operational confidence while maintaining responsible stewardship of DAO resources.
The committee welcomes feedback on this report, particularly regarding the exclusivity policy outlined in Section 3.2. The next period will focus on delivering the marketing campaign, implementing any policy refinements following DAO input, scaling to the targeted 20 approvals, developing success stories from completed audits and maintaining the quality standards established during this initial phase.