Last update: 02/10/2024
This proposal has been updated based on feedback from delegates provided in this thread during the two calls we organized, as well as through discussions in the delegates’ Telegram group and private contributions from several delegates. We would like to thank the entire ArbitrumDAO community for their valuable input.
TL;DR
- Expand the program by introducing a new and improved version, to be implemented over the course of a year.
- We propose a total budget of USD 4.2 million (over one year) in delegate incentives, with a maximum monthly compensation of USD 7,000 or 16,500 ARB per delegate.
- Creating delegates’ compensation tiers:
  - Tier 3: TP ≥ 65% and < 70%. Compensation range: $3,000 to $3,250. ARB cap: 8,000.
  - Tier 2: TP ≥ 70% and < 85%. Compensation range: $4,200 to $5,100. ARB cap: 12,000.
  - Tier 1: TP ≥ 85%. Compensation range: $5,950 to $7,000. ARB cap: 16,500.
- Elimination of the special multisig; the DIP will adhere to the MSS.
- The DAO can cancel the program or modify parameters, such as the scoring methodology, through a Snapshot vote.
- Minimum threshold requirement change: Participation Rate (Karma) ≥ 75% participation in on-chain votes in the last 90 days. Previously, the requirement was 25% of total historical votes.
- Soft enforcement of the DAO’s social agreements: each delegate must adhere to and comply with all social agreements reached through Snapshot in order to receive incentives.
- Adding the possibility of revoking a DIP Ban via Snapshot vote, and creating the DIP Suspension.
- Delegates’ Feedback update: the DAO has opted for v1.5 of the DIP, so a new rubric will be applied to evaluate the feedback given by delegates in the various discussions. Due to the experimental nature of this new scoring system, it will be in a testing phase, and we, as Administrators, commit to submitting it to the DAO for consultation after three months of running the program.
- Note that everything related to DIP 1.0 that has not been mentioned as a modification in this new proposal will remain in effect in v1.5.
- Scoring weight changes:
  - Participation Rate (PR): previously weighted at 20% and based on historical participation rates in Tally. Now reduced to 15%, calculated from the participation rate in on-chain votes over the last 90 days (as calculated by Karma).
  - Snapshot Voting (SV): the weight of this parameter has increased from 15% to 20%.
  - Communicating Rationale (CR): the weight of this parameter has decreased from 25% to 10%.
  - Delegates’ Feedback (DF): the weight of this parameter has increased from 15% to 30%.
  - Total Participation (TP): the requirement for total participation has increased from +60% to +65%.
- Bonus Points update: adding Bonus Points for delegates who attend the “Arbitrum Governance Report Call” (monthly) and the “Open Discussion of Proposal(s) - Bi-weekly Governance Call.”
  - For the monthly call, 2.5% BP will be awarded for attendance.
  - For the bi-weekly calls, 2.5% BP will be awarded for attending each call.
Abstract
We propose renewing the ArbitrumDAO Delegate Incentive Program (DIP) for one year. This renewal will include adjustments to the parameters, requirements, budget, and incentives awarded to delegates.
Motivation
We’ve reached the fifth month of the current version of the incentive program, which ends on August 31.
As outlined in the mid-term report, the program has positively impacted ArbitrumDAO during the first three months.
However, simply renewing the program will not generate enough value. Therefore, we propose adjustments to improve the program and increase its impact on ArbitrumDAO.
These changes are based on our experience as administrators, the reports we’ve conducted, and feedback from delegates, the Karma team, and key community members.
Note: SEEDGov delivered the end-term report; you can check it here.
Strengthening DIP’s Mission & Vision
As we mentioned in the report, it is crucial to keep delegated voting power active within ArbitrumDAO. We believe the Incentive Program should focus on the professionalization of delegates.
By professionalization, we mean that delegates should dedicate a significant amount of time to staying informed about developments in ArbitrumDAO, gaining knowledge of Arbitrum’s technology, and making meaningful contributions to the DAO. This includes active participation in most DAO activities, such as providing feedback on proposals, attending Governance Calls, maintaining high voting participation on Snapshot and Tally, and providing a rationale for those votes.
To achieve this, we also need to establish transparent and predictable incentives. Delegates should be confident that if they align with the DAO and improve their contributions over time—essentially if they professionalize—they will receive better incentives. These incentives should be transparent and attractive without being subject to manipulation or arbitrary changes in their amount. Delegates should focus on their role and DAO activities, not on understanding complex economic mechanisms to receive compensation.
We want delegates to be motivated to participate actively in ArbitrumDAO. In order to do this, incentives must be attractive enough for them to either participate directly or hire competent individuals to represent their interests, those of their community, or their protocol.
It’s important to note that while managing the incentive program, we realized that it doesn’t address all the challenges faced by the DAO, such as vote distribution, attracting new contributors, and other aspects. Expecting a single strategy to tackle all issues overlooks the diversity of factors involved. However, we’re pleased to see initiatives like ARB Staking, (Re)delegation Week, and the Public Good Citizen Enfranchisement Pool are underway to address these challenges. As each initiative matures, we can align them to achieve a more significant collective impact.
General parameters
The proposed changes below are based on the experience we’ve gained while managing the program, the delegates’ feedback during the discussion on this thread, and both reports we made for the DIP 1.0 (mid-term report and final report).
If you want to view the parameters of the previous program, click here.
Duration
The program’s first phase lasted six months, as it was designed as an experimental foundation. For this iteration, extending the duration to twelve months is appropriate, providing greater predictability and flexibility for implementing future changes.
ArbitrumDAO may cancel the program or modify parameters, such as the scoring methodology, through a Snapshot vote.
Requirements to Participate in the DIP
The requirements to participate in the program are as follows:
- Each delegate must adhere to all social agreements reached through Snapshot, including those outlined in proposals such as ‘Improving Predictability in Arbitrum DAO’s Operations,’ ‘Should the DAO Create COI & Self Voting Policies?,’ ‘Incentives Detox Proposal,’ and any other proposals or codes of conduct that may be approved in the future.
- Voting Power: >50K ARB, corresponding to 176 delegates. (Source: Arbitrum Delegates and Voting Power - Dune Analytics).
  - Change: This parameter will remain unchanged.
  - Motivation: Currently, only 30% of delegates meeting this requirement are part of the program. One of our goals for this renewal is to increase the number of participating delegates.
- Participation Rate (Karma): ≥75% participation in on-chain votes in the last 90 days.
  - Change: Previously, the requirement was 25% of total historical votes.
  - Motivation: As the number of proposals grows, the impact of each individual vote decreases, making it challenging to meet the historical 25% PR. This change aims to lower the barrier for new participants, ensuring that all delegates, regardless of when they joined Arbitrum, are recognized for their consistency and active participation in the DAO. In this way, the program encourages the active participation of delegates, ensuring a more accurate and up-to-date representation.
Onboarding new delegates
A new delegate without prior participation history can join the program starting in the third month after casting their first on-chain vote, provided they meet the specified requirements (>50K voting power and ≥75% participation in on-chain votes) during that third month.
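For illustration, the two ongoing requirements (more than 50K ARB of voting power and a 90-day on-chain participation rate of at least 75%) can be expressed as a simple check. This is a hypothetical sketch, not official program tooling; the function and constant names are our own:

```python
# Illustrative sketch (not official tooling) of the v1.5 eligibility check.
# Thresholds come from the proposal; names are hypothetical.

MIN_VOTING_POWER_ARB = 50_000   # "> 50K ARB"
MIN_PR90 = 75.0                 # ">= 75% on-chain participation, last 90 days"

def is_eligible(voting_power_arb: float, pr90_percent: float) -> bool:
    """Return True if a delegate meets both program requirements."""
    return voting_power_arb > MIN_VOTING_POWER_ARB and pr90_percent >= MIN_PR90

# Examples
assert is_eligible(120_000, 80.0) is True    # meets both thresholds
assert is_eligible(40_000, 90.0) is False    # not enough voting power
assert is_eligible(120_000, 60.0) is False   # participation too low
```

Note that the voting-power requirement is strict (exactly 50K ARB would not qualify), matching the “>50K ARB” wording above.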
Incentive Program Application
Delegates who meet the requirements must confirm their participation in the DIP via the DIP Application Thread. Delegates can join the program at any time during the 12 months, provided they meet the specified criteria. To minimize the potential for manipulation, delegates who sign up before the third day of the month will be included in the incentive calculations for that month. Delegates who registered for v1.0 will not have to go through this procedure again.
Regarding delegates’ KYC
It is important to mention that it won’t be necessary for the delegates already registered to complete the KYC again. They will only need to sign updated agreements with the Arbitrum Foundation.
Incentive Program Application Template
- Forum Username (Link):
- Twitter Profile (Link):
- Snapshot Profile (Link):
- Participation Rate 90 days - Karma (Link):
Note: Any delegate who chooses to withdraw from the program can indicate their intention to opt out by posting a message in the forum.
Number of Delegates to Receive Incentives
We will maintain this parameter at 50 delegates.
Incentive Budget
Budget Allocation: MAX 4,200,000 USD (up to USD 7,000 per delegate per month).
Making the incentives more predictable
In the initial iteration of the DIP, costs were denominated solely in ARB, which led to challenges due to the token’s volatility.
When the program launched in March, the token was valued at approximately USD 1.70, allowing a delegate with 100% Total Participation (TP) to earn around USD 8,500 per month. However, now, in the fifth month of the program, the token is valued at USD 0.55, reducing a delegate’s maximum monthly compensation to about USD 2,750—a decrease of over 60% for the same amount of work. This significant reduction could disincentivize delegate participation.
The same issue applies to operational, development, and maintenance costs, which have become increasingly misaligned.
To avoid this situation, we propose that payments be denominated in USD and made in ARB tokens. Again, the delegates’ incentives should be transparent and attractive without being subject to manipulation or arbitrary changes in their amount.
If we want to professionalize DAO operations as much as possible and stay aligned, we should aim for delegates to have a certain seniority and dedication to bringing value to Arbitrum DAO. Thus, delegates who meet the requirements at the end of the month and achieve a TP of at least 65% will be eligible to receive up to USD 7,000 in ARB tokens as compensation.
Now, as program administrators, our goal is to create the necessary incentives to elevate the overall quality of contributions. During the first iteration of the DIP, we observed that the compensation for top delegates did not significantly differ from those with lower Total Participation (TP) scores. To address this, we propose the introduction of three compensation tiers based on the Total Participation Rate achieved by each delegate.
- Tier 3: TP ≥ 65% and < 70%. Compensation range: $3,000 to $3,250. ARB cap: 8,000.
- Tier 2: TP ≥ 70% and < 85%. Compensation range: $4,200 to $5,100. ARB cap: 12,000.
- Tier 1: TP ≥ 85%. Compensation range: $5,950 to $7,000. ARB cap: 16,500.
This approach makes the program more cost-effective per USD spent because as the quality of contributions increases, more resources are allocated to higher-performing delegates (those in tiers 1 and 2). Conversely, if delegate performance is suboptimal, fewer resources are allocated, with more delegates falling into tiers 2 and 3.
We accounted for the volatility of the ARB token by suggesting an ARB cap for each tier. While delegates’ compensation could still be affected by a drastic price drop, the cap somewhat limits the DAO’s “loss” and thereby protects its interests. It also acts as a mechanism to align delegates with Arbitrum DAO.
Note: ARB Cap for each Tier includes a 30% buffer and will be recalculated on the basis of the ARB price at the time of submitting the proposal for voting in Tally.
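To illustrate the note above, here is a rough sketch of how a tier’s ARB cap relates to its maximum USD compensation, a reference ARB price, and the 30% buffer. This is our own reconstruction, not the administrators’ exact method; the published caps (16,500 / 12,000 / 8,000) appear to include additional rounding on top of these raw values:

```python
# Hypothetical sketch: a tier's raw ARB cap as its maximum USD compensation
# divided by an ARB reference price, plus the 30% buffer. The published caps
# (16,500 / 12,000 / 8,000) look like rounded versions of these values.

def raw_arb_cap(max_usd: float, arb_price: float, buffer: float = 0.30) -> float:
    return max_usd / arb_price * (1 + buffer)

# With ARB at $0.55 (the proposal's base scenario):
for usd in (7000, 5100, 3250):           # Tier 1, 2, 3 maximums
    print(round(raw_arb_cap(usd, 0.55)))  # ≈ 16545, 12055, 7682
```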
We’ll provide a simulation of what the payout would look like in 3 different scenarios:
- 1st scenario (base): ARB at $0.55
- 2nd Scenario: ARB at $0.35
- 3rd Scenario: ARB at $1.00
Note: We will use the Coingecko rate at the time of payment to determine the value of ARB each month.
As seen in the images, an increase in ARB’s price would significantly reduce the spending in ARB, allowing the DAO to benefit from the rise. On the other hand, if the price falls to $0.35, the program’s token expenditure is limited to mitigate the DAO’s potential “loss.” This approach also better aligns the program’s USD spending with the DAO’s new economic and financial reality.
Additionally, the use of tiers in this manner allows for a significant boost in incentives when efforts are increased, acting as a catalyst for delegate activity. The first tier serves as a ‘minimum payment’ for delegates who, for example, fulfill their primary duties: participating in every vote and providing some input in the forum.
Tiers 1 and 2 represent the leap in quality, requiring greater effort to obtain 70-100% of the TP, but offering a higher reward in return. A delegate who provides a high degree of dedication and high quality input can DOUBLE the incentives of the lowest tier.
Payments to delegates are expected to be processed in ARB from the MSS between the 15th and 16th of each month.
Conflict resolution
Dispute
If delegates disagree with the results presented by the Karma Dashboard at the beginning of each month, they have a four-day period to contest them.
To raise a dispute, delegates must post a message in the forum using the following template:
- Title: Dispute
- Username
- Reason for Dispute (provide details)
The DIP administrator will address the issue promptly, with a resolution expected within a maximum of four days.
DIP Ban
The program administrator will have the right to expel a delegate who attempts to game or exploit the program, or who fails to meet any of the aforementioned eligibility requirements. This decision is at the discretion of the program administrator. In all cases, the ban is permanent.
The affected delegate may request a Snapshot vote to ratify, change (for suspension), or revoke the Administrator’s decision. This serves as a one-time appeal, and the decision made by the DAO will be final.
DIP Suspension
The program administrator will have the right to suspend a delegate who commits a fault that, in the administrator’s judgment, is insufficient cause for expulsion. The decision and duration of the suspension are at the discretion of the program administrator (the duration cannot exceed the program’s current iteration).
The affected delegate may request a Snapshot vote to ratify, change, or revoke the Administrator’s decision. This serves as a one-time appeal, and the decision made by the DAO will be final.
Scoring
To determine which delegates will receive monthly payments, we will continue using the dashboard developed by Karma.
Note: The program manager may adjust the compensation parameters, provided they inform the DAO of the reasons for the changes.
New Evaluation System for Delegates’ Feedback
We’ve listened to the concerns raised by some delegates regarding the changes in the Delegates’ Feedback section.
We therefore propose changing the way we collect feedback from a quantitative to a qualitative approach.
Instead of counting comments on proposals that reach Snapshot, we propose implementing a monthly analysis of the feedback provided by delegates, regardless of whether the proposal/discussion has reached Snapshot.
In this way, the Program Administrator would be responsible for creating a rubric that evaluates the value and timeliness of the feedback provided by delegates. The goal of this system is to:
- Incentivize quality over quantity of feedback.
- Extend the analysis across all contributions made by a delegate in the forum (instead of only considering those that reach Snapshot).
- Avoid unnecessary or spam comments made solely to achieve a higher score.
- Allow delegates to focus on contributing to proposals or discussions related to their areas of expertise.
Under this system, a delegate could achieve the same score with (for example) one big significant contribution or by making several smaller contributions. It also discourages actors who might try to take advantage of the program.
Evaluation Approach
This rubric assesses the overall feedback provided by the delegate throughout the month (from day 1 at 00:00 UTC to the last day of the month at 23:59:59 UTC), based on a summary of their participation in various proposals and discussions. The aim is to measure the consistency, quality, and overall impact of their contributions. We expect delegates to comment on and/or provide feedback on proposals and discussions both before and during the voting process. This feedback should aim to foster debate, improve the proposal, or clarify issues not explicitly addressed within it.
We trust the goodwill of the delegates to avoid meaningless/spam comments and ensure that all contributions are sensible.
- Key point: Feedback or opinions that violate community rules will not be considered. Your interactions should contribute constructively to the discussions and the deliberation and improvement of the proposals.
Rubric Specifications
The parameter “Proposal Feedback” will be renamed to “Delegates’ Feedback,” since we are now analyzing the overall feedback provided by the delegate (not just comments on proposals that reach Snapshot). It will maintain a maximum weight of 30%, and the score will be awarded based on the following rubric:
Here is a breakdown of each criterion included in the rubric:
- Relevance: Analyzes whether the delegate’s feedback throughout the month is relevant to the discussion.
- Depth of Analysis: Evaluates the depth of analysis provided by the delegate concerning the proposals or discussions. This serves as a metric of whether the delegate takes the time to thoroughly consider the discussion and demonstrates attention to detail. Key elements include solid arguments, relevant questions, and thorough reasoning.
- Timing: Considers when the delegate provides feedback, rewarding those who provide it earlier, as long as they meet the above criteria. Feedback will be considered as provided before on-chain/off-chain voting if it was published before 00:00 UTC on the day voting starts.
- Clarity and Communication: A review of the clarity, structure, and overall readability of the delegate’s feedback. Clear and well-written feedback is rewarded.
- Impact on Decision-Making: While the proposer ultimately decides whether to incorporate feedback, high-quality feedback from a delegate often influences the final proposal that goes to a vote. This criterion evaluates whether the delegate’s feedback tends to drive changes in proposals/discussions.
- Presence in Discussions: A more quantitative analysis, intended to reflect the effort of delegates who participate in most discussions. This parameter serves as a multiplier on the score obtained across the previous five criteria. Note that the weighting of participation in monthly discussions may not be linear across all the DAO’s discussions; some proposals may carry more weight than others (special cases such as LTIPP/STIP, gaming, treasury, etc.).
Monthly Evaluation Process
1. Data Collection: At the end of the month, the complete set of contributions by each delegate across all discussions on the forum is reviewed.
2. Overall Evaluation: The rubric is used to assess the delegate’s overall performance on each criterion, based on a holistic view of their participation.
3. Score Assignment: A level from 1 to 4 is assigned to each criterion, based on the consistency and quality of the delegate’s contributions over the month; each level carries a corresponding score from 1 to 4.
4. Monthly Report: A qualitative and quantitative report summarizing the delegate’s performance over the month is then produced.
Scoring Methodology
Each rubric criterion has levels with an assigned score, from 1 to 4, depending on the level achieved.
The initial score is obtained by adding the first five criteria, while the final score results from applying the “Presence in Discussions” multiplier to the initial average score. The maximum Initial Score is 20 points, and the maximum Final Score is 30 points.
For illustrative purposes, here’s an example:
- Relevance: Level 3 - Score achieved = 3
- Depth of Analysis: Level 2 - Score achieved = 2
- Timing: Level 4 - Score achieved = 4
- Clarity and Communication: Level 2 - Score achieved = 2
- Impact on Decision-Making: Level 3 - Score achieved = 3
Initial Score/Average: 70% or 14/20 or 2.8/4
- Participation in Discussions: Level 2 - Multiplier assigned: 1.10x
Final Score: 70% x 1.1 = 77% or 23.1/30 Delegates’ Feedback points.
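The worked example above can be reproduced with a short sketch (illustrative only; the function name is our own, and only the Level 2 multiplier of 1.10x from the example is shown):

```python
# Illustrative sketch of the Delegates' Feedback (DF) score from the
# example above. Criterion levels map directly to points (Level n = n points).

def df_score(criteria_levels, presence_multiplier, weight=30):
    """criteria_levels: the five qualitative criterion levels (1-4 each)."""
    initial = sum(criteria_levels) / 20          # max initial score is 20
    return initial * presence_multiplier * weight

# Example from the text: levels 3, 2, 4, 2, 3 and a 1.10x multiplier.
score = df_score([3, 2, 4, 2, 3], 1.10)
print(round(score, 1))  # 23.1 of a maximum 30 DF points
```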
Trade-offs
We are aware that this proposed solution introduces trust assumptions regarding the Program Administrator’s criteria for evaluating feedback. We view this layer of subjectivity as inevitable until we can implement automated tools, such as the AI that Karma is developing, to assess the quality of delegate feedback. It is important to note that, as Program Administrators, after analyzing proposals and feedback for the last six months, we have gained experience that (we believe) will help us correctly identify constructive feedback.
At SEEDGov, we are committed to being as transparent as possible, as we have been thus far. Therefore, the rubric and the monthly report will always be publicly accessible to all interested parties. During this phase, feedback from Arbitrum DAO will also be crucial in helping us refine our evaluation criteria.
Next steps regarding Feedback
During the first iteration of this delegate program, we have been closely monitoring forum activity, particularly during the feedback stage of various proposals. As this phase was a trial, we primarily focused on observing certain dynamics within the forum to better understand them and make adjustments for the next iteration.
One of the issues we identified is the use of AI to comment on proposals. While the use of AI for tasks such as translating text or correcting grammatical errors is understandable, we have seen instances where it has been employed to generate feedback based on other users’ comments. This has not been a widespread or alarming issue but, in the next iteration, we plan to take a more reactive approach.
Another issue we have encountered is regarding users who post “Reserved for later comment.” We want to clarify that we do not recommend or encourage such practices, which should preferably be avoided.
In this iteration, we will try to provide feedback to delegates on this type of situation.
Experimental nature of the new Delegates’ Feedback scoring system
Due to the experimental nature of this new scoring system, it will be in a testing phase, and we, as Administrators, commit to submitting it to the DAO for consultation after three months of running the program.
Scoring Weight Changes
After observing delegate behavior and some internal discussions, we believe it is necessary to adjust specific parameters of the program:
Participation Rate (PR)
- Change: Previously weighted at 20% and based on historical participation rates in Tally. Now, it is reduced to 15%, calculated from the participation rate in on-chain votes over the last 90 days (as calculated by Karma).
- Motivation: The 90-day participation rate is a requirement for delegates to register in the program. Unlike the historical rate, this parameter is more accessible, so we have decided to lower its weight.

Snapshot Voting (SV)
- Change: The weight of this parameter has increased from 15% to 20%.
- Motivation: Snapshot voting is a crucial part of the governance process. It is essential to give it more weight to encourage delegates to vote on Snapshot.

Communicating Rationale (CR)
- Change: The weight of this parameter has decreased from 25% to 10%.
- Motivation: While providing a rationale for votes is important, we consider the feedback period even more crucial. Therefore, we have reduced the weight of Communicating Rationale and increased the weight of Delegates’ Feedback.

Delegates’ Feedback (DF)
- Change: The weight of this parameter has increased from 15% to 30%.
- Motivation: Providing feedback on forum proposals is fundamental. We have raised this parameter’s weight above the others and introduced a rubric to evaluate the feedback qualitatively. These changes are detailed above.

Total Participation (TP)
- Change: The requirement for total participation has increased from +60% to +65%.
- Motivation: After analyzing the program’s results, we observed that some delegates with high historical participation in Tally could meet the +60% requirement simply by voting on 100% of monthly proposals. We have decided to raise the TP requirement for compensation to +65% to encourage more contributions in the forum.

Bonus Point (BP)
- Change: Adding Bonus Points for delegates who attend the “Arbitrum Governance Report Call” (monthly) and the “Open Discussion of Proposal(s) - Bi-weekly Governance Call.”
  - For the monthly call, 2.5% BP will be awarded for attendance.
  - For the bi-weekly calls, 2.5% BP will be awarded for attending each call.
- Motivation: We have received several questions about delegate participation in calls or working groups. While there is a trend toward compensating work within specific WGs (and we want to avoid potential double spending by the DAO), we find it interesting to experiment with awarding Bonus Points to delegates who participate in both the “Arbitrum Governance Report Call” (monthly) and the “Open Discussion of Proposal(s) - Bi-weekly Governance Call”. We will continue to manage this parameter as in the previous program. Although it was a topic of discussion, we have yet to receive complaints about how these points were awarded. We welcome any feedback from the DAO.
Details: Terminology, Symbols, and Formulas
- Activity Weight (%): Represents the weight assigned to each key activity measured for delegates.
- Participation Rate - 90 days (PR90) - Weight 15: Percentage of the member’s participation in on-chain votes over the last 90 days. This parameter will be calculated at the end of each month.
  - PR90% formula: (PR90 * 15) / 100
- Snapshot Voting (SV) - Weight 20: Percentage of delegate participation in Snapshot voting. This parameter is reset at the beginning of each month.
  - Tn: Total number of proposals submitted to Snapshot for voting in the month.
  - Rn: Number of proposals the delegate voted on in the month.
  - SV% formula: (SV(Rn) / SV(Tn)) * 20
- Tally Voting (TV) - Weight 25: Percentage of delegate participation in on-chain voting on Tally. This parameter is reset at the beginning of each month.
  - Tn: Total number of proposals submitted to Tally for voting in the month.
  - Rn: Number of proposals the delegate voted on on-chain in the month.
  - TV% formula: (TV(Rn) / TV(Tn)) * 25
- Communicating Rationale (CR) - Weight 10: Percentage of communication threads in which the delegate justifies their vote on proposals submitted to Snapshot and Tally (a new rationale is not required if the vote does not change). This parameter is reset at the beginning of each month.
  - Tn: Total number of proposals submitted to a vote.
  - Rn: Number of communication rationale threads in which the delegate communicated and justified their decision.
  - CR% formula: (CR(Rn) / CR(Tn)) * 10
- Delegates’ Feedback (DF) - Weight 30: The score given by the program administrator for the feedback provided by the delegate during the month. This new iteration (v1.5) will use a rubric with the scoring system detailed above.
  - DF score formula: (Σ qualitative criteria / 20) * Presence in Discussions multiplier * 30 (DF weight)
- Bonus Point (BP) - Extra +30% TP: This parameter is extra. If the delegate makes a significant contribution to the DAO, they are automatically granted +30% extra TP. This extra is at the discretion of the program administrator.
- Total Participation (TP): Sum of the results of the activities performed by the delegate. A TP% of 100 indicates full participation.
  - TP% formula: PR90% + SV% + TV% + CR% + DF% + BP
- Payment USD (PUSD): The final amount of USD the delegate will receive, based on their TP% and their tier.
  - PUSD formula: IF(TIER=1; TP/100*7000; IF(TIER=2; TP/85*5100; IF(TIER=3; TP/70*3250; 0)))
- Payment ARB (PARB): The final amount of ARB the delegate will receive, based on their PUSD, the ARB price, and the ARB payment cap of the corresponding tier.
  - PARB formula: IF(PUSD/ARB Price > Tier’s ARB Cap; Tier’s ARB Cap; PUSD/ARB Price)
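Putting the tier boundaries, the PUSD formula, and the PARB cap together, here is a minimal sketch of the payout pipeline (our own illustration, not official tooling; function and variable names are hypothetical):

```python
# Sketch of the monthly payout pipeline: TP -> tier -> USD -> capped ARB.
# Mirrors the PUSD and PARB formulas above; not official tooling.

TIERS = [  # (min TP %, TP divisor, USD max, ARB cap)
    (85, 100, 7000, 16_500),  # Tier 1
    (70, 85, 5100, 12_000),   # Tier 2
    (65, 70, 3250, 8_000),    # Tier 3
]

def payment(tp: float, arb_price: float) -> tuple[float, float]:
    """Return (USD owed, ARB paid) for a delegate's Total Participation."""
    for min_tp, divisor, usd_max, arb_cap in TIERS:
        if tp >= min_tp:
            usd = tp / divisor * usd_max
            arb = min(usd / arb_price, arb_cap)   # PARB cap applies here
            return usd, arb
    return 0.0, 0.0                               # below 65% TP: no payment

# A delegate with TP = 90% and ARB at $0.55:
usd, arb = payment(90, 0.55)
print(round(usd), round(arb))  # 6300 11455
```

At $0.35 per ARB, the same Tier 1 delegate would hit the 16,500 ARB cap, which is exactly the “loss-limiting” behavior described in the budget section.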
Note: Here is the framework template for delegates to review the parameters.
Parameter summary
Activity Weight (%):
- Participation Rate 90 (PR90) - Weight 15%
- Snapshot Voting (SV) - Weight 20%
- Tally Voting (TV) - Weight 25%
- Communicating Rationale (CR) - Weight 10%
- Delegates’ Feedback (DF) - Weight 30%
- Bonus Point (BP) - Extra +30%

Total Participation (TP):
- TP = PR% + SV% + TV% + CR% + DF% + BP
Administrative Budget
We have updated the operational costs for administering this new program. This payment will be distributed over 12 months and sent regularly along with the payments to the delegates.
Karma Details:
- $7,250/month * 12 = $87,000 for continuing to build and enhance the dashboard.
SEEDGov Details:
- 2 Program Administrators (Full-time): $157,000 over 12 months.
- 1 Data Analyst (Part-time): $35,000 over 12 months.
Total: $192,000 ($16,000 per month)
Source: Salaries are approximate and based on U.S. standards. We extracted data from this website.
Program growth clause
Although the proposed administrative budget is sufficient in both versions, we understand that a considerable increase in registrations would significantly increase the workload. We will therefore incorporate a clause: if the program exceeds 65 registered delegates (the number we believe we can cover with the requested budget), we will reopen the forum discussion on the budget, also considering the possibility of increasing the number of incentivized delegates.
Budget Summary
- USD 4,200,000 in incentives. Delegates’ compensation is capped at 16,500 ARB per delegate per month (Tier 1), meaning the maximum incentive spending would be 9,900,000 ARB per year (16,500 ARB x 50 delegates x 12 months). This amount represents 0.31% of the DAO’s treasury. The ARB cap already includes a 30% buffer and will be recalculated before Tally.
- USD 87,000 + 30% ARB buffer for dashboard maintenance and upgrades.
- USD 192,000 + 30% ARB buffer for operational costs/program administration.
Total: USD 4,479,000
Here is a summary of the budget, buffers, and final amounts to be requested (see notes):
Budget considerations
- USD costs are fixed, meaning that if the price of ARB increases, the USD costs remain the same. At the end of the program, any remaining ARB tokens from the program will be returned to the treasury.
- Any remaining funds from the experimental incentive program multisig will be sent back to the treasury.*
- Final amounts in ARB will be recalculated based on the ARB price at the time the proposal is submitted for voting on Tally.
- The funds will be sent to the recently approved MSS.
Management and development of the Delegate Incentives Program: Responsibilities and Deliverables
The SEEDGov team and Karma will continue to collaborate to maintain and manage this new version of the DIP.
Program Dashboard Management (Karma)
Over the past six months, our team has successfully built and maintained the DAO’s compensation dashboard. Based on this experience, we anticipate the following work for the upcoming year:
-
Infrastructure Maintenance and Expenses: We will continue to ensure that the dashboard operates smoothly, with real-time data updates for most metrics and daily overall calculations. This includes regular software maintenance, such as updating libraries and other necessary tasks to keep the system secure and efficient.
-
Ongoing Collaboration with SEEDGov: Regular calls with the SEEDGov team over the past six months have been instrumental in maintaining the program’s smooth operation. These meetings have allowed us to address bugs, resolve data discrepancies, and implement enhancements based on administrative needs. We will continue these calls to ensure ongoing improvements in operational efficiency.
-
Compensation Calculation Logic Updates: As outlined in the new proposal, we will implement necessary changes to the compensation calculation logic. This includes introducing a tier system, adjusting weights and metrics, and incorporating a 90-day calculation period.
-
Automation of Voting Statistics: Our current system fully automates voting statistics, streamlining the process for admins to determine compensation. Collecting data on Communication Rationale and Proposal Feedback has historically been time-consuming. A few months ago, we introduced an MVP that automates this process using LLM tools. We plan to continue enhancing this feature to further assist administrators.
-
Verification of Statistics: All statistics need to be verified for accuracy in a timely manner, specifically by the first of each month. We will continue to ensure that this verification process is completed on schedule to maintain the reliability of the data.
Program Manager (SEEDGov)
Responsibilities
In the first iteration of the DIP, we encountered additional tasks that were not anticipated when drafting the proposal, which required extra effort to meet the program’s requirements. Therefore, in this new version, we have updated the responsibilities of the program administrator:
-
Verify the corresponding data to determine delegates’ eligibility.
-
Collaborate with Arbitrum Foundation to ensure delegates complete the KYC/KYB process and perform the necessary follow-up.
-
Constantly monitor delegates’ activity.
-
Support delegates with any questions or concerns related to the incentive program through Telegram, forum, or Discord.
-
Collect feedback from delegates and the community to improve the program.
-
Review delegate comments in the forum and filter out spam messages.
-
Communicate any changes in the incentive program to the delegates.
-
Publish monthly results in the forum.
-
Publish monthly program costs in the forum.
-
Resolve disputes.
-
Determine which delegates receive Bonus Points.
-
Collaborate with MSS to ensure payments to delegates are processed each month.
-
Periodic review of the information uploaded to the Karma dashboard.
-
Have weekly meetings with Karma to fix bugs and enhance the dashboard.
-
Prepare periodic reports.
-
Prepare a rubric and a monthly report about each delegate’s feedback performance.
-
Monitor delegates’ participation in the governance calls that count toward Bonus Points.
Deliverables
We commit to delivering:
-
Monthly results of the DIP.
-
Public cost reports to allow for audits by any interested party.
-
Mid-term and final evaluation reports of the program.
Additionals
-
Hold meetings with delegates to gather feedback on the program and provide them with updates.
-
Constantly work on improvements to the program.
What else can we do?
We are committed to reaching out to each of the delegates who meet the requirements to be part of the program but are not yet enrolled. Our goal is to encourage everyone to join, helping to maintain and increase the diversity of voices in Arbitrum DAO.
KPIs
In this new iteration of the DIP, we aim to establish the following KPIs:
-
Have 50 delegates receiving incentives.
-
Engage 100 delegates in the program.
-
Achieve an average Total Participation (TP) of 80% among participants in the program within six months.
-
Introduce improvements to the program after six months.
Continuous Upgrades
We know that this program still has room for improvement. While this version moves beyond its experimental phase into a more mature one, a delegate incentive program must continuously receive feedback from its participants.
The SEEDGov team and Karma are committed to gathering feedback, obtaining more information, and implementing the necessary changes to optimize performance. Considering the new duration of the program, the Program Administrator reserves the right to make changes in the scoring methodology by giving public notice in the forum.
Note that everything related to DIP 1.0 that has not been mentioned as modifications in this new proposal will remain in effect in v1.5.
Timeline
Snapshot vote: September 19th, 2024 (completed)
The options were put to a Snapshot vote as follows:
-
FOR, DIP V1.5 (chosen option)
-
FOR, DIP V1.1
-
ABSTAIN
-
AGAINST
Tally vote: Starting on Thursday, October 10th, 2024.