Klaus Brave
To begin with, we would like to clarify that the mere act of producing a podcast about Arbitrum does not automatically qualify for Bonus Points; rather, the impact it generates is the determining factor.
To expand on the criteria, there are three key items to consider when analyzing these types of contributions:
- Does it feature a key stakeholder?
- Did it achieve strong engagement metrics on social media?
- Would you say it had a tangible impact on the DAO by influencing other delegates’ decisions?
In this particular case, we believe both podcasts featured relevant stakeholders for the DAO, but we are unable to observe metrics that support strong engagement.
This leaves us with the final question:
Would you say it had a tangible impact on the DAO by influencing other delegates’ decisions?
Since this is a somewhat tricky question, any impact should be clearly documented.
Regarding the first podcast, we have no elements to support such a claim.
Regarding the second podcast, the only potential evidence we have is the following statement:
First, we would like to clarify that Paulo’s comment was published on May 10th, which places it outside the evaluation window for the month under review.
Additionally, this statement raises a few questions:
- @PauloFonseca, are you in a position to confirm that this podcast had a significant influence on the depth and clarity of your later comment?
- Finally, @KlausBrave, could you confirm that you will not request retroactive compensation from the DAO or any DAO-funded program for the production of these podcasts?
We want to emphasize that, considering the monthly nature of the DIP and the fact that we haven’t yet had the opportunity to thoroughly evaluate Paulo’s comment, any determination regarding the impact of this particular podcast will be made during the May review.
Thank you for pointing that out! It has been corrected.
EzR3al
While @Ministrodolar has already provided some insights into the assessment of your comments, we’d like to clarify that we, as PMs, want to avoid penalizing delegates based on technicalities or rigid structures.
We understand that the subjective nature of this parameter can be confusing, particularly in edge cases like this, where many delegates are clustered around the 60–65 point range. Part of this work unfortunately involves making the tough call of leaving some contributors without compensation in a given month.
As outlined in the report, we now observe two differentiated categories of delegates/contributors participating in the program:
- Those with a significant amount of voting power, who may also contribute to the DAO’s daily operations beyond voting. These individuals are key to maintaining governability by helping proposals reach quorum.
- Those with lower voting power, whose voting activity contributes less to reaching quorum, and for whom this program focuses more on rewarding their contributions to discussions rather than voting itself.
In that regard, there are cases where the sum of a delegate’s voting power and their contributions throughout the month does not generate sufficient impact to justify compensation under this program.
That said, this does not mean that “small” delegates have no chance of earning incentives. In fact, the data shows the opposite: 17 out of 30 compensated delegates this month had less than 1M ARB in Voting Power.
For those who haven’t been able to meet the 65-point threshold recently, we’ve provided various suggestions for improvement. From our perspective, the small delegates who are succeeding should serve as benchmarks for both the quality and time commitment required. That’s why we encourage anyone below the threshold to double down on their efforts and look to those successful cases for guidance.
Finally, since it’s related to this dispute, we’d like to address Paulo’s question:
Indeed, the criteria for classifying a comment as valid or invalid follow a subjective and strategic approach. In general, we try to only consider the best contributions from each delegate to avoid situations like those raised by cp0x and Tane in our thread [DIP v1.5] Delegate Incentive Program Questions and Feedback. This criterion is applied uniformly to all delegates.
Tane
While the comment may meet the requirements to be considered valid, there are a number of reasons we took into account for not including it in the assessment:
- The first suggestion, focused on an optimistic governance system similar to Lido’s Easy Track, is relevant and adds value to the discussion, but we had already assigned a score for a comment that included this same suggestion in the thread [Non-Constitutional] Service Provider Utilisation Framework.
- Another factor to consider is that, due to the nature of the discussion and the low level of controversy surrounding the proposal, it is difficult to have any real impact on it. We understand that the suggestions go beyond the discussion itself, but it’s important to note that very few delegates received scoring in this thread.
- Lastly, if there were a factor that we believe would justify considering the comment as valid, it would be the second suggestion related to the cap implemented in the DIP. As we’ve mentioned before, including the first suggestion in the assessment would (from our perspective) mean rewarding the same suggestion twice.
- That said, we should also consider that if we had to score this comment based on what was stated above, we believe it would receive a lower score than the comments already included. This would result in a disadvantage rather than a benefit for Tane, in line with what you yourselves expressed in this comment.
While from our perspective this comment represents a due diligence task involving a series of questions to better understand the proposal, we generally find it difficult to quantify the value it brings to the discussion. We’ll go point by point to explain our position:
- First of all, we don’t fully understand the claim about potential conflicts of interest. It might make sense if the committee members were part of specific protocols, but in this case, we’re talking about three “Arbitrum Aligned Entities” with no specific affiliation to any ecosystem protocol. In this sense, it might have been helpful to elaborate further on this argument.
- As for the subsequent questions, we believe that public reporting is already implied in the inclusion of the Evaluation Partner role.
From this excerpt, two things are clear:
- The Evaluation Partner must deliver periodic reports
- The Evaluation Partner must develop a public dashboard tracking all relevant metrics
- In this particular case, other delegates had already asked about the metrics used to determine the program’s performance. (In fact, the comment right before Tane’s includes a question along those lines.)
- As we’ve mentioned above, the proposal already included the creation of a public dashboard by the Evaluation Partner. In this case, we suggest reviewing the proposal details in depth before making comments.
- We acknowledge that this is an interesting question. However, given that the committee will have a degree of discretion, it’s reasonable to expect that protocols that fail to fulfill their co-marketing obligations will no longer receive incentives. That said, considering the rest of the comment, we don’t believe this is enough to assign a score (especially not one that wouldn’t negatively affect the rest of the assessment).
We appreciate the detailed justification provided in this particular case. We’ll go through it point by point once again:
- After a second review, we believe the dispute regarding Relevance is valid, and as such, we have updated that score to 7.
- Regarding Timing, it’s not only about the number of days or the phase of the discussion (RFC, Snapshot, or Tally), but also when the contribution is made relative to the level of engagement the proposal/topic has already received. In this case, Tane’s comment is the 30th out of 34, which—while not inherently negative—is an indicator that the contribution comes “relatively” late in a discussion where several other delegates have already provided substantial feedback. That’s why some of the alternatives mentioned by Tane in this comment—such as strategic delegation of DAO Treasury ARB to active delegates, ARB staking, and direct interventions on platforms like LobbyFi (dialogue, control, or bans)—had already been discussed to varying degrees.
This doesn’t mean we didn’t take into account the pros and cons analysis they included, but the relative value of that input is lower compared to the suggestion related to NEAR.
Once again, we appreciate the feedback you’ve shared with us. We’ll now proceed to analyze the final suggestions.

Assessment of “Impact”: A more detailed understanding of the specific factors, evidence, and weighting considered when evaluating a comment’s “Impact on Decision-Making.”
This is a somewhat tricky factor, as impact per se is difficult to quantify, although there are a number of elements we’ve been taking into consideration:
- Impact on other delegates and their decision-making (e.g., being quoted by others).
- Impact on the proposer and whether the suggestions led to changes in the proposal. (This also depends on the proposer, although we’ve noticed that well-reasoned suggestions with strong arguments tend to be taken into account.)
- Timeliness of the feedback (related to the overall timing).
- Expertise/background of the person providing the feedback. (If the person is an expert in the subject being discussed, their input will clearly carry more weight.)
- Voting Power of the delegate (it’s well known that a delegate with higher VP is more likely to influence a proposal through their suggestions, as the proposer needs their support).
We may be overlooking something, so we welcome suggestions to expand or improve this criterion.

Calibration of subjective scores: Enhanced insight into how “Timing” and “Clarity & Communication” scores are precisely adjusted relative to “Relevance,” “Depth of Analysis,” and “Impact,” as alluded to in the DIP v1.6 FAQ (p.13). Understanding this interplay more deeply would help delegates strategize their communication.
In this case, we are open to suggestions from the community. Our perspective on how to adjust Timing and Clarity & Communication so that they do not become gameable factors in the assessment can be found in the Bible 1.6, and we have also explained it previously here:
As you can observe in the notes included in the individual reports of each delegate, the scores for the parameters Timing and Clarity & Communication are adjusted relative to Relevance, Depth of Analysis, and Impact on Decision-Making. This means that it is unlikely for the Timing score to be higher than the Relevance score, for example.
This is justified by the fact that a delegate may be the first to comment, but, if he/she provides low or medium-quality feedback, it makes no sense to assign the maximum Timing score to a comment that contributed little to the discussion. Timing is valuable when the comment is insightful or has a significant impact. The same applies to Clarity & Communication.
In summary, a comment may have the same or a lower score in these two parameters compared to the other criteria in the rubric. This will be better reflected in the February results when the Scoring system transitions to a 0 to 10 scale.
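To illustrate the relationship described above, the adjustment can be thought of as capping the two presentation-oriented scores by the best substance score. The sketch below is illustrative only: the actual DIP rubric is applied manually and case by case, and the exact cap rule (here, the maximum of the three substance scores) is our simplifying assumption, not the program’s formula.

```python
# Illustrative sketch only: the DIP rubric is applied manually, case by case.
# This models the stated relationship: Timing and Clarity & Communication are
# unlikely to score higher than Relevance, Depth of Analysis, or Impact.

def adjusted_presentation_score(raw_score: int,
                                relevance: int,
                                depth: int,
                                impact: int) -> int:
    """Cap a Timing or Clarity & Communication score (0-10 scale) by the
    highest of the three substance scores."""
    substance_ceiling = max(relevance, depth, impact)
    return min(raw_score, substance_ceiling)

# Example: a first comment (raw Timing 10) with low/medium-quality substance
# does not keep the maximum Timing score.
print(adjusted_presentation_score(10, relevance=5, depth=4, impact=3))  # 5
```

In other words, being first to comment only translates into a high Timing score when the comment itself is substantive.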

Benchmarks for “Depth of Analysis”: Clearer benchmarks, or perhaps illustrative examples, for varying levels of “Depth of Analysis.” This would provide a more tangible guide for delegates on how to achieve higher ratings by demonstrating sophisticated understanding and reasoning.
Good suggestion! Perhaps we could add some examples to the Bible. For now, we suggest using the comments with the highest score in this parameter as a benchmark.
We hope this response has addressed your dispute. Best regards!
Ignas
Although the current price of ARB is 0.3, this is quite a large budget for the DAO, especially considering there is no allocation directly aimed at incentivizing users. Users may not see a direct correlation between participating in the program and the benefits derived from the ARB token.
In this paragraph, we do not understand the statement “there is no allocation directly aimed at incentivizing users.” The very purpose of an incentive program is precisely to make the use of certain primitives more attractive to users. This is the point where we usually ask for a deeper analysis so that your argument carries more weight.
But there’s a possible solution to prevent yield selling: How about using part of the budget to incentivize ARB token itself, thus increasing both demand for the token and encouraging long term commitment to the ecosystem?
This suggestion had already been made previously.
- Partnering with other protocols to offer additional rewards for users:
As mentioned, 80M ARB could put pressure on the DAO, I suggest considering collab with other protocols so that users can receive additional rewards or DAO can share budget from these partners.
Uniswap is already using this approach to provide a more attractive incentive structure for users and encourage cross protocol collaboration, check their vote here: BoB Uniswap v3 Incentives Package
We acknowledge that the suggestion is not bad; in fact, there have been several cases of protocols matching incentives (such as Curve). However, the large number of precedents regarding this suggestion indicates that it is not particularly novel.
In fact, this response from Entropy in the thread suggests that this possibility has already been considered (although not specifically as a way to reduce costs). It is worth clarifying that, so far, we have not found a reasonable and well-argued justification—neither in this comment nor elsewhere—explaining why the costs are high:
For example, if 3 DEXs are willing to match incentives for a specific program, but 1 is not and gets excluded, we don’t want them to have the ability to pressure the community, who may not have the full picture on why they were excluded. Bringing the drama to the forum may do more harm than good for all involved. Having said that, the committee will be in close contact with all relevant protocols to ensure adequate communication.

2. Comment on this proposal: A Vision for the Future of Arbitrum
While my forum comment was brief, my team extended the discussion by publishing a well researched thread on X to help spread the new Arbitrum vision to a broader audience, including users who may not actively follow the forum: Pink Brains Post.
I believe this type of off platform contribution warrants recognition through bonus points.
For the moment, this type of contribution (X threads) is out of the scope of the program.

3. Comment on the discussion: Vote Buying Services
I don’t believe my comment deserved such a low score. My intent was to shift the discussion toward a more structural and long term solution: redesigning tokenomics to realign incentives for ARB holders and reduce reliance on lobbying mechanisms.
I suggested exploring staking mechanisms or veTokenomics via Curve as potential models, not as a detailed implementation, but as a strategic direction to broaden the conversation.
Staking was mentioned by six other delegates prior to this comment.
Regarding the implementation of mechanisms like veTokenomics, while the suggestion is “novel” in the context of the discussion (i.e., no one else had mentioned it as an alternative), it lacks a solid rationale explaining why it represents a good option for the DAO. Therefore, the currently assigned scoring already reflects that it is merely a suggestion in the style of a “strategic direction,” since without a strong foundation or deep analysis of its benefits, the potential impact is limited.
cp0x

- [ Arbitrum Obrit ] - In-Chain SQL Database for Arbitrum Orbit - #13 by cp0x
I think the rating of the impact 1 is incorrect, as well as the depth of analysis of 3
- to analyze this proposal, I went through several similar tasks from other chains, where similar ideas were expressed over the past few years. Taking into account the results of the implementation of the DB in the chain, I expressed the opinion that it is possible to implement and everything will be fine for the developer, but it will be very expensive for the user due to very expensive Select queries
- taking into account this analysis, as well as the need to work with this database only within the Arbitrum and Orbit, I concluded that such a solution would not be very correct
Considering the depth of the analysis and conclusions, I believe that my rating of the 1st chain should be significantly higher.
We understand this point. In fact, the reason we assigned scoring to this comment is because we believe you raised a valid point. However, to achieve a higher score in the depth of analysis parameter, we think you should have included all the research you mention directly in the comment. The fact that we are learning about this investigation through your dispute suggests that the comment could have been more thorough and revealing than it was—and therefore could have scored better across most parameters, including impact.

- [DIP v1.5]Delegate Incentive Program Questions and Feedback - #35 by cp0x
3. I think this comment should be taken into account and given a high score
- I did an analysis for different delegates to make sure that the comment points are taken into account on the arithmetic mean
With this in mind, I proposed a model of a situation in which this would have a bad effect not only on a specific delegate, but also on the overall system: after all, if you are punished for a bad comment, then everyone will be afraid to write one.
- Also, I not only did the analysis, but also proposed a solution to this problem to avoid the human factor
- Tane also agreed with me, who mentioned me in his comment about calculating points
- Despite the answer that such a system suits SEEDGov, they still did a good analysis of various solutions, including mine.
In this regard, I think my comment should be highly appreciated and taken into account.
And I still think that my proposal was better than what is currently used, because now the selection of worthwhile comments is done by a person, and the human factor should be reduced
We remind you that, as an internal policy, any contribution related to the DIP is not considered in the assessment to avoid conflicts of interest. This is clearly stated in the Bible.

- [Non-Constitutional] Service Provider Utilisation Framework - #14 by cp0x
4. I also think this comment should be rated highly
- I analyzed the proposal and compared it with the previous one
- I pointed out the discrepancy between the requested funds for the project of 100,000 and the levels that exceed 100,000
- I also proposed my vision of audit levels, in accordance with other DAOs, as well as relying on DAO grants, where QuestBook is already operating in a similar way
This way we could save time for small projects that do not require a lot of funds for audits, but the analysis of these projects will take up the time of all the council experts
Honestly, it’s difficult to understand your point here, as the proposed framework does not refer at any point to the Arbitrum Audit Program but simply establishes an example scenario where the DAO requires an audit of a recently developed cross-chain messaging infrastructure component:
Scenario: Critical Infrastructure Audit Requirement
The ArbitrumDAO has contracted a third-party development team to build a critical infrastructure component for cross-chain messaging. The component is nearing completion, and a comprehensive security audit is required before deployment. This audit is purely for the DAO’s infrastructure (not a project seeking funding), and timely implementation is crucial for maintaining ecosystem security and reliability.
We think that the comparison made lacks relevance since the fund allocation mechanism is different. The proposed framework doesn’t seem designed to allocate large amounts of funds but to provide small amounts in specific proposals that do not justify undergoing the full governance process.

- DeFi Renaissance Incentive Program (DRIP) - #7 by cp0x
I believe that my comment should be highly appreciated.
I have conducted a large analysis of this program, I will not additionally indicate all 7 points that I touched upon in my comment, however, this is one of my most important comments, where I analyzed both the proposal itself and previous Arbitrum grants, offered my ideas based on projects that showed positive results in past grants
I also answered questions from other delegates, which can be considered a continuation of my first comment, because the cost seemed overpriced to others. I indicated and compared it with other grants and showed that this is significantly less and perhaps it is worth increasing the amount for incentives
Well, in this case we will go point by point of the comment:
First, I want to support the initiative, where the goals of the Arbitrum are put first, not the protocols. This time, the Arbitrum will voice the goals, and not the protocols themselves, saying what they need. This is good, however, why are you sure that in this case there will not be the same result as last time? Why in the same example about wstETH all the borrowing will not go to another chain when the season ends?
This question is fine, although it has been asked previously by other delegates.
- In this program, vesting of rewards was not announced in any way, which was used by some protocols in the past grant program and which showed the best result. In order not to reduce the cost of ARB once again due to high costs, it seems to me that vesting should be included in the distribution of tokens.
Here, while we understand your suggestion, you have not provided sufficient evidence that vesting is an alternative that guarantees long-term capital retention. As we expressed to DonPepe in his report, the question remains: what prevents users from leaving once the vesting period has ended?
- 20 million ARB per month (and at the moment it is about $5.7 million, and I do not see any prerequisites for a significant change in the price in the near future) is a small amount to be distributed over 3 months. It is necessary to conduct the first season and possibly increase this figure.
Again, there is not enough evidence to support this assertion. Comparing it to LTIPP does not guarantee that the total amount allocated was appropriate nor that funds were distributed efficiently.
Additionally, assumptions are made about the token’s price action (which has increased by 30% since this comment).
- There are still questions about the distribution of ARB. How often should a partner conduct such an analysis and how often, accordingly, will the funds be distributed? Previously, in previous programs, partners themselves distributed ARB automatically, but how it will be implemented here is not entirely clear.
It is a valid question although somewhat difficult to answer without knowing the Distribution Partner.
Will there be information only on the partners’ website about the season, or will there be a separate website or a section on the official Arbitrum website about the progress of the seasons and their results?
It is already specified beforehand that the Evaluation Partner will provide a public dashboard to track the progress and results of the seasons:
Evaluation Partner
Each incentive program requires ongoing monitoring and analysis to assess its impact and guide continuous improvements. An independent evaluation partner is responsible for:
- Providing Continuous Public Data: The partner hosts a publicly accessible data dashboard that tracks relevant metrics throughout the program, such as DEX volumes, total incentives distributed, user participation rates, and more.
- Program Assessment & Recommendations: The partner periodically reviews program performance, compiling findings into reports, and recommends changes to, e.g., return levels of incentivised actions and eligibility criteria during each season. After each season, they additionally provide recommendations on how the program could be improved. Analysis should include retention metrics in the following 2-3 months as well.
- Nowhere are any specific percentages announced, how much the management of this program can spend on operational activities. It seems to me that this needs to be added so that everyone understands that in fact we will not have 20 million per month, but a maximum of 18 (as an example).
In this case the proposer replied that it would not be beneficial to establish a limit.
I don’t think it’s a good idea to be able to stop or cancel the season - it will have a bad effect on the reputation of the Arbitrum. There will be a committee that will meticulously develop the new season, and if it is not sure about something, then it’s probably worth thinking about changing the committee to other specialists. What I mean is that it’s better to prepare in advance than to lose our reputation later. And also, it’s probably worth publishing these discussions in some preliminary results on the forum, so that the community can adjust the season to avoid problems in the future.
This is your opinion and we respect it, but from our perspective, it is not a rewardable action.
In summary, of the 7 items: two suggestions lack solid justification, one question had already been asked by another delegate, one question was already answered in the proposal content, one is an expression of opinion regarding season cancellations, and one is a valid question about distribution (though still unanswered). Considering all the above, we do not believe this comment had enough impact on the discussion or the outcome of the proposal.

- Also, I participated in all the calls, I was late for the monthly one, but I still stayed there for about 1 hour, which I think is worth considering
Regarding the GRC, we have not been able to find you in the Attendance Report that we usually share in the Monthly Framework. If you have any evidence that contradicts the report, please share it with us through this channel or privately if you prefer.
Zeptimus

Comment on “Builders’ Voices Needed”
Regarding this comment, we would like to emphasize that the reason we do not believe comments in this thread should be incentivized is because the participants are acting as Builders rather than delegates. Incentivizing people to comment in this thread would, in a way, undermine its original purpose.
First - The objectives need concrete metrics. When you say “improve dev experience” - what specific measurements will show success? Without clear numbers, it’s hard to judge actual progress versus random activity.
Second - Consider adding stronger accountability mechanisms. In my experience with governance systems (TEC and HNY communities), privileges should always be revocable based on performance. What happens if objectives aren’t met? The best governance systems have clear consequences built in.
Requests for success metrics (KPIs) and stronger accountability mechanisms are generally standard questions and/or contributions. In this case, beyond pointing out their absence, no deeper analysis is provided. In general, a delegate is expected to offer a more thorough analysis in order to receive compensation.
What would have truly enriched the discussion is if you had provided examples of the types of accountability mechanisms you would apply or metrics that could work within the framework proposed by Gabriel.
The same applies to the third suggestion — of course, we all want to improve the value proposition of the $ARB token, but it is not as simple as just stating it.

I’ve been actively reading every proposal, thoughtfully considering each one, and voting on all of them, and that alone is a significant amount of work. Mindful, informed voting is the core responsibility of a delegate, and I’ve taken that role seriously. It’s disappointing to see that consistent, foundational participation seems to be undervalued
It’s worth mentioning that you have received 48 points for those tasks, which corresponds to what a delegate with your Voting Power should receive considering the framework.

I would also genuinely appreciate clearer, more objective rules. This level of subjectivity creates confusion and frustration, and I imagine it’s just as unpleasant for those administering the program as it is for those trying to meet its expectations.
It is also worth mentioning that the DAO has approved a degree of subjectivity in the assessment because the previous framework did not consider the quality of comments, which increased the potential for gaming the program (and therefore the noise in the forum).
We understand that not getting incentives may be frustrating, but as we stated in the report, delegates with low Voting Power especially need to make an extra effort to justify compensation of (at least) $3,000, since their impact on the DAO’s quorum objectives is considerably low.
In your personal case, to be honest, we do not see that the contributions made during April had sufficient impact to justify compensation from Arbitrum DAO.
Paulo Fonseca

This comment was the one that made clear to the whole DAO that there was a secret delegate meeting in Denver and prompted the answer by Patrick where he confirmed that the new Vision is a result of that meeting. Therefore, I think it was a very valuable comment to do, at that time, and should be considered valid. Establishing a DAO Events Budget for 2025 - #101 by paulofonseca
We understand that what you refer to as a ‘secret meeting’ was not an official ArbitrumDAO event. Beyond the fact that some matters discussed there may have ended up reflected in the Vision, we do not see that the information you brought into the discussion had a significant impact on the DAO.

This comment where I share the podcast recording I did about the new vision A Vision for the Future of Arbitrum - #36 by paulofonseca
This was already clarified in the report; our position remains unchanged.

Any of my 4 comments in this proposal where I added context and clarification about the Arbitrum Delegates Private Telegram chat Proposal: enable the new TogetherCrew functionality: Free* summarizer and Q&A for delegates telegram chat - #3 by paulofonseca
As you rightly mention in that thread, the Telegram chat does not belong to the DAO, and in that same thread, a member of the Arbitrum Foundation can be seen stating that the matter did not warrant going through the usual governance process. Therefore, no comments in this thread have received any scoring.

- This suggestion is valuable and worth scoring I believe ARDC Communication Thread - #21 by paulofonseca
Here subsequent events tell us the opposite:
- Tamara (who participated in this initiative as a contributor) mentions that it wouldn’t be a good idea:
There are 3 months in the ARDC left & it took the team around 1 month to ramp up.
If we bring in a new person @Juanrah will onramp the new person (while doing the job of 2 persons during this period) and then there are just 2 months left of the term.
- Later, Entropy confirms this by consolidating the communication roles into a single one.

I would also like to ask for a rescoring of this comment of mine, since it was the first one in the thread that offered the idea that fighting vote buying services is a fruitless endeavor and explained why. So much so, that after my comment, other delegates pointed out the same argument.
We understand your point; indeed, since the comment was not very deep or detailed, the reason for its inclusion lies precisely in the fact that other delegates used the same argument later on.
That said, comparatively speaking, we believe the score assigned to your comment is consistent when observing the scoring of other comments deemed valid in this thread.

I would also like to ask for this comment of mine to be considered valid retroactively, since it was posted on March 29th and I believe it ended up influencing the election results for the OpCo: OpCo – Oversight and Transparency Committee (OAT) Elections - #5 by paulofonseca
We have no evidence to corroborate such a claim; in fact, the election results indicate that Pedro was not elected. We also do not consider retroactive compensation based on a rationale to be appropriate. In any case, the decision to include Pedro as the fourth member of the OAT was made by the three existing OAT members, who have that authority by mandate.

I would also like this dispute from March to be answered by @SEEDGov: [DIP v1.6] Delegate Incentive Program Results (March 2025) - #20 by paulofonseca
This dispute was addressed in a timely manner. If anything has changed since then (for example, if the holding has been delegated to the null address), please let us know.

Denys from @lobbyfi and @zer8 were also present at the booth and helped quite a bit; I think they should get BP as well.
LobbyFi is not eligible for not voting in the Security Council elections, and zer8 is not eligible due to insufficient voting power.

My involvement also included managing the Arbitrum hackathon track in several ways: supporting both ChrisCo and Alex, the hackathon track managers, and also mentoring and advising the hackathon teams. The most important work I did was to convince hackathon teams and individual hackers to participate in Arbitrum bounties, and 9 teams out of 30 in total did exactly that. So I believe the difference in BP between me and L2BEAT should be bigger than 1.5x.
It is worth clarifying that, for your contributions related to ETH Bucharest, you have received a total of 45 points over 3 months, the highest number of points this program has assigned to an individual initiative.
It is also important to note that, without the Bonus Points awarded this month, your compensation would have been $0 instead of $4,243. Together with the points awarded during February and March (which allowed you to reach Tiers 1 and 2), this represents, in our view, sufficient monetary compensation.
Finally, we remind you that this contribution is a direct consequence of an inefficiency within the DAO: a budget and sponsorship were approved for an event without an appropriately compensated Lead to carry it out. In this regard, expecting this program to fully fill that gap and compensate the work at its full real-world value is an unfair expectation.
web3citizenxyz

Low scoring of the Security Council rationale: it was given only a 4/10 on both depth and clarity/communication, even though it provides a thoughtful and clear breakdown of the criteria used to pick our candidates. We strongly believe it should be scored higher.
This rationale received 13.6 points, versus the 10 points typically assigned to all rationales in a month under the previous framework. We agree that it is a good rationale; however, considering its low potential impact and the points mentioned above, we believe the score awarded in this case is consistent.

TMC discussion – This comment was not scored even though it contributes to forwarding the discussion on the TMC and Three Sigma's proposal and suggests improvements to its process. After seeing your report, we disagree: we do offer suggestions with potential impact on the outcome of the proposals, and raising questions is valid when looking to contribute to discussions around a subject. Addressing your feedback here: some questions had been raised before, in September and February; however, given that they did not make it into the final process, the questions could be raised again.
As we stated in the report, the main suggestion revolves around a point that has been mentioned on multiple occasions and which, from our perspective, does not add value to the preceding comment nor to the discussion at hand. We maintain our position on this matter.

Vote-buying services – This comment is valuable and worth scoring, in our opinion. Once again, it provides a clear assessment and contributes our stance to the discussion.
This is another comment for which we have justified the exclusion. We observe an analysis of whether it is possible to mitigate the effects of LobbyFi (an issue that, as you rightly point out, has already been analyzed by several delegates) and, finally, some questions; however, these questions did not lead to further engagement from other delegates.
Curia

Although this feedback was counted in the program, we believe we should have received a higher score because we suggested additional rubrics that other DAOs use in their incentive programs and advised DRIP to adopt a similar approach to make it more robust and transparent.
We also highlighted the accountability gap and offered the following example solution, which we believe illustrates how incentives should be distributed to the selected projects.
We will provide a rationale for the score assigned in this case:
- Concerns regarding long-term sustainability or the budget size had not only been widely mentioned already, but also lacked a solid argument.
- Regarding the large number of requested details, the proposer made it clear that a rigid program could harm the ultimate goal.
- As for accountability, it is worth noting that the proposer had already arranged for the creation of an Evaluation Partner; furthermore, the suggestion, despite being well justified, had no impact on the proposal.
- The shared rubric is the main reason why this comment received a score.
Having said all this, comparatively speaking, we believe the score assigned to your comment is consistent with the scoring of other comments included as valid in this thread.

This feedback wasn’t counted in the DIP, even though we believe it should have been because we clearly outlined how the proposal could drive impact and offered actionable guidance. Though brief, our suggestion provided a solid contribution to strengthening the proposal.
We do not see any elements to support the claim that "there was a suggestion that strengthened the proposal," since no changes were made to the proposal based on this comment.
Paulo Fonseca

this rationale explaining this decision links to a private URL that cannot be viewed publicly:
Thank you for letting us know; the link was incorrect. It has now been fixed. It was supposed to be the link to the JoJo report, which has always been public.
That said, since this is not the first time you have disputed another delegate's scoring, we want to make it clear that we will not allow disputes to become a perverse mechanism whereby delegates target each other through the DIP (we are not saying this is your case; we simply want to prevent it). For this reason, we will no longer process disputes regarding the scoring of third parties.
Having addressed all disputes, and with more than four days having passed since the report's submission, we hereby consider the dispute period closed.
Thanks everyone for your feedback!