Feel free to share your questions or comments about the new iteration of the Delegate Incentive Program in this thread.
All contributions here will serve as a resource for shaping the “Bible” of the DIP v1.5.
Hey, I have a question regarding shielded voting on Snapshot. Should a delegate disclose on the forum what they voted for if the vote is shielded? That kind of defeats the purpose of a secret vote.
I haven’t disclosed it since, as you said, it wouldn’t make sense. But it should still count as a rationale.
Hi guys
As @EzR3aL said, we won’t ask anybody to disclose what they voted for, but we will still ask for the Rationale.
On a separate note: Votes are visible now because the encryption ends at the same time as the voting.
I think Entropy gave an explanation on this.
They felt there was no problem with delegates expressing their opinion on a shielded vote, because it could expose other delegates to new arguments and help them reach a better decision.
As I see it, the purpose of secret voting is to hide the overall result rather than its individual components (the components could be assembled into a whole, but any delegate can still change their vote while voting is open).
I have a question regarding the Delegates Feedback parameter. The Karma dashboard for delegate compensation says that this parameter includes a “Presence in discussion” multiplier. However, given the subjective approach to qualifying the variables, a delegate who comments often has a higher probability of obtaining a lower score. I see some delegates with 4 valid posts getting a score of 19.5 while other delegates who commented in 14 threads obtained only 14.02. Wouldn’t incentivizing delegates to comment less often be detrimental to the spirit of the program, which is supposed to encourage delegate participation?
I would also like to know what considerations go into giving some comments a score of zero instead of just marking them as invalid. If a comment is marked invalid, the score is unaffected; if it is marked with a zero, the average score goes down.
Since the Delegate Feedback parameter carries a weight of 30 points, I believe this is an important point to communicate to delegates so that they can avoid missteps further down the line.
I can only imagine the amount of work to keep this initiative rolling, thanks for your work @SEEDGov, and thanks for your attention!
Cheers.
Hey @Juanrah, thanks for your questions.
As the proposal explains, a delegate can achieve a better score with fewer comments because the Delegate Feedback parameter prioritizes QUALITY over QUANTITY.
The purpose of the rubric is to encourage delegates to provide feedback only when they have something valuable to contribute, rather than trying to game the program. If a delegate attempts to increase their “Presence in discussions” multiplier with low-value comments, their score will likely be negatively impacted.
On the other hand, if a delegate offers meaningful contributions, their score will reflect that, and the multiplier will reward them accordingly. This could position them above another delegate who provides equally good feedback but participates in fewer discussions.
It’s important to note that this is the first month of this system. There are still details to refine, and we’ve already started working on improvements for this specific parameter. We’re committed to iterating on the program to perfect the framework as much as possible.
Comments marked as invalid could be due to:
Comments marked as valid but scored with zero are those that the program administrator deems irrelevant to the discussion. As you pointed out, this negatively impacts the scoring. The goal is to discourage spammy, repetitive, or shallow comments—such as those generated using AI tools.
To this end, we remind delegates that while a comment may have good “timing” and “clarity,” the merit of feedback lies in its relevance and reasoning. To improve scores, we recommend:
We completely agree with your point. All participants in the program have the opportunity to inquire about the scoring criteria and receive guidance on areas for improvement.
That said, we’re currently working on the DIP “Bible” and expect it to be ready later this month. This document will consolidate the best practices expected from delegates and include all relevant information about the DIP—both from version 1.0 and the updates in 1.5.