[DIP v1.5] Delegate Incentive Program Questions and Feedback

Hey @Juanrah, thanks for your questions.

Regarding the Delegate Feedback parameter:

As the proposal explains, a delegate can achieve a better score with fewer comments because the Delegate Feedback parameter prioritizes QUALITY over QUANTITY.

The purpose of the rubric is to encourage delegates to provide feedback only when they have something valuable to contribute, rather than trying to game the program. If a delegate attempts to increase their “Presence in discussions” multiplier with low-value comments, their score will likely be negatively impacted.

On the other hand, if a delegate offers meaningful contributions, their score will reflect that, and the multiplier will reward them accordingly. This could position them above another delegate who provides equally good feedback but participates in fewer discussions.

It’s important to note that this is the first month of this system. There are still details to refine, and we’ve already started working on improvements for this specific parameter. We’re committed to iterating on the program to perfect the framework as much as possible.

Regarding comments scored with zero vs. marked as invalid:

Comments marked as invalid could be due to:

  • Being identified as a rationale, thus considered under the Communication Rationale parameter.
  • Being a comment in a Delegate Thread, which also qualifies as a rationale.
  • Being posted in a thread not included in the analysis.
  • Being merged into a single rubric when multiple responses were posted by the same author within the same discussion.

Comments marked as valid but scored with zero are those the program administrator deems irrelevant to the discussion. As you pointed out, this negatively impacts the scoring. The goal is to discourage spammy, repetitive, or shallow comments—such as those generated with AI tools.

To this end, we remind delegates that while a comment may have good “timing” and “clarity,” the merit of feedback lies in its relevance and reasoning. To improve scores, we recommend:

  1. Take the necessary time to provide feedback. An extra one- or two-day delay won’t heavily impact your timing score.
  2. Thoroughly review the discussions and follow subsequent feedback. Paying attention to details and fully understanding the proposal will help ensure your comment is relevant and avoids repetition.
  3. Focus on proposals where you can genuinely add value. This is crucial. The changes to the DIP aim to avoid “commenting for the sake of commenting.” Rather than discouraging participation, the goal is to promote thoughtful and organic involvement.
  4. Avoid using ChatGPT to “reason.” While we have no issue with its use for grammar corrections or translations, using it to generate feedback is discouraged.

On the weight of the Delegate Feedback parameter:

We completely agree with your point. All participants in the program have the opportunity to inquire about the scoring criteria and receive guidance on areas for improvement.

That said, we’re currently working on the DIP “Bible” and expect it to be ready later this month. This document will consolidate the best practices expected from delegates and include all relevant information about the DIP—both from version 1.0 and the updates in 1.5.
