Answering the concerns from this week's call:
Concerns Raised
- System could be gamed (e.g., sockpuppet accounts flooding the forum)
- DAO often prioritizes feedback from high-voting-power delegates, not just volume of comments
- How to ensure quality over quantity in comment impact?
Over the past year, I’ve become fascinated with the idea of the wisdom of crowds. Digging deeper, I settled on Daniel Kahneman’s work on noise and Thomas Malone’s concept of superminds as the foundation of my thinking.
While James Surowiecki outlines specific conditions for the wisdom of crowds (diversity, independence, decentralization, and aggregation), I discovered that Malone’s supermind concept offers more flexibility. The common ground between these frameworks is unbiased aggregation - finding ways to combine inputs that preserve signal while filtering out noise.
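To make that statistical claim concrete, here is a minimal sketch (my illustration, not anything from Surowiecki or Malone) of why unbiased aggregation filters noise: independent, individually noisy guesses about a quantity average out toward the truth as the group grows.

```python
import random

random.seed(7)
TRUE_COUNT = 1_000  # made-up jellybean count for the demo

def guess() -> float:
    # Each guesser is individually noisy but unbiased on average.
    return TRUE_COUNT * random.uniform(0.5, 1.5)

for n in (1, 10, 100, 10_000):
    mean_guess = sum(guess() for _ in range(n)) / n
    print(f"n = {n:>6}: mean guess ≈ {mean_guess:,.0f}")
# The group mean tightens around 1,000 as n grows. Any weighting that
# correlates with individual error rather than accuracy re-introduces
# bias and breaks this convergence.
```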
This intellectual journey led me to develop SimScore, an API that provides statistical consensus analysis specifically for written feedback. SimScore is entirely different from voting systems like Snapshot or Tally. Instead, it transforms unstructured text discussions into quantifiable insights using consensus point calculations and similarity scoring, creating a systematic approach to finding consensus in forum comments and discourse.
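To make the mechanics concrete, here is a minimal sketch of that kind of analysis. The TF-IDF embedding and centroid-based consensus point below are illustrative assumptions of mine, not SimScore’s actual implementation:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical forum comments; three of the four push the same suggestion.
comments = [
    "Lower the proposal quorum so smaller delegates can participate.",
    "Quorum should be reduced; participation from small delegates matters.",
    "Spend treasury funds on a marketing campaign instead.",
    "Reducing quorum would help small delegates get heard.",
]

vectors = TfidfVectorizer().fit_transform(comments)    # one vector per comment
centroid = np.asarray(vectors.mean(axis=0))            # the "consensus point"
scores = cosine_similarity(vectors, centroid).ravel()  # similarity to consensus

for score, comment in sorted(zip(scores, comments), reverse=True):
    print(f"{score:.2f}  {comment}")
# Comments nearest the centroid surface as the consensus view,
# independent of who posted them or how many tokens they hold.
```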
As we were developing SimScore, I attended a blockchain event in Toronto in late summer of 2024.
I shared SimScore with Taco, one of the event’s attendees. When I explained SimScore as wisdom-of-crowds technology for written judgments, Taco immediately got it - “Yeah, counting jellybeans in a jar.” But then he added something crucial: “Then we’ll f*%k it up with token weights.”
I just shrugged in response. At the time, ETH was around $2,773 and ARB was about $0.60, and the market seemed to be doing fine. That’s the way of crypto and blockchain. Token-weighted governance shouldn’t work according to wisdom-of-crowds theory, but at the time, I couldn’t argue with success.
Later in development, about three months ago, I demonstrated the SimScore PoC to Brick from Entropy. When he responded “That’s what we do,” I assumed they had some systematic, if opaque, approach to analyzing community feedback.
On last week’s Arbitrum weekly call, Matt revealed a different reality. The meeting minutes captured a key concern: “DAO often prioritizes feedback from high-voting-power delegates, not just volume of comments.” This means that Entropy and others use a subjective process for incorporating written feedback into proposal development, giving special weight to comments from those with more tokens or power.
While I had assumed from Brick’s response that they were using some kind of systematic analysis of written feedback, Matt’s explanation revealed that you’re actually using intuition and subjective judgment to aggregate and synthesize community comments - precisely the approach that introduces the noise Kahneman warns against. This variability in judgment undermines the quality of proposal development at a critical time for the ecosystem.
Matt also raised concerns about gaming the system through multiple forum posts, but this misses the point: your current intuition-based approach to processing written feedback is already failing to capture valuable insights from the broader community, because it systematically discounts contributions from members without large token holdings.
SimScore wouldn’t make this situation worse - it would give you a more transparent, systematic way to analyze written feedback from forum discussions. Even on the current forum comments (which nobody claims are being massively gamed), SimScore would deliver better insight into which suggestions genuinely represent community consensus than the subjective methods you currently employ.
Let me be direct: Without systematic, rigorous approaches to analyzing written feedback during proposal development, both Arbitrum and Ethereum will continue to create suboptimal proposals that lead to poor decisions.