Serious People: Proposed KPIs for Arbitrum Grant Programs (LTIPP)

So the risk is that KPIs will never be able to handle the complexity of the DAO. This has been seriously researched by cyberneticians (see Ashby’s law of requisite variety) and is one of the reasons OKRs attract so much criticism (they’re too mechanistic, which leads to antagonistic behaviour when they’re negotiated). The Beyond Budgeting movement has also contributed a lot here to our understanding of how a small set of indicators can lead to serious problems.

Using the Viable System Model, the idea is that there is a monitoring system (System 3* in that framework for viability) that can both aggregate data from the units being monitored AND go into said units to perform audits. So the key here is that the KPIs are not directly (i.e. automatically) tied to (re)allocation of funding to grant programmes, but rather serve to trigger audits (i.e. to trigger more in-depth reviews and sensemaking).
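To make the distinction concrete, here’s a minimal sketch of what that monitoring layer could look like (purely illustrative; the metric names, expected ranges, and programmes are made up, not proposed standards). The point is that an out-of-range indicator only flags a programme for a deeper human review; nothing reallocates funding automatically.

```python
from dataclasses import dataclass

@dataclass
class MetricReading:
    programme: str
    metric: str                          # e.g. "retention", "liquidity_added"
    value: float
    expected_range: tuple[float, float]  # agreed with the programme beforehand

def audit_triggers(readings: list[MetricReading]) -> list[str]:
    """Return programmes that should get an in-depth review.
    This only flags items for human sensemaking; it does NOT touch funding."""
    flagged = []
    for r in readings:
        low, high = r.expected_range
        if not (low <= r.value <= high):
            flagged.append(f"{r.programme}: {r.metric}={r.value} outside {r.expected_range}")
    return flagged

# Example: one programme drifts on a health metric -> it gets an audit, nothing else happens
readings = [
    MetricReading("Programme A", "retention", 0.42, (0.5, 1.0)),
    MetricReading("Programme B", "liquidity_added", 1.2e6, (1e6, 5e6)),
]
print(audit_triggers(readings))
# -> ['Programme A: retention=0.42 outside (0.5, 1.0)']
```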

So ideally we want a mix of Health Metrics (general indicators of health; we did some research on this already via TogetherCrew), Growth Metrics (e.g. liquidity, new users, etc.), and Strategic Metrics (aligned with the positioning strategy and other strategic initiatives).
This mirrors the usual funding allocation in corporations, which is split across:

  • operations (keeping the business processes running day in and day out),
  • sustainability (things like culture, wellbeing programmes, DEI, etc. that keep you healthy),
  • and strategic initiatives (@AlexLumley is advancing something in this regard. The key here is that a lot of growth is built on foundations laid bit by bit with no direct impact to show for them, so targeting only direct growth metrics leads to short-termism and ultimately to being outcompeted.)
    I’ll comment more on the metrics in the research proposal when it comes to executing on the work.

But I mention it here to suggest that:

  1. the system needs to NOT be connected mechanistically to funding allocation, but needs to have space for more variety management in reviews. This means designing said mechanism upfront and not leaving it as a concern for later (otherwise it tends to never happen and things backfire dramatically).
  2. the classification of which KPIs apply should work as a tagging system rather than rigid categories, as different initiatives will fit differently across buckets (and we don’t want to underallocate to those that do, say, 30-30-30 across three categories vs. one that does only one category of KPIs). A rough sketch of what this could look like follows the example below.

So e.g. a programme can have a specific focus (RnDAO as a grant programme does research fellowships and venture building with a focus on Collab Tech), and then the grantees can have varied impact across KPIs. The fellow we selected to research rewards and compensation won’t generate much liquidity but addresses a strategic consideration about operational excellence, while the fellow researching a protocol for angel investment will likely end up having an impact on liquidity/capital attraction at some point (mid-term) and a short-term impact on the strategic consideration for operational excellence.
As a grant programme, I want to show the aggregate impact of grantees by picking KPIs bottom-up (and potentially adding my own data when the standard ones are not fit for purpose); then, as a programme, we can argue why that was a good choice and the community/delegates agree or not to give us more $$.
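A minimal sketch of how that tagging plus bottom-up rollup could look in practice (the tag names, weights, and grantee descriptions here are illustrative assumptions only, not proposed standards):

```python
from collections import defaultdict

# Hypothetical grantee-level reporting: each grantee tags whichever KPIs apply,
# instead of being forced into a single Health / Growth / Strategic bucket.
grantees = [
    {"name": "Rewards & compensation fellow",
     "kpis": {"strategic:operational_excellence": 1.0}},
    {"name": "Angel investment protocol fellow",
     "kpis": {"strategic:operational_excellence": 0.6,
              "growth:capital_attraction": 0.4}},  # impact expected mid-term
]

def programme_rollup(grantees):
    """Aggregate grantee-level KPI contributions into a programme-level view."""
    totals = defaultdict(float)
    for g in grantees:
        for kpi, weight in g["kpis"].items():
            totals[kpi] += weight
    return dict(totals)

print(programme_rollup(grantees))
# -> {'strategic:operational_excellence': 1.6, 'growth:capital_attraction': 0.4}
```

The programme-level numbers are then what the community/delegates review, alongside the programme’s own argument for why those tags and any custom data were the right choice.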

Let me know if those recommendations make sense :slight_smile:
