Powerhouse: Operations and Transparency Research Proposal

Dear @404DAO, @WintermuteGovernance, @karelvuong, and @Saurabh,

We appreciate the notes and extensive feedback; we have done our best to address all commentary in a short window. We understand that most of the research bounties focus on the incentive side, but we also believe the DAO may want to consider research on improving the operations and performance of the programs themselves. Operations are often overlooked, but the consequences are material.

Looking at your list of questions, our research would certainly address two sets of questions proposed by @SEEDGov, in addition to many more:

Comparison between STIP/BSTIP and LTIPP: surveys of protocols regarding the usefulness of the new model. Did they easily understand how to apply? What were the deficiencies? Did they understand the role of the Advisor? Was it helpful? Did they understand the role of the Council? Do they consider it useful?

Regarding the role of the Advisor: General feedback. Do applicants consider it useful? Did they receive fair treatment? Do they believe their inquiries were addressed in a timely and appropriate manner? How do they think the process could be improved? Do they believe two weeks is sufficient for feedback and enhancement of their proposals?

Operational Research: LTIPP/BSTIP

Our goal is to uncover what went right and wrong with LTIPP/BSTIP and provide insight into why. Our research will conclude with recommendations for how the DAO can fix these issues in the future, focusing on scalability. The initial design of STIP/LTIPP required certain assumptions: for example, how long each phase should last, how steps should be sequenced, and who should perform which role. These assumptions were made based on prior knowledge. Each time we run an experiment, we want to update our priors to make the most accurate predictions possible. To update our priors in this case, we need to perform a rigorous post-mortem on our current operations. With this research, we can evaluate these underlying assumptions and make evidence-based changes. It is worth noting that Powerhouse has two PhDs on staff, one of whom has extensive experience in survey design and psychological measurement.

While there has been a single operational post-mortem, we have yet to see a full top-down analysis of the current programs. This is likely due to the challenge of implementing back-to-back incentive programs. The current proposal aims to fill this gap. @karelvuong and @Saurabh aptly raised concerns about OpenBlock's existing incentive data. While OpenBlock monitors the distribution of incentives, it does not provide information on the operational status of projects or their self-reported biweekly incentive details, which our proposal aims to address. Some data on ArbGrants should closely track the data on OpenBlock (such as incentives distributed). Other project-specific status and reporting information will be unique to the dashboard, as this operational data (e.g., self-reported biweekly status, changes, etc.) is only captured via ArbGrants. In the future, the dashboard may also integrate with OpenBlock (assuming the API is open), ensuring a consolidated front-end and UX.
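To make this data boundary concrete, here is a minimal sketch of the kind of record the dashboard would draw from ArbGrants versus what OpenBlock reports on-chain. The field names and types are illustrative assumptions only, not the actual ArbGrants or OpenBlock schema:

```typescript
// Illustrative only: these field names are assumptions, not the actual ArbGrants or OpenBlock schema.
interface BiweeklyReport {
  protocol: string;          // grantee protocol name
  periodEnd: string;         // ISO date marking the end of the reporting period
  arbDistributed: number;    // ARB distributed this period, as self-reported on ArbGrants
  status: "on-track" | "delayed" | "changed";
  reportedChanges?: string;  // free-text description of plan changes (captured only in ArbGrants)
}

// Incentives distributed should roughly reconcile with OpenBlock's on-chain totals,
// while status and reportedChanges exist only in the self-reported ArbGrants data.
function reconcile(reports: BiweeklyReport[], openBlockTotal: number): number {
  const selfReported = reports.reduce((sum, r) => sum + r.arbDistributed, 0);
  return selfReported - openBlockTotal; // a non-zero delta flags a reporting gap to investigate
}
```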

Returning to the research: at its core, our proposal seeks to answer the following questions with regard to incentive program operations:

  • What went well?
  • What went poorly?
  • What could be improved in terms of efficiency and transparency?
  • How can we understand the limitations of the program and expand its reach?

Our research will be conducted in three phases: Workflow Mapping, Stakeholder Interviews, and Synthesis. Each phase is designed to provide incremental insights that build towards a comprehensive understanding and actionable recommendations.

Milestone 1. Business Workflow Analysis and Research Report

Step 1. Elicitation and Stakeholder Analysis

1.1 Mapping The Source

This proposal begins by mapping the overarching LTIPP/BSTIP workflow and creating a visual and written representation to understand the current operational structure. By identifying operational inefficiencies and providing evidence-based recommendations, our research will ultimately lead to cost savings and more effective incentive programs. We will answer the following questions:

What are the major operational phases in STIP/BSTIP and LTIPP?

What are the subcomponents and processes that make up each phase?

What is the appropriate duration for each phase?

Who are the key stakeholders in each phase?

What are their roles and responsibilities?

What are their interdependencies?

Did any deadlines move during implementation? If so, why?

More generally, where were there delays and why?

For example, within the Onboarding Phase, we may want to understand:

What is the current KYC process? What issues or bottlenecks are there? What is the best way to solve these? Who is managing KYC? Who is mediating the process?

1.2 Preparation for Elicitation

We will select appropriate elicitation techniques, such as interviews and document analysis, and prepare materials and open-ended questions to guide the elicitation process.

1.3 Stakeholder Analysis

After preparing for elicitation, we will use discovery interviews to ensure we identify all stakeholders involved in or affected by the project, analyzing their roles, interests, influence, and level of involvement. The outcome will be a stakeholder map or matrix: a visual representation of our collective understanding of these relationships.

Step 2. Analyzing and Documenting Requirements

2.1 Qualitative Stakeholder Interviews

Following the mapping of the core workflow, we will conduct interviews with respective stakeholders based on our findings. This includes open-ended interviews, surveys, and possibly quantitative items (e.g., NPS). Interviews will be conducted with Grantees, Council Members, and those who did not apply to identify gaps and gather comprehensive data. Key questions include:

Grantees.

How did you see your role in the program?

Were you satisfied with the program (NPS)?

What do you think could be improved?

How easy was it to apply? What issues arose during application?

Did you understand the role of the Grants Council?

Did you understand the role of the Advisor?

Was the advisor role helpful?

How was the experience working with the data provider (i.e., OpenBlock)?

How was the quality of service? Were responses received in a timely manner?

How was communication from the program manager, council, and DAO?

Do you believe the advisor process was fair and equitable (specifically with regard to the assignment of a grants advisor)?

Was a two-week feedback window sufficient to incorporate feedback and update your proposal?

What operational issues did you run into?

How was your experience of reporting biweekly data?

Council, Program Manager, Advisors.

We will conduct a similar thematic analysis for these stakeholders, informed by the initial qualitative research.

2.2 Analyzing and Documenting Requirements

We will categorize, analyze, and prioritize the requirements gathered during elicitation. At this stage, we start drafting detailed requirements specifications, user stories, use cases, and acceptance criteria. These ground the visual models (e.g., wireframes) we use to represent requirements.

Step 3. Synthesis and Final Report

3.1 Final Research Report

After data discovery, we will interpret and compile the data into a comprehensive report. This step ensures the proposed solution meets the identified requirements and delivers the expected value. The report will include:

3.1.1. Visual Mapping.

A visual mapping of the current workflow, capturing all major operational phases, subcomponents, and processes. Additionally, a visual mapping of a proposed improved workflow, showing optimized processes, roles, and interactions.

3.1.2 Expository Copy.

A narrative document explaining the proposed changes and their expected impact, providing a rationale for each improvement with detailed descriptions of each step in the current and proposed workflows.

3.1.3 Empirical Analysis.

A detailed report summarizing the findings from the surveys and interviews, including key metrics (e.g., NPS scores) and qualitative insights.
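For the quantitative items, NPS would be scored in the standard way (percentage of promoters minus percentage of detractors). A minimal sketch, assuming 0-10 ratings collected from the surveys and hypothetical response data:

```typescript
// Standard Net Promoter Score: % promoters (ratings 9-10) minus % detractors (0-6), on a -100..100 scale.
function netPromoterScore(ratings: number[]): number {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return Math.round(((promoters - detractors) / ratings.length) * 100);
}

// Hypothetical example: ten grantee responses yield an NPS of 30.
console.log(netPromoterScore([9, 10, 8, 7, 6, 9, 10, 5, 8, 9])); // 30
```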

3.1.4 Proposed Solution.

Recommendations for software and process development, based on the empirical analysis and workflow mapping, focusing on the areas with the largest impact and ROI for automation and software implementation. The roadmap will include the following where appropriate and necessary:

3.2 Roadmap & Implementation Plan

Timelines, resources, and responsible parties. Here we address any transition requirements, such as data migration, training, and change management.

3.2.1 Performance Metrics Plan.

A document outlining the KPIs, monitoring mechanisms, and evaluation criteria to assess the success of the proposed solution.

3.2.2 Post-Implementation Review Report.

A report summarizing the outcomes of the implementation, stakeholder feedback, and recommendations for future improvements.

Milestone 2. LTIPP/BSTIP Transparency Dashboard

Independent of the Business Workflow Analysis and Research Report, we intend to operationalize a live data dashboard to monitor BSTIP/LTIPP incentive distribution and biweekly reporting information. This is necessary because, while Powerhouse has developed and implemented a reporting workflow at its own cost (ArbGrants.com), additional development is required to deliver an at-a-glance dashboard. You can see an example of MakerDAO's dashboard, powered by Powerhouse Fusion, here: MakerDAO Expenses.
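As a rough sketch of the at-a-glance view we have in mind, the dashboard could roll the self-reported biweekly records up into per-protocol summary cards. The record shape and the 14-day overdue check below are illustrative assumptions, not the actual ArbGrants data model:

```typescript
// Illustrative record and summary shapes only; not the actual ArbGrants data model.
type Report = { protocol: string; periodEnd: string; arbDistributed: number };

type ProtocolSummary = {
  protocol: string;
  totalArbDistributed: number;
  lastReportDate: string;
  reportOverdue: boolean; // latest report older than the assumed biweekly cadence
};

// Roll self-reported biweekly records up into one summary card per protocol.
function summarize(reports: Report[], now: Date): ProtocolSummary[] {
  const byProtocol = new Map<string, Report[]>();
  for (const r of reports) {
    byProtocol.set(r.protocol, [...(byProtocol.get(r.protocol) ?? []), r]);
  }
  return [...byProtocol.entries()].map(([protocol, rs]) => {
    const latest = rs.map((r) => r.periodEnd).sort().at(-1)!;
    const daysSinceReport = (now.getTime() - new Date(latest).getTime()) / 86_400_000;
    return {
      protocol,
      totalArbDistributed: rs.reduce((sum, r) => sum + r.arbDistributed, 0),
      lastReportDate: latest,
      reportOverdue: daysSinceReport > 14,
    };
  });
}
```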

Grant Terms and Budget

The proposed statement of work is significant and extensive, while the proposed ARB budget is markedly below market rate for this scope of services. This mismatch is deliberate: we calibrated the resource request to fit within the constraints of the Research Bounty Budget. We have done so because we believe this work is necessary and best done in advance of the next incentive program design. A modest commitment of ARB simply helps us defray some operational costs as we continue supporting Arbitrum.

With continued operations in mind, we structured the budget as two independent milestones (30,000 ARB each). In theory, the council could fund only one, rather than both, of the milestones. However, we strongly suggest funding both the dashboard and business analysis, as both will be useful now and into the future.

Our primary objective is to deliver superior services and products to Arbitrum that translate to demonstrable improvements to the program. We appreciate your review and consideration of the proposal.

Specific Commentary

@Saurabh

Thanks for taking the time to engage substantively with the post and provide constructive feedback.

  1. Have you considered expanding the dashboard to contain any of the suggestions in the LTIPP Bounty Ideas document?

The dashboard can be future-compatible and include other incentive data, including that of OpenBlock where their APIs allow it. This would be a design decision, though within this scope we wanted to focus on the operational data, as this data is only captured on ArbGrants.

  2. In the current OBL Data requirements for the LTIPP and its schema, all protocols are expected to work on developing a query (on Dune) showcasing the incentivisation of ARB by 17th of May 2024. What makes Powerhouse's offering different from what OBL will be tracking in terms of incentivisation for these protocols?

Powerhouse is pulling data from ArbGrants.com, which provides the reporting and transparency workflow for biweekly reports. This means aspects of this data are not present anywhere else: for example, reported changes from week to week and other self-reported data.
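As a small illustration, period-over-period changes can only be computed from this self-reported series. A hypothetical sketch, using made-up per-period figures:

```typescript
// Period-over-period change in self-reported ARB distributed for one protocol,
// given per-report amounts in chronological order (illustrative data, not actual figures).
function distributionDeltas(arbPerPeriod: number[]): number[] {
  return arbPerPeriod.slice(1).map((current, i) => current - arbPerPeriod[i]);
}

console.log(distributionDeltas([10_000, 12_500, 12_500, 9_000])); // [2500, 0, -3500]
```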

  3. Which stakeholders are you looking to interview for the report and the dashboard?

All relevant stakeholders as noted above. If we missed any, please point them out to us; that would aid us greatly.

  4. For the report, what are you mapping for in terms of the research?

We believe we have now answered this, but if not, please let us know what other questions you have.

  5. From this Aave grants section, what metrics would you track for the stats?

We are tracking the same information for all protocols, which is the data provided in the new biweekly templates. For a short video on ArbGrants, please see here.

  6. Have you considered developing a user story for protocols to be onboarded onto ArbGrants?

We are currently onboarding protocols onto ArbGrants and working on an FAQ, intro post, etc., for biweekly reporting. Part of this research is to help improve these processes within ArbGrants and across BSTIP/LTIPP, as well as to examine all processes (program design, grantee selection, onboarding, verification, launch, continued reporting, and post-program performance evaluation).
