Powerhouse: Operations and Transparency Research Proposal

The following proposal describes a research grant to support Arbitrum Incentive Program operations (STIP/LTIPP). It covers operational research and the deployment of a transparency dashboard to improve incentive data accessibility.

Who is Powerhouse?
Powerhouse began as the Sustainable Ecosystem Scaling (SES) Core Unit in MakerDAO, supporting the incubator program and building the operational infrastructure of one of the largest DAO experiments to date. As SES, Powerhouse team members helped teams organize their operations and connected them with relevant service providers. We learned a great deal during two years of operating this incubator and are now translating our best practices into open-source software for DAOs.

Powerhouse Builds Open-source Operations Platforms
We translate business workflows, processes, and information systems into software to power large on-chain organizations. Powerhouse retains full operations and technical staff, including Business Development, Business Analysts, Marketing, and a Full Stack Software Development team. Powerhouse focuses on modular, open-source systems as public goods. This modular approach reduces implementation time and accelerates the growth of DAOs. The objective of Powerhouse is to help DAOs scale by overcoming coordination failure.

Powerhouse Uses Embedded Project Teams
Powerhouse retains a complete project team to fulfill the proposed Research Bounty scope. The team will manage and implement the SOW described below: the workflow analysis in Milestone 1 and the delivery of a transparency dashboard in Milestone 2.
The workflow analysis will review and map the current LTIPP workflows with recommendations for improvement. Independent of this analysis, the grant also delivers a transparency dashboard to permit easy access to LTIPP incentive data.

LTIPP and STIP are Complex

STIP and LTIPP are ambitious initiatives designed to rapidly increase the adoption of Arbitrum through liquidity incentives. The LTIPP strategy involves a broad distribution of ARB tokens within a short period. Due to the large scale and complexity of this distribution, stringent operational practices are necessary to ensure the program’s success.

Powerhouse Software as the Solution

Powerhouse provides several software products that together form the Powerhouse Platform. Teams begin by using software to structure and implement their operational workflows (Powerhouse Connect). These workflows capture data that feeds an analytics backend for data availability (Powerhouse Switchboard), which is then presented publicly and interactively via a dashboard (Powerhouse Fusion). Access and roles in workflows are managed using Ethereum addresses on Arbitrum to provide identity and authentication (Powerhouse Renown). All Powerhouse software modules are open source to build on and extend the public commons. For more information on each product, please see a recent keynote here with Powerhouse Product Lead, callmeT.
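To make the data flow concrete, the sketch below models it in TypeScript. All type names, fields, and the query endpoint here are hypothetical, invented for illustration; the actual Powerhouse APIs may differ.

```typescript
// Hypothetical sketch of the Powerhouse data flow; names are illustrative only.

// Renown: identity is an Ethereum address plus a role within a workflow.
interface Identity {
  address: string; // e.g. an address on Arbitrum
  role: "grantee" | "programManager" | "council";
}

// Connect: a structured workflow document captures operational data.
interface ReportDocument {
  grantee: string;
  periodStart: string; // ISO date for the biweekly term
  arbDistributed: number;
  submittedBy: Identity;
}

// Switchboard: an analytics backend exposes captured documents for querying.
async function querySwitchboard(endpoint: string): Promise<ReportDocument[]> {
  const res = await fetch(endpoint); // assumed read API
  return (await res.json()) as ReportDocument[];
}

// Fusion: a public dashboard renders the queried data.
async function renderDashboard(endpoint: string): Promise<void> {
  const reports = await querySwitchboard(endpoint);
  for (const r of reports) {
    console.log(`${r.grantee}: ${r.arbDistributed} ARB (${r.periodStart})`);
  }
}
```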


Figure 1. An Overview of the Four Powerhouse Products

ArbGrants.com: Live Operational Software for Arbitrum LTIPP

Powerhouse partnered with StableLab to prepare a software proof of concept. First, we performed a requirements analysis and examined the current critical-path blockers for Arbitrum LTIPP and STIP program management. We identified that the largest pain point was organizing ongoing incentive reporting. Powerhouse then designed an ideal workflow and reviewed the proposed user experience with the Grants Council and Program Manager. The workflow was intended to replace the current incentive reporting workflow on the Arbitrum Discourse forums.

Powerhouse then developed and implemented this reporting workflow as an open-source software module. The module is currently hosted at https://www.arbgrants.com and supports the reporting and organization of biweekly incentive information. This workflow makes the information accessible to the LTIPP Program Manager, the Grants Council, incented projects, and the community at large.

The software captures information that was previously hard to consolidate. It offers the 100 protocols a reliable and standardized way to report their biweekly metrics. At a later point in time, Powerhouse may prepare a proposal to provide Arbitrum an option for retroactive funding if they desire.

Today, in this proposal, we present an opportunity to continue the momentum and extend this work through further business analysis and a transparency dashboard that improves the presentation and accessibility of this LTIPP incentive data.

Next Step: Visualizing Incentive Data and Preparation for Full LTIP
Now that we have a workflow to report data, the next step is to prepare a custom dashboard to make this data accessible. Using Connect, Fusion, and Switchboard, we will prepare a dashboard to report relevant metrics. Attached below are two example screenshots of a grantee directory and dashboard. This first scope of work establishes a dashboard to view the reported LTIPP information, and it will be forward compatible with additional Connect data streams should Arbitrum desire.

In order to prepare Arbitrum for LTIP, we will proceed in parallel with a deeper workflow and business analysis on the existing STIP and LTIPP program implementations. Through this analysis, we intend to map existing workflows and identify opportunities to leverage software to improve program efficiency and transparency.


Figure 2. An example of the LTIPP grantee directory.


Figure 3. An example of the LTIPP Overview Dashboard.

Statement of Work

Term

The statement of work is anticipated to span approximately 12 weeks from June 10 to September 10. Both Milestones will run concurrently and conclude no later than 12 weeks from initiation. Powerhouse will meet weekly with the LTIPP Program Manager, who will act as the Arbitrum interlocutor, to provide status updates and report progress.

Milestone 1. STIP/LTIPP Workflow Business Analysis and Report

Milestone 1 prepares a workflow analysis to map and describe the current LTIPP process, identifying areas for improvement. The scope of the report will include an analysis of the following phases:

  • RFP preparation and presentation;
  • Proposal creation, submission, and review;
  • Project (grantee) onboarding;
  • Incentive reporting;
  • Performance reporting and evaluation.

The focus will be to identify opportunities to improve existing processes, with the express goal of later translating these processes into scalable open-source software modules. The process will include planning and qualitative interviews with key stakeholders (e.g. Program Manager, Grants Council Members, and Grantees). This workflow analysis and research will help Arbitrum establish a calibrated operational budget for the full scale LTIP to support both the requisite operations as well as the necessary software to facilitate scaling and transparency.

Deliverables

1.1 Project Roadmap and Gantt. Powerhouse scopes the high-level research phases in collaboration with key stakeholders and prepares a project roadmap.

1.2 Qualitative Stakeholder Interviews. Powerhouse will identify key stakeholders and perform qualitative interviews in order to identify outstanding issues and potential remedies.

1.3 Final Report. Powerhouse delivers a final report which includes a visual mapping of existing workflows and stakeholder actions, a review of identified issues, and proposed technical solutions and improvements where available.

Milestone 2. LTIPP Transparency Dashboard

Milestone 2 supports LTIPP transparency through an informational dashboard. The project involves connecting the information provided by the grant reporting workflow to a software product for use by key stakeholders and the community. The dashboard will display all data captured by ArbGrants, including the biweekly ARB distributed, each incentivized contract, and the total ARB distributed. It will feature graphs plotting ARB distribution against each two-week incentive term. The dashboard will dynamically update as incented projects submit new data, allowing for near real-time display.
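As an illustration of the aggregation behind these graphs, here is a minimal TypeScript sketch rolling biweekly reports up into per-term and program-wide totals. The BiweeklyReport shape and its field names are assumptions for this example, not the actual ArbGrants schema.

```typescript
// Hypothetical report shape; the real ArbGrants schema may differ.
interface BiweeklyReport {
  project: string;
  term: number;         // two-week incentive term index (1, 2, 3, ...)
  contract: string;     // incentivized contract address
  arbDistributed: number;
}

// Roll reports up into the series a dashboard chart would plot:
// total ARB distributed per two-week term, across all projects.
function arbPerTerm(reports: BiweeklyReport[]): Map<number, number> {
  const totals = new Map<number, number>();
  for (const r of reports) {
    totals.set(r.term, (totals.get(r.term) ?? 0) + r.arbDistributed);
  }
  return totals;
}

// Grand total across the whole program.
function totalArb(reports: BiweeklyReport[]): number {
  return reports.reduce((sum, r) => sum + r.arbDistributed, 0);
}
```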

Deliverables

2.1 Project Roadmap and Gantt. Powerhouse scopes the high-level grant phases in collaboration with key stakeholders and delivers a project roadmap.

2.2 Qualitative Stakeholder Interviews. Powerhouse performs qualitative interviews in order to identify requirements and stakeholder preferences for dashboard metrics and UX.

2.3 Dashboard Development. Powerhouse develops the required dashboard frontend to connect to the existing backend for arbgrants.com and the Switchboard API. Implementation will be similar to the mocks and video provided.

2.4 Dashboard Live Operations. Powerhouse delivers and hosts a live dashboard for use by Key Stakeholders and the community.

Grant Terms

Grant terms and conditions are delineated in the RFP here. The transparency dashboard software project will be available under the AGPL license. Powerhouse will continue to independently develop the underlying infrastructure, and future platform updates will benefit both this ArbGrants transparency dashboard and the Connect product.

30,000 ARB due upon completion of KYC (expected June 10)

30,000 ARB due upon delivery of Milestones 1 and 2 and all underlying deliverables (expected September 10)


I like your desire to structure reports under the LTIPP program.
A couple of things confuse me:
1. Reports are provided by the projects themselves, who can include whatever information is convenient for them.
I would like the process of obtaining reports to be automated, with minimal human involvement.
2. There are several programs: LTIPP, STIP, and STEP.
Why does your proposal cover only one of them?
I would like to see the same approach applied to all Arbitrum programs.


Thanks for your reply, cp0x. We would agree with all of your points and appreciate you raising them.

Powerhouse Enabling Automation

You first state that allowing projects to submit their own information increases the likelihood of operator error or the submission of false data. You suggest that automation would help reduce these errors.

We definitely agree. The aim of our Powerhouse software platform is to structure and automate processes as much as possible, including the import and export of data. As a proof of concept, we started with a constrained scope. This means we focused on demonstrating a simple use case: operationalizing the forum-based self-reporting process and replacing it with a standardized software workflow.

Moving ahead, the objective of this proposal is to first map the existing workflows and processes of STIP and LTIPP. Next, we will design a roadmap to help us arrive at a streamlined workflow that includes more automation with external data providers and researchers, such as OpenBlock or similar.

It is worth noting that each data submission is tied to a cryptographic signature. This means we are able to monitor who or what submits each piece of data. Any input provided by humans into the documents is also available to an API or bot, so the current architecture and document model are already prepared for automation.
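As a sketch of what this enables, the snippet below shows how a consumer (a UI or a bot) could verify who signed a given submission using ethers.js. The submission format and allowlist are assumptions for illustration; the point is simply that signed submissions are machine-verifiable.

```typescript
import { verifyMessage } from "ethers"; // ethers v6

// Hypothetical submission payload; the actual document format may differ.
interface SignedSubmission {
  payload: string;   // serialized report data that was signed
  signature: string; // EIP-191 personal signature over the payload
}

// Recover the signer's address and check it against an allowlist of
// known grantee addresses (e.g., registered via Renown).
// Addresses in the allowlist are assumed to be stored lowercase.
function isAuthorizedSubmission(
  sub: SignedSubmission,
  authorizedAddresses: Set<string>
): boolean {
  const signer = verifyMessage(sub.payload, sub.signature);
  return authorizedAddresses.has(signer.toLowerCase());
}
```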

Powerhouse Consolidating Grant Data

Secondly, you note the multiple Arbitrum grant programs (STEP, STIP, LTIPP) and would like to consolidate grant information in one platform.

Having partnered with StableLab, we began by focusing on LTIPP. This was primarily due to its large scale, which requires a scalable solution, as well as the modest lead time we had to design and implement the proof of concept.

As the STIP bridge begins, we hope and expect these projects to use the same workflow and software, which can be easily adapted. You will note the dashboard example has a “type” column indicating LTIPP or STIP, allowing for filtering or separate dashboards across programs. With regard to STEP, there is always the ability to iterate and integrate additional workflows for these projects. Our objective is to allow DAOs and other organizations to consolidate information and workflows through open-source software and decentralized infrastructure.


Hi @Lumen,

thanks for putting forward a Research Proposal! We certainly think there is value in the product and review you are offering; however, we are unsure how relevant the scope of the work and deliverables are to the goals of the research bounty and/or the questions provided by council members here. We are looking for something a bit more prescriptive in its deliverables and findings, which we can then use to improve future incentive programs, primarily on the incentive side. We would also like to see a more defined scope as to what questions you plan on answering and why this will be beneficial to the DAO.


Thank you for the application @Lumen. While we appreciate the product and dashboard you have proposed, we’d like to echo thoughts similar to @WintermuteGovernance’s around the scope of the LTIPP research bounties. We are looking for more conclusive analysis and recommendations from the research. Your application in its current form fails to address this critical aspect of the research bounties. Pulling from the Council’s proposed questions may help provide some direction to your proposal.


Hi @404DAO @WintermuteGovernance,

Thanks for the feedback. We are formulating a response to clarify the proposal. We believe the proposal meets the requirements and objectives of the program but requires more concrete detail. Specifically, it is clear that the portion constituting business analysis and research merits further explanation and expansion.

We appreciate the time and hope to have a reply posted by EOD Friday June 14, Pacific Time, for further consideration.


Powerhouse’s proposition

Powerhouse has sought to develop an understanding of the critical blockers for Arbitrum LTIPP and STIP program management. They are looking to further their visualisation with a transparency dashboard and to improve comprehensive understanding of the ongoing LTIPP incentive data with a report.

Measuring Effectiveness

We’d like to have further clarification on the following:

  1. Have you considered expanding the dashboard to contain any of the suggestions in the LTIPP Bounty Ideas document?
  2. In the current OBL data requirements for the LTIPP and its schema, all protocols are expected to develop a query (on Dune) showcasing the incentivisation of ARB by 17 May 2024. What makes Powerhouse’s offering different from what OBL will be tracking in terms of incentivisation for these protocols?
  3. Which stakeholders are you looking to interview for the report and the dashboard?
  4. For the report, what are you mapping for in terms of the research?
  5. From this Aave grants section what metrics would you track for the stats?
  6. Have you considered developing a user story for protocols to be onboarded onto ArbGrants?

Summary

Powerhouse’s proposition does not appropriately fit within the scope of the research ideas. Further, the proposal would duplicate the work that OpenBlock Labs is already undertaking in terms of tracking incentive distribution, identifying each incentive contract, and showing the total incentives for ARB. We would like to see improvements to this proposal for it to be beneficial within the Arbitrum ecosystem and DAO.


Thanks for submitting your application, @Lumen!

As echoed by the other council members, I do not believe this currently fits the intended scope and objective of the LTIPP Research Bounty design, and it does not adequately respond to the council-proposed questions. While I can see the merit in the Powerhouse product, the proposal suggests another product offering that duplicates some existing and more widely adopted efforts led by OpenBlock Labs across the current Arbitrum incentives landscape. Unfortunately, I will not be recommending that we move forward with this proposal in its current form at this time.


Hi Karel,

Thanks for your note. We hope you are open to examining the updated proposal, which will be posted later today and addresses the research aspect more concretely. Specifically, the proposal addendum shows how we address some of the questions posed by SEEDLatam regarding program improvements and operations.

Dear @404DAO, @WintermuteGovernance, @karelvuong, and @Saurabh,

We appreciate the notes and extensive feedback; we have done our best to address all commentary in a short window. We understand most of the research bounties focus on the incentive side. We also believe that the DAO may want to consider research on improving the operations and performance of the programs themselves. Operations are often overlooked, but the consequences are material.

Looking at the list of your questions, we would certainly address two sets of questions proposed by @SEEDGov in addition to many more:

Comparison between STIP/BSTIP and LTIPP: Surveys on protocols regarding the usefulness of the new model: Did they easily understand how to apply? What were the deficiencies? Did they understand the role of the Advisor? Was it helpful? Did they understand the role of the Council? Do they consider it useful?

Regarding the role of the Advisor: General feedback. Do applicants consider it useful? Did they receive fair treatment? Do they believe their inquiries were addressed in a timely and appropriate manner? How do they think the process could be improved? Do they believe two weeks is sufficient for feedback and enhancement of their proposals?

Operational Research: LTIPP/BSTIP

Our goal is to uncover what went right and wrong with LTIPP/BSTIP and provide insights as to why. Our research will conclude with how the DAO can fix these issues in the future, focusing on scalability. The initial design of STIP/LTIPP required certain assumptions: for example, how long should a certain phase be, how should steps be sequenced, and who should perform which role? These assumptions were made based on prior knowledge. Each time we run an experiment, we want to update our priors to make the most accurate predictions possible. In order to update our priors in this case, we need to perform a rigorous post-mortem on our current operations. With this research, we can evaluate these underlying assumptions and make evidence-based changes. It is worth noting that Powerhouse has two PhDs on staff, one of whom has extensive experience in survey design and psychological measurement.

While there has been a single operational post-mortem, we have yet to see a full top-down analysis of the current programs. This is likely due to the challenge of implementing back-to-back incentive programs. The current proposal aims to fill this gap. @karelvuong and @Saurabh aptly raised concerns about OpenBlock’s existing incentive data. While OpenBlock monitors the distribution of incentives, it does not provide information on the operational status of projects or their self-reported biweekly incentive details, which our proposal aims to address. Some data on ArbGrants should closely track the data on OpenBlock (such as incentives distributed). Other project-specific status and reporting information will be unique to the dashboard, as this operational data (e.g., self-reported biweekly status and changes) is only captured via ArbGrants. In the future, the dashboard may also integrate with OpenBlock (assuming the API is open), ensuring a consolidated front end and UX.

Moving back over to the research, at the core, our proposal seeks to answer the following questions with regard to incentive program operations:

  • What went well?
  • What went poorly?
  • What could be improved in terms of efficiency and transparency?
  • How can we understand the limitations of the program and expand its reach?

Our research will be conducted in three phases: Workflow Mapping, Stakeholder Interviews, and Synthesis. Each phase is designed to provide incremental insights that build towards a comprehensive understanding and actionable recommendations.

Milestone 1. Business Workflow Analysis and Research Report

Step 1. Elicitation and Stakeholder Analysis

1.1 Mapping The Source

This proposal begins by mapping the overarching LTIPP/BSTIP workflow and creating a visual and written representation to understand the current operational structure. By identifying operational inefficiencies and providing evidence-based recommendations, our research will ultimately lead to cost savings and more effective incentive programs. We will answer the following questions:

  • What are the major operational phases in STIP/BSTIP and LTIPP?
  • What are the subcomponents and processes that make up each phase?
  • What is the appropriate duration for each phase?
  • Who are the key stakeholders in each phase?
  • What are their roles and responsibilities?
  • What are their interdependencies?
  • Did any deadlines move during implementation? If so, why?
  • More generally, where were there delays, and why?

For example, within the onboarding phase, we may want to understand: What is the current KYC process? What issues or bottlenecks exist? What is the best way to solve them? Who is managing KYC? Who is mediating the process?

1.2 Preparation for Elicitation

We will select appropriate elicitation techniques, such as interviews and document analysis, and prepare materials and open-ended questions to guide the elicitation process.

1.3 Stakeholder Analysis

After preparing the initial elicitation, we will use discovery interviews to identify all stakeholders involved in or affected by the project, analyzing their roles, interests, influence, and levels of involvement. The outcome will be a stakeholder map or matrix: a visual representation of our collective understanding of these relationships.

Step 2. Analyzing and Documenting Requirements

2.1 Qualitative Stakeholder Interviews

Following the mapping of the core workflow, we will conduct interviews with the respective stakeholders based on our findings. This includes open-ended interviews, surveys, and possibly quantitative items (e.g., NPS). Interviews will be conducted with grantees, council members, and those who did not apply, to identify gaps and gather comprehensive data. Key questions include:

Grantees.

  • How did you see your role in the program?
  • Were you satisfied with the program (NPS)?
  • What do you think could be improved?
  • How easy was it to apply? What issues arose during application?
  • Did you understand the role of the Grants Council?
  • Did you understand the role of the Advisor?
  • Was the advisor role helpful?
  • How was the experience working with the data provider (i.e., OpenBlock)?
  • How was the quality of service? Were responses received in a timely manner?
  • How was communication from the program manager, council, and DAO?
  • Do you believe the advisor process was fair and equitable (specifically with regard to the assignment of a grants advisor)?
  • Was a two-week feedback term sufficient to incorporate and update your proposal?
  • What operational issues did you run into?
  • How was your experience of reporting biweekly data?

Council, Program Manager, Advisors.

A similar thematic analysis to the above, informed by the initial qualitative research.

2.2 Analyzing and Documenting Requirements

We will categorize, analyze, and prioritize the requirements gathered during elicitation. At this stage, we start drafting the detailed requirements specifications, user stories, use cases, and acceptance criteria, grounded in visual models (e.g., wireframes) that represent the requirements.

Step 3. Synthesis and Final Report

3.1 Final Research Report

After data discovery, we will interpret and compile the data into a comprehensive report. This step ensures the proposed solution meets the identified requirements and delivers the expected value. The report will include:

3.1.1. Visual Mapping.

A visual mapping of the current workflow, capturing all major operational phases, subcomponents, and processes. Additionally, a visual mapping of a proposed improved workflow, showing optimized processes, roles, and interactions.

3.1.2 Expository Copy.

A narrative document explaining the proposed changes and their expected impact, providing a rationale for each improvement with detailed descriptions of each step in the current and proposed workflows.

3.1.3 Empirical Analysis.

A detailed report summarizing the findings from the surveys and interviews, including key metrics (e.g., NPS scores) and qualitative insights.

3.1.4 Proposed Solution.

Recommendations for software and process development, based on the empirical analysis and workflow mapping, focusing on the largest impact and ROI for automation and software implementation. The roadmap will include the following, where appropriate and necessary:

3.2 Roadmap & Implementation Plan

Timelines, resources, and responsible parties. Here we address any transition requirements, such as data migration, training, and change management.

3.2.1 Performance Metrics Plan.

A document outlining the KPIs, monitoring mechanisms, and evaluation criteria to assess the success of the proposed solution.

3.2.2 Post-Implementation Review Report.

A report summarizing the outcomes of the implementation, stakeholder feedback, and recommendations for future improvements.

Milestone 2. LTIPP/BSTIP Transparency Dashboard

Independent of the Business Workflow Analysis and Research Report, we intend to operationalize a live data dashboard to monitor BSTIP/LTIPP incentive distribution and biweekly reporting information. This is necessary because, while Powerhouse has developed and implemented a reporting workflow (ArbGrants.com) at its own cost, additional development is required to deliver an at-a-glance dashboard. You can see an example of MakerDAO’s dashboard powered by Powerhouse Fusion here: MakerDAO Expenses.

Grant Terms and Budget

The proposed statement of work is significant and extensive, while the proposed ARB budget is markedly below market rate for this scope of services. This mismatch is deliberate: we calibrated the resource request to fit within the constraints of the Research Bounty budget. We have done so because we believe this work is necessary and best done in advance of the next incentive program design. A modest commitment of ARB simply helps us defray some operational costs as we continue supporting Arbitrum.

With continued operations in mind, we structured the budget as two independent milestones (30,000 ARB each). In theory, the council could fund only one, rather than both, of the milestones. However, we strongly suggest funding both the dashboard and business analysis, as both will be useful now and into the future.

Our primary objective is to deliver superior services and products to Arbitrum that translate to demonstrable improvements to the program. We appreciate your review and consideration of the proposal.

Specific Commentary

@Saurabh

Thanks for taking the time to engage substantively with the post and provide constructive feedback.

  1. Have you considered expanding the dashboard to contain any of the suggestions in the LTIPP Bounty Ideas document?

The dashboard can be made forward compatible to include other incentive data, including OpenBlock’s where their APIs allow it. This would be a design decision; within this scope, we wanted to focus on the operational data, as this data is only captured on ArbGrants.

  2. In the current OBL data requirements for the LTIPP and its schema, all protocols are expected to develop a query (on Dune) showcasing the incentivisation of ARB by 17 May 2024. What makes Powerhouse’s offering different from what OBL will be tracking in terms of incentivisation for these protocols?

Powerhouse is pulling data from ArbGrants.com, which provides the reporting and transparency workflow for biweekly reports. This means aspects of this data are not present anywhere else: for example, reported changes from week to week and other self-reported data.

  3. Which stakeholders are you looking to interview for the report and the dashboard?

All relevant stakeholders, as noted above. If we missed any, please note them to us; that would aid us greatly.

  4. For the report, what are you mapping for in terms of the research?

I believe we have now answered this, but if not, please let us know what other questions you have.

  5. From this Aave grants section, what metrics would you track for the stats?

We are tracking the same information for all protocols, which is the data provided in the new biweekly templates. For a short video on ArbGrants, please see here.

  6. Have you considered developing a user story for protocols to be onboarded onto ArbGrants?

We are currently onboarding protocols onto ArbGrants and working on an FAQ, intro post, etc., for biweekly reporting. Part of this research is to help improve these processes within ArbGrants and across BSTIP/LTIPP, as well as to examine all processes (program design, grantee selection, onboarding, verification, launch, continued reporting, and post-performance evaluation).
