Team 10: Grantser: Create Arbitrum Grant Proposals That Get Funded

Arbitrum GovHack Track:

DAO operational excellence

Challenge Statement: Create a high-quality, unique grant proposal that gets funding

Members: Jacob Habib, Ivan Manvelov, Fabio Anaya

Team Lead: @jahabeebs on X or Telegram or Discord

Pitch: Grantser

Proposal:

Abstract

Grantser is a public good platform for creating high-quality, unique grant proposals for Arbitrum. It’s a resource for the Arbitrum ecosystem with two components: a repository of successful grants and an AI tool that gives feedback on a draft proposal, calculates a similarity score against existing proposals, and evaluates the draft against a checklist of Arbitrum-specific grant requirements.

Motivation

During the Arbitrum GovHack, multiple teams identified the following problems with the grant proposal process:

  • It’s difficult to know how to prepare a proper grant proposal that’s eligible to get funded and gain community support.
  • The grant proposal process is too slow, especially for smaller projects.
  • Grant applicants who submit subpar grant proposals have to spend extra days or weeks iterating on feedback from the DAO, or even worse, get no feedback and give up on the Arbitrum ecosystem.

Grantser is a platform built to address these issues. It’s designed to empower individuals and teams to create proper grant proposals faster. DAO members won’t have to keep responding to dozens of subpar grant proposals with the same advice. Additionally, DAO members can be confident that the time they invest in improving one grant proposal won’t be wasted: that feedback will keep improving future draft proposals with no extra effort on their part.

Rationale

  • Sustainable: The advice given to proposals made in Tally and Discourse won’t be lost to time: the valuable conversations that DAO members have around proposals will be used to train models that refine draft proposals without further human effort.
  • Socially inclusive: Members of all levels will be able to engage effectively with the grants process, regardless of knowledge, resources, geography, language, and life experience.
  • User-focused: By creating high-quality proposals, we can create a better foundation for the products that grant recipients build.
  • Neutral and open: Grantser is meant to be a DAO-neutral infrastructure product, and consequently it will encourage competing products in different DAOs, both within and outside of the Arbitrum ecosystem.

Key Terms

  • Fine-tuning: Taking a pretrained machine learning model and training it further on a smaller, targeted data set. For example, training a generic LLM on the specific conversations within the Arbitrum Discourse forum.
  • LLM: Large language model. A machine learning model that can comprehend and generate human language by analyzing massive amounts of text data. The GPT models used in ChatGPT are the most famous example.

Specifications

  • Front-end application: a TypeScript Next.js application built with a neutral component library (like shadcn/ui). This stack is very popular among web developers and makes it easy to build and maintain the application.
  • AI chatbot: Part of the proposal is a cost-benefit analysis of the technical options for the chatbot; however, the infrastructure would look roughly like the following:
    • A fine-tuned model hosted on OpenAI, Replicate, or another provider: in this case, we would not need to populate or maintain a vector database. We would access the model through an API call from the front-end application (see the sketch after this list), and a user could even talk to the chatbot about Arbitrum DAO Discourse proposals through the provider’s own UI if given the model name.
  • Domain: a domain would be purchased through a registrar such as Namecheap or Cloudflare.
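
To make the hosted-model option concrete, below is a minimal sketch of the API call from the front-end application, assuming the OpenAI Node SDK; the fine-tuned model identifier is a placeholder, and Replicate or another provider would use its own client instead.

```typescript
// Minimal sketch (not final): request feedback on a draft proposal from a hosted
// fine-tuned model. The model ID below is a placeholder; the real identifier is
// assigned by the provider once fine-tuning completes.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function getProposalFeedback(draft: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    // Placeholder fine-tuned model ID; a Replicate-hosted model would be called differently.
    model: "ft:gpt-3.5-turbo-0125:grantser::placeholder",
    messages: [
      {
        role: "system",
        content:
          "You review Arbitrum DAO grant proposals. Give concrete feedback and flag missing sections.",
      },
      { role: "user", content: draft },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```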

Steps to Implement (see comment for details)

The steps to implement the AIP, including associated costs, manpower, and other resources for each step where applicable. AIPs that involve transactions with third parties (such as grants) will need to ensure that applicable legal documentation and procedures are also included.

  • Domain
  • Front-end development (1-2 mid to senior-level developers)
  • Research & model training (1-2 mid to senior-level developers)
  • Business development

Timeline

  • Start date: April 1st
  • Milestone 1 (April 15): The training methodology for the AI chatbot should be finalized after doing a cost-benefit analysis of training various models on Discourse data (e.g., Mistral models, OpenAI’s GPT models, or models hosted on Replicate).
  • Milestone 2 (May 1st): The following front-end work should be completed:
    • A simple landing page explaining the purpose of the application with a link to launch the application.
    • Completed front-end application minus a payment gateway.
    • A crude version of the AI chatbot: the relevant API should be integrated, even if the model is not yet working effectively.
  • Milestone 3 (May 15th): The training of the AI chatbot should be completed, and it should be working effectively. The payment gateway should also be integrated into the application.
  • Milestone 4 (June 1st): The full launch of the application, including all planned features and the AI chatbot working as intended.
  • Milestone 5 (August 1st): The incentive program should be completed and evaluated against its requirements.

Overall Cost (see my comment for cost breakdowns)

  • Initial lump sum cost: $20,000
  • Success incentive: Additional $20,000 upon meeting specific criteria

Budget (see my comment for cost breakdowns)

Web Development & Maintenance

  • Front-end landing page + platform development + necessary integrations:
    • $6,052
  • Domain:
    • $48
  • Ongoing AI training & maintenance for 4 years:
    • $9,600

AI Training Costs

  • Cost-benefit analysis of various models:
    • $400
  • Research and written open-source training methodology for pulling data from Tally & Discourse, creating JSONL files for model training:
    • $400
  • Training costs on Tally & Discourse text:
    • $500
  • Data cleaning, data labeling, training models on successful and failed Tally grant proposals since the DAO was founded:
    • $1,000

Business Development

  • Cold outreach, networking with other DAO stakeholders via Twitter and LinkedIn, participating in conferences:
    • $2,000

Retroactive Incentive Program (see my comment for details)

  • Total: $20,000
    • Developer(s) will receive a $15,000 incentive if certain criteria are met by August 1st
    • Business development representative(s) will receive a $5,000 incentive if certain criteria are met by August 1st

Long version of proposal (over 1000 words) with more budget details and exact steps to implement:

Abstract

Grantser is a public good platform for creating high-quality, unique grant proposals for Arbitrum. It’s a resource for the Arbitrum ecosystem with two components: a repository of successful grants and an AI tool that gives feedback on a draft proposal, calculates a similarity score against existing proposals, and evaluates the draft against a checklist of Arbitrum-specific grant requirements.

Motivation

During the Arbitrum GovHack, multiple teams identified the following problems with the grant proposal process:

  • It’s difficult to know how to prepare a proper grant proposal that’s eligible to get funded and gain community support.
  • The grant proposal process is too slow, especially for smaller projects.
  • Grant applicants who submit subpar grant proposals have to spend extra days or weeks iterating on feedback from the DAO, or even worse, get no feedback and give up on the Arbitrum ecosystem.

Grantser is a platform built to address these issues. It’s designed to empower individuals and teams to create proper grant proposals faster. DAO members won’t have to keep responding to dozens of subpar grant proposals with the same advice. Additionally, DAO members can be confident that the time they invest in improving one grant proposal won’t be wasted: that feedback will keep improving future draft proposals with no extra effort on their part.

Rationale

  • Sustainable: The advice given to proposals made in Tally and Discourse won’t be lost to time: the valuable conversations that DAO members have around proposals will be used to train models that refine draft proposals without further human effort.
  • Socially inclusive: Members of all levels will be able to engage effectively with the grants process, regardless of knowledge, resources, geography, language, and life experience.
  • User-focused: By creating high-quality proposals, we can create a better foundation for the products that grant recipients build.
  • Neutral and open: Grantser is meant to be a DAO-neutral infrastructure product, and consequently it will encourage competing products in different DAOs, both within and outside of the Arbitrum ecosystem.

Key Terms

  • Fine-tuning: Taking a pretrained machine learning model and training it further on a smaller, targeted data set. For example, training a generic LLM on the specific conversations within the Arbitrum Discourse forum.
  • LLM: Large language model. A machine learning model that can comprehend and generate human language by analyzing massive amounts of text data. The GPT models used in ChatGPT are the most famous example.

Specifications

  • Front-end application: a TypeScript Next.js application built with a neutral component library (like shadcn/ui). This stack is very popular among web developers and makes it easy to build and maintain the application.
  • AI chatbot: Part of the proposal is a cost-benefit analysis of the technical options for the chatbot; however, the infrastructure would look roughly like the following:
    • A fine-tuned model hosted on OpenAI, Replicate, or another provider: in this case, we would not need to populate or maintain a vector database. We would access the model through an API call from the front-end application, and a user could even talk to the chatbot about Arbitrum DAO Discourse proposals through the provider’s own UI if given the model name.
  • Domain: a domain would be purchased through a registrar such as Namecheap or Cloudflare.
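
The abstract mentions a similarity score against existing proposals. One way to compute it without a vector database is to embed the draft and each stored proposal and take the highest cosine similarity. The sketch below is illustrative only: it assumes the OpenAI embeddings API and a small in-memory repository array; the actual scoring approach would be settled during the cost-benefit analysis.

```typescript
// Minimal sketch (not final): compute a similarity score for a draft proposal by
// embedding it and every stored proposal, then taking the highest cosine similarity.
// The embedding model and the in-memory `repository` array are assumptions; at this
// scale no vector database is needed, consistent with the specification above.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

// Returns the closest existing proposal and its score (higher means more similar).
export async function similarityScore(
  draft: string,
  repository: { title: string; body: string }[],
): Promise<{ title: string; score: number } | null> {
  const draftVec = await embed(draft);
  let best: { title: string; score: number } | null = null;
  for (const proposal of repository) {
    const score = cosine(draftVec, await embed(proposal.body));
    if (best === null || score > best.score) best = { title: proposal.title, score };
  }
  return best;
}
```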

Steps to Implement

The steps to implement the AIP, including associated costs, manpower, and other resources for each step where applicable. AIPs that involve transactions with third parties (such as grants) will need to ensure that applicable legal documentation and procedures are also included.

  • Domain: Purchase the domain for the project using grantser or a similar name through Namecheap/Cloudflare and configure the DNS records in Vercel under my existing LLC’s organization.
  • Front-end development (1-2 mid to senior-level developers):
    • Developer(s) should first spend time planning the content of each page of the platform. This includes the sections and copy of the landing page and the pages and copy of the platform.
    • Developer(s) should work on building out a simple, static landing page (this should take no more than 3 hours of the total development time). The landing page should have a button that links to the main platform.
    • Once the landing page is finished, developer(s) should move on to building out the main application. The main application should include, at a minimum, a way to sign in with a wallet, a repository of successful grant proposals, and a page to post a draft grant proposal and get feedback from an AI interface. The chatbot should be integrated with the chosen platform’s API (like OpenAI), though the model may not yet be the final, fine-tuned one.
  • Research & model training (1-2 mid to senior-level developers); this can be done in parallel with the front-end development.
    • Do research on the current AI hosting options and calculate the fine-tuning costs for each provider. By the end of this step, there should be a concrete plan (8 hours max).
    • Write an open-source methodology for pulling data from Discourse and Tally and creating the files for model training (8 hours max; see the sketch after this list).
    • Data cleaning, data labeling, training models on data pulled from Discourse and Tally (20 hours max).
    • Any bugs with the fine-tuned model should be addressed after this step.
  • Business development:
    • Begin reaching out to grant proposal authors on the Arbitrum DAO forum, launch on Product Hunt, publish to other directory platforms, and market the platform through social media. See the incentive program section for specific benchmarks.
    • Contact delegates and leaders from other DAOs via social media to identify which DAOs are most interested in having this platform as a public good. See the incentive program section for specific benchmarks.
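
As a starting point for the open-source data-pull methodology described above, the sketch below shows how a Discourse thread could be turned into chat-format JSONL training examples. It assumes Node 18+ (global fetch), the standard Discourse `.json` topic endpoint, and the OpenAI chat fine-tuning file format; the topic ID, the prompt/response pairing, and the handling of Tally data are placeholders to be defined in the written methodology.

```typescript
// Minimal sketch (not final): convert one Discourse topic into chat-format JSONL
// training examples. The pairing of the original post with each reply is a
// simplification of whatever the written methodology finally specifies.
import { appendFileSync } from "node:fs";

const FORUM = "https://forum.arbitrum.foundation";

// Crude HTML-to-text conversion; a real pipeline would clean more carefully.
const strip = (html: string) =>
  html.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();

export async function topicToExamples(topicId: number, outFile: string): Promise<void> {
  const res = await fetch(`${FORUM}/t/${topicId}.json`);
  const topic = await res.json();
  const posts: { cooked: string }[] = topic.post_stream.posts;

  const proposal = strip(posts[0].cooked);                    // original proposal text
  const replies = posts.slice(1).map((p) => strip(p.cooked)); // delegate feedback

  for (const reply of replies) {
    const example = {
      messages: [
        { role: "system", content: "You review Arbitrum DAO grant proposals." },
        { role: "user", content: proposal },
        { role: "assistant", content: reply },
      ],
    };
    appendFileSync(outFile, JSON.stringify(example) + "\n"); // one JSON object per line
  }
}

// Example usage with a placeholder topic ID:
// await topicToExamples(12345, "training.jsonl");
```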

Timeline

  • Start date: April 1st
  • Milestone 1 (April 15): The training methodology for the AI chatbot should be finalized after doing a cost-benefit analysis of training various models on Discourse data (e.g., Mistral models, OpenAI’s GPT models, or models hosted on Replicate).
  • Milestone 2 (May 1st): The following front-end work should be completed:
    • A simple landing page explaining the purpose of the application with a link to launch the application.
    • Completed front-end application minus a payment gateway.
    • A crude version of the AI chatbot: the relevant API should be integrated, even if the model is not yet working effectively.
  • Milestone 3 (May 15th): The training of the AI chatbot should be completed, and it should be working effectively. The payment gateway should also be integrated into the application.
  • Milestone 4 (June 1st): The full launch of the application, including all planned features and the AI chatbot working as intended.

Overall Cost

  • Initial lump sum cost: $20,000
  • Success incentive: Additional $20,000 upon meeting specific criteria

Budget

The proposal includes 1-2 mid to senior-level contract developers collaborating with 1-2 business development contractors.

Web Development & Maintenance

  • Front-end landing page + platform development + necessary integrations:
    • Approximately 60 hours of work at $100/hr = $6,052
  • Domain:
    • $12/yr recurring for 4 years = $48
  • Ongoing AI training & maintenance for 4 years:
    • $50/hr for around 4 hours of work per month = $9,600

AI Training Costs

  • Cost-benefit analysis of various models:
    • 8 hours at $50/hr = $400
  • Research and written open-source training methodology for pulling data from Tally & Discourse, creating JSONL files for model training:
    • 8 hours at $50/hr = $400
  • Training costs on Tally & Discourse text:
    • $500 USD
  • Data cleaning, data labeling, training models on successful and failed Tally grant proposals since the DAO was founded:
    • 20 hours at $50/hr = $1,000

Business Development

  • Cold outreach, networking with other DAO stakeholders via Twitter and LinkedIn, participating in conferences:
    • 40 hours at $50/hr = $2,000

Retroactive Incentive Program

  • Total: $20,000
    • Developer(s) will receive a $15,000 incentive if the following criteria are met by August 1st:
      • The AI model for providing feedback is an effective and reliable evaluation tool: it should be able to evaluate 5 draft proposals and correctly fill out the checklist of Arbitrum-specific grant requirements for each.
      • The AI model should be able to identify any similar proposals and note them in its feedback.
      • The front-end platform looks professional, is mobile-responsive, and is free of obvious software defects.
    • Business development representative(s) will receive a $5,000 incentive if the following criteria are met:
      • The platform has been used by at least 25 current or prospective grant applicants and has received at least 1,000 site visitors since launch.
      • The proposal process has been initiated with at least 2 other DAOs (besides Arbitrum DAO) to integrate this public good into their communities and continue to grow it.