[Request for Feedback] Meaningful, Intelligent, and Instant Retrospectives for Grant Programs

Hello, Arbitrum community!

My team and I are developing FairAI, a decentralized AI marketplace. On our platform, businesses can submit requests for problems they face, and developers can provide open-source AI solutions that address those real-world issues. These solutions are then automatically connected to our platform’s enlisted hardware providers, ensuring transparent and verifiable AI computation. This setup lets end-users and businesses easily access and retain high-quality AI responses while keeping greater control over the model and its operator, all without incurring infrastructure costs. You can learn more about our project through our Linktree, and the image below illustrates the concept:

We use USDC on Arbitrum for all payments on our platform. As such, we started attending some Arbitrum events. By speaking with members of this community at those events, we realized that this technology could be genuinely useful for the DAO and the ecosystem overall, since our platform can provide tailored, open-source AI solutions that a DAO community can scrutinize thanks to our system’s transparency and verifiability.

Following this line of thought, we submitted two grant proposals under Questbook’s “Arbitrum New Protocols and Ideas 2.0” program that would form part of the AI system described above. With the first proposal, we suggested creating two chatbots and their respective interfaces for querying information related to the STIP and LTIPP programs. That proposal was accepted and has been developed; you can preview the solution in the images below or explore it in our marketplace. We are still awaiting the final decision on the second proposal, which focuses on building a multi-modal grant analyzer for the Questbook grants.

To advance our idea even further, we are now participating in the CollabTech hackathon, powered by RnDAO. For the hackathon, we are creating a PoC AI solution with which any interested party can generate a meaningful, instant retrospective analysis of the recently concluded LTIPP program. You can check the readme we submitted to the hackathon here and see a preview of our current work in the video below:

Video Link

With all this context, we finally arrive at the purpose of this post. Because we believe that collaborative technology can’t be achieved without collaborative feedback, we would like to make our tool even more effective by inviting this community to start a meaningful and healthy discussion about this idea. Any comments, from suggestions to criticisms, are welcome.

To help with the feedback, here are the example questions we are currently considering for the report, gathered from conversations with multiple community members and from attending Arbitrum governance meetings:

  1. What is the relationship between the amount of ARB that projects requested and their success in the LTIPP program?
  2. How does [a project] perform compared to similar projects?
  3. What are the top 5 projects that have achieved success based on what they specified in their grant proposals?
  4. What do the most successful projects have in common, considering the objectives they set in their grants?
  5. What is the relationship between the categories in which projects fall and their results in the LTIPP program?
  6. Which project brought the most whales to the Arbitrum ecosystem?

We would also love to hear any specific questions like these that you might have about any Arbitrum grant program, not only LTIPP.
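To give a concrete sense of the kind of analysis we have in mind, here is a minimal Python sketch of how question 1 could be approached. The file name, column names, and the `success_score` metric are all hypothetical placeholders rather than part of our actual pipeline; defining the right success metric is exactly the kind of feedback we are looking for:

```python
# Purely illustrative sketch (not our actual pipeline) of question 1,
# assuming a hypothetical CSV of LTIPP grant data with "project",
# "arb_requested", and "success_score" columns, where the success
# metric itself is still an open design question.
import pandas as pd

grants = pd.read_csv("ltipp_grants.csv")  # hypothetical data export

# Spearman rank correlation is robust to outliers such as a few very
# large grant requests.
corr = grants["arb_requested"].corr(grants["success_score"], method="spearman")
print(f"ARB requested vs. success (Spearman): {corr:.2f}")
```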


Hi!

Here you can find a list of research areas the LTIPP Council considered relevant for funding. Maybe it’s useful for you 🙂

Just to clarify, that funding has already been distributed. You can view the ongoing research in this thread. Perhaps you could reach out to the researchers to support your work.

Hi Pedro,

Thank you for your feedback!

We have already considered the first document in our research during the hackathon, but thank you for sharing it 🙂

We are aware that the LTIPP funding has already been distributed and that research on it is underway. Our goal is to develop a system that can conduct inexpensive, instant, and meaningful research on any incentive program. To demonstrate the feasibility of this approach, we chose LTIPP data for our PoC precisely because other reports already exist against which we can compare our model’s results, like the ones you mentioned. The main one we used in our own comparison was this one from OpenBlock. The approach is generic, so if the solution works for the LTIPP program, it can also be applied to other past or future incentive programs.

We are now trying to gather feedback on the questions people would like answered in our reports, which will help us design them as effectively as possible. The tip about getting in contact with the researchers behind those other reports is a good one; we will push for that to happen 🙂