Arbitrum D.A.O. Grant Program Season 3 - Official thread

This post marks the official start of the D.A.O. Grant Program Season 3!

The Arbitrum D.A.O. (Domain Allocator Offering) Grant Program is a one-year program divided into five domains, aimed at being the entry point for grants in the Arbitrum ecosystem and at supporting builders aligned with the vision of the DAO. As per the Snapshot and Tally votes, the program will support the following areas:

  • New Protocols and Ideas: a general bucket encompassing protocols, platforms, governance tooling, and other projects that don’t specifically fall into other domains
  • Education, Community Growth and Events: a domain focused on physical events and educational materials for Arbitrum
  • Gaming: a domain dedicated to web3 gaming infrastructure, web3 KOL gaming activities, and all video gaming-related projects.
  • Dev Tooling on One and Stylus: a technical domain oriented toward developer tooling and promoting Arbitrum One and Stylus adoption.
  • Orbit Chains: a domain oriented toward the expansion of dApps into specific Orbit chains, the deployment of technical solutions aimed at addressing the current user-experience fragmentation, and, in general, the bootstrapping of solutions built on top of the 2024-2025 Offchain Labs roadmap

For further info on the five domains, check the related page in the information hub. Note that, since the program is modular in nature, the DAO may vote in the future to add further domains to the current list.

Teams that are interested can apply here: https://arbitrum.questbook.app/

FAQ

How does the grant process work? Is there a maximum amount I can ask for?

After creating a wallet in Questbook, you can submit a proposal in one of the five domains, following the appropriate template. In your proposal, you may request up to 25,000 USDC, which will be reviewed solely by the Domain Allocator managing that domain. If you request up to 50,000 USDC, your proposal will require review by a second Domain Allocator as well.

Once your proposal is submitted, you will receive a response directly in the comment section of your proposal.

For more details, please refer to the guide on how to apply and the FAQ.

Where can I find the RFP and the Rubric of each domain?

All information about the program, including the RFP for each domain, the rubrics, and the KPIs, can be found in the information hub we have created: https://arbitrumdaogrants.notion.site/. We followed a structure similar to the one used in the UAGP program and incorporated some content from it, as it proved to be highly effective. (Thanks @Areta, for the awesome work!)

What is the procedure to apply for a grant?
  • Go to arbitrum.questbook.app
  • Create a wallet
  • Choose one of the domains and click “Submit New” in the top left corner.
  • Complete the form, ensuring you answer every question.

For more information on how to create a wallet in Questbook, please check here.
For details on the application process, please refer to the “How to Apply” section.

Who is the team running the program?

You can find more info on the team here.
The program is led by JoJo, who is the program manager; the domains are managed by Castle Labs, SeedGov, MaxLomu, Flook and Juandi.

I have other questions that were not covered here. Where can I find the info?

We have created a dedicated FAQ section in the information hub. If you still have any questions, feel free to reach out to PM Jojo! You can find the contact details here.

How can the delegates track the progress of the program?

First and foremost, everyone is welcome to ask questions in this thread or contact the PM directly here or on Telegram.

The PM will also post a report in the forum on a monthly basis, mid-month, to highlight the current status of the program. Additionally, this information will be shared during GCR calls. All reports will be aggregated in the reports section of the information hub.

Both the written reports and verbal updates, as well as any oversight on the program, may be adjusted in the future to align with any new framework that is voted on and approved by the DAO.

Is the program going to change through the year?

The program has been approved to run for one year, until March 2026, or until the allocated funds are depleted.

Given the rapid evolution of the crypto space, we anticipate adapting the program and RFPs over time to align with the overarching goals set by the DAO, starting with the SOS proposals. This does not mean there won’t be room for innovation or experimentation. However, the entire team agrees that our primary objective is to support the DAO in achieving high-level goals that delegates, the Foundation, and OCL collectively recognize as valuable.

Where can I find all the resources, info and links?
  • The main information hub in Notion will be constantly updated over time with new info, updated RFPs, data and reports
  • This very thread will be the main communication channel with the DAO
  • The Questbook Discord is the point of contact for grantees that have further questions on the program and want to communicate in a more agile way


REPORT

1st Report - 17th March 2025 to 17th April 2025
2nd Report - 18th April 2025 to 17th May 2025

The first monthly report, with a bit of delay due to holidays, is live on our website!

TLDR:

  • reference period: 17th of March to 17th of April (first 30 days of operation)
  • 188 proposals in total, of which 13 were approved
  • the biggest focus/lift has been creating the material/Notion site, coordinating the team, managing the platform, and bootstrapping activities
  • due to this we don’t have too many approved proposals so far, but we expect activity to ramp up during the next 60 days
  • the report is relatively barebones in terms of specific considerations, since it covers only the very initial period; the goal from the next one onward is to also give insights for each domain from each DA
  • the focus of the team right now is better coordination with both the Foundation and OCL to leverage their needs and expectations
  • to tailor the program to the DAO, we are waiting for the finalization and vote of the SOS proposals to see how we can tap into what will be decided

so… what’s going on with the New Protocols and Ideas domain acceptance rate?

More details and metrics can be seen in this spreadsheet I quickly whipped up.

it’s the domain that has allocated the lowest amount of funds so far, 2.7%, and it has already rejected quite a few projects that in my opinion should be funded (I even personally recommended that a few of them apply there), and the reasoning for rejection and the debrief questions are always the same. It feels like @CastleCapital is either being too conservative with allocating these funds, or they just don’t care enough to spend the proper time vetting the projects that apply, and reject them too soon. I would love to understand why there is this very obvious discrepancy in acceptance rate.

This is the main cause of good builders shying away from Arbitrum, forever. When they get a “smallish” grant rejected by the only program that is supposed to be the allocator for early-stage builders in the whole Arbitrum ecosystem, it sends a very bad signal.

One example is what happened with Lighthouse Labs and their Signals Protocol, which won 1st place at the Collabtech Hackathon in November last year. By rejecting them, we are basically saying “thank you for playing, go somewhere else to get support for your protocol”. They also shared their views here.

Hello and gm. Let’s go step by step here.

Thanks for spinning this spreadsheet up. The numbers look indeed correct.

And this number is correct as well.

I can totally see how you would perceive some projects as worth funding. While any program has guidelines and a set of accompanying docs such as the RFPs and others, there is always a certain degree of subjectivity in the evaluation of a grant/investment, and in general in any job that requires allocating capital to third parties for them to manage. So, I really see your point when you say that, in your opinion, they should have been funded.
At the same time, my general take would be that, if the DAO elected a set of people to do these evaluations, these people should be trusted and their judgment should be taken into account, also considering that degree of subjectivity.

The reason is, honestly, quite simple. Castle has a very specific internal framework for approval and rejection, with two main steps: in the first one they ask a specific set of questions (derived from, but also adapted from, another internal framework of theirs, an investment one). If the proposer passes this first set, then they go more in depth with tailored questions.
Seeing a lot of rejections with very similar rationales is actually an indication of the consistency of this framework: most proposers are currently not able to pass this first step, and so they get rejected for similar reasons.
Note that other DAs have similar frameworks as well: most of them have a specific set of initial questions, with follow-ups after certain details are disclosed.

First, thanks for the link to their answer in the forum. I wasn’t tagged there, nor did they contact me in private, and it would have been good to have awareness (on my side) of this answer. So I will reach out to them.

I want to go a bit more in depth on this proposal. There is a lengthy discussion in Questbook about the evaluation which is worth sharing partially here, since it was not posted for whatever reason.


What you see here is the rationale for the rejection.
I would like to point out that

  • Before this 800-word review, Castle asked the proposer a specific set of questions, which were answered at length
  • After the review, there was a follow-up from both the proposer and Castle on the review itself and other details.

One thing stands out: the team is for sure committed to the product, and it shows in their answers. At the same time,

  1. the biggest question mark (and not the only one) is about sustainability, which was not properly addressed and was cardinal in this review
  2. governance tooling is not a main focus of “New Protocols and Ideas” for this season.


All of the above was really worth addressing, and I thank you for reaching out on these points.
I also want to add my personal point of view as program manager of this season, and will try to be brief:

  • agnostic of the quality of proposals, I am relatively neutral right now on underspending vs spending at capacity; but underspending offers the advantage, if/when the right time comes, of heavily investing in good teams and opportunities. The opposite is not possible
  • the DAO vote gave us the mandate to finance good projects through the judgment of the current DA team, within a certain yearly budget. We don’t have a mandate to spend a certain amount of capital per month, nor even to allocate the whole amount assigned to the program as a whole
  • the “New Protocols and Ideas” domain is the most general bucket, and also the one that can be most elastic in relation to the DAO’s needs. We are, right now, in a transitional phase, with the OpCo being stood up and the SOS proposals being aggregated. I am personally fine with being more spending-averse in the general bucket for now, waiting for a time in which there is more clarity in the DAO as a whole
  • the domains are quite different from each other in terms of verticals and goals, and very different in terms of the people who manage them. These intrinsic differences translate, among other things, into different spending rates.

I am totally fine with @CastleCapital and specifically ndw/chilla being conservative in spending if they don’t find proposals that, in their opinion, are good enough. The last thing that we want, as a DAO, is to just throw money away.
I want to emphasize that they could avoid all of this noise by simply increasing the pace of approvals. Not a lot of people would complain. This would be the “easy” route. The hard route is to try to focus on the teams that have the highest potential and alignment with Arbitrum, up to deciding to spend money at a slower pace if, according to their judgment, these teams are not currently to be found.




I now want to focus on a few key points that should probably be highlighted as well.

Quite a strong take. You are stating that “this is the main cause for good builders to shy away from Arbitrum, forever”. Do you have any data, or have you conducted any meaningful number of interviews with protocols and builders that decided not to build on Arbitrum due to either a lack of grant programs or the direct rejection of a grant ask from the D.A.O. program?

I can see your point. At the same time, it is extremely unrealistic to think that a project supported in one of the programs/initiatives of our DAO, or of other DAOs, should automatically be supported in all or most of the other programs we have running.
If every project that got a grant from Thrive/Plurality, Questbook, the Foundation, or our DAO all got a second, a third and a fourth grant in other programs, we would just be wasting our treasury.

In my personal view, being allocated a grant and completing that grant successfully, or placing first in a hackathon, or anything similar, is a very good data point to consider when judging a follow-up. But it is also just that: a single data point, not a blank check entitling a team to further support.
And no, this doesn’t clash with the vision of a funnel for builders that both I and others have posted; it is instead a sound and logical way to approach a situation in which we have builders in numbers several orders of magnitude above the capital and support we can offer.

As a final note, this is the point that kept me thinking for the last few hours.
The question about the approval rate in this specific domain was already asked by you in the last GCR call, and I provided an answer. Maybe it was too brief, due also to the time we all had at hand, so I can understand why you wanted more details.
You could have reached out to me or to CastleCap about this question instead of going directly to the forum; regardless, this is (kinda) fine as well.

But

  • wanting to ask this question
  • not reaching out to the PM of the program, or to the DA, or to the company the DA works for, to get clarification
  • posting on a public forum statements containing insinuations about the work ethics of the individuals involved

is something that doesn’t provide any value to our program, to our DAO or to Arbitrum as a whole.

@paulo, @jojo — thank you both sincerely for the time and thought you’ve put into this discussion.

We have spent considerable time as a team reflecting on the process and our own approach to the application. I want to acknowledge up front: winning a hackathon or having prior DAO engagement does not entitle any project to a grant. We agree fully that every application must be evaluated against the DAO’s approved rubric for New Protocols and Ideas.

That said, I want to respectfully clarify a few points:

@jojo, isolating individual rejection criteria out of the broader context (such as “governance tooling not being core to this season”) does not present the full picture of what was a deeply iterative and collaborative review process. This also differs from the stated objective at the beginning of this post.

In our case, @Castle had some misconceptions and questions, which we diligently addressed over multiple rounds. Despite this, the final response was an immediate refusal with no opportunity for re-review. We found this surprising and disappointing, especially after the effort to clarify and align.

Additionally, the public scoring of 1/5, even after multiple clarifications, leaves a permanent visible mark on our project that does not fairly reflect the spirit of the engagement or our hard work over the past six months.

The two main points cited in the rejection were:

  1. Sustainability concerns not properly addressed — we respectfully disagree. We provided detailed responses on both our long-term plans and business model.
  2. Governance tooling not being a core focus of the New Protocols and Ideas domain — yet the published program description explicitly includes “governance tooling” under that domain’s scope.

We had no avenue for review or escalation other than to accept the decision, despite a clear disparity between the stated goals and the rubric. We hope we offered some constructive feedback, and we understand the subjective nature of any grant process. However, I want to be clear: we do not want our project or its perception in the ecosystem to suffer as a result of this process.

We remain committed to attempting to build within the Arbitrum ecosystem and hope our experience can be used as a constructive datapoint for further refining the program.


shouldn’t these questions be in the application template then? If they are all the same…

who decided that?

but there isn’t consistency on this between the different domains. that’s what I pointed out. and as the PM, you @JoJo should look into it and try to make it more consistent I think, that’s why I commented this. I agree that maybe right now, with all the changes in the DAO, there isn’t a clear direction of what to approve, but that shouldn’t be an excuse to reject builders.

Actually, yeah. I spent last week in Amsterdam for ETHDam and a few other meetups, and during that week I met at least 3 builders who, whenever I said I was a delegate at Arbitrum DAO, immediately had to tell me their stories of how Arbitrum doesn’t value builders like them, because they got rejected from previous rounds of grants on Arbitrum. One of them even showed me the Telegram chat between his team and you, @JoJo, from when you were the DA for this New Protocols and Ideas domain last season, saying that you were dismissive of their project and that that was why they didn’t focus more on Arbitrum. This is anecdotal evidence of course, so it is worth what it’s worth… but I really think we can’t, as an ecosystem, reject small grants like these, in this way, and claim to be builder friendly. The testimonials on @krst’s thread are also quite revealing of this sentiment, I think.

regarding this, I would ask you to quote my full sentence and to read it in context.

I was not insinuating anything about anybody’s work ethic. I was wondering what the reasons for this discrepancy might be, and I came up with 2 options: they are being too conservative (which you explained, but I would love to hear it from @CastleCapital directly, and that’s why I tagged them in my previous post and not you); or it’s just incompetence or complacency, which is something every delegate in this DAO should be on the lookout for and should inquire about. And that’s what I did. I have no conflict whatsoever with Castle Capital and I think they actually do great work, but in this case, I just don’t understand what they are doing and why, especially because I’ve been recommending good builders to get into Arbitrum and apply for these grants, and they keep getting rejected.

So, I would still like to hear, from Castle Capital directly, why their acceptance rate for the New Protocols and Ideas domain is way lower than in the other domains.

Hello, and thanks for your feedback. I first want to clarify something.

To clarify, my response above was to Paulo and what he raised; I quoted your application because he mentioned it, and it wasn’t an answer to your team specifically.

As I said, I was not aware of your post and I wanted to reach out to you on this; I guess we can do it here.

Let’s go step by step here as well.

Reading the application, I see that after your submission there was a detailed back and forth between you and the DA before the review. This conversation also extended after the review itself.
I can understand the disappointment of a rejection after you answered the questions from Castle. Don’t get me wrong: a team that is able to get on the line right away and answer in detail is always a positive factor. But it is not necessarily enough; specifically, in this case the content of the answers provided was likely not enough to move the needle.

I would like to go into a bit of detail here. This was the question about the topic from Castle

And this was your answer

I will be extremely honest here: this didn’t properly address the question. The question was about how to get the application from being used by 10 users, to 100 users, to 10,000 users, regardless of its open-source and non-profit nature. It was not necessarily about finding further funds (although this is always a concern for us: unfortunately, being a public good and open source doesn’t mean you don’t have to pay the bills, and this is why further grants were mentioned, for example).
The question had a deeper meaning, specifically, as stated, about the growth of the product.
We have plenty of working products on Arbitrum, completed and functional, that are neither interconnected with each other nor used by a meaningful number of users. Your answer was about completing the product and, after that, starting to scale. But you didn’t provide a strategy, details, a plan, or a business model. The goal is to have projects that are framed inside a vision that can leverage all components in a way that the total value is higher than the sum of the single parts.

The fact that a topic is mentioned doesn’t mean that it is currently cardinal or the focus of the ecosystem.
Can we accept governance tooling proposals? Of course.
Is “New Protocols and Ideas” currently focused mostly on governance tooling? No.
In the previous season we had several projects related to this vertical (proposal.app, and the dashboard and reporting from curia, to name just a couple). Compared to last year, the governance approach has changed and shifted in the last quarter, with the vision posted by the Foundation to name just one factor. This doesn’t mean that we don’t need good governance tooling; it just means it is not a high priority. Even from just a first analysis of the SOS proposals, it is clear that the DAO’s submissions are focused on other verticals:

  • Onboarding institutions
  • Distribution / User Acquisition Channels
  • Verticals, workstreams, and operational efficiency
  • DeFi as the core pillar
  • Supporting builders
  • Giving premium to ARB
  • AI & Social
  • DePIN

Note that I am not providing an evaluation here of what is important, but merely reporting the main topics of discussion, and how governance tooling is indeed not part of this very broad and horizontal discussion currently happening in our DAO.

Again, this doesn’t mean that the domain can’t accept proposals outside the scope of what I just mentioned, but obviously they could be deemed, in some cases, less important.

The scoring is a sum of 11/25, divided across a total of 5 categories. I can understand how it can be frustrating not to score high enough; at the same time, a rationale was provided for each category:

  1. Team competence: Strong technical team, with clear experience. That said, the full picture is a bit dev-heavy, without clarity on product or growth.
  2. Innovation and novelty: Interesting governance mechanics, but overall it feels like an iteration on existing tools rather than a clear leap forward. The unique angle still needs sharpening.
  3. Ecosystem Alignment: DAO tooling aligns with Arbitrum’s direction, but the lack of a unique contribution or integration plan makes it feel less compelling as of now.
  4. Feasibility and Implementation: The tech plan looks doable, but without insight into progress made post-grant, it’s tough to gauge momentum. Missing well-defined growth strategy and adoption plan creates doubts long-term.
  5. Measurable Impact: KPIs are mostly dev-focused. Little around usage, traction, or how Arbitrum governance behavior would shift in practice

Reading this feedback, I sincerely believe there is enough for your team to reflect on regarding what, from our point of view, didn’t work in the application. I don’t think the rejection was superficial or dismissive of your work, considering that this was just the 300-character mini review of each point of the rubric, and not the overall general review in the comment, which is much longer and more detailed.

You did, indeed, provide meaningful feedback and clarifications even after the rejection. And Castle likewise kept the line open, engaging in the conversation, providing more insight into their decision process and further questions. These are surely valuable conversations, and I thank you for being this committed, because it also helps us better understand builders, how they approach the program, and what we can generally improve over time. For this reason, I want to thank you for taking the time not only to answer in the Questbook comments after the rejection, but also to post here in the forum about your experience.

The template is already rather long, and there are already 4+ questions tailored for each domain besides the general ones. The further questions are part of the interview, and in general there is always an interview with the grantee; it is impossible to put all the needed info in one shot.

A more detailed answer on this was posted above; I’ll repeat it here for clarity.

I partially answered this above already; I want to restate it and add a few details.

One thing I want to mention, though: while the education/events domain is plugged in with the AF/OCL people related to events, the gaming domain is plugged in with AF/OCL people from gaming, Orbit works with people at OCL for Orbit, and Dev Tooling is starting to interface with the devrel people at AF/OCL, New Protocols and Ideas is currently not benefitting from any particular connection, due to its generalist nature (we do sync with the Foundation, but those syncs are not specific to this domain). This also means that the other domains have a relatively easier life in terms of inflow of applications plus direction from important stakeholders. We will be looking to improve the protocols domain in this sense. This won’t necessarily mean a change in spending rate, though.

@CastleCapital is more than able to answer any of your concerns, both in public and in private. That said, since I am the PM of the program, I asked them to leave the answer to me: due to my role, I can (and should) speak on behalf of the DAs when needed, especially because not only do I own the high-level vision of the program, but I can also invest my time in comms with delegates and the DAO, while the DAs focus on what matters, which is evaluating proposals and interviewing builders.
As a side note, we have talked a lot about the professionalization of the DAO lately. My personal opinion is that this professionalization also stems from respecting the individual roles in the various programs and orgs in general. Not trying to corner a single individual/entity, when there is a party that has both the role and the capacity to answer certain questions, and when those questions are in fact being answered, is part of a business maturity level that we should all strive and expect to achieve if we collectively want to put Arbitrum in the major league of players in the next 2, 5 and 10 years.

I’ll purposely overlook a series of statements that lack context and, potentially, substance. I’ll instead answer with precise numbers: in 59 days, the “New Protocols and Ideas” domain has received a total of around $2,500,000 in requests, give or take, excluding proposals we deem non-compliant (asking, for example, for 200k). This domain currently has a total of $250,000 to spend in terms of funding in our Safe. Over a year, it should have $1,250,000 to spend, capital to which we don’t currently have access, as specified in the previous report. Annualizing the numbers, this means a total requested amount of roughly $15 million versus $1.25 million available or, to translate, an availability of about $0.83 to disburse for every $10.00 asked.
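The back-of-the-envelope math above can be reproduced in a few lines (the dollar figures come straight from this post; the exact result depends on how you annualize, which is why it lands around $0.81–$0.83 per $10 asked):

```python
# Demand vs. available capital for the "New Protocols and Ideas" domain,
# using the figures quoted in this post (a sketch, not official accounting).
requested_59_days = 2_500_000   # USD requested in the first 59 days
yearly_budget = 1_250_000       # USD the domain should have over a full year

# Annualize the observed demand.
annualized_demand = requested_59_days * 365 / 59   # roughly $15.5M

# How much can be disbursed per $10.00 of demand.
per_10_asked = yearly_budget / annualized_demand * 10

print(f"Annualized demand: ${annualized_demand:,.0f}")
print(f"Available per $10 asked: ${per_10_asked:.2f}")
```

Rounding the annualized demand down to a flat $15 million, as the post does, gives the quoted $0.83 per $10.00.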
To go back to what you posted

I can say that while your statement is for sure driven by the will to onboard as many builders as possible into our ecosystem, it doesn’t match the reality of this grant program, or of any grant program for that matter.
For what it’s worth, the solution is not necessarily to increase the capital allocated: I am pretty sure that if we just bumped the capital for the program from $6 million to $60 million, we would still have a ratio of 1:5 or 1:10 of capital available versus general demand.

Focusing on good spending and good teams, according to the needs of the ecosystem, is valuable and will allow us to leverage the capital at hand. Building a comprehensive structure for builders, in which this program, or any other grant program, is just a piece of the puzzle, is another potential way to onboard builders.
Spending more, and spending at a faster pace, is neither sustainable nor a solution.

Hi @JoJo,

Thanks for taking the time to engage on this. I spent a lot of time working on the Lighthouse application, and I have some issues with what you have presented here.

You mentioned a process that includes:

This sounds great, and is exactly what we were looking forward to! However, it did not happen.

If you look at the Questbook comment chain, the third sentence of the very first communication we received definitively states:

That said, after reviewing everything, we’ve decided not to move forward with grant support.

It was only after this rejection that they asked 2 questions, but neither of them was appropriate, as both had already been answered in the very first section of our application.

As one explanation of the rejection, we were told “Compared to similar projects like http://x23.ai/, it’s hard to see how this stands out,” but x23.ai has absolutely nothing to do with our project.

Another explanation was included in the rubric:

overall it feels like an iteration on existing tools rather than a clear leap forward. The unique angle still needs sharpening.

When we pushed back on this, the reviewer was not able to explain what existing tools we have iterated on. The only overlap they could find was other projects that happen to employ the concept of locking tokens, which is a pretty basic building block used by many technologies.

You claimed the rubric scoring was correct, but how could that be possible if the rubric was completed at a time when we were being compared to unrelated projects and the reviewers were penalizing us based on their own misunderstandings?

Overall, we received 3 replies on Questbook, each one simultaneously rejecting us AND asking more already-answered questions. The third rejection we received stated:

The additional detail definitely helped clarify a few of the questions we had around the mechanism and your ties to the ecosystem.

If they had questions, why weren’t we allowed to answer them before being rejected and dismissed? And if their questions were then answered, why wouldn’t they update the scoring?

You’ve defended the situation by explaining how the process should work. But we aren’t taking issue with the process, we are pointing out that the system you described wasn’t followed. And now it looks like you are trying to claim the assessment we received was correct all along, when the responses from the reviewers clearly demonstrate that they rejected us without even understanding our application.


Only a certain amount of information can be put in a template and in a grant proposal, so follow-up questions are normal.

It is also fine, in my view, for an evaluator to initially misunderstand a proposal and then, after clarifications, go back to analyse it. I would even say that the capacity to go back to a proposal after receiving new information, or after recognizing something wasn’t clear enough, is a good and welcome practice.

This is indeed what happened in this case; while the new info was analysed from scratch, it didn't change the outcome for the reasons posted below:

I really want to stress that, in this review, I personally read appreciation for your team and the work done so far. The constraints stem from the project being a "nice to have" rather than a "must have", as the Cloudflare CEO would say, due to what is mentioned here (not-so-strong differentiation, sustainability and adoption plans that are not clear enough, a lack of deep integration in the DAO), combined with my previous point about funding available versus total demand (well below a 1:10 ratio).

Let me know if I can further help.

You have said there is no “strong differentiation”, but that must mean a similar project exists. What is that project?

I agree that you are not required to score us highly just because others in the Arbitrum ecosystem do, but when we have been recognized and awarded for innovation by RnDAO, Arbitrum Stylus, and the Uniswap Foundation, and no similar project has been presented, it is reasonable to question whether a score of 1/5 for innovation is accurate.

I do appreciate all the questions. I don't necessarily think, though, that the Arbitrum public forum is the best place to discuss a single reviewed proposal in such detail. More than a month ago we set up a shared group between your team, Paulo, and me, so you can ping me there if you want.

This isn't just about a single grant rejection. It's about a deeper breakdown between the process that's described and the one that's actually experienced by builders — especially those entering the ecosystem for the first time.

All we did was document our experience. Respectfully. Transparently. Because it matters for other builders.

Let’s be clear: we never questioned anyone’s ethics or made personal insinuations. We pointed to structural problems — deviations from the rubric, review execution, and scoring — using our case as a concrete example.

We need only look at this thread. You chose to selectively highlight partial points to support your own framing.

All we have subsequently done is clarify facts.

This thread itself is a perfect example of the issues at play.

You have repeatedly attempted to discredit us, and when we responded (after the selective positioning) and pointed out the inaccuracies in your statements, you attempted to move the conversation private.

We have no more to say on the matter; the DAO can be the judge of the facts.

We’re putting our reputation on the line not to get a grant — but to stand up for values we believe the DAO should live by: transparency, fairness, and accountability.

This is about more than just our project. It’s about how we treat builders — and how a system evolves when critique isn’t just tolerated, but welcomed.

Hello and thanks again for reaching out.

If you felt discredited by what I posted, I am sorry, because that was not the intention at all. At the same time, I don't think that has happened in the conversation above, which is out here for everybody to read.

Taking the conversation private is something I invited you to do only after 5000+ words had been publicly written here, for one reason only: in our grant program, we have so far received more than 300 proposals. It's totally fair to take any concern you might have to the DAO, since this is a DAO program; at the same time, this way of communicating doesn't scale, because if all of the roughly 90% of rejected projects (around 270) communicated like this, we would spend our time in the forum explaining our actions instead of running the program.

I totally agree. This is exactly the reason why

  1. Questbook, and the evaluations of the DAs, are all public for all delegates to read
  2. There are monthly reports from the team published on our website and in the forum
  3. The team also joins the biweekly governance calls to report on what is happening

In addition to this, as you can see, we also take a (considerable) amount of time to address concerns like yours in public, and we allow teams to reach out directly when they have issues; some, like your team, have a well-established direct line with me on Telegram.


Thank you for all the feedback.

yeah… enough said. We can't claim to support builders here at Arbitrum and then treat them like this.

I guess for me the TL;DR is:

Two applications have been approved so far. One of them was approved without a scoring rubric submitted, and the comments suggest a private call happened. So not very transparent.

The other application was met with a warm response and a list of questions for them to answer, so they could be guided through the application process to success.

Our application received no relevant questions and was denied (presumably) because the reviewers didn't understand it. When we tried to bring attention to the issue and ask for a second look, we were told the system was working as planned, with no other assistance offered.

Three very different experiences, and the point of this thread is just to highlight the lack of consistency and transparency that so many people find frustrating.

The second monthly report is live on our website!

TLDR:

  • referencing period: 18th of April to 17th of May (second 30 days of operation)
  • 323 proposals in total, of which 30 were approved
  • focus is shifting to coordinating with the Arbitrum Foundation and Offchain Labs
  • as expected, the rate of approval has increased in the second month, up 30% compared to the previous 30 days
  • in this second report we have added insights on the deal flow from each Domain Allocator
  • we have run specific initiatives, such as a Twitter space to advertise the program and talk with builders, and a campaign aimed at increasing proposals for Devconnect 2025