Blog prize for the big questions about AI
The not-so-secret point of this whole contest is so that I can hire a research collaborator
There has never been a time when excellent intellectual output on the right questions has been more valuable or more urgent. Compelling answers to the big questions about AI can inform the most important economic and foreign policy decisions that will ever be made, the deployment of (at least) hundreds of billions of philanthropic dollars, and the training and governance of superintelligences.
I’m announcing a $20,000 blog prize in order to find people who will excel at researching and thinking through these problems. The not-so-secret point of this whole contest is so that I can hire a research collaborator to think through questions like this hand in hand with me. See more at the end.
Pick a question below, and spend no more than 1,000 words answering it. 1st, 2nd, and 3rd place will get $10,000, $6,000, and $4,000 respectively. I’ll publish the winning entry (and potentially the runners-up) on my blog. Please submit by May 10th, 11:59 PM PST.
Questions - choose one
A couple of years ago, there was this idea that AI progress might slow down as we pushed further into the RL regime, for two reasons. First, as horizon lengths increase, the AI needs to do many days’ worth of work before we can even see whether it did it right, so if we’re still in a naive policy gradient world, the reward signal per FLOP goes down. Second, we’d already crossed through many OOMs of RL compute from GPT-4 to o1 to o3, and it would not be feasible to immediately replicate that many OOMs of increase again. But AI progress seems to have been fast nonetheless - even potentially speeding up, if rumors about Spud or Mythos are to be believed. What gives? What did that previous intuition pump, which motivated longer timelines, miss?
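To make the first intuition concrete, here is a toy back-of-envelope sketch of the “reward signal per FLOP” argument. All the numbers (FLOP per hour of agent work, one bit of reward per episode) are made up purely for illustration - this is not a claim about any real training run:

```python
# Toy sketch of the "reward signal per FLOP" intuition.
# Assumption: with naive policy gradients, one episode yields a roughly
# fixed amount of reward information (~1 bit for pass/fail), while the
# compute spent per episode grows linearly with the task horizon.

def reward_bits_per_flop(horizon_hours, flop_per_hour=1e18, bits_per_episode=1.0):
    """Reward information obtained per unit of training compute."""
    episode_flop = horizon_hours * flop_per_hour  # compute to run one episode
    return bits_per_episode / episode_flop

short = reward_bits_per_flop(1)        # ~1-hour task
long = reward_bits_per_flop(24 * 30)   # ~1-month task
print(f"signal/FLOP ratio (short vs. long horizon): {short / long:.0f}x")  # 720x
```

Under these toy assumptions, moving from 1-hour tasks to 1-month tasks costs you ~720x in reward signal per FLOP - which is the naive case for a slowdown that the question asks you to rebut.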
What’s the most plausible story in which foundation model companies actually start making money? If you consider each individual model as a company, its profits may be able to pay back the training cost. But of course, if you don’t immediately train a bigger, more expensive model, you stop making money after 3 months. So when does the profit start? Maybe at some point scaling will plateau - but once progress at the frontier slows down, the combination of distillation and low switching costs (cloud margins result from high switching costs) makes it really easy for open source to catch up to the labs, eating into their margins. So how do the labs actually start making money?
With OpenAI’s new raise at an $852B valuation, OpenAI Foundation’s stake is now worth $180B. Anthropic’s cofounders have pledged to donate 80% of their wealth. Nobody seems to have a concrete idea of how to productively deploy hundreds of billions (soon trillions) of dollars to “make AI go well”. If you were in charge of the OpenAI Foundation right now, what exactly would you do? And when? It’s not enough to identify a cause you think is important, because that doesn’t answer the fundamental problem of how you convert money into impact. Identify the concrete strategy you would recommend pursuing.
What should countries that are not currently in the AI production chain (semis, energy, frontier models, robotics) do to avoid being totally sidestepped by transformative AI? If you’re the leader of India or Nigeria, what do you do right now?
Rules and tips
Please don’t let a lack of domain expertise dissuade you from entering. I’m looking for someone who can ramp up fast on unfamiliar topics and think clearly.
Each entrant may submit only once.
You are still eligible for this essay competition even if you’re not interested in the researcher role. Nor does winning this competition guarantee that you will be offered the role.
You’re welcome to use LLMs to help you research, but I specifically picked these questions because I’ve found LLM answers to them unsatisfying. On these kinds of ambiguous questions, LLMs are too all over the place. For example, they’ll identify 5 plausible answers but not have the context and taste to identify the crucial factor and iron out its implications.
You only have 1,000 words - make them count. People have a habit of spending the first few paragraphs clearing their throat - avoid that.
Why am I hiring for a researcher?
I want my podcast/blog to move from just asking questions about AI to actually helping answer them. But there are too many important questions, and I need a collaborator to build up context on them all, to explore dozens of fractal sub-questions, to consider the rebuttals and syntheses, and so that we can sharpen each other’s thinking.
The questions I want us to explore are very broad while at the same time requiring deep technical analysis across many domains to actually answer.
Why am I hiring this way?
Well, I could just put out a job ad for a researcher, but I’ll get 1,000 different resumes, and I’ll have no clue based on that information whether the applicant would be any good at synthesizing lots of technical arguments and information. So I thought, let’s just list out some questions where I genuinely don’t know the answer and would be keen to get some insight.
What this role looks like
Ideally in person in San Francisco, but potentially open to remote.
Will pay competitively
Submit here
If you have questions or comments, I’m at hello@dwarkeshpatel.com.