I think the answer to the first question is that we will move from using dollars to using compute. We are already seeing this within companies like Nvidia, where employees are paid with both money and tokens.
On the need for idiosyncratic agendas pursued over years as an essential motor of discovery: AI might have a genuine advantage here. Human discovery trawls the possibility space very sparsely, relying especially on social pressure to trim the outlandish theories. This matters because humanity is sharply limited in the number of individuals available to go deep down each rabbit hole, and the time spent developing new measurements is the limiting factor in your stories.
But AI could afford to be, if not exhaustive, much more Bayesian in its handling of theories over long timescales: spend a few hours of datacenter time pushing even the long tail of theories up to the measurement bottleneck, maintain a queue of resources for verification (which itself would not have to contend with social pressure via grantmaking and the like), and progress iteratively. The parallax experiment would still have waited some years, but importantly it would keep adding probability mass to the correct model, and scientific acceleration would be real.
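As a toy sketch of that iterative Bayesian triage: keep probability mass spread over many competing theories and update it as cheap, noisy measurements trickle in, so even long-tail theories accumulate or shed mass over time. All numbers and the noise model here are invented for illustration.

```python
import random

def bayes_update(priors, likelihoods):
    """One Bayesian update: posterior ∝ prior × likelihood, renormalized."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

random.seed(0)
n_theories = 50
probs = [1.0 / n_theories] * n_theories  # flat prior over the long tail

for _ in range(200):  # many cheap, noisy experiments
    # Hypothetical noise model: the "correct" theory (index 0) explains
    # each measurement only slightly better than its rivals on average.
    likelihoods = [random.uniform(0.45, 0.55) for _ in range(n_theories)]
    likelihoods[0] += 0.02
    probs = bayes_update(probs, likelihoods)

print(f"posterior on the correct theory: {probs[0]:.3f}")
```

The point of the sketch is that no single experiment is decisive, yet mass still concentrates on the right model given enough iterations, which is the "progress iteratively" dynamic described above.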
Regarding your note on Darwin and parallel discovery, I think Guns, Germs, and Steel by Jared Diamond may offer a useful analogy. Diamond makes a similar point about the origins of food production: farming was not really “discovered” or “invented” in a clean, conscious way. It emerged as a by-product of many local decisions over hundreds of years whose consequences were not understood in advance.
The connection seems implicit. Diamond is asking a similar kind of question, but about the origins of food production rather than the origins of natural selection. In both cases, the thing looks obvious only after the surrounding pieces are already in place. Farming looks obvious once you already live in an agricultural world. Natural selection looks obvious once you have deep time, extinction, biogeography, artificial selection, population pressure, and a historical view of nature. But without Lyell, Malthus, and the long accumulation of observations before Darwin and Wallace, the idea was much harder to see.
That also connects to your point about verification. Darwin’s theory was not verified in the same way Newton could verify gravity by running the numbers. It depended on circumstantial, retrospective, and cumulative evidence. You rarely get one decisive experiment. Instead, you get a mosaic of partial evidence that becomes compelling only after enough pieces fit together.
A lot of the broader essay also reminded me of John Gribbin’s Deep Simplicity. In particular, your section on why RL may be disproportionately bad at science seems connected to some basic features of the world itself: chaos, nonlinearity, and emergence. The world is not random, but many systems are chaotic, meaning deterministic processes can still become practically unpredictable because of sensitivity to initial conditions. Nonlinearity means small causes can produce large effects, or large causes can dissipate into small effects. Emergence means that important properties can appear only at higher levels of organization and cannot be easily inferred from the parts alone.
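The sensitivity-to-initial-conditions point is easy to see with the logistic map, a standard toy example of deterministic chaos: two trajectories starting a billionth apart eventually become effectively uncorrelated, even though the dynamics are fully deterministic.

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r * x * (1 - x) in the chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)

for t in (0, 20, 40, 60):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.6f}")
```

The tiny initial gap roughly doubles each step, so after a few dozen iterations prediction is hopeless in practice, which is exactly the "deterministic yet practically unpredictable" property described above.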
I bring this up because those three features do not make verification impossible, but they make it much harder. They create delayed feedback, noisy causality, and ambiguous credit assignment. That seems central to why scientific discovery is not like math, where the verification loop is tight. In science, especially for big conceptual breakthroughs, the world may only tell you much later whether a research program was progressive.
If the benefits of AI don't go directly to regular people, then perhaps redistribution could look like taking a portion of the profits of AI companies and giving that portion directly to regular people. For instance, the government could institute a property tax on AI data centers and use that money for some form of UBI. This would probably work especially well in the short term, given all of the recent populist backlash against data centers.
Come to think of it, the Alaska Permanent Fund might be a good model for how this redistributive system could be built.
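As a back-of-the-envelope sketch of that Alaska-style mechanism, a data-center property tax could feed a fund that compounds and pays an annual per-person dividend. Every number below is a made-up placeholder, not a policy estimate.

```python
# Hypothetical inputs (placeholders, not real figures):
taxable_datacenter_value = 500e9   # assessed value of AI data centers, USD
property_tax_rate = 0.02           # 2% annual levy
fund_return = 0.05                 # assumed long-run fund return
population = 340e6                 # rough US population

fund = 0.0
for year in range(10):             # let the fund compound for a decade
    fund = fund * (1 + fund_return) + taxable_datacenter_value * property_tax_rate

# Alaska-style payout rule: distribute a fixed fraction of the fund each year.
annual_dividend = 0.05 * fund / population

print(f"fund after 10 years: ${fund / 1e9:.0f}B")
print(f"annual dividend per person: ${annual_dividend:.0f}")
```

Even with generous assumptions the per-person dividend starts small, which suggests a scheme like this would be a supplement rather than a full UBI unless the tax base grows enormously.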
To answer the first question: for individuals, there will naturally be cheaper open-source models and SLMs useful for different sets of tasks. The harness layer should be much more individualized, which is what gives rise to so many companies. But yes, ideally we might need a UBI for intelligence infrastructure.
“There should be dedicated people to keep a bunch of dormant research agendas alive in case they turn out to be productive upon further investigation.”
This is the role that peer-reviewed publications have long played.
There are plenty of examples of research that proved critical only after someone stumbled upon obscure papers that had gone nowhere initially.
My takeaway from parallel discovery is that new insights don’t need flashes of “divine” inspiration, or really anything special. It mostly just requires drawing connections between existing facts in the scientific literature.
LLMs are famously getting quite good at this.