6 Comments
zeno

Regarding consciousness: in my little pea brain, I feel consciousness is just the OS (operating system) of the universe. It's outside our realm of observation because it itself defines what is observable. It's outside of our fish bowl, so to speak.

Nathan Lambert

The Strogatz nonlinear dynamics book is great. Loved taking his class in undergrad and recommend it widely.

Steven Adler

Appreciate you writing this up; the point about ‘how different learning looks now’ really resonated with me.

When I think back on my own experience learning hard material in school, I'm struck by how much better YouTube was than my grad school classes, which also had videos available asynchronously and on demand but just never seemed to model the mind of a learner.

Like, I'd sat through hours of grad school Fourier transform lectures and understood basically nothing, because they weren't really trying to get at the intuition and, relatedly, hadn't invested in the video animations that make the concepts click. 3B1B is just so, so much better on that front; StatQuest too.
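For anyone curious, here's the kind of minimal demo that builds that intuition (a toy numpy sketch of my own, not anything from those lectures): the Fourier transform just re-expresses a signal as a sum of frequencies.

```python
import numpy as np

# A 2-second signal: a 3 Hz sine plus a quieter 7 Hz sine.
fs = 100                      # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# The FFT rewrites x as a sum of frequencies; the magnitude
# spectrum peaks at exactly 3 Hz and 7 Hz.
freqs = np.fft.rfftfreq(len(x), 1 / fs)
mag = np.abs(np.fft.rfft(x))
print(freqs[mag.argsort()[-2:]])  # -> [7. 3.], the two dominant frequencies
```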

Vyom

100% agree on using LLMs to learn hard concepts in the natural sciences. I started learning statistical mechanics with no background in physics at all, but using LLMs to build bridges between the topic I'm learning and topics I already know really accelerates learning new subjects.
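As a concrete (hypothetical) example of the kind of bridge I mean: if you already know machine learning, the Boltzmann distribution is literally a softmax over (negative, temperature-scaled) energies, so the whole concept comes nearly for free.

```python
import numpy as np

def boltzmann(energies, kT=1.0):
    """Occupation probabilities p_i proportional to exp(-E_i / kT).
    Same formula as an ML softmax over logits -E_i / kT."""
    w = np.exp(-np.asarray(energies) / kT)
    return w / w.sum()

# Two-level system: the energy gap only matters relative to temperature.
print(boltzmann([0.0, 1.0], kT=0.1))   # cold -> almost all weight on the ground state
print(boltzmann([0.0, 1.0], kT=10.0))  # hot  -> nearly uniform
```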

Erik Schiskin

Interesting reading, Dwarkesh. Here's my $0.02:

1) LLMs cheapen "local learning" but not "global understanding."
2) The new scarce input is not information; it's attention + stopping rules.
3) In bio, AI hits the "design" margin harder than the "test" margin.
4) "New physics of consciousness" is (economically) a claim about externalities.
5) The fractal-training bit is secretly about returns to meta-innovation.

Sharmake Farah

On the Machines of Loving Grace section: I agree that Dario is way overestimating how much insight alone can help biology, and that, unless the scenario plays out much further into the future, human experimentation will almost certainly still be required. (Though I do want to note that even an aligned AI could reduce the regulatory burdens of human experimentation very, very drastically, letting 100-1000 years of progress happen in 1-10 years, contra Dario, and I could absolutely see this happening in worlds where we survive AGI.)

To answer a question posed here about why intelligence can route around other bottlenecks:

"The key point that underlies his framework that intelligence can drive a century of progress in 5-10 years : “Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).”

it’s interesting to consider why this isn’t true for factors of production today. We live in a (relatively) capital-abundant and labor-scarce world. That is reflected in the labor share of income being 2x as high as the capital share of income. But this has been true for centuries upon centuries. Contra Piketty in “Capital in the 21st Century”, all these capital holders have not been able to get some runaway capital accumulation process going by figuring out a way around labor constraints. Why think that intelligence will be any different than capital in its ability to get around other factors of production? maybe the argument is that intelligence can actually help generate the other factors of production in a way that capital can’t."

The short answer is that you already answered why Piketty is likely to be right about our AI future, in an article called "Capital in the 22nd Century" on why wealthy capital holders will be able to hold essentially arbitrarily large fractions of wealth post-singularity. The short version: capital and labor become gross substitutes once we can create an AI population that, for example, never needs to sleep, can be scaled up massively, and can even improve itself, meaning we move from a Baumol world to a Jevons world.
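(For the formal version of that claim, the standard sketch, my gloss rather than Trammell's exact model, is a CES production function:

Y = (a·K^p + (1 − a)·L^p)^(1/p), with elasticity of substitution s = 1/(1 − p).

If s < 1 (p < 0), capital and labor are gross complements, so the scarce factor, human labor, bottlenecks growth: the Baumol world. If s > 1 (p > 0), they are gross substitutes, so accumulating capital in the form of AI workers can route around scarce human labor: the Jevons world.)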

(There was a lot of pushback on that post, which I did see, but I ultimately think the counter-arguments didn't undermine the article's thesis, for a multitude of reasons.)

Link below:

https://philiptrammell.substack.com/p/capital-in-the-22nd-century

So I was confused about why you even asked the question. But to answer it directly: yes, intelligence will be able to generate the other factors of production, and the same holds true of capital.

This is especially true with atomically precise manufacturing/nanotech, but it can happen even with just a self-sufficient robot economy.
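(Toy numbers of my own, not Davidson and Hadshar's: if a self-replicating robot economy doubles its physical capital stock every T months, the stock grows as K(t) = K_0 · 2^(t/T). With a 12-month doubling time, that's roughly a 1,000x expansion per decade, since 2^10 ≈ 1024, which is the basic arithmetic behind an "industrial explosion.")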

The best article on this is Tom Davidson and Rose Hadshar's "The Industrial Explosion":

https://www.forethought.org/research/the-industrial-explosion
