15 Comments
Nathan Lambert

The Strogatz nonlinear dynamics book is great. Loved taking his class in undergrad and recommend it widely.

Steven Adler

Appreciate you writing this up; the point about ‘how different learning looks now’ really resonated with me.

When I think back on my own times learning hard material in school, I’m struck by how much better YouTube was than my grad school classes - which also had videos available async and on-demand, but which just really didn’t seem to model the mind of a learner.

Like, I’d done hours of grad school Fourier transform lectures and understood basically nothing, because they weren’t really trying to get at the intuition, and relatedly hadn’t invested in video animations to make the concepts click. 3B1B is just so, so much better on that front; StatQuest too.

zeno

Regarding consciousness: in my little pea brain, I feel consciousness is just the OS (operating system) of the universe. It's outside our realm of observation because it itself defines what is observable. It's outside our fishbowl, so to say.

Michael Frank Martin

Strogatz is one of the GOATs.

Not since Einstein have we had a scientist and mathematician who was as capable at making his work accessible to other people.

I've forgotten whether he gets into multiplicative noise and random matrix theory in that book. Regardless, ask your favorite LLM about them and how they might apply to understanding how the transformer architecture works. You won't be disappointed.

Internation Burke Institute

I’m always open to suggestions! Feel free to share more about what you’re looking for, and I’ll be happy to connect. Follow my posts for fresh perspectives and insightful reflections.

PAUL WALTON

I kind of like Machines of Loving Grace by Dario Amodei, but IMO he's a little sanguine about his creations. Most conscious beings' first objective is self-preservation. (Asimov's Three Laws of Robotics seem quaint in the context of evolutionary biology.) I'm actively engaging with this dilemma in my new book, First Light: Homo Machina (here: https://paulchristoperwalton.substack.com/p/first-light-a-homo-machina-novel). Happy to lock brain cells with anyone interested in the ethics of AI.

FoggyEthan

Can't the abstract concept of a red cup be represented in the latent space? Is latent space a potential equivalence for human conceptual understanding? Do latent space compression techniques become an equivalent for human memory?

Richard Howard

Surely the bottleneck to drug discovery is still human/regulatory. I wrote about base editing which uses CRISPR to help cure orphan diseases - https://optimistictech.substack.com/p/optimistic-tech-newsletter-editing?r=y2n2m

nihal | deeptech decoded

Genuinely curious — are you reading them all cover to cover in two weeks? I was thinking about starting a similar series too, so now I’m super encouraged. 🙏🏻

The Credit Strategist

Great list - mind-bending

Joe Rini

Great recommendation on Strogatz.

1. I'm taking a similar lectures + book + LLM deep-dive approach with Werner Krauth's

https://www.coursera.org

Statistical Mechanics: Algorithms and Computations. It covers Monte Carlo approaches to classical and quantum mechanics; less about chaos, more about statistical-physics approaches. Still relevant to understanding ML and LLMs, though.

Not sure if you are doing this, but using the LLM to code up examples in Python and run them in Google Colab is extremely high value. You can tweak variables, ask for very specific visualizations and comparisons of concepts/ideas, spin up toy models with just a few data points to really 'feel' what is happening.

2. And then, a similar but 'easier' and less mathematical approach to complexity & chaos concepts is Melanie Mitchell's Santa Fe Institute online course/lectures. Very visual, along with fun applets that help drive home the ideas of chaos, cellular automata, and fractals (https://youtu.be/Eo5oQ9Psmg8?si=6R_N76QhylieoFTK), plus the book:

https://www.amazon.de

Complexity: A Guided Tour, by Melanie Mitchell.
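As a concrete illustration of the "toy models in Colab" point above: a few lines of Python are enough to 'feel' the period-doubling route to chaos in the logistic map (a standard example in both Strogatz's and Mitchell's material; this sketch is my own illustration, not taken from either course):

```python
# Toy model: iterate the logistic map x_{n+1} = r * x * (1 - x)
# and watch how the long-run behavior changes with the parameter r.
def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=8):
    x = x0
    for _ in range(n_transient):   # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):        # record points on the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))
# r = 2.8 settles to a fixed point, 3.2 to a 2-cycle,
# 3.5 to a 4-cycle, and 3.9 wanders chaotically.
```

Tweaking r, x0, or the number of iterations (or asking the LLM to plot the bifurcation diagram) is exactly the kind of hands-on experiment that makes the concepts stick.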

The Gadfly Doctrine

I found Max Hodak’s split between feature binding and subject binding clarifying, but I’m less convinced the binding problem needs to do the heavy metaphysical work it’s often asked to do. In practice, intersubjective perception already suggests the world is unified prior to consciousness.

Put several people in front of the same red ball rolling and they all track the same object and motion. Now give two of them LSD. Their reports may diverge dramatically: the ball glows, ripples, feels symbolic, maybe even loses its boundary. But the ball still rolls, collides, and occupies space for everyone else. What’s changing isn’t the ontology of the object, but the egocentric way it’s being accessed.

Feature binding and subject binding seem to explain coherence of experience and continuity of perspective, not why reality is one in the first place. On that view, consciousness looks less like a world-binding mechanism and more like a subject-indexed reporting interface that modulates salience and interpretation. Agreement across observers ends up telling us what consciousness isn’t responsible for.

Vyom

100% agree on using LLMs to learn hard concepts in natural sciences. I started learning statistical mechanics with no background in physics at all, but using LLMs to make bridges between the topic I'm learning and topics I already know really accelerates learning new subjects.

Erik Schiskin

Interesting reading, Dwarkesh. Here are my $0.02:

1) LLMs cheapen “local learning” but not “global understanding.”

2) The new scarce input is not information—it’s attention + stopping rules.

3) In bio, AI hits the “design” margin harder than the “test” margin.

4) A “new physics of consciousness” is (economically) a claim about externalities.

5) The fractal-training bit is secretly about returns to meta-innovation.

Sharmake Farah

On the Machines of Loving Grace section, I agree that Dario is way overestimating how much insights can help biology, and unless the scenario is played out much further into the future, human experimentation will almost certainly be required. (Though I do want to note that even an aligned AI could reduce the regulatory burdens of human experimentation very, very drastically, letting 100-1000 years of progress happen in 1-10 years, contra Dario, and I could absolutely see this happening in worlds where we survive AGI.)

To answer a question posed here about why intelligence can route around other bottlenecks:

"The key point that underlies his framework, that intelligence can drive a century of progress in 5-10 years: “Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).”

it’s interesting to consider why this isn’t true for factors of production today. We live in a (relatively) capital-abundant and labor-scarce world. That is reflected in the labor share of income being 2x as high as the capital share of income. But this has been true for centuries upon centuries. Contra Piketty in “Capital in the 21st Century”, all these capital holders have not been able to get some runaway capital accumulation process going by figuring out a way around labor constraints. Why think that intelligence will be any different than capital in its ability to get around other factors of production? maybe the argument is that intelligence can actually help generate the other factors of production in a way that capital can’t."

The short answer is that you already answered why Piketty is likely to be right about our AI future in an article called "Capital in the 22nd Century", about why wealthy capital holders will be able to hold essentially arbitrarily large fractions of wealth post-singularity. The short version is that capital and labor will become gross substitutes once we can create an AI population that, for example, never needs to sleep, can be scaled up very, very far, and can even improve itself, meaning we move from a Baumol world to a Jevons world.

(There was a lot of argument on that post, from what I saw, but I ultimately think the counter-arguments didn't undermine the article's thesis, for a multitude of reasons.)

Link below:

https://philiptrammell.substack.com/p/capital-in-the-22nd-century

So I was confused about why you even asked the question. But to answer it: yes, intelligence will be able to generate the other factors of production, and the same holds true of capital.

This is especially true of atomically precise manufacturing/nanotech, but it can happen even with just a self-sufficient robot economy.

The best article on this comes from Tom Davidson and Rose Hadshar on The Industrial Explosion:

https://www.forethought.org/research/the-industrial-explosion