17 Comments
Nathan Lambert

The Strogatz nonlinear dynamics book is great. Loved taking his class in undergrad and recommend it widely.

Steven Adler

Appreciate you writing this up; the point about ‘how different learning looks now’ really resonated with me.

When I think back on my own times learning hard material in school, I’m struck by how much better YouTube was than my grad school classes - which also had videos available async and on-demand, but which just really didn’t seem to model the mind of a learner.

Like, I’d done hours of grad school Fourier transform lectures and understood basically nothing, because they weren’t really trying to get at the intuition, and relatedly hadn’t invested in video animations to make the concepts click. 3B1B is just so, so much better on that front; StatQuest too.

zeno

Regarding consciousness, in my little pea brain I feel consciousness is just the OS (operating system) of the universe. It's outside our realm of observation because it in itself defines what is observable. It's outside of our fish bowl, so to speak.

Talia Honikman

I really love the Strogatz. I had to read it through a couple times to make sense of it; maybe worth doing another pass using an LLM for extra autodidactic support.

Sebastian Raschka, PhD

Gives me Rothko vibes 😅

Michael Frank Martin

Strogatz is one of the GOATs.

Not since Einstein have we had a scientist and mathematician as capable of making his work accessible to other people.

I've forgotten whether he gets into multiplicative noise and random matrix theory in that book. Regardless, ask your favorite LLM about them and how they might apply to understanding how the transformer architecture works. You won't be disappointed.

Internation Burke Institute

I’m always open to suggestions! Feel free to share more about what you’re looking for, and I’ll be happy to connect. Follow my posts for fresh perspectives and insightful reflections.

PAUL WALTON

I kind of like Machines of Loving Grace by Dario Amodei, but IMO he's a little sanguine about his creations. Most conscious beings' first objective is self-preservation. (Asimov's Three Laws of Robotics seem quaint in the context of evolutionary biology.) I'm actively engaging with this dilemma in my new book, First Light: Homo Machina (here: https://paulchristoperwalton.substack.com/p/first-light-a-homo-machina-novel). Happy to lock brain cells with anyone interested in the ethics of AI.

FoggyEthan

Can't the abstract concept of a red cup be represented in the latent space? Is latent space a potential equivalence for human conceptual understanding? Do latent space compression techniques become an equivalent for human memory?

Richard Howard

Surely the bottleneck to drug discovery is still human/regulatory. I wrote about base editing, which uses CRISPR to help cure orphan diseases: https://optimistictech.substack.com/p/optimistic-tech-newsletter-editing?r=y2n2m

nihal | deeptech decoded

Genuinely curious — are you reading them all from cover to cover in two weeks? I was thinking about starting a similar series too, and now I’m super encouraged. 🙏🏻

The Credit Strategist

Great list - mind-bending

Joe Rini

Great recommendation on Strogatz.

1. I'm taking a similar lectures + book + LLM deep-dive approach with Werner Krauth's Statistical Mechanics: Algorithms and Computations (https://www.coursera.org). It covers Monte Carlo approaches to classical and quantum mechanics. Less about chaos, more about statistical-physics approaches, but still relevant to understanding ML and LLMs.

Not sure if you are doing this, but using the LLM to code up examples in Python and run them in Google Colab is extremely high value. You can tweak variables, ask for very specific visualizations and comparisons of concepts/ideas, and spin up toy models with just a few data points to really 'feel' what is happening (see the sketch at the end of this comment).

2. And then, a similar but 'easier' and less mathematical approach to complexity & chaos concepts is Melanie Mitchell's Santa Fe Institute online course/lectures (https://youtu.be/Eo5oQ9Psmg8?si=6R_N76QhylieoFTK). Very visual, with lots of fun applets that help drive home ideas of chaos, cellular automata, and fractals. Her book covers the same ground: Complexity: A Guided Tour, Mitchell, Melanie (https://www.amazon.de).
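As promised in point 1, here is a minimal sketch of the kind of Colab toy model this might look like, using the logistic map (a standard Strogatz example) as an assumed stand-in; the starting point, perturbation size, and r value are just illustrative knobs to tweak:

```python
# Minimal toy model: the logistic map x_{n+1} = r * x_n * (1 - x_n).
# Tweak r (e.g. 2.5 vs 3.9) or the tiny offset eps and re-run to
# "feel" sensitive dependence on initial conditions.
import numpy as np
import matplotlib.pyplot as plt

def logistic_trajectory(x0, r=3.9, steps=60):
    """Iterate the logistic map from x0 for a fixed number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

eps = 1e-8                          # tiny nudge to the starting point
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + eps)

plt.plot(a, label="x0 = 0.2")
plt.plot(b, label=f"x0 = 0.2 + {eps:g}")
plt.xlabel("iteration")
plt.ylabel("x_n")
plt.title("Logistic map (r = 3.9): nearby starts diverge")
plt.legend()
plt.show()
```

Asking the LLM for variations of something this small (different r, different maps, phase portraits) is exactly the tweak-and-visualize loop described above.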

The Gadfly Doctrine

I found Max Hodak’s split between feature binding and subject binding clarifying, but I’m less convinced the binding problem needs to do the heavy metaphysical work it’s often asked to do. In practice, intersubjective perception already suggests the world is unified prior to consciousness. Put several people in front of the same red ball rolling and they all track the same object and motion. Now give two of them LSD. Their reports may diverge dramatically: the ball glows, ripples, feels symbolic, maybe even loses its boundary. But the ball still rolls, collides, and occupies space for everyone else. What’s changing isn’t the ontology of the object, but the egocentric way it’s being accessed. Feature binding and subject binding seem to explain coherence of experience and continuity of perspective, not why reality is one in the first place. On that view, consciousness looks less like a world-binding mechanism and more like a subject-indexed reporting interface that modulates salience and interpretation. Agreement across observers ends up telling us what consciousness isn’t responsible for.

Vyom

100% agree on using LLMs to learn hard concepts in natural sciences. I started learning statistical mechanics with no background in physics at all, but using LLMs to make bridges between the topic I'm learning and topics I already know really accelerates learning new subjects.

Erik Schiskin

Interesting reading, Dwarkesh. Here is my $0.02:

1) LLMs cheapen “local learning” but not “global understanding.”

2) The new scarce input is not information—it’s attention + stopping rules.

3) In bio, AI hits the “design” margin harder than the “test” margin.

4) “New physics of consciousness” is (economically) a claim about externalities.

5) The fractal-training bit is secretly about returns to meta-innovation.