<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Dwarkesh Podcast]]></title><description><![CDATA[Deeply researched interviews]]></description><link>https://www.dwarkesh.com</link><image><url>https://substackcdn.com/image/fetch/$s_!QEPJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F90fa9666-5b8b-4685-a8fb-4b64cb7e0333_1080x1080.png</url><title>Dwarkesh Podcast</title><link>https://www.dwarkesh.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 27 Apr 2026 09:51:20 GMT</lastBuildDate><atom:link href="https://www.dwarkesh.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dwarkesh Patel]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[dwarkesh@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[dwarkesh@substack.com]]></itunes:email><itunes:name><![CDATA[Dwarkesh Patel]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dwarkesh Patel]]></itunes:author><googleplay:owner><![CDATA[dwarkesh@substack.com]]></googleplay:owner><googleplay:email><![CDATA[dwarkesh@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dwarkesh Patel]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Blog prize for the big questions about AI]]></title><description><![CDATA[The not-so-secret point of this whole contest is so that I can hire a researcher]]></description><link>https://www.dwarkesh.com/p/blog-prize</link><guid isPermaLink="false">https://www.dwarkesh.com/p/blog-prize</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Fri, 24 Apr 2026 16:37:49 
GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a7d14f96-c3aa-4305-bdc2-27c509bdbedc_1400x923.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There has never been a time when excellent intellectual output on the right question has been more valuable or more urgent. Compelling answers can inform the most important economic and foreign policy decisions that will ever be made, the deployment of (at least) <a href="https://openai.com/index/scaling-ai-for-everyone/">hundreds of billions</a> of philanthropic dollars, and the training and governance of superintelligences.</p><p>I&#8217;m announcing a $20,000 blog prize in order to find people who will excel at researching and thinking through these problems. The not-so-secret point of this whole contest is so that I can hire a research collaborator to think through questions like this hand in hand with me. See more at the end.</p><p>Pick a question below, and spend no more than 1,000 words answering it. 1st, 2nd, and 3rd place will get $10,000, $6,000, and $4,000 respectively. I&#8217;ll publish the winning entry (and potentially the runners-up) on my blog. Please submit by May 10th, 11:59 PM PST.</p><h3>Questions - choose one</h3><ul><li><p>A couple years ago, there was this idea that AI progress might slow down as we make further progress into the RL regime. 1. Because as horizon lengths increase, the AI needs to do many days&#8217; worth of work before we can even see if it did it right, so if we&#8217;re still in a naive policy gradient world, the reward signal / FLOP goes down, and 2. We&#8217;d crossed through many OOMs of RL compute from GPT-4 to o1 to o3, and it would not be feasible to immediately replicate that many OOMs of compute growth again. But AI progress seems to have been fast nonetheless - even potentially speeding up if rumors about Spud or Mythos are to be believed. What gives? 
What did that previous intuition pump that motivated longer timelines miss? Feel free to deny the premise of the question.</p></li><li><p>What&#8217;s the most plausible story where foundation model companies actually start making money? If you consider each individual model as a company, then its profits <a href="https://epoch.ai/gradient-updates/can-ai-companies-become-profitable">may</a> be able to pay back the training cost. But of course, if you don&#8217;t train a bigger, more expensive model immediately, then you stop making money after 3 months. So when does the profit start? Maybe at some point <a href="https://www.dwarkesh.com/i/187852154/005849-how-will-ai-labs-actually-make-profit">scaling will plateau</a>, but <a href="https://x.com/MatthewJBar/status/2046060153678844290">if progress at the frontier</a> has slowed down, then the combination of distillation and low switching costs (cloud margins result from high switching costs) makes it really easy for open source to catch up to the labs, eating into their margins. So how do the labs actually start making money?</p></li><li><p>With OpenAI&#8217;s new raise at an $852B valuation, OpenAI Foundation&#8217;s stake is <a href="https://openai.com/index/scaling-ai-for-everyone/">now worth $180B</a>. Anthropic&#8217;s <a href="https://fortune.com/2026/01/27/anthropic-billionaire-cofounders-ceo-dario-amodei-giving-away-80-percent-of-wealth-fighting-inequality-ai-revolution/">cofounders have pledged to donate 80%</a> of their wealth. Nobody seems to have a concrete idea of how to deploy 100s of billions (soon trillions) of wealth productively to &#8220;make AI go well&#8221;. If you were in charge of the OpenAI Foundation right now, what exactly would you do? And when? It&#8217;s not enough to identify a cause you think is important, because that doesn&#8217;t answer the fundamental problem of <a href="https://nanransohoff.substack.com/p/there-should-be-general-managers">how you convert money to impact</a>. 
Identify the concrete strategy you recommend pursuing.</p></li><li><p>What should countries which are not currently in the AI production chain (semis, energy, frontier models, robotics) do in order to not get totally sidestepped by transformative AI? If you&#8217;re the leader of India or Nigeria, what do you do right now?</p></li></ul><h3>Rules and tips</h3><ul><li><p>Please don&#8217;t let a lack of domain expertise dissuade you from entering. I&#8217;m looking for someone who can ramp up fast on unfamiliar topics and think clearly.</p></li><li><p>Each entrant may submit only once.</p></li><li><p>You are still eligible for this essay competition even if you&#8217;re not interested in the researcher role. Nor does winning this competition guarantee that you will be offered the role.</p></li><li><p>You&#8217;re welcome to use LLMs to help you research, but I specifically picked these questions because I&#8217;ve found LLM answers to them unsatisfying. On these kinds of ambiguous questions, LLMs are too all over the place. For example, they&#8217;ll identify 5 plausible answers but not have the context and taste to identify the crucial factor and iron out its implications.</p></li><li><p>You only have 1000 words - make them count. People have the habit of <a href="https://x.com/dwarkesh_sp/status/1968012981016608934">spending the first paragraphs clearing their throat</a> - avoid that.</p></li></ul><h3>Why am I hiring for a researcher?</h3><p>I want my podcast/blog to move from just asking questions about AI to actually helping answer them. 
But there are too many important questions, and I need a collaborator to build up context on them all, to explore dozens of fractal sub-questions, to consider the rebuttals and syntheses, and to help us sharpen each other&#8217;s thinking.</p><p>The questions I want us to explore are very broad while at the same time requiring deep technical analysis across many domains to actually answer.</p><h3>Why am I hiring this way?</h3><p>Well, I could just put out a job ad for a researcher, but I&#8217;ll get 1,000 different resumes, and I&#8217;ll have no clue based on that information whether the applicant would be any good at synthesizing lots of technical arguments and information. So I thought, let&#8217;s just list out some questions where I genuinely don&#8217;t know the answer and would be keen to get some insight.</p><h3>What this role looks like</h3><ul><li><p>Ideally in person in San Francisco, but potentially open to remote.</p></li><li><p>Will pay competitively</p></li></ul><h3>Submit <a href="https://airtable.com/app8aYOTzMkv9qeAJ/pagHhju8B5tgu4yXc/form">here</a></h3><p>If you have questions or comments, I&#8217;m at hello@dwarkeshpatel.com.</p>]]></content:encoded></item><item><title><![CDATA[Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat]]></title><description><![CDATA[&#8220;If our next several years are a trillion dollars in scale, we have the supply chain to do it&#8221;]]></description><link>https://www.dwarkesh.com/p/jensen-huang</link><guid isPermaLink="false">https://www.dwarkesh.com/p/jensen-huang</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Wed, 15 Apr 2026 15:45:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/194289889/5f292c095257191205d7c71b2b0c70da.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>I asked Jensen about TPU competition, Nvidia&#8217;s lock on the ever more bottlenecked supply chain needed to make advanced chips, whether we should be selling AI chips 
to China, why Nvidia doesn&#8217;t just become a hyperscaler, how it makes its investments, and much more. Enjoy!</p><p>Watch on <a href="https://youtu.be/Hrbq66XqtCo">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/jensen-huang-tpu-competition-why-we-should-sell-chips/id1516093381?i=1000761582962">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/1viBRy6dQdlSw0OdFvogXB?si=bc2cdbd467ed4ee3">Spotify</a>.</p><div id="youtube2-Hrbq66XqtCo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Hrbq66XqtCo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Hrbq66XqtCo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2><strong>Sponsors</strong></h2><ul><li><p><a href="https://crusoe.ai/dwarkesh">Crusoe&#8217;s</a> cloud runs on state-of-the-art Blackwell GPUs, with Vera Rubin deployment scheduled for later this year. But hardware is only part of the story&#8212;for inference, Crusoe&#8217;s MemoryAlloy tech implements a cluster-wide KV cache, delivering up to 10x faster TTFT and 5x better throughput than vLLM. Learn more at <a href="https://crusoe.ai/dwarkesh">crusoe.ai/dwarkesh</a></p></li></ul><ul><li><p><a href="https://cursor.com/dwarkesh">Cursor</a> helped me build an AI co-researcher over the course of a weekend. Now I have an AI agent that I can collaborate with in Google Docs via inline comment threads! And while other agentic coding tools feel like a total black-box, Cursor let me stay on top of the full implementation. 
You can try my co-researcher out <a href="http://github.com/dwarkeshsp/ai_coworker">here</a>, or get started on your own Cursor project today at <a href="https://cursor.com/dwarkesh">cursor.com/dwarkesh</a></p></li><li><p><a href="https://janestreet.com/dwarkesh">Jane Street</a> spent ~20,000 GPU hours training backdoors into 3 different language models, then challenged my audience to find the triggers. They received some clever solutions&#8212;like comparing the base and fine-tuned versions and extrapolating any differences to reveal the hidden backdoor&#8212;but no one was able to solve all 3. So if open problems like this excite you, Jane Street is hiring. Learn more at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a></p></li></ul><h2><strong>Timestamps</strong></h2><p>(00:00:00) &#8211; Is Nvidia&#8217;s biggest moat its grip on scarce supply chains?</p><p>(00:16:25) &#8211; Will TPUs break Nvidia&#8217;s hold on AI compute?</p><p>(00:41:06) &#8211; Why doesn&#8217;t Nvidia become a hyperscaler?</p><p>(00:57:36) &#8211; Should we be selling AI chips to China?</p><p>(01:35:06) &#8211; Why doesn&#8217;t Nvidia make multiple different chip architectures?</p><h2>Transcript</h2><h3>00:00:00 &#8211; Is Nvidia&#8217;s biggest moat its grip on scarce supply chains?</h3><p><strong>Dwarkesh Patel</strong></p><p>We&#8217;ve seen the <a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">valuations of a bunch of software companies crash</a> because people are expecting AI to commoditize software. There&#8217;s a potentially naive way of thinking about things, which is: look, Nvidia sends a <a href="https://en.wikipedia.org/wiki/GDSII">GDS2 file</a> to <a href="https://en.wikipedia.org/wiki/TSMC">TSMC</a>. 
TSMC builds the <a href="https://www.asml.com/en/technology/all-about-microchips/microchip-basics">logic</a> <a href="https://en.wikipedia.org/wiki/Die_(integrated_circuit)">dies</a>, it builds the <a href="https://en.wikipedia.org/wiki/Switch">switches</a>, then it packages them with the <a href="https://en.wikipedia.org/wiki/High_Bandwidth_Memory">HBM</a> that <a href="https://en.wikipedia.org/wiki/SK_Hynix">SK Hynix</a>, <a href="https://en.wikipedia.org/wiki/Micron_Technology">Micron</a>, and <a href="https://en.wikipedia.org/wiki/Samsung_Electronics">Samsung</a> make. Then it sends it to an <a href="https://www.smckyems.com/understanding-the-differences-between-oem-odm-ems-and-cem/">ODM</a> in Taiwan where they assemble the racks. <a href="https://en.wikipedia.org/wiki/Nvidia">Nvidia</a> is fundamentally making software that other people are manufacturing, and if software gets commoditized, does Nvidia get commoditized?</p><p><strong>Jensen Huang</strong></p><p>In the end, something has to transform electrons to tokens. The transformation of electrons to tokens and making those tokens more valuable over time is hard to completely commoditize. The transformation from electrons to tokens is such an incredible journey. Making that token is like making one molecule more valuable than another molecule, making one token more valuable than another. The amount of artistry, engineering, science, and invention that goes into making that token valuable, obviously we&#8217;re watching it happen in real time. The transformation, the manufacturing, all of the science that goes in there is far from deeply understood and the journey is far from over. I doubt that it will happen.</p><p>We&#8217;re going to make it more efficient, of course. The way that you framed the question is my mental model of our company. The input is electrons, the output is tokens. In the middle is Nvidia. 
Our job is to do as much as necessary and as little as possible to enable that transformation to be done at incredible capabilities. What I mean by &#8220;as little as possible,&#8221; whatever I don&#8217;t need to do, I partner with somebody and make it part of my ecosystem.</p><p>If you look at Nvidia today, we probably have the largest ecosystem of partners, both in the supply chain upstream and downstream, all of the computer companies, application developers, and model makers. AI is a five-layer cake, if you will. We have ecosystems across the entire five layers. We try to do as little as possible, but the part that we have to do, as it turns out, is insanely hard. I don&#8217;t think that gets commoditized.</p><p>In fact, I also don&#8217;t think the enterprise software companies, the tools makers&#8230; Most software companies today are tool makers. Some of them are not. Some of them are workflow codification systems. But for a lot of companies, they&#8217;re tool makers. For example, Excel is a tool, PowerPoint is a tool, <a href="https://en.wikipedia.org/wiki/Cadence_Design_Systems">Cadence</a> makes tools, <a href="https://en.wikipedia.org/wiki/Synopsys">Synopsys</a> makes tools. I actually see the opposite of what people see. I think the number of <a href="https://en.wikipedia.org/wiki/AI_agent">agents</a> is going to grow exponentially, and the number of tool users is going to grow exponentially. It&#8217;s very likely that the number of instances of all these tools is going to skyrocket.</p><p>It&#8217;s very likely that the number of instances of <a href="https://www.synopsys.com/implementation-and-signoff/rtl-synthesis-test/design-compiler.html">Synopsys Design Compiler</a> is going to skyrocket, along with the number of agents using the floor planners, our layout tools, and our design rule checkers. Today we&#8217;re limited by the number of engineers. Tomorrow, those engineers are going to be supported by a bunch of agents. 
We&#8217;re going to be exploring the design space like you&#8217;ve never seen before, and we&#8217;re going to use the tools that we use today.</p><p>I think tool use is going to cause the software companies to skyrocket. The reason why it hasn&#8217;t happened yet is because the agents aren&#8217;t good enough at using their tools yet. Either these companies are going to build the agents themselves, or agents are going to get good enough to be able to use those tools. I think it&#8217;s going to be a combination of both.</p><p><strong>Dwarkesh Patel</strong></p><p>I think in your <a href="https://investor.nvidia.com/financial-info/financial-reports/default.aspx">latest filings</a>, you had almost a $100 billion in purchase commitments with foundries, memory, and packaging. <a href="https://semianalysis.com/">SemiAnalysis</a> has reported that you will have $250 billion of these kinds of purchase commitments. One interpretation is that Nvidia&#8217;s moat is really that you&#8217;ve locked up many years of these scarce components. Somebody else might have an accelerator, but can they actually get the memory to build it? Can they actually get the <a href="https://www.asml.com/en/technology/all-about-microchips/microchip-basics">logic</a> to build it? Is this really Nvidia&#8217;s big moat for the next few years?</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s one of the things that we can do that is hard for someone else to do. We&#8217;ve made enormous commitments upstream. Some of it is explicit, these commitments that you mentioned. Some of it is implicit. 
For example, a lot of the investments that are upstream are made by our supply chain because I said to the CEOs, &#8220;Let me tell you how big this industry is going to be, let me explain to you why, let me reason through it with you, and let me show you what I see.&#8221;</p><p>As a result of that process of informing, inspiring, and aligning with CEOs of all different industries upstream, they&#8217;re willing to make the investments. Why are they willing to make the investments for me and not someone else? The reason for that is because they know that I have the capacity to buy their supply and sell it through my downstream. The fact is that Nvidia&#8217;s downstream supply chain and our downstream demand is so large, they&#8217;re willing to make the investment upstream.</p><p>If you look at <a href="https://en.wikipedia.org/wiki/Nvidia_GTC">GTC</a>, people are marveled by the scale of it and the people that go. It&#8217;s a full 360 degrees, the entire universe of AI all in one place. They&#8217;re all in one place because they need to see each other. I bring them together so that the downstream can see the upstream, the upstream can see the downstream, and all of them can see the advances in AI. Very importantly, they can all meet the AI natives, all the AI startups being built, and all the amazing things happening so they can see firsthand all the things that I tell them. I spend a lot of my time informing, directly or indirectly, our supply chain, partners, and ecosystem about the opportunity in front of us.</p><p>Some people always say, &#8220;Jensen, in most keynotes, it&#8217;s one announcement after another.&#8221; With our keynotes, there&#8217;s always a part of it that&#8217;s a little torturous in the sense that it almost comes across like education. In fact, that&#8217;s exactly on my mind. 
I need to make sure the entire supply chain, upstream and downstream, the ecosystem, understands what is coming at us, why it&#8217;s coming, when it&#8217;s coming, how big it&#8217;s going to be, and is able to reason about it systematically, just like I reason about it.</p><p>Regarding the moat as you describe it, we&#8217;re able to build for a future. If our next several years are a trillion dollars in scale, we have the supply chain to do it. Without our reach, the velocity of our business&#8230; Just as there&#8217;s cash flow, there&#8217;s supply chain flow, there&#8217;s churns. Nobody is going to build a supply chain for an architecture if the business churns are low. Our ability to sustain the scale is only because our downstream demand is so great. And they see it, they hear about it, they see it all coming. That allows us to do the things we&#8217;re able to do at the scale we do them.</p><p><strong>Dwarkesh Patel</strong></p><p>I do want to understand more concretely whether the upstream can keep up. For many years now, you guys have been 2x-ing revenue year over year. You&#8217;ve been more than tripling the amount of <a href="https://en.wikipedia.org/wiki/Floating_point_operations_per_second">flops</a> you&#8217;re providing to the world year over year.</p><p><strong>Jensen Huang</strong></p><p>And 2x-ing at this scale now is really incredible.</p><p><strong>Dwarkesh Patel</strong></p><p>Exactly. But then you look at logic. You&#8217;re the biggest customer on TSMC&#8217;s <a href="https://en.wikipedia.org/wiki/3_nm_process">N3 node</a>, and you&#8217;re one of the biggest on <a href="https://en.wikipedia.org/wiki/2_nm_process">N2</a>. AI as a whole this year is going to be sixty percent of N3. It&#8217;s going to be 86% next year, according to SemiAnalysis. How do you double if you&#8217;re the majority? And how do you do that year over year? Are we in a regime now where the growth rate in AI compute has to slow because of upstream? 
Do you see a way to get around this? How do we build 2x more fabs year over year, ultimately?</p><p><strong>Jensen Huang</strong></p><p>At some level, the instantaneous demand is greater than the supply upstream and downstream in the world. At any instant, we could be limited by the number of plumbers, which actually happens.</p><p><strong>Dwarkesh Patel</strong></p><p>The plumbers are invited to next year&#8217;s GTC.</p><p><strong>Jensen Huang</strong></p><p>By the way, great idea. But that&#8217;s a good condition. You want an industry where the instantaneous demand is greater than the total supply of the industry. The opposite is obviously less good. If we&#8217;re too far apart, if one particular component is too far away, the industry swarms it. For example, notice people aren&#8217;t talking very much about <a href="https://3dfabric.tsmc.com/english/dedicatedFoundry/technology/cowos.htm">CoWoS</a> anymore.</p><p>The reason for that is because for two years we swarmed the living daylights out of it. We doubled, doubled, doubled on several doubles. Now I think we&#8217;re in fairly good shape. TSMC now knows that CoWoS supply has to keep up with the rest of the logic demand and the memory demand. They&#8217;re scaling CoWoS and future packaging technologies at the same level as they scale logic. This is terrific, because for a long time, CoWoS and HBM memory were rather specialty. But they&#8217;re not specialties anymore. People now realize they&#8217;re mainstream computing technology.</p><p>Of course, we&#8217;re now much more able to influence a larger scope of our supply chain. At the beginning of the AI revolution, all the things that I say now, I was saying five years ago. Some people believed in it and invested in it, for example, Sanjay and the Micron team. I still remember the meeting really well where I was clear about exactly what was going to happen, why it was going to happen, and the predictions of today. They really doubled down on it. 
We partnered with them across <a href="https://en.wikipedia.org/wiki/LPDDR">LPDDR</a> and HBM memories, and they really invested in it. It obviously has been tremendous for the company. Some people came a little bit later, but now they&#8217;re all here.</p><p>Each one of these bottlenecks gets a great deal of attention. Now we&#8217;re prefetching the bottlenecks years in advance. For example, the investments that we&#8217;ve done with <a href="https://en.wikipedia.org/wiki/Lumentum">Lumentum</a>, <a href="https://en.wikipedia.org/wiki/Coherent_Corp.">Coherent</a>, and the <a href="https://en.wikipedia.org/wiki/Silicon_photonics">silicon photonics</a> ecosystem over the last several years really reshaped the supply chain. We built up an entire supply chain around TSMC. We partnered with them on <a href="https://tspasemiconductor.substack.com/p/tsmc-coupe-metalens-building-the">COUPE</a>, invented a whole bunch of technology, and licensed those patents to the supply chain to keep it nice and open.</p><p>We&#8217;re preparing the supply chain through the invention of new technologies, new workflows, new testing equipment like double-sided probing, investing in companies, and helping them scale up their capacity. You can see that we&#8217;re trying to shape the ecosystem so that the supply chain is ready to support the scale.</p><p><strong>Dwarkesh Patel</strong></p><p>It seems like some bottlenecks are easier than others. Scaling up CoWoS versus scaling up&#8212;</p><p><strong>Jensen Huang</strong></p><p>I went to the hardest one, by the way.</p><p><strong>Dwarkesh Patel</strong></p><p>Which is?</p><p><strong>Jensen Huang</strong></p><p>Plumbers. Plumbers and electricians. This is one of the concerns that I have about the doomers describing the end of work and killing of jobs. If we discourage people from being software engineers, we&#8217;re going to run out of software engineers. The same prediction happened ten years ago. 
Some of the doomers were telling people, &#8220;Whatever you do, don&#8217;t be a radiologist.&#8221; You might hear some of those videos still on the web saying radiology is going to be the first career to go and the world is not going to need any more radiologists. Guess what we&#8217;re short of? Radiologists.</p><p><strong>Dwarkesh Patel</strong></p><p>Going back to this point about how some things you can scale, and other things&#8230; How do you actually manufacture 2x the amount of logic a year? Ultimately, memory and logic are bottlenecked by EUV. How do you get to 2x as many <a href="https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithography">EUV</a> machines year over year?</p><p><strong>Jensen Huang</strong></p><p>None of that is impossible to scale quickly. All of that is easy to do within two or three years. You just need a demand signal. Once you can build one, you can build ten, and once you can build ten, you can build a million. These things are not hard to replicate.</p><p><strong>Dwarkesh Patel</strong></p><p>How far down the supply chain do you go? Do you go to <a href="https://en.wikipedia.org/wiki/ASML">ASML</a> and say, &#8220;Hey, if I look out three years from now, for Nvidia to be generating two trillion a year in revenue, we need way more EUV machines&#8221;?</p><p><strong>Jensen Huang</strong></p><p>Some of them I have to directly, some of them indirectly, and some of them&#8230; If I can convince TSMC, ASML will be convinced. We have to think about the critical pinch points. But if TSMC is convinced, you&#8217;ll have plenty of EUV machines in a few years.</p><p>My point is that none of the bottlenecks last longer than a couple of years, two, three years, none of them. 
Meanwhile, we&#8217;re improving computing efficiency by 10x, 20x, and in the case of <a href="https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/">Hopper</a> to <a href="https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/">Blackwell</a>, 30x to 50x. We&#8217;re coming up with new algorithms because <a href="https://en.wikipedia.org/wiki/CUDA">CUDA</a> is so flexible. We&#8217;re developing all kinds of new techniques so that we drive efficiency in addition to increasing capacity. None of those things worry me. It&#8217;s the stuff that&#8217;s downstream from us. Energy policies that prevent energy from&#8230; You can&#8217;t create an industry without energy. You can&#8217;t create a whole new manufacturing industry without energy.</p><p>We want to reindustrialize the United States. We want to bring back chip manufacturing, computer manufacturing, and packaging. We want to build new things like EVs and robots. We want to build AI factories. You can&#8217;t build any of these things without energy, and those things take a long time. More chip capacity, that&#8217;s a 2-3 year problem. More CoWoS capacity, 2-3 year problem.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting. I feel like I have guests tell me the exact opposite thing sometimes. In this case, I just don&#8217;t have the technical knowledge to adjudicate.</p><p><strong>Jensen Huang</strong></p><p>The beautiful thing is you&#8217;re talking to the expert.</p><h3>00:16:25 &#8211; Will TPUs break Nvidia&#8217;s hold on AI compute?</h3><p><strong>Dwarkesh Patel</strong></p><p>True. I want to ask about your competitors. If you look at the <a href="https://en.wikipedia.org/wiki/Tensor_Processing_Unit">TPU</a>, arguably two out of the top three models in the world, Claude and Gemini, were trained on TPU. What does that mean for Nvidia going forward?</p><p><strong>Jensen Huang</strong></p><p>We build a very different thing. 
What Nvidia built is accelerated computing, not a tensor processing unit. Accelerated computing is used for all kinds of things: molecular dynamics, quantum chromodynamics, data processing, data frames, structured data, and unstructured data. It&#8217;s also used for fluid dynamics and particle physics. In addition, we use it for AI.</p><p>Accelerated computing is much more diverse. Although AI is the conversation today and is obviously very important and impactful, computing is much broader than that. Nvidia has reinvented the way computing is done, moving from general-purpose computing to accelerated computing. Our market reach is far greater than any TPU or <a href="https://en.wikipedia.org/wiki/Application-specific_integrated_circuit">ASIC</a> can possibly have. If you look at our position, we&#8217;re the only company that accelerates applications of all kinds. We have a gigantic ecosystem. So all kinds of frameworks and algorithms run on Nvidia.</p><p>Because our computers are designed to be operated by other people, anyone who&#8217;s an operator can buy our systems. With most of these home-built systems, you have to be your own operator because they were never designed to be flexible enough for others to operate. Because anybody can operate our systems, we&#8217;re in every cloud, including Google, Amazon, Azure, and OCI.</p><p>If you want to operate it to rent, you better have a large ecosystem of customers in many industries to be the offtakers. If you want to operate it for yourself, we obviously have the ability to help you operate it yourself, like we did for Elon with xAI. And because we can enable operators in any company and any industry, you could use it to build a supercomputer for scientific research and drug discovery at Lilly. 
We can help them operate their own supercomputer and use it for the entire diversity of drug discovery and biological sciences that we accelerate.</p><p>There are just a whole bunch of applications that we can address that you can&#8217;t do with TPUs. Nvidia built CUDA to be a fantastic tensor processing unit as well, but it also handles every life cycle of data processing, computing, AI, and so on. Our market opportunity is just a lot larger, and our reach is a lot greater. Because we support every application in the world now, you can build Nvidia systems anywhere and know that there will be customers for it. It&#8217;s a very different thing.</p><p><strong>Dwarkesh Patel</strong></p><p>This is going to be a long question. You have spectacular revenue, and you&#8217;re not making $60 billion a quarter from pharma and quantum. You&#8217;re making it because AI is an unprecedented technology that is growing unprecedentedly fast.</p><p>The question then is what is best for AI specifically. I&#8217;m not in the details, but I talk to my AI researcher friends and they say, &#8220;Look, when I use a TPU, it&#8217;s this big <a href="https://en.wikipedia.org/wiki/Systolic_array">systolic array</a> that&#8217;s perfect for doing matrix multiplies, whereas a <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPU</a> is very flexible. It&#8217;s great when you have lots of branching or irregular memory access.&#8221;</p><p>But what is AI? It&#8217;s just these very predictable <a href="https://en.wikipedia.org/wiki/Matrix_multiplication">matrix multiplies</a> again and again and again. You don&#8217;t have to give up any die area for warp schedulers or switches between threads and memory banks. And the TPU is really optimized for the bulk of this growth in revenue and use case for compute that is coming online right now. 
I wonder how you react to that.</p><p><strong>Jensen Huang</strong></p><p>Matrix multiplies are an important part of AI, but they&#8217;re not the only part. If you want to come up with a new <a href="https://en.wikipedia.org/wiki/Attention_(machine_learning)">attention</a> mechanism, disaggregate in a different way, or invent a whole new type of architecture altogether&#8212;like a hybrid <a href="https://en.wikipedia.org/wiki/State-space_representation">SSM</a>&#8212;you want an architecture that&#8217;s generally programmable. If you want to create a model that fuses <a href="https://en.wikipedia.org/wiki/Diffusion_model">diffusion</a> and <a href="https://en.wikipedia.org/wiki/Autoregressive_model">autoregressive techniques</a>, you want an architecture that&#8217;s just generally programmable. We run everything you can imagine. That&#8217;s the advantage. It allows for the invention of new algorithms a lot more easily, because it&#8217;s a programmable system.</p><p>The ability to invent new algorithms is really what makes AI advance so quickly. TPUs, like anything else, are impacted by <a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore&#8217;s Law</a>, which we know is increasing by about 25% per year. The only way to really get 10x or 100x leaps is to fundamentally change the algorithm and how it&#8217;s computed every single year.</p><p>That&#8217;s Nvidia&#8217;s fundamental advantage. The only reason we were able to make Blackwell to Hopper 50x&#8230; When I first announced Blackwell was going to be 35x more energy efficient than Hopper, nobody believed it. Then <a href="https://www.dwarkesh.com/p/dylan-patel">Dylan</a> wrote an article saying I sandbagged, and it&#8217;s actually fifty times. You can&#8217;t reasonably do that with just Moore&#8217;s Law. 
The way we solve that problem is with new models, like <a href="https://en.wikipedia.org/wiki/Mixture_of_experts">MoEs</a>, that are parallelized, disaggregated, and distributed across a computing system. Without the ability to really get down and come up with new <a href="https://modal.com/gpu-glossary/device-software/kernel">kernels</a> with CUDA, it&#8217;s really hard to do.</p><p>It&#8217;s the combination of the programmability of our architecture and the fact that Nvidia is an extreme co-design company. We can even offload some of the computation into the fabric itself, like <a href="https://www.nvidia.com/en-us/data-center/nvlink/">NVLink</a>, or into the network with <a href="https://www.nvidia.com/en-us/networking/spectrumx/">Spectrum-X</a>. We can effect change across the processors, the system, the fabric, the libraries, and the algorithm simultaneously. Without CUDA to do that, I wouldn&#8217;t even know where to start.</p><p><strong>Dwarkesh Patel</strong></p><p>This gets at an interesting question about Nvidia&#8217;s clientele. 60% of your revenue is coming from these big five hyperscalers. In a different era with different customers&#8212;let&#8217;s say professors running experiments&#8212;they need CUDA. They can&#8217;t use another accelerator. They just need to run <a href="https://en.wikipedia.org/wiki/PyTorch">PyTorch</a> with CUDA and have everything optimized.</p><p>But these hyperscalers have the resources to write their own kernels. In fact, they have to in order to get that last 5% of performance they need for their specific architecture. Anthropic and Google are mostly running their own accelerators or running TPUs and <a href="https://aws.amazon.com/ai/machine-learning/trainium/">Trainium</a>. But even OpenAI, using GPUs, has <a href="https://openai.com/index/triton/">Triton</a> because they need their own kernels.
Down to CUDA C++, instead of using <a href="https://developer.nvidia.com/cublas">cuBLAS</a> and <a href="https://developer.nvidia.com/nccl">NCCL</a>, they&#8217;ve got their own stack, which compiles to other accelerators as well. If most of your customers can and do make replacements for CUDA, to what extent is CUDA really the thing that is going to make frontier AI happen on Nvidia?</p><p><strong>Jensen Huang</strong></p><p>CUDA is a rich ecosystem. If you want to build on any computer, building on CUDA first is incredibly smart. Because the ecosystem is so rich, we support every framework. If you want to create custom kernels&#8230; For example, we contribute enormously to Triton. So the back end of Triton has huge amounts of Nvidia technology.</p><p>We&#8217;re delighted to help every framework become as great as it can be. There are lots and lots of frameworks. There&#8217;s Triton, <a href="https://vllm.ai/">vLLM</a>, <a href="https://github.com/sgl-project/sglang">SGLang</a>, and more. Now there&#8217;s a whole bunch of new <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">reinforcement learning</a> frameworks coming out, like <a href="https://github.com/verl-project/verl">verl</a> and <a href="https://docs.nvidia.com/nemo/rl/latest/index.html">NeMo RL</a>. With <a href="https://www.interconnects.ai/p/the-state-of-post-training-2025">post-training</a> and reinforcement learning, that entire area is just exploding. So if you want to build on an architecture, building on CUDA makes the most sense because you know the ecosystem is great.</p><p>You know that if something happens, it&#8217;s more likely in your code and not in the mountain of code underneath. Don&#8217;t forget the amount of code you&#8217;re dealing with when building these systems. When something doesn&#8217;t work, was it you or was it the computer? You would like it to always be you and to be able to trust the computer.
Obviously, we still have lots of bugs ourselves, but our system is so well wrung out that you can at least build on top of the foundation. That&#8217;s number one: the richness, programmability, and capability of the ecosystem.</p><p>The second thing is, if you&#8217;re a developer building anything at all, the single most important thing you want is an install base. You want the software you write to run on a whole bunch of other computers. You&#8217;re not building software just for yourself. You&#8217;re building it for your fleet or everybody else&#8217;s fleet because you&#8217;re a framework builder. Nvidia&#8217;s CUDA ecosystem is ultimately its great treasure.</p><p>We have several hundred million GPUs out there now. Every cloud has it. It goes back to the <a href="https://www.nvidia.com/en-us/data-center/products/a10-gpu/">A10</a>, <a href="https://www.nvidia.com/en-us/data-center/a100/">A100</a>, <a href="https://www.nvidia.com/en-us/data-center/h100/">H100</a>, <a href="https://www.nvidia.com/en-us/data-center/h200/">H200</a>, the <a href="https://www.nvidia.com/en-us/data-center/l40/">L series</a>, the <a href="https://www.nvidia.com/en-us/data-center/pascal-gpu-architecture/">P series</a>. There&#8217;s a whole bunch of them. They&#8217;re in all kinds of sizes and shapes. If you&#8217;re a robotics company, you want that CUDA stack to actually run in the robot itself. We&#8217;re literally everywhere. The install base means that once you develop the software or the model, it&#8217;s going to be useful everywhere. That is just incredibly valuable.</p><p>Lastly, the fact that we&#8217;re in every single cloud makes us genuinely unique. If you&#8217;re an AI company or developer, you&#8217;re not exactly sure which cloud service provider you&#8217;re going to partner with or where you&#8217;d like to run it. We run everywhere, including on-prem for you if you like. 
The combination of the richness of the ecosystem, the expansiveness of the install base, and the versatility of where we are makes CUDA invaluable.</p><p><strong>Dwarkesh Patel</strong></p><p>That makes a lot of sense. I guess the thing I&#8217;m curious about is whether those advantages matter a lot to your main customers. There are many people for whom they might matter. But the kind of person who can actually build their own software stack makes up most of your revenue. Especially if you go to a world where AI is getting especially good at the things which have tight verification loops where you can RL on them&#8230; This question of how do you write a kernel that does attention or <a href="https://en.wikipedia.org/wiki/Multilayer_perceptron">MLP</a> the most efficiently across a scale-up? It&#8217;s a very verifiable sort of feedback loop.</p><p>Can all the hyperscalers write these custom kernels for themselves? Nvidia still has great price performance, so they might still prefer to use Nvidia. But then does it just become a question of who is offering the best specs, the best flops and memory bandwidth for a given dollar? Whereas historically Nvidia has just had, and still has, the best margins in all of AI across hardware and software, 70%+, because of this CUDA moat. And the question is, can you sustain those margins if most of your customers can actually afford to build their own, instead of relying on the CUDA moat?</p><p><strong>Jensen Huang</strong></p><p>The number of engineers we have assigned to these AI labs is insane, working with them, optimizing their stack. The reason for that is because nobody knows our architecture better than we do. These architectures are not as general purpose as a CPU. A CPU is kind of like a Cadillac. It&#8217;s a nice cruiser. It never goes too fast. Everybody drives it pretty well. It&#8217;s got cruise control, and everything&#8217;s easy.
But in a lot of ways, Nvidia&#8217;s GPUs, accelerators, are like F1 racers. I could imagine everybody&#8217;s able to drive it at a hundred miles an hour, but it takes quite a bit of expertise to be able to push it to the limit. We use a ton of AI to create the kernels that we have.</p><p>I&#8217;m pretty sure we&#8217;re going to still be needed for quite some time. Our expertise helps our AI lab partners to get another 2x out of their stack easily oftentimes. It&#8217;s not unusual that by the time we&#8217;re done optimizing their stack or optimizing a particular kernel, their model sped up by 3x, 2x, 50%. That&#8217;s a huge number, especially when you&#8217;re talking about the install base of the fleet that they have, of all the Hoppers and Blackwells that they have. When you increase it by a factor of two, that doubles the revenues. That directly translates to revenues.</p><p>Nvidia&#8217;s computing stack is the best performance per <a href="https://en.wikipedia.org/wiki/Total_cost_of_ownership">TCO</a> in the world, bar none. Nobody can demonstrate to me that any single platform in the world today has a better performance-TCO ratio. Not one company. In fact, the benchmarks are out there. Dylan&#8217;s <a href="https://newsletter.semianalysis.com/p/inferencemax-open-source-inference">InferenceMAX</a> is sitting out there for everybody to use, and not one&#8230; TPU won&#8217;t come, Trainium won&#8217;t come.</p><p>I encourage them to use InferenceMAX and demonstrate their incredible inference cost. It&#8217;s really hard. Nobody wants to show up. <a href="https://www.nvidia.com/en-us/data-center/resources/mlperf-benchmarks/">MLPerf</a>. I would welcome Trainium to demonstrate their 40% that they claim all the time. I would love to hear them demonstrate the cost advantage of TPUs. It makes no sense in my mind. It makes absolutely zero sense.
On first principles, it makes no sense.</p><p>So I think the reason why we&#8217;re so successful is simply because our TCO is so great. Secondly, you say 60% of our customers are the top five, but most of that business is external. For example, most of Nvidia in AWS is for external customers, not internal use. Most of our customers at Azure, obviously all of our customers are external. All of our customers at OCI are external, not internal use. The reason why they favor us is because our reach is so great. We can bring them all of the great customers in the world. They&#8217;re all built on Nvidia. And the reason why all these companies are built on Nvidia is because our reach and our versatility is so great.</p><p>So I think the flywheel is really install base, the programmability of our architecture, the richness of our ecosystem, and the fact that there&#8217;s so many AI companies in the world. There&#8217;s tens of thousands of them now. If you were one of those AI startups, what architecture would you choose? You would choose an architecture that&#8217;s most abundant. We&#8217;re the most abundant in the world. You&#8217;d choose the one that has the largest installed base. We&#8217;re the largest install base. And you&#8217;d choose the one that has a rich ecosystem.</p><p>So that&#8217;s the flywheel. That&#8217;s the reason why, between the combination of: one, our perf per dollar is so great that they have the lowest cost tokens. Second, our perf per watt is the highest in the world. So if one of these companies, if our partners, built a one gigawatt data center, that one gigawatt data center better deliver the maximum amount of revenues and number of tokens, which directly translates to revenues. You want it to generate as many tokens as possible, maximize the revenues for that data center. We are the highest tokens per watt architecture in the world. Lastly, if your goal is to rent the infrastructure, we have the most customers in the world. 
So that&#8217;s the reason why the flywheel works.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting. I guess the question comes down to, what is the actual market structure here? Because even if there&#8217;s other companies&#8230; There could have been a world where there&#8217;s tens of thousands of AI companies that have roughly equal share of compute. But even through these five hyperscalers, really the people on Amazon using the compute are Anthropic, OpenAI, and these big foundation labs who can themselves afford and have the ability to make different accelerators work.</p><p><strong>Jensen Huang</strong></p><p>No, I think your premise is wrong.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe. But let me ask you a slightly different question.</p><p><strong>Jensen Huang</strong></p><p>Come back and make me correct your premise.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay. Let me just ask you a different question.</p><p><strong>Jensen Huang</strong></p><p>But still make sure to make me come back and fix it because it&#8217;s just too important to AI. It&#8217;s too important to the future of science. It&#8217;s too important to the future of the industry. That premise&#8230; Look &#8212;</p><p><strong>Dwarkesh Patel</strong></p><p>Let me just finish the question and then we can address it together.</p><p><strong>Jensen Huang</strong></p><p>Yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>If all these things about price, performance, and performance per watt, et cetera, are true, why do you think it is the case that, say, Anthropic for example, just <a href="https://www.anthropic.com/news/google-broadcom-partnership-compute">announced a couple days ago they have a multi-gigawatt deal with Broadcom and Google</a> for TPUs and a majority of their compute?</p><p>Obviously for Google, TPU is a majority of compute.
So if I look at these big AI companies, it seems like a lot of their compute&#8230; There was some point where it&#8217;s all Nvidia and now it&#8217;s not. So I&#8217;m curious how to square, if these things are true on paper, why are they going with other accelerators?</p><p><strong>Jensen Huang</strong></p><p>Anthropic is a unique instance, not a trend. Without Anthropic, why would there be any TPU growth at all? It&#8217;s 100% Anthropic. Without Anthropic, why would there be Trainium growth at all? It&#8217;s 100% Anthropic. I think that&#8217;s fairly well known and well understood. It&#8217;s not that there&#8217;s an abundance of ASIC opportunities. There&#8217;s only one Anthropic.</p><p><strong>Dwarkesh Patel</strong></p><p>But <a href="https://www.amd.com/en/newsroom/press-releases/2025-10-6-amd-and-openai-announce-strategic-partnership-to-d.html">OpenAI&#8217;s deals with AMD</a>&#8230; They&#8217;re building their own <a href="https://tech-insider.org/openai-titan-chip-samsung-hbm4-custom-ai-chip-2026/">Titan</a> accelerator.</p><p><strong>Jensen Huang</strong></p><p>Yeah, but I think we could all acknowledge they&#8217;re vastly Nvidia. We&#8217;re going to still do a lot of work together. I&#8217;m not offended by other people using something else and trying things. If they don&#8217;t try these other things, how would they know how good ours is? Sometimes you&#8217;ve got to be reminded of it. We have to continuously earn the position that we&#8217;re in.</p><p>There are always big claims. Look at the number of ASICs that have been canceled. Just because you&#8217;re going to build an ASIC&#8230; You still have to build something better than Nvidia. It&#8217;s not that easy building something better than Nvidia. It&#8217;s not sensible, actually. Nvidia&#8217;s got to be missing something, seriously. Because of our scale, our velocity, we&#8217;re the only company in the world that&#8217;s cranking it out every single year. 
Big leaps, every single year.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess their logic is, &#8220;Hey, it doesn&#8217;t need to be better. It just needs to be not more than 70% worse,&#8221; because they&#8217;re paying you 70% margins.</p><p><strong>Jensen Huang</strong></p><p>No, don&#8217;t forget, even in ASICs margins are really quite high. Nvidia&#8217;s margin is 70%, let&#8217;s say. But ASIC margins are 65%. What are you really saving?</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, you mean from Broadcom or something like that?</p><p><strong>Jensen Huang</strong></p><p>Yeah, sure. You&#8217;ve got to pay somebody. I think the ASIC margins are incredibly good, from what I can tell. They believe it too. They&#8217;re quite proud of their incredible ASIC margins.</p><p>So, you asked the question why. A long time ago, we just didn&#8217;t have the ability to do it. At the time, I didn&#8217;t deeply internalize how difficult it would be to build a foundation AI lab like OpenAI and Anthropic, and the fact that they needed huge investments from the supplier themselves. We just weren&#8217;t in a position to make the multi-billion dollar investment into Anthropic so that they could use our compute. But Google and AWS were. They put in huge investments in the beginning so that Anthropic, in return, used their compute. We just weren&#8217;t in a position to do that at the time.</p><p>I would say my mistake is I didn&#8217;t deeply internalize that they really had no other options, that a VC would never put in $5-10 billion of investment into an AI lab with the hopes of it turning out to be Anthropic. So that was my miss. But even if I understood it, I don&#8217;t think we would&#8217;ve been in a position to do that at the time. But I&#8217;m not going to make that same mistake again.</p><p>I&#8217;m delighted to invest in OpenAI, and I&#8217;m delighted to help them scale, and I believe it&#8217;s essential to do so. 
And then, when I was able to, when Anthropic came to us, I&#8217;m delighted to be an investor, delighted to help them scale. We just weren&#8217;t, at the time, able to do it. If I could rewind everything&#8212;and Nvidia could have been as big back then as we are now&#8212;I would&#8217;ve been more than happy to do it.</p><h3>00:41:06 &#8211; Why doesn&#8217;t Nvidia become a hyperscaler?</h3><p><strong>Dwarkesh Patel</strong></p><p>This is actually quite interesting. For many years Nvidia has been the company in AI making money, making lots of money. Now you&#8217;re investing it. It&#8217;s been reported that you&#8217;ve done up to $30 billion in OpenAI and $10 billion in Anthropic. But now their valuations have increased, and I&#8217;m sure they&#8217;ll continue to increase.</p><p>So if over these many years you were giving them the compute, you saw where it was headed, and they were worth like one tenth what they&#8217;re worth now a couple years ago&#8212;or even a year ago in some cases&#8212;there&#8217;s a world where either Nvidia themselves becomes a foundation lab, does a huge investment to make that possible, or has made the deals you&#8217;ve made now at current valuations much earlier on. And you had the cash to do it. So I am curious, actually, why not have done it earlier?</p><p><strong>Jensen Huang</strong></p><p>We did it as soon as we could have. We did it as soon as we could have, and if I could have, I would&#8217;ve done it even earlier. At the time that Anthropic needed us to do it, we just weren&#8217;t in a position to do it. It wasn&#8217;t in our sensibility to do so.</p><p><strong>Dwarkesh Patel</strong></p><p>How so? Was it like a cash thing?</p><p><strong>Jensen Huang</strong></p><p>Yeah, the level of investment. We had never invested outside the company at the time, and not that much. We didn&#8217;t realize we needed to.
I always thought that they could just go raise from VCs, for God&#8217;s sakes, like all companies do. But what they were trying to do couldn&#8217;t have been done through VCs. What OpenAI wanted to do couldn&#8217;t have been done through VCs. I recognize that now. I didn&#8217;t know it then.</p><p>But that&#8217;s their genius. That&#8217;s why they&#8217;re smart. They realized then that they had to do something like that. And I&#8217;m delighted that they did. Even though we caused Anthropic to have to go to somebody else, I&#8217;m still happy that it happened. Anthropic&#8217;s existence is great for the world. I&#8217;m delighted for it.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess you still are making a ton of money, and you&#8217;re making way more money quarter after quarter.</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s still okay to have regrets.</p><p><strong>Dwarkesh Patel</strong></p><p>So the question still arises. Okay, now that we&#8217;re here and you have all this money that you keep making, what should Nvidia be doing with it? There&#8217;s one answer which is that there&#8217;s this whole middleman ecosystem that has popped up for converting CapEx into OpEx for these labs so that they can rent compute. Because the chips are really expensive, they make a lot of money over their lifetime because the AI models are getting better. So the value that they generate, their tokens, is increasing, but they&#8217;re expensive to set up. Nvidia has the money to do the CapEx. In fact, it&#8217;s been reported, you are <a href="https://finance.yahoo.com/news/nvidia-just-piled-2-billion-224300847.html">backstopping CoreWeave up to $6.3 billion and have invested $2 billion</a>.</p><p>Why doesn&#8217;t Nvidia become a cloud themselves? Why doesn&#8217;t it become a hyperscaler themselves and rent this compute out? 
You have all this cash to do it.</p><p><strong>Jensen Huang</strong></p><p>This is a philosophy of the company, and I think it&#8217;s wise. We should do as much as needed, as little as possible. What that means is, the work that we do with building our computing platform, if we don&#8217;t do it, I genuinely believe it doesn&#8217;t get done. If we didn&#8217;t take the risk that we take&#8212;if we didn&#8217;t build NVLink the way we built it, if we didn&#8217;t build the whole stack, if we didn&#8217;t create the ecosystem the way we did, if we didn&#8217;t dedicate ourselves to 20 years of CUDA while losing money most of that time&#8212;if we didn&#8217;t do it, nobody else would have done it.</p><p>If we didn&#8217;t create all the <a href="https://developer.nvidia.com/cuda/cuda-x-libraries">CUDA-X libraries</a> so that they&#8217;re all domain-specific&#8230; A decade and a half ago, we pushed into domain-specific libraries because we realized that if we didn&#8217;t create these domain-specific libraries, whether it&#8217;s for ray tracing or image generation or even the early works of AI, these models, if we didn&#8217;t create them, for data processing, structured data processing, or vector data processing, if we didn&#8217;t create them, nobody would. I am completely certain of that. We created a library for computational lithography called <a href="https://developer.nvidia.com/culitho">cuLitho</a>. If we didn&#8217;t create it, nobody would have. So accelerated computing wouldn&#8217;t advance the way it has if we didn&#8217;t do what we did.</p><p>So we should do that. We should dedicate our company, all of our might, wholeheartedly to go do that. However, the world has lots of clouds. If I didn&#8217;t do it, somebody would show up. So following the recipe, the philosophy, of doing as much as needed but as little as possible&#8212;as little as possible&#8212;that philosophy exists in our company today. 
Everything I do, I do it with that lens.</p><p>In the case of clouds, if we didn&#8217;t support <a href="https://en.wikipedia.org/wiki/CoreWeave">CoreWeave</a> to exist, these <a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-evolution-of-neoclouds-and-their-next-moves">neoclouds</a>, these AI clouds, wouldn&#8217;t exist. If we didn&#8217;t help CoreWeave exist, they would not exist. If we didn&#8217;t support <a href="https://www.nscale.com/">Nscale</a>, they wouldn&#8217;t be where they are today. If we didn&#8217;t support <a href="https://en.wikipedia.org/wiki/Nebius_Group">Nebius</a>, they wouldn&#8217;t be what they are today. Now they&#8217;re doing fantastically.</p><p>Is that a business model [inaudible]? We should do as much as needed, as little as possible. So we invest in our ecosystem because I want our ecosystem to thrive. I want the architecture, and AI, to be able to connect with as many industries as possible, as many countries as possible, and make it possible for the planet to be built on AI and to be built on the American tech stack. That vision is exactly what we&#8217;re pursuing.</p><p>Now, one of the things that you mentioned&#8230; There are so many great, amazing foundation model companies, and we try to invest in all of them. This is another thing that we do. We don&#8217;t pick winners. We need to support everyone. It&#8217;s part of our joy of doing so. It&#8217;s imperative to our business. But we also go out of our way not to pick winners. So when I invest in one of them, I invest in all of them.</p><p><strong>Dwarkesh Patel</strong></p><p>Why do you go out of your way not to pick winners?</p><p><strong>Jensen Huang</strong></p><p>Because it&#8217;s not our job to, number one. Number two, when Nvidia first started, there were 60 3D graphics companies. We are the only one that survived. 
If you would have taken those 60 graphics companies and asked yourself which one was going to make it, Nvidia would be at the top of that list not to make it.</p><p>This is long before you, but Nvidia&#8217;s graphics architecture was precisely wrong. It&#8217;s not a little bit wrong. <a href="https://en.wikipedia.org/wiki/NV1">We created an architecture that was precisely wrong</a>, and it was an impossible thing for developers to support. It was never going to make it. We reasoned about it from good first principles, but we ended up with the wrong solution. Everybody would have counted us out. And here we are.</p><p>So I have enough humility to recognize that. Don&#8217;t pick winners. Either let them all take care of themselves, or take care of all of them.</p><p><strong>Dwarkesh Patel</strong></p><p>One thing I didn&#8217;t understand is you said, &#8220;Look, we&#8217;re not prioritizing these neoclouds just because they are neoclouds and we want to prop them up.&#8221; But you also listed a bunch of neoclouds and said they wouldn&#8217;t exist if it wasn&#8217;t for NVIDIA. How are those two things compatible?</p><p><strong>Jensen Huang</strong></p><p>First of all, they need to want to exist, and they come to ask us for help. When they want to exist and they have a business plan, expertise, and the passion for it&#8230; They obviously have to have some capabilities themselves. But if, at the end of the day, they need some investment in order to get it off the ground, we would be there for them. But the sooner they get their flywheel going...</p><p>Your question was, &#8220;Do we want to be in the financing business?&#8221; The answer is no. There are people in the financing business, and we&#8217;d rather work with all the people in the financing business than be a financier ourselves. 
Our goal is to focus on what we do, keep our business model as simple as possible, and support our ecosystem.</p><p>When someone like OpenAI needs an investment of a $30 billion scale because it&#8217;s still before their IPO, and we deeply believe in them and I deeply believe that they&#8217;re going to be an&#8230; Well, they&#8217;re an extraordinary company already today. They&#8217;re going to be an incredible company. The world needs them to exist. The world wants them to exist. I want them to exist. They have the wind at their back. Let&#8217;s support them and let them scale. Those investments we&#8217;ll do because they need us to do it. But we&#8217;re not trying to do as much as possible. We&#8217;re trying to do as little as possible.</p><p><strong>Dwarkesh Patel</strong></p><p>This may be an obvious question, but we&#8217;ve lived many years in this situation where there&#8217;s a shortage of GPUs, and it&#8217;s grown now because models are getting better.</p><p><strong>Jensen Huang</strong></p><p>We have a shortage of GPUs.</p><p><strong>Dwarkesh Patel</strong></p><p>Yes. Nvidia is known for divvying up the scarce allocation, not just based on high bidder, but rather on, &#8220;Hey, we want to make sure that these neoclouds exist. Let&#8217;s give some to CoreWeave, let&#8217;s give some to <a href="https://www.crusoe.ai/cloud">Crusoe</a>, let&#8217;s give some to <a href="https://lambda.ai/">Lambda</a>.&#8221; Why is it good for Nvidia? First of all, would you agree with this characterization of fracturing the market?</p><p><strong>Jensen Huang</strong></p><p>No. No. Your premise is just wrong. We&#8217;re sufficiently mindful about these things. We&#8217;re very mindful about these things. First of all, if you don&#8217;t place a PO, all the talking in the world won&#8217;t make a difference. Until we get a PO, what are we going to do? 
So the first thing is, we work really hard with everybody to get a forecast done, because these things take a long time to build, and the data centers take a long time to build. We align ourselves with demand and supply and things like that through forecasting. Okay? That&#8217;s job number one.</p><p>Number two, we&#8217;ve tried to forecast with as many people as possible, but in the final analysis, you still have to place an order. Maybe, for whatever reason, you didn&#8217;t place your order. What can I do? At some point, first in, first out. But beyond that, if you&#8217;re not ready because your data center&#8217;s not ready, or certain components aren&#8217;t ready to enable you to stand up a data center, we might decide to serve another customer first. That&#8217;s just maximizing the throughput of our own factory. We might do some adjustments there.</p><p>Aside from that, the prioritization is first in, first out. You&#8217;ve got to place a PO. If you don&#8217;t place a PO&#8230; Now, of course, there are stories about that. For example, all of this kind of started from an <a href="https://fortune.com/2024/09/16/larry-ellison-elon-musk-begged-nvidias-jensen-huang-more-gpus-fancy-sushi-dinner/">article about Larry and Elon having dinner with me where they begged for GPUs</a>. That never happened. We absolutely had dinner. We absolutely had dinner, and it was a wonderful dinner. At no time did they beg for GPUs. They just had to place an order. Once they place an order, we do our best to get the capacity to them. We&#8217;re not complicated.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay. So it sounds like there&#8217;s a queue, and then based on whether your data center is ready and when you place a purchase order, you get them at a certain time. But it still doesn&#8217;t sound like the highest bidder just gets it. 
Is there a reason to do it&#8230;?</p><p><strong>Jensen Huang</strong></p><p>We never do that.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay.</p><p><strong>Jensen Huang</strong></p><p>We never do.</p><p><strong>Dwarkesh Patel</strong></p><p>Why not just do high bidder?</p><p><strong>Jensen Huang</strong></p><p>Because it&#8217;s a bad business practice. You set your price and then people decide to buy it or not. I understand that others in the chip industry change their prices when demand is higher, but we just don&#8217;t. That&#8217;s just never been a practice of ours. You can count on us. I prefer to be dependable, to be the foundation of the industry. You don&#8217;t need to second-guess. If I quoted you a price, we quoted you a price. That&#8217;s it. If demand goes through the roof, so be it.</p><p><strong>Dwarkesh Patel</strong></p><p>On the other end, that&#8217;s why you have a productive relationship with TSMC, right?</p><p><strong>Jensen Huang</strong></p><p>Yeah, Nvidia&#8217;s been in business with them for, I guess, coming up on 30 years. Nvidia and TSMC don&#8217;t have a legal contract. There&#8217;s always some rough justice. Sometimes I&#8217;m right, sometimes I&#8217;m wrong. Sometimes I got a better deal, sometimes I got a worse deal. But overall, the relationship is incredible. I can completely trust them. I can completely depend on them.</p><p>One of the things you can count on with Nvidia is that this year, <a href="https://www.nvidia.com/en-us/data-center/technologies/rubin/">Vera Rubin</a> is going to be incredible. Next year, <a href="https://developer.nvidia.com/blog/nvidia-vera-rubin-pod-seven-chips-five-rack-scale-systems-one-ai-supercomputer/">Vera Rubin Ultra</a> will come. The year after that, <a href="https://en.wikipedia.org/wiki/Feynman_(microarchitecture)">Feynman</a> will come. And the year after that, I haven&#8217;t introduced the name yet. Every single year you can count on us. 
You&#8217;re going to have to go find another ASIC team in the world&#8212;pick your ASIC team&#8212;where you can say, &#8220;I can bet the farm, I can bet my entire business that you will be here for me every single year. Your token cost will decrease by an order of magnitude every single year. I can count on it like I can count on the clock.&#8221;</p><p>I just said something about TSMC. For no other foundry in history can you possibly say that. You can say that about Nvidia today. You can count on us every single year. If you would like to buy a billion dollars worth of AI factory compute, no problem. If you&#8217;d like to buy a hundred million dollars, no problem. You&#8217;d like to buy $10 million, or just one rack, not a problem. Or just one graphics card, okay, no problem. If you would like to place an order for a $100 billion of AI factory, no problem. We&#8217;re the only company in the world where you can say that today.</p><p>I can say that about TSMC as well. I want to buy one, buy 1 billion, no problem. We just have to go through the process of planning for it, and all the things that mature people do. So I think this ability for Nvidia to be the foundation of the world&#8217;s AI industry, this is a position that has taken us a couple of decades to arrive at. Enormous commitment, enormous dedication. The stability of our company, the consistency of our company, is really important.</p><h3>00:57:36 &#8211; Should we be selling AI chips to China?</h3><p><strong>Dwarkesh Patel</strong></p><p>Okay. I want to ask about China. I actually don&#8217;t know what I think about whether it&#8217;s good to sell chips to China or not, but I like to play devil&#8217;s advocate against my guests. 
So when <a href="https://www.dwarkesh.com/p/dario-amodei-2">Dario</a> was on, <a href="https://darioamodei.com/post/on-deepseek-and-export-controls">who supports export controls</a>, I asked him, why can&#8217;t America and China both have a country of geniuses in the datacenter? But since you&#8217;re on the opposite side, I&#8217;ll ask you in the opposite way.</p><p>One way to think about it is, Anthropic actually announced a couple days ago <a href="https://www.anthropic.com/glasswing">Mythos Preview</a>. This model Mythos, they&#8217;re not even releasing publicly <a href="https://red.anthropic.com/2026/mythos-preview/">because they say it has such cyber-offensive capabilities</a> that we don&#8217;t think the world is ready until we make sure these <a href="https://en.wikipedia.org/wiki/Zero-day_vulnerability">zero-days</a> are patched up. But they say it found thousands of high-severity vulnerabilities across every major operating system, every browser. It found one in <a href="https://en.wikipedia.org/wiki/OpenBSD">OpenBSD</a>, which is this operating system that&#8217;s been specifically designed to not have zero days. It found one that&#8217;s existed for 27 years.</p><p>So if Chinese companies and Chinese labs and the Chinese government had access to the AI chips to train a model like Claude Mythos with these cyber-offensive capabilities and run millions of instances of it with more compute, the question is, is that a threat to American companies, to American national security?</p><p><strong>Jensen Huang</strong></p><p>First of all, Mythos was trained on fairly mundane capacity, and a fairly mundane amount of it. By an extraordinary company. The amount of capacity and the type of compute it was trained on is abundantly available in China. So you just have to first realize that chips exist in China.</p><p>They manufacture 60% of the world&#8217;s mainstream chips, maybe more. It&#8217;s a very large industry for them. 
They have some of the world&#8217;s greatest computer scientists. As you know, most of the AI researchers in all of these AI labs are Chinese. They have 50% of the world&#8217;s AI researchers. So the question is, considering all the assets they already have&#8212;they have an abundance of energy, they have plenty of chips, they&#8217;ve got most of the AI researchers&#8212;if you&#8217;re worried about them, what is the best way to create a safe world?</p><p>Victimizing them, turning them into an enemy, likely isn&#8217;t the best answer. They are an adversary. We want the United States to win. But I think having a dialogue and having research dialogue is probably the safest thing to do. This is an area that is glaringly missing because of our current attitude about China as an adversary. It is essential that our AI researchers and their AI researchers are actually talking. It is essential that we try to both agree on what not to use the AI for.</p><p>With respect to finding bugs in software, of course, that&#8217;s what AI is supposed to do. Is it going to find bugs in a lot of software? Of course. There are lots and lots of bugs. There are lots of bugs in the AI software. That&#8217;s what AI is supposed to do, and I&#8217;m delighted that AI has reached a level where it could help us be so much more productive.</p><p>One of the things that is underemphasized is the richness of the ecosystem around cybersecurity, AI cybersecurity and AI security and AI privacy and AI safety. There&#8217;s a whole ecosystem of AI startups that are trying to create this future for us, where you have one AI agent that&#8217;s incredible, surrounded by thousands of AI agents, keeping it safe, keeping it secure. That future surely is going to happen.</p><p>The idea that you&#8217;re going to have an AI agent running around with nobody watching after it is kind of insane. We know very well that this ecosystem needs to thrive. It turns out this ecosystem needs open source. 
This ecosystem needs open models. They need open stacks so that all of these AI researchers and all these great computer scientists can go build AI systems that are as formidable and can keep AI safe. So one of the things that we need to make sure that we do is we keep the open source ecosystem vibrant. That can&#8217;t be ignored. A lot of that is coming out of China. We ought to not suffocate that.</p><p>With respect to China, of course we want the United States to have as much computing as possible. We&#8217;re limited by energy, but we&#8217;ve got a lot of people working on that. We&#8217;ve got to not make energy a bottleneck for our country. But what we also want is to make sure that all the AI developers in the world are developing on the American tech stack, and making the contributions, the advancements of AI&#8212;especially when it&#8217;s open source&#8212;available to the American ecosystem. It would be extremely foolish to create two ecosystems: the open source ecosystem, and it only runs on a foreign tech stack, and a closed ecosystem that runs on the American tech stack. I think that would be a horrible outcome for the United States.</p><p><strong>Dwarkesh Patel</strong></p><p>Since there are a lot of things, let me just triage the response. I think the concern, going back to the flop difference in the hacking, is yes, they have compute, but there&#8217;s some estimates that because they&#8217;re at <a href="https://en.wikipedia.org/wiki/7_nm_process">7nm</a>&#8212;they don&#8217;t have EUVs because of <a href="https://www.congress.gov/crs-product/R48642">chip-making export controls</a>&#8212;the amount of flops they&#8217;re able to actually produce, they have one tenth the amount of flops that the US has.</p><p>So with that, could they eventually train a model like Mythos? Yes. But the question is, because we have more flops, American labs are able to get to these levels of capabilities first. 
Because Anthropic got to it first, they say, &#8220;Okay, we&#8217;re going to hold onto it for a month while all these American companies, we&#8217;ll give them access to it. They&#8217;re going to patch up all their vulnerabilities, and now we release it.&#8221;</p><p>Furthermore, even if they train a model like this, the ability to deploy it at scale&#8230; If you had a cyber hacker, it&#8217;s much more dangerous if they have a million of them versus a thousand of them. So that inference compute really matters a lot. In fact, the fact that they have so many AI researchers who are so good is the thing that makes it so scary, because what is it that makes those engineer researchers more productive? It&#8217;s compute.</p><p>If you talk to any AI lab in America, they say the thing that&#8217;s bottlenecking them is compute. There are <a href="https://www.linkedin.com/pulse/exclusive-interview-founder-deepseek-lingxi-hu--z1hbf/">quotes</a> from the <a href="https://en.wikipedia.org/wiki/DeepSeek">DeepSeek</a> <a href="https://en.wikipedia.org/wiki/Liang_Wenfeng">founder</a>, or <a href="https://en.wikipedia.org/wiki/Qwen">Qwen</a> leadership or whatever. They say the thing they&#8217;re bottlenecked on is compute. So then the question is, isn&#8217;t it better that we get American companies, because they have more compute, to get to the Mythos-level capabilities first, prepare our society for it, before China can get to it because they have less compute?</p><p><strong>Jensen Huang</strong></p><p>We should always be first and we should always have more. But in order for that outcome you described to be true, you have to take it to the extremes. They have to have no compute. If they have some compute, the question is how much is needed?</p><p>The amount of compute they have in China is enormous. You&#8217;re talking about the country that is the second largest computing market in the world. 
If they want to aggregate their compute, they&#8217;ve got plenty of compute to aggregate.</p><p><strong>Dwarkesh Patel</strong></p><p>But is that true? People do these estimates and they&#8217;re like, &#8220;<a href="https://en.wikipedia.org/wiki/Semiconductor_Manufacturing_International_Corporation">SMIC</a> is actually behind on the process nodes.&#8221;</p><p><strong>Jensen Huang</strong></p><p>I&#8217;m about to tell you.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay.</p><p><strong>Jensen Huang</strong></p><p>The amount of energy they have is incredible. Isn&#8217;t that right? AI is a parallel computing problem, isn&#8217;t it? Why can&#8217;t they just put 4x, 10x, as many chips together because energy&#8217;s free? They have so much energy. They have datacenters that are sitting completely empty, fully powered. You know they have ghost cities, they have ghost datacenters too. They have so much infrastructure capacity. If they wanted to, they just gang up more chips, even if they&#8217;re 7nm.</p><p>Their capacity of building chips is one of the largest in the world. The semiconductor industry knows that they monopolize mainstream chips. They have over-capacity, they have too much capacity. So the idea that China won&#8217;t be able to have AI chips is complete nonsense.</p><p>Now, of course, if you ask me, would the United States be further ahead if the entire world had no compute at all? But that&#8217;s just not an outcome. That&#8217;s not a scenario that&#8217;s true. They have plenty of compute already. The threshold they need for the concern you&#8217;re worried about, they&#8217;ve already reached that threshold and beyond.</p><p>So I think you misunderstand that AI is a five-layer cake, and at the lowest layer is energy. When you have an abundance of energy, it makes up for chips. If you have an abundance of chips, it makes up for energy. 
For example, the United States is scarce on energy, which is the reason why Nvidia has to keep advancing our architecture and do this extreme co-design so that with the few chips that we ship&#8212;with the few chips, because the amount of energy is so limited&#8212;our throughput per watt is off the charts.</p><p>But if your amount of watts is completely abundant, it&#8217;s free, what do you care about performance per watt for? You get plenty. You can use old chips to do it. So 7nm chips are essentially Hopper. The ability for Hopper&#8230; I&#8217;ve got to tell you, today&#8217;s models are largely trained on Hopper, Hopper generation. So 7nm chips are plenty good. The abundance of energy is their advantage.</p><p><strong>Dwarkesh Patel</strong></p><p>But then there&#8217;s a question of whether they can actually manufacture enough chips.</p><p><strong>Jensen Huang</strong></p><p>But they do. What&#8217;s the evidence? Huawei just had the largest single year in the history of their company.</p><p><strong>Dwarkesh Patel</strong></p><p>How many chips did they ship?</p><p><strong>Jensen Huang</strong></p><p>A ton. Millions. Millions is way more than Anthropic has.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s a question of how much logic SMIC can ship, and there&#8217;s a question of how much memory&#8212;</p><p><strong>Jensen Huang</strong></p><p>I&#8217;m telling you what it is. They have plenty of logic, and they have plenty of HBM2 memory.</p><p><strong>Dwarkesh Patel</strong></p><p>Right. But as you know, the bottleneck often in training and doing inference on these models is the amount of bandwidth. 
So if you have HBM2&#8230; I don&#8217;t know the numbers offhand but versus the newest thing you have, there could be almost an order of magnitude difference in memory bandwidth, which is huge.</p><p><strong>Jensen Huang</strong></p><p><a href="https://en.wikipedia.org/wiki/Huawei">Huawei</a> is a networking company.</p><p><strong>Dwarkesh Patel</strong></p><p>But that doesn&#8217;t change the fact that you need EUV for the most advanced HBM.</p><p><strong>Jensen Huang</strong></p><p>Not true. Not at all true. You could gang them together, just like we gang them together with <a href="https://www.nvidia.com/en-us/data-center/gb200-nvl72/">NVL72</a>. They&#8217;ve already demonstrated silicon photonics, connecting all of this compute together into one giant supercomputer. Your premise is just wrong.</p><p>The fact of the matter is, their AI development is going just fine. The best AI researchers in the world, because they&#8217;re limited in compute, they also come up with extremely smart algorithms. Remember, I just said that Moore&#8217;s law is advancing about 25% per year. However, through great computer science, we could still improve algorithm performance by 10x. What I&#8217;m saying is that great computer science is where the lever is.</p><p>There is no question, MoE is a great invention. There&#8217;s no question, all the incredible attention mechanisms reduce the amount of compute. We have got to acknowledge that most of the advances in AI came out of algorithm advances, not just the raw hardware. Now, if most advances came from algorithms and computer science and programming, tell me that their army of AI researchers is not their fundamental advantage. We see it. DeepSeek is not an inconsequential advance. The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation.</p><p><strong>Dwarkesh Patel</strong></p><p>Why is that? 
Because currently you can have a model like DeepSeek that can run on any accelerator, if it&#8217;s open source. Why would that stop being the case in the future?</p><p><strong>Jensen Huang</strong></p><p>Suppose it doesn&#8217;t. Suppose it&#8217;s optimized for Huawei, suppose it&#8217;s optimized for their architecture. It would put ours at a disadvantage. You described a situation that I perceive to be good news. A company developed software, developed an AI model, and it runs best on the American tech stack. I saw that as good news. You set it up as a premise that it was bad news. I&#8217;m going to give you the bad news, that AI models around the world are developed and they run best on non-American hardware. That is bad news for us.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess I just don&#8217;t see the evidence that there&#8217;s these huge disparities that would prevent you from switching accelerators. American labs are running their models across all the clouds, across all the different accelerators&#8212;</p><p><strong>Jensen Huang</strong></p><p>I am the evidence. You take a model that&#8217;s optimized for Nvidia and you try to run it on something else.</p><p><strong>Dwarkesh Patel</strong></p><p>But American labs do that.</p><p><strong>Jensen Huang</strong></p><p>And they don&#8217;t run better. Nvidia&#8217;s success is perfect evidence. The fact that AI models are created on our stack, run best on our stack, how is that illogical to understand?</p><p><strong>Dwarkesh Patel</strong></p><p>Anthropic&#8217;s models are run on GPUs, they&#8217;re run on Trainium, they&#8217;re run on TPUs.</p><p><strong>Jensen Huang</strong></p><p>A lot of work has to go into it to change. But go to the global south, go to the Middle East. 
Coming out of the box, if all of the AI models run best on somebody else&#8217;s tech stack, you&#8217;ve got to be arguing some ridiculous claim right now that that&#8217;s a good thing for the United States.</p><p><strong>Dwarkesh Patel</strong></p><p>But I guess I don&#8217;t understand the argument. Say Chinese companies get to the next Mythos first. They find all the security vulnerabilities in American software first, but they can do it on Nvidia hardware and they ship it to the global south. They do it on Nvidia hardware. How is that good? Okay, it runs on Nvidia hardware&#8212;</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s not good. It&#8217;s not good.</p><p><strong>Dwarkesh Patel</strong></p><p>Right.</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s not good. So let&#8217;s not let it happen.</p><p><strong>Dwarkesh Patel</strong></p><p>Why do you think it&#8217;s perfectly fungible, that if you didn&#8217;t ship them compute it would exactly be replaced by Huawei? They are behind, right? They have worse chips than you.</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s completely&#8230; There&#8217;s evidence right now. Their chip industry&#8217;s gigantic.</p><p><strong>Dwarkesh Patel</strong></p><p>You can just look at the flop or bandwidth or memory comparisons between the H200 and the <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance">Huawei 910C</a>. It&#8217;s like half to a third.</p><p><strong>Jensen Huang</strong></p><p>They use more of it. They use twice as many.</p><p><strong>Dwarkesh Patel</strong></p><p>It seems like your argument is they have all this energy that&#8217;s ready to go, right? 
And they need to fill it with chips.</p><p><strong>Jensen Huang</strong></p><p>And they&#8217;re good at manufacturing.</p><p><strong>Dwarkesh Patel</strong></p><p>And I&#8217;m sure eventually they would be able to just out-manufacture everybody. But there are these few critical years.</p><p><strong>Jensen Huang</strong></p><p>What is the critical year you&#8217;re talking about?</p><p><strong>Dwarkesh Patel</strong></p><p>These next few years. We&#8217;ve got these models that are going to be able to do all the cyber attacks.</p><p><strong>Jensen Huang</strong></p><p>In that case, if the next years are critical, then we have to make sure that all of the world&#8217;s AI models are built on the American tech stack, in these critical years.</p><p><strong>Dwarkesh Patel</strong></p><p>If they&#8217;re built on the American tech stack, how would that prevent them, if they have more advanced capabilities, from launching the Mythos-equivalent cyber attacks?</p><p><strong>Jensen Huang</strong></p><p>There&#8217;s no guarantee either way.</p><p><strong>Dwarkesh Patel</strong></p><p>But if you have it early, we can prepare for it.</p><p><strong>Jensen Huang</strong></p><p>Listen, why are you causing one layer of the AI industry to lose an entire market so that you could benefit another layer of the AI industry? There are five layers and every single layer has to succeed. The layer that has to succeed most is actually the AI applications. Why are you so fixated on that AI model? That one company? For what reason?</p><p><strong>Dwarkesh Patel</strong></p><p>Because those models make possible these incredible offensive capabilities, and you need compute to run them.</p><p><strong>Jensen Huang</strong></p><p>The energy, the chips, and the ecosystem of AI researchers make it possible.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, stepping back, it has to be the case that China is able to build enough 7nm capacity. 
And remember, they&#8217;re still stuck on 7nm while you&#8217;ll move on to 3nm and then 2nm or 1.6nm with Feynman. So while you&#8217;re on 1.6nm, they&#8217;re still going to be on 7nm, and they have to produce enough of it to make up for the shortfall. They have so much energy that the more chips you give them, the more compute they&#8217;d have. So it comes out as a question of, ultimately they are getting more compute. Compute is an input to training and inference&#8212;</p><p><strong>Jensen Huang</strong></p><p>Listen, I just think you speak in absolutes. I think the United States ought to be ahead. The amount of compute in the United States is 100x more than anywhere else in the world. The United States ought to be ahead. Okay. The United States is ahead.</p><p>Nvidia builds the most advanced technologies. We make sure that the US labs are the first to hear about it and have the first chance to buy it. And if they don&#8217;t have enough money, we even invest in them. The United States ought to be ahead. We want to do everything we can to make sure the United States is ahead. Number one point, do you agree? We&#8217;re doing everything we can to do that.</p><p><strong>Dwarkesh Patel</strong></p><p>But how is shipping chips to China keeping the US ahead if they&#8217;re bottlenecked on compute?</p><p><strong>Jensen Huang</strong></p><p>No, no. We&#8217;ve got Vera Rubin for the United States. We have Vera Rubin for the United States. Now, am I in the United States? Do you consider me part of the United States?</p><p><strong>Dwarkesh Patel</strong></p><p>Yes.</p><p><strong>Jensen Huang</strong></p><p>Nvidia. You consider Nvidia a United States company? Okay. Number one, why is it that we don&#8217;t come up with a regulation that&#8217;s more balanced so that Nvidia can win around the world instead of giving up the world? Why would you want the United States to give up the world?</p><p>The chip industry is part of the American ecosystem. 
It&#8217;s part of American technology leadership. It&#8217;s part of the AI ecosystem. It&#8217;s part of AI leadership. Why is it that your policy, your philosophy, leads to the United States giving up a vast part of the world&#8217;s market?</p><p><strong>Dwarkesh Patel</strong></p><p>I guess the claim here is&#8230; Dario had this <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">quote</a> where he said that it&#8217;s like Boeing bragging that we&#8217;re selling North Korea nukes, but the missile casings are made by Boeing. And that&#8217;s somehow enabling the US technology stack. Fundamentally, you&#8217;re giving them this capability.</p><p><strong>Jensen Huang</strong></p><p>Comparing AI to anything that you just mentioned is lunacy.</p><p><strong>Dwarkesh Patel</strong></p><p>But AI is similar to enriched uranium, right? It can have positive uses, it can have negative uses. We still don&#8217;t want to send enriched uranium to other countries.</p><p><strong>Jensen Huang</strong></p><p>Who&#8217;s sending enriched&#8212;</p><p><strong>Dwarkesh Patel</strong></p><p>The analogy is that enriched uranium is like compute.</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s a lousy analogy. It&#8217;s an illogical analogy.</p><p><strong>Dwarkesh Patel</strong></p><p>But if that compute can run a model that can do zero-day exploits against all American software, how is that not a weapon?</p><p><strong>Jensen Huang</strong></p><p>First of all, the way to solve that problem is to have dialogues with the researchers and dialogues with China, and dialogues with all the countries to make sure that people don&#8217;t use technology in that way. That&#8217;s a dialogue that has to happen. Okay? Number one.</p><p>Number two, we also need to make sure that the United States is ahead, that Vera Rubin, Blackwell, is available in the United States in abundance, mountains of it. Obviously, our results would show it. Abundance, tons of it. 
The amount of computing we have is great. We have amazing AI researchers here. It&#8217;s great. We ought to stay ahead.</p><p>However, we also have to recognize that AI is not just a model. AI is a five-layer cake. The AI industry matters across every single layer, and we want the United States to win at every single layer, including the chip layer. Conceding the entire market is not going to allow the United States to win the technology race long-term in the chip layer, in the computing stack. That is just a fact.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess then the crux comes down to, how does selling them chips now help us win in the long term? Tesla sold extremely good electric vehicles to China for a long time. iPhones are sold in China, extremely good. They didn&#8217;t cause them lock-in. China will still make their version of EVs and they&#8217;re dominating. Their smartphones are dominating.</p><p><strong>Jensen Huang</strong></p><p>When we started the conversation today, you acknowledged that Nvidia&#8217;s position is very different. You used words like moat. The single most important thing to our company is the richness of our ecosystem, which is about developers. 50% of the AI developers are in China. The United States should not give that up.</p><p><strong>Dwarkesh Patel</strong></p><p>But we have a lot of Nvidia developers in the US, and that doesn&#8217;t prevent American labs from also being able to use other accelerators in the future. In fact, right now they&#8217;re using other accelerators as well, which is fine and great. I don&#8217;t see why that wouldn&#8217;t be the case in China as well, if you sell them Nvidia chips, just the same way that Google can use TPUs and Nvidia&#8212;</p><p><strong>Jensen Huang</strong></p><p>We have to keep innovating and, as you probably know, our share is growing, not decreasing. 
The premise that even if we competed in China, that we&#8217;re going to lose that market anyways&#8230; You&#8217;re not talking to somebody who woke up a loser. That loser attitude, that loser premise makes no sense to me.</p><p>We&#8217;re not a car. We are not a car. The fact that I can buy this car brand one day and use another car brand another day, easy. Computing is not like that. There&#8217;s a reason why the <a href="https://nvidianews.nvidia.com/news/nvidia-and-intel-to-develop-ai-infrastructure-and-personal-computing-products">x86</a> deal exists. There&#8217;s a reason why <a href="https://en.wikipedia.org/wiki/Arm_Holdings">ARM</a> is so sticky. These ecosystems are hard to replace. It costs an enormous amount of time and energy, and most people don&#8217;t want to do it. So it&#8217;s our job to continue to nurture that ecosystem, to keep advancing the technology so that we can compete in the marketplace.</p><p>Conceding a marketplace based on the premise you described, I simply can&#8217;t acknowledge that. It makes no sense. Because I don&#8217;t think the United States is a loser. Our industry is not a loser. That losing proposition, that losing mindset, makes no sense to me.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay. I&#8217;ll move on. I just want to make sure that&#8212;</p><p><strong>Jensen Huang</strong></p><p>You don&#8217;t have to move on. I&#8217;m enjoying it.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, great. Then I won&#8217;t. I appreciate that. But I think maybe the crux&#8230; and thanks for walking around the circles with me, because I think it helps bring out what the crux here is.</p><p><strong>Jensen Huang</strong></p><p>The crux is you&#8217;re going to extremes. Your argument starts from extremes. 
That if we give them any compute at all in this narrow moment, we will lose everything.</p><p><strong>Dwarkesh Patel</strong></p><p>No, I think what my argument is&#8212;</p><p><strong>Jensen Huang</strong></p><p>Those extremes, they&#8217;re childish.</p><p><strong>Dwarkesh Patel</strong></p><p>Let me just make my argument for myself. The idea is not that there is some key threshold of compute. It&#8217;s that any marginal compute is helpful. So if you have more compute, you can train a better model.</p><p><strong>Jensen Huang</strong></p><p>And I just want you to acknowledge that any marginal sales for the American technology industry is beneficial.</p><p><strong>Dwarkesh Patel</strong></p><p>I actually don&#8217;t&#8230; If the AI models that run on those chips are capable of cyber offensive capabilities, or the chips are training models with cyber capabilities and running more instances of those models, it is not a nuclear weapon, but it enables a weapon of a kind.</p><p><strong>Jensen Huang</strong></p><p>The logic that you use, you might as well say it to microprocessors and <a href="https://en.wikipedia.org/wiki/Dynamic_random-access_memory">DRAMs</a>. You might as well say it to electricity.</p><p><strong>Dwarkesh Patel</strong></p><p>But in fact we do have export controls on the technology that is relevant to making the most advanced DRAM. We have all kinds of export controls on China for all kinds of chip-making stuff.</p><p><strong>Jensen Huang</strong></p><p>We sell a lot of DRAM and CPUs into China, and I think it&#8217;s right.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess this goes back to the fundamental question of, is AI different? If you have the kind of technology where they can find these zero-days in software, is that something where we want to minimize China&#8217;s ability to get there first, to deploy it widely?</p><p><strong>Jensen Huang</strong></p><p>We want the United States to be ahead. 
We can control that.</p><p><strong>Dwarkesh Patel</strong></p><p>How do we control that if the chips are already there and they&#8217;re using them to train that model?</p><p><strong>Jensen Huang</strong></p><p>We have tons of compute. We have tons of AI researchers. We&#8217;re racing as fast as we can.</p><p><strong>Dwarkesh Patel</strong></p><p>Again, we have more nuclear weapons than anybody else, but we don&#8217;t want to send enriched uranium anywhere.</p><p><strong>Jensen Huang</strong></p><p>We&#8217;re not enriched uranium. It&#8217;s a chip, and it&#8217;s a chip that they can make themselves.</p><p><strong>Dwarkesh Patel</strong></p><p>But there&#8217;s a reason they&#8217;re buying it from you. We have quotes from the founders of Chinese companies that say that they&#8217;re bottlenecked on compute.</p><p><strong>Jensen Huang</strong></p><p>Because our chips are better. On balance, our chips are better. There&#8217;s just no question about it. In the absence of our chip&#8230; Can you acknowledge that Huawei had a record year? Can you acknowledge that a whole bunch of chip companies have gone public? Can you acknowledge that?</p><p><strong>Dwarkesh Patel</strong></p><p>Yes.</p><p><strong>Jensen Huang</strong></p><p>Can you also acknowledge that we used to have a very large share in that market, and we no longer have a large share in that market? We can also acknowledge that China is about 40% of the world&#8217;s technology industry. To concede that market for the United States technology industry is a disservice to our country. It is a disservice to our national security. It is a disservice to our technology leadership, all for the benefit of one company. It makes no sense to me.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess I&#8217;m confused. It feels like you&#8217;re making two different statements. 
One is that we&#8217;re going to win this competition with Huawei because our chips are going to be way better if we&#8217;re allowed to compete. Another is that they would be doing the same exact thing without us anyway. How can both of those things be true at the same time?</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s obviously true. In the absence of a better choice, you&#8217;ll take the only choice you have. How is that illogical? It&#8217;s so logical.</p><p><strong>Dwarkesh Patel</strong></p><p>The reason they want Nvidia chips is that they&#8217;re better.</p><p><strong>Jensen Huang</strong></p><p>Yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>Better is more compute. More compute means you can train a better model.</p><p><strong>Jensen Huang</strong></p><p>No, it&#8217;s just better. It&#8217;s better because it&#8217;s easier to program. We have a better ecosystem. But whatever the better is, whatever the better is&#8230; And of course we&#8217;re going to send them compute. So what? The fact of the matter is that we get to benefit. Don&#8217;t forget, we get the benefit of American technology leadership. We get the benefit of developers working on the American tech stack. We get the benefit, as those AI models diffuse out into the rest of the world, that the American tech stack is therefore the best for it. We can continue to advance and diffuse American technology. That, I believe, is a positive. It&#8217;s a very important part of American technology leadership.</p><p>Now, the policies that you&#8217;re advocating <a href="https://americanaffairsjournal.org/2020/08/who-lost-lucent-the-decline-of-americas-telecom-equipment-industry/">resulted in the American telecommunications industry being policied out of basically the world</a>, to the point where we don&#8217;t control our own telecommunications anymore. I don&#8217;t see that as smart. 
It&#8217;s a little narrow-minded, and it led to unintended consequences that I&#8217;m describing to you right now that you seem to have a very hard time understanding.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, let&#8217;s just step back. It seems like the crux here is there&#8217;s a potential benefit and there&#8217;s a potential cost. What we&#8217;re trying to figure out is, is the benefit worth the cost? I guess I&#8217;m trying to get you to acknowledge the potential cost. Compute is an input to training powerful models. Powerful models do have powerful offensive capabilities, like cyber attacks. It is a good thing that American companies got to Mythos-level capabilities first, and that now they&#8217;re going to hold off on those capabilities so that the American companies and American government can make their software more protected before that level of capability was announced.</p><p>If China had had more compute, or more cloud compute, if they could have made a Mythos-level model earlier and deployed it widely, that would have been very bad. One of the reasons that hasn&#8217;t happened is that we have more compute thanks to companies like Nvidia in America. That is a cost of sending it to China. So let&#8217;s leave the benefit aside for a second. Do you acknowledge that this is a potential cost?</p><p><strong>Jensen Huang</strong></p><p>I&#8217;ll also tell you the potential cost is we allow one of the most important layers of the AI stack, the chip layer, to concede an entire market&#8212;the second largest market in the world&#8212;so that they could develop scale, so that they could develop their own ecosystem, so that future AI models are optimized in a very different way than the American tech stack.
As AI diffuses out into the rest of the world, their standards, their tech stack, will become superior to ours, because their models are open.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess I just believe enough in Nvidia&#8217;s kernel engineers and CUDA engineers to think that they could optimize&#8212;</p><p><strong>Jensen Huang</strong></p><p>AI is more than kernel optimization, as you know.</p><p><strong>Dwarkesh Patel</strong></p><p>Of course, but there are so many things you can do, from <a href="https://en.wikipedia.org/wiki/Knowledge_distillation">distilling</a> to a model that&#8217;s well-fit for your chips.</p><p><strong>Jensen Huang</strong></p><p>We&#8217;re going to do our best.</p><p><strong>Dwarkesh Patel</strong></p><p>You have all the software. It&#8217;s just hard to imagine that there&#8217;s a long-term lock-in to the Chinese ecosystem, even if they have a slightly better open source model for a while.</p><p><strong>Jensen Huang</strong></p><p>China is the largest contributor to open source software in the world. Fact. China&#8217;s the largest contributor to open models in the world. Fact. Today it&#8217;s built on the American tech stack, Nvidia&#8217;s. Fact.</p><p>All five layers of the tech stack for AI are important. The United States ought to go win all five of them. They&#8217;re all important. The one that is the most important, of course, is the AI application layer. The layer that diffuses into society, the one that uses it most will benefit from this industrial revolution most. But my point is that every layer has to succeed.</p><p>If we scare this country into thinking that AI is somehow a nuclear bomb, so that everybody hates AI and everybody&#8217;s afraid of AI, I don&#8217;t know how you&#8217;re helping the United States. You&#8217;re doing it a disservice. 
If we scare everybody out of doing software engineering jobs because it&#8217;s going to kill every software engineering job&#8212;and we don&#8217;t have any software engineers as a result of that&#8212;we&#8217;re doing a disservice to the United States.</p><p>If we scare everybody out of radiology so nobody wants to be a radiologist because computer vision is completely free and no AI is going to do a worse job than a radiologist, we misunderstand the difference between a job and a task. The job of a radiologist is patient care. The task is to read a scan. If we misunderstand that so profoundly and we scare everybody out of going to radiology school, we&#8217;re not going to have enough radiologists and good enough healthcare.</p><p>So I&#8217;m making the case that when you make a premise that is so extreme, everything goes from zero or infinity, we end up scaring people in a way that&#8217;s just not true. Life is not like that. Do we want the United States to be first? Of course we do. Do we need to be a leader in every layer of that stack? Of course we do. Of course we do. Today you&#8217;re talking about Mythos because Mythos is important. Sure. That&#8217;s fantastic.</p><p>But in a few years time, I&#8217;m making you the prediction that when we want the American tech stack, when we want American technology to be diffused around the world&#8212;out to India, out to the Middle East, out to Africa, out to Southeast Asia&#8212;when our country would like to export, because we would like to export our technology, we would like to export our standards, on that day, I want you and I to have that same conversation again. I will tell you exactly about today&#8217;s conversation, about how your policy and what you imagined literally caused the United States to concede the second largest market in the world for no good reason at all.</p><p>We shouldn&#8217;t concede it. If we lose it, we lose it. But why do we concede it? Now nobody is advocating an all or nothing. 
Nobody&#8217;s advocating all or nothing, meaning we ship everything to China at all times. Nobody&#8217;s advocating that. We should always have the best technology here. We should always have the most technology here, and the first. But we should also try to compete and win around the world. Both of those things can simultaneously happen. It requires some amount of nuance, some amount of maturity instead of absolutes. The world is just not absolutes.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay. The argument hinges on this. They&#8217;ve built models that are specified for the best chips that they make in a few years. Those chips get exported around the world. That sets the standard. Because of EUV export controls, as we said, you&#8217;re going to move on to 1.6nm. They&#8217;re still going to be on 7nm, even after a few years from now.</p><p>It may make sense that domestically they would prefer, &#8220;Hey, we&#8217;ve got so much energy, we can manufacture at scale. We&#8217;ll still keep using 7nm.&#8221; But on the exporting thing, their 7nm chips have to be competitive against your 1.6nm chips. Their models have to be so far optimized for the 7nm that it&#8217;s better to run their models on 7nm than to run their models on your 1.6nm.</p><p><strong>Jensen Huang</strong></p><p>Can we just look at the facts then? Is Blackwell 50 times more advanced lithography than Hopper? Is it 50 times? Not even close. I just kept saying it over and over again. Moore&#8217;s Law is dead. Between Hopper and Blackwell, from the transistors themselves, call it 75%. It was three years apart, 75%. Blackwell is 50 times Hopper.</p><p>My point is, architecture matters. Computer science matters. Semiconductor physics matters as well, but computer science matters. The impact of AI largely comes from the computing stack, which is the reason why CUDA is so effective, which is the reason why CUDA is so beloved. 
It&#8217;s an ecosystem, a computing architecture that allows for so much flexibility that if you wanted to change an architecture completely&#8212;create something like MoE, create something like diffusion, create something that&#8217;s disaggregated&#8212;you could do so. It&#8217;s easy to do.</p><p>So the fact of the matter is, AI is about the stack above as much as it is about the architecture below. To the extent that we have architectures and software stacks that are optimized for our stack, for our ecosystem, it is obviously good, because we started the conversation today about how Nvidia&#8217;s ecosystem is so rich. Why do people always love programming CUDA first? They do. They do. So do the researchers in China.</p><p>But if we are forced to leave China, if we&#8217;re forced to leave China, first of all, it&#8217;s a policy mistake. Obviously it has backlash. It has turned out badly for the United States. It enabled, it accelerated their chip industry. It forced all of their AI ecosystem to focus on their internal architectures. It&#8217;s not too late, but nonetheless it has already happened.</p><p>You&#8217;re going to see in the future, they&#8217;re not stuck at 7nm, obviously. They&#8217;re good at manufacturing. They will continue to advance from 7nm and beyond. Now, is there a 10x difference between 5nm and 7nm? The answer is no. Architecture matters. Networking matters. That&#8217;s why <a href="https://www.wsj.com/articles/nvidia-to-acquire-mellanox-for-about-7-billion-11552304615">Nvidia bought Mellanox</a>. Networking matters. Energy matters. So all of that stuff matters. It&#8217;s not simplistic, like the way you&#8217;re trying to distill it.</p><h3>01:35:06 &#8211; Why doesn&#8217;t Nvidia make multiple different chip architectures?</h3><p><strong>Dwarkesh Patel</strong></p><p>We can move on from China, but that actually raises an interesting question. 
We were discussing earlier these bottlenecks at TSMC and memory and so forth.</p><p>So if we&#8217;re in this world where you&#8217;re already the majority of N3&#8212;and at some point you&#8217;ll be N2 and you&#8217;ll be a majority of that&#8212;do you see that you could go back to N7, the spare capacity at an older process node, and say, &#8220;Hey, the demand for AI is so great and our capacity to expand the leading edge is not meeting it, so we&#8217;re going to make a Hopper or <a href="https://www.nvidia.com/en-us/data-center/ampere-architecture/">Ampere</a>, but with everything we know about numerics today and all the other improvements you described&#8221;? Do you see that world happening before 2030?</p><p><strong>Jensen Huang</strong></p><p>It&#8217;s not necessary to. The reason for that is because with every generation, the architecture is more than just the transistor scale. You&#8217;re doing so much engineering and packaging and stacking, and the numerics and the system architecture.</p><p>When you run out of capacity, to easily go back to another node&#8230; That&#8217;s a level of R&amp;D that no one could afford. We could afford to lean forward. I don&#8217;t think we could afford to go back. Now, if the world simply says&#8230; If on that day, let&#8217;s do the thought experiment, on that day we go, &#8220;Listen, we&#8217;re just never going to have more capacity ever again.&#8221; Would I go back and use 7nm? In a heartbeat, of course I would.</p><p><strong>Dwarkesh Patel</strong></p><p>One question somebody I was talking to had is, why doesn&#8217;t Nvidia run multiple different chip projects at the same time with totally different architecture? So you could do something like a <a href="https://en.wikipedia.org/wiki/Cerebras">Cerebras</a>-style wafer scale. You could do a <a href="https://en.wikipedia.org/wiki/Tesla_Dojo">Dojo</a>-style huge package. You could do one without CUDA. 
You have the resources and the engineering talent to do all of these in parallel. So why put all the eggs in one basket, given who knows where AI might go and architectures might go?</p><p><strong>Jensen Huang</strong></p><p>Oh, we could. It&#8217;s just that we don&#8217;t have a better idea. We could do all of those things. It&#8217;s just not better. We simulate it all in our simulator; it&#8217;s provably worse. So we wouldn&#8217;t do it. We&#8217;re working on exactly the projects that we want to work on. If the workload were to change dramatically&#8212;and I don&#8217;t mean the algorithms, I actually mean the workload, and that depends on the shape of the market&#8212;we may decide to add other accelerators.</p><p>For example, recently we added <a href="https://www.wsj.com/tech/ai/nvidia-licenses-ai-inference-technology-from-chip-startup-groq-0a405adb">Groq</a>, and we&#8217;re going to fold Groq into our CUDA ecosystem. We&#8217;re doing that now because the value of tokens has gone up so high that you could have different pricing of tokens. Back in the old days, just a couple years ago, tokens were either free or barely expensive. But now you can have different customers, and those customers want different answers. Because the customers make so much money&#8212;for example, our software engineers&#8212;if I can give them much more responsive tokens so that they&#8217;re even more productive than they are today, I would pay for it.</p><p>But that market has only recently emerged. So I think we now have the ability to have the same model, based on the response time, have different segments. That&#8217;s the reason why we decided to expand the Pareto frontier and create a segment of inference that is faster response time, even though it&#8217;s lower throughput. Until now, higher throughput was always better.
We think there could be a world where there could be very high ASP tokens, and even though the throughput is lower in the factory, the ASPs make up for it.</p><p>That&#8217;s the reason why we did it. But otherwise, from an architecture perspective, if I had more money, I would put more behind Nvidia&#8217;s architecture.</p><p><strong>Dwarkesh Patel</strong></p><p>I think this idea of extremely premium tokens and just the disaggregation of the inference market is a very interesting&#8230;</p><p><strong>Jensen Huang</strong></p><p>The segmentation of it.</p><p><strong>Dwarkesh Patel</strong></p><p>Yeah. Alright, final question. Suppose the deep learning revolution didn&#8217;t happen. What would Nvidia be doing? Obviously games, but given&#8212;</p><p><strong>Jensen Huang</strong></p><p>Accelerated computing, the same thing we&#8217;ve been doing all along. The premise of our company is that Moore&#8217;s law is going to&#8230; General purpose computing is good for a lot of things, but for a lot of computation it&#8217;s not ideal.</p><p>So we combined an architecture called a GPU, CUDA, to a CPU, so that we can accelerate the workload of the CPU. Different kernels of code or algorithms could be offloaded onto our GPU. As a result, you speed up an application by 100x, 200x. Where can you use that? Obviously engineering and science and physics, data processing, computer graphics, image generation, all kinds of things. Even if AI didn&#8217;t exist today, Nvidia would be very, very large.</p><p>The reason for that is fairly fundamental, which is that the ability for general purpose computing to continue to scale has largely run its course. And the only way&#8230; Not the only way, but the way to do that is through domain-specific acceleration. One of the domains that we started with was computer graphics, but there are many other domains. There&#8217;s all kinds.
Particle physics and fluids, structured data processing, all kinds of different types of algorithms that benefit from CUDA.</p><p>Our mission was really to bring accelerated computing to the world and advance the type of applications that general purpose computing can&#8217;t do, and scale to the level of capability that helps break through certain fields of science. Some of the early applications were molecular dynamics, seismic processing for energy discovery, image processing of course, all of those kinds of fields where general purpose computing is just simply too inefficient to do so.</p><p>If there were no AI, I would be very sad. But because of the advances that we made in computing, we democratized deep learning. We made it possible for any researcher, any scientist, anywhere, any student, to be able to access a PC or a GeForce add-in card and do amazing science. That fundamental promise hasn&#8217;t changed, not even a little bit.</p><p>If you watch GTC, there&#8217;s the whole beginning part of it. None of it&#8217;s AI. That whole part of it with computational lithography or our quantum chemistry work, data processing work, all of that stuff is unrelated to AI. And it&#8217;s still very important. I know that AI is very interesting and quite exciting, but there&#8217;s a lot of people doing a lot of very important work that&#8217;s not AI related, and tensors are not the only way that you compute it. We want to help everybody.</p><p><strong>Dwarkesh Patel</strong></p><p>Jensen, thank you so much.</p><p><strong>Jensen Huang</strong></p><p>You&#8217;re welcome. 
I enjoyed it.</p><p><strong>Dwarkesh Patel</strong></p><p>Me too.</p>]]></content:encoded></item><item><title><![CDATA[What I learned this week - Pretraining parallelisms, Can distillation be stopped, Mythos and the cybersecurity equilibrium, Pipeline RL, On why pretraining runs fail]]></title><description><![CDATA[April 15, 2025]]></description><link>https://www.dwarkesh.com/p/what-i-learned-april-15</link><guid isPermaLink="false">https://www.dwarkesh.com/p/what-i-learned-april-15</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Wed, 15 Apr 2026 14:03:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QpJ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e801a3e-5563-40fc-ba70-4af569d80647_555x772.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At the end of <a href="https://www.dwarkesh.com/p/michael-nielsen">my conversation with Michael Nielsen</a>, we talked about how to actually retain what you learn. Michael&#8217;s advice was to make some kind of demanding artifact. Write something up. Try to explain it. So in that spirit, here are notes on some topics I&#8217;ve learned about over the last week or two. These notes are extremely rough, and have many mistakes.</p><h3>Can distillation be stopped?</h3><p>Can the frontier labs stop distillation? Because if they can&#8217;t, open source can commoditize models and catch up incredibly rapidly, making the long-run business model for the labs less viable. Let&#8217;s say it takes 1T tokens from a frontier model to capture its juice (I have no idea if that&#8217;s correct, but let&#8217;s say). Even ignoring savings from caching, Opus 4.6 is $25/MTok. So $25 million for those 1T tokens. That&#8217;s nothing.</p><p>Labs are responding by hiding chain of thought. But there are a few problems with this solution:</p><ul><li><p>Chain of thought is not made of some fundamentally different kind of token.
You can just instruct the model to not think first but just start solving the problem, or to write out its thinking somewhere else.</p></li><li><p>Even if labs do figure out how to robustly hide chain of thought to train in the future, you can use reconstructing the chain of thought necessary to reproduce a decoded sequence as an RLVR target. Yes that costs more, but seems doable.</p></li><li><p>Maybe most importantly, the real juice of these agentic models is their tool use (writing and updating files of code, running bash commands, etc). And if these things are done locally on the user&#8217;s computer, you can&#8217;t really hide them. And it seems like a hard lift to get users to migrate all their development workflows to a cloud that you fully control and hide visibility into, modulo a Claude agent input text prompt.</p></li></ul><p>By the way, I learned about an interesting way companies that build products atop API access to AI models can basically distill these models, in a way that potentially makes the distilled models even better than the ones they&#8217;re actually built atop.</p><p>Suppose you&#8217;ve got a coding product. In order to build a feature, a user uses your product to query some frontier model API across 10+ back-and-forths. Once the user is satisfied with the end result, you have the end state that the user actually wanted - &#8220;the gold diff&#8221;. These coding product companies can now set the gold diff as the RL target for training their own models, where the model gets rewarded for producing outputs that look like what users eventually converged on, and penalized for producing the kinds of intermediate outputs that users kept rejecting or editing.</p><h3>On why pretraining runs fail</h3><p>Had an interesting chat with someone on why pretraining runs often fail. It was very interesting to get a sense of all the tangible ways that things can get fucked, and why training is such a precarious operation.
At a high level, breaking causality and adding bias seem to be the key culprits.</p><p>Breaking causality:</p><ul><li><p>When you do expert routing, you first go through the router, which gives you a score of how much each token wants each expert. There are two ways to proceed from here: 1. Token routing, where you read the scores from the token&#8217;s perspective, and allocate each token to its top k experts. Problem is that you could end up with wildly unbalanced allocation across experts, which is terrible for performance. Alternatively, you could (in training only) do expert choice, where you split the tokens by which ones each expert relatively prefers. This way you can enforce that each expert gets roughly the same number of tokens. But the big problem is that this breaks causality, because which expert token n gets allocated to may depend on which expert token n + k gets routed to. And breaking causality is very bad, because you&#8217;re getting information in training (and updating based on it) that you wouldn&#8217;t see in deployment.</p><ul><li><p>Rumor is that this explains why Llama 4 was underwhelming.</p></li><li><p>I guess you could do expert choice during prefill inference? But maybe it doesn&#8217;t work well in practice to allocate tokens to experts which would not have received that token in actual training.</p></li><li><p>Tbh I don&#8217;t fully understand why breaking causality is so bad. I understand you can&#8217;t see beyond causality in real inference. But why is this minor deviation such a big issue?</p></li></ul></li><li><p>Another thing that can break causality is token dropping, where an expert simply ignores the weakly ranked tokens assigned to it once keeping them would overflow its capacity (the padding budget).
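To make the expert-choice causality break concrete, here is a toy sketch (the router scores and capacity are made-up for illustration): with one expert and capacity 2, changing only a later token's score changes which earlier token the expert keeps.

```python
import numpy as np

def expert_choice(scores, capacity):
    # one expert picks its top-`capacity` tokens by router score
    return set(np.argsort(scores)[::-1][:capacity])

base = np.array([0.9, 0.8, 0.1, 0.2])   # router scores of tokens 0..3
later_changed = base.copy()
later_changed[3] = 0.95                  # only token 3, a *later* token, changes

picked_before = expert_choice(base, capacity=2)           # tokens {0, 1}
picked_after = expert_choice(later_changed, capacity=2)   # tokens {0, 3}

# token 1's routing flipped purely because of a token that comes after it
assert 1 in picked_before and 1 not in picked_after
```

Token routing (top-k per token) never has this problem, since each token's scores depend only on its own representation.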
This breaks causality because a later token being more strongly matched to this expert might lead to an earlier token getting ignored.</p><ul><li><p>Apparently this was an issue with Gemini 2 Pro.</p></li></ul></li></ul><p>Adding bias:</p><ul><li><p>Bias is much worse than variance: variance can average out, but bias compounds</p></li><li><p>Apparently the original GPT 4 training was slow and got initially fucked because of the following bug: they were using FP16 on their collectives like all-reduce. FP16 distributes its granularity logarithmically - between 1 and 2, the mantissa bits carve the interval into steps of ~0.001, but from 1024 up, the spacing between adjacent representable values is a whole number or more. Suppose some collective involves adding 1 + 1 &#8230; 10,000 times. Once the running total reaches 2048, where the spacing is 2, you add 1 and the result rounds back down to 2048, over and over, so the sum stops growing. The calculated value ends up roughly 5x off the real value. Huge issue if you&#8217;re trying to sum many small gradients into a large accumulator. And imagine how hard the bug must have been to find!</p></li></ul><p>Implications for AI training:</p><ul><li><p>Some of the people who think we can cure aging argue that there&#8217;s basically 5 different ways people die of old age (heart disease, cancer, etc), and that if we cure these 5 different diseases, then we&#8217;d basically have solved aging. You could ask a similar question about these failed pretraining runs - are there 5 different ways training runs fail, in which case once a lab figures out numerics and the other failure modes, you&#8217;ll just have smooth sailing, or will you keep seeing new bespoke issues emerge at each new level of scale? The person I talked to seemed to think the latter - he pointed out that even within numerics, there&#8217;s so many ways you can fuck things up.
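That fp16 accumulation stall is easy to reproduce in NumPy (the 10,000-term sum of ones is an illustration, not the actual GPT-4 collective):

```python
import numpy as np

# Summing 1.0 ten thousand times in fp16: in [1024, 2048) the spacing between
# representable fp16 values is exactly 1, so the sum still grows. At 2048 the
# spacing becomes 2, so acc + 1 rounds back down to acc and the sum stalls.
acc = np.float16(0.0)
for _ in range(10_000):
    acc = np.float16(acc + np.float16(1.0))
print(acc)    # 2048.0, roughly 5x below the true value of 10000

# The standard fix: keep the accumulator in fp32 even if the terms are fp16
acc32 = np.float32(0.0)
for _ in range(10_000):
    acc32 += np.float32(np.float16(1.0))
print(acc32)  # 10000.0
```

The same mechanism silently shrinks a gradient sum whenever many small terms meet a large fp16 accumulator.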
And new ones will keep emerging at scale.</p></li><li><p>Bearish on AI fully automating kernel writing anytime soon. Presumably this is because he thinks it&#8217;s more of an AGI complete problem than some give it credit for. There&#8217;s another school of thought that says, &#8220;Hey, which kernel gets attention or MLP to run fastest on this scaleup is a super verifiable domain, thus we can RL to superhuman performance easily.&#8221; But he says, it took Nvidia, which has the best kernel engineers in the world, a long time to optimize for Blackwell, which suggests that actually it&#8217;s quite hard, and might not be super easy to close the loop on.</p></li><li><p>Sometimes people say inference for RL generation and inference for end user generation is basically the same. But this person pointed out that in RL inference, numerical drift between inference and training engine can cause these subtle off policy biases, which matter a ton for highest quality training. But are not an issue if just serving to users.</p></li><li><p>Emphasized how important it is to have a disciplined process for amalgamating compute multipliers, because of the risks of stacking up bugs with subtle biases.</p></li></ul><h3>Pretraining parallelisms</h3><p>Notes from an excellent lecture that <a href="https://horace.io/">Horace He</a> gave my friends and me. </p><p>What made this lecture so good is that Horace built up the whole topic as a chain of problems and solutions: here&#8217;s what we want to do, here&#8217;s why it breaks, here&#8217;s how we fix it, here&#8217;s why that fix eventually breaks too. Most explanations just list out a hodge podge of different strategies, without ever connecting them to the problems they solve or explaining why you&#8217;d pick one over another.</p><ul><li><p>Equation for pretraining flops = 6ND. 2 FLOPs per parameter per token for the forward pass (multiply + add). Backward pass is 2&#215; forward because you compute gradients w.r.t. 
both input matrices. So 2 + 4 = 6.</p></li><li><p>Okay we can&#8217;t do all this on one GPU. So how do we split up this problem? The obvious solution is to do data parallel - where you copy the model weights across each GPU, and you just do a part of the batch on each GPU.</p><ul><li><p>The obvious problem is that each GPU only has a limited amount of HBM - B300 is 288GB - and this is not enough to store the weights as models get bigger and bigger, much less their activations.</p></li></ul></li><li><p>Okay so next thing we try is fully sharded data parallel - each GPU only stores 1/N of the parameters of each layer - before processing each layer, you all-gather the full layer&#8217;s parameters from all GPUs. After processing, each GPU discards the gathered parameters.</p><ul><li><p>It was emphasized that this is the go-to default. And you only move on from this when having too many GPUs forces you to move on, for reasons explained later. The reason this is the default is that it&#8217;s trivial to overlap compute and communication time - that&#8217;s because the only thing being communicated is the weights, which are not dependent on what happened in the layer before, so you can start all-gathering the next layer while you&#8217;re still computing this layer. Compare this against tensor or expert parallelism, which do need to share activations for one layer before you can process the next one. The problem with pipeline parallelism is bubbles as explained below.</p></li><li><p>From a comms volume perspective, FSDP looks insanely expensive at first &#8212; you all-gather every layer&#8217;s full weights across all GPUs, use them for one matmul, then throw them away. But this ignores what regular data parallelism already costs you - in regular DP, you still need to do an all-reduce after every layer of the backward pass in order to sync the batch&#8217;s gradients across all the GPUs.
That all-reduce has comms volume of params &#215; 2. FSDP adds all-gathers &#8212; one per layer in the forward pass, one per layer in the backward pass. But an all-gather is half the comms volume of an all-reduce. So naive FSDP comms volume ends up being # params * 4 (all-gather forward and back, plus all-reduce on back). You can do even better: since each gradient shard only needs to end up on the one GPU that owns it, replace the all-reduce with a reduce-scatter (which skips the final broadcast step). That gets you to params &#215; 3 total &#8212; a 50% overhead over vanilla DP.</p></li></ul></li><li><p>So why can&#8217;t you always just do FSDP?</p><ul><li><p>Comms crossover: You want your compute time to be greater than your comms time - you don&#8217;t want to be bottlenecked on comms. But since compute time for FSDP decreases as you increase the number of GPUs, and comms time does not, as you scale the number of GPUs on FSDP, your MFU can totally crater. When this happens, you need to add pipeline parallelism too.</p><ul><li><p>Compute time = (6 * # tokens * active params) / (compute per GPU * number of GPUs)</p><ul><li><p>This decreases as you increase the number of GPUs</p></li></ul></li><li><p>Comms time = (# total params * 3) / (NVLink domain size * InfiniBand BW)</p><ul><li><p>Comms time does not increase as you add more domains. This was really confusing to me. Each domain collectively holds all the parameters, and you need to sync gradients across domains after each layer of the backward pass. You&#8217;d think that adding more domains means more hops in the ring, so the all-reduce gets slower. But the standard ring algorithm splits the message into one chunk per participant. More domains means more hops, but proportionally smaller chunks per hop.
(This breaks down when chunks get so small that per-hop latency dominates, at which point you switch to tree algorithms.)</p><ul><li><p>Technically, you can do better than a naive single all-reduce for the gradients between all the domains: you do a hierarchical collective to optimize comms time across multiple NVLink domains. The key thing to remember is that each GPU in the domain gets its own bandwidth access to InfiniBand. So you wanna use it all up, since interconnect bandwidth is the bottleneck. You do this by doing as much as possible within a scaleup before you move out. So you do a reduce-scatter within a scaleup to give each GPU the domain-level reduced gradients for a shard of the layer, then all-reduce these shards across corresponding GPUs across domains, then all-gather within a domain. This shifts the comms-time line down, thus moving the crossover point to the right.</p></li><li><p>Made an animation to illustrate it using Cursor and Composer 2:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;af55c305-6da7-47cf-b799-2970cb9467e0&quot;,&quot;duration&quot;:null}"></div></li></ul></li></ul></li></ul></li></ul><ul><li><p>If you look at the equations, you can see that if you increase batch size, the crossover point moves to the right, and if you make the model more sparse, it moves to the left.</p></li><li><p>Also why TPUs are better at FSDP - because there are more accelerators within a domain.</p></li></ul></li></ul><ul><li><p>Batch size floor: FSDP is data-parallel, so each GPU processes at least one sequence. Attention is computed within a sequence and can&#8217;t (easily) be split across GPUs.
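The crossover equations above can be sketched numerically. This is a minimal sketch with made-up hardware numbers (FLOP rate, bandwidth, domain size, and model size are illustrative placeholders, not vendor specs):

```python
# Sketch of the FSDP compute-vs-comms crossover using the two equations above.

def compute_time_s(tokens, active_params, flops_per_gpu, n_gpus):
    # 6ND work split across GPUs: shrinks as you add GPUs.
    return 6 * tokens * active_params / (flops_per_gpu * n_gpus)

def comms_time_s(total_params, bytes_per_param, domain_size, ib_bw_per_gpu):
    # params x 3 traffic (all-gathers fwd+bwd, reduce-scatter bwd); per-GPU
    # InfiniBand bandwidth aggregates across the NVLink domain, and this
    # does NOT shrink as you add more domains.
    return 3 * total_params * bytes_per_param / (domain_size * ib_bw_per_gpu)

# Hypothetical run: 100B dense params, 4M-token batch, 1e15 FLOP/s per GPU,
# 2 bytes/param, 8-GPU NVLink domains, 50 GB/s InfiniBand per GPU.
mt = comms_time_s(100e9, 2, 8, 50e9)  # constant: independent of GPU count
for n_gpus in (256, 1024, 4096):
    ct = compute_time_s(4e6, 100e9, 1e15, n_gpus)
    print(n_gpus, "comms-bound" if mt > ct else "compute-bound")
```

With these placeholder numbers, the step flips from compute-bound to comms-bound somewhere past a thousand GPUs, which is the crossover the lecture describes.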
If your critical batch size is 10M tokens and sequence length is 10K, you only have 1K sequences &#8212; so you can&#8217;t scale beyond 1K GPUs with pure FSDP, even if you have plenty of comms bandwidth left.</p></li><li><p>Problems with pipeline parallelism (the next addition you&#8217;d make to FSDP in order to deal with these issues):</p><ul><li><p>The problem with pipeline parallelism is different - there you have bubbles that emerge from the fact that at the beginning of the batch, the GPUs dedicated to the final layers are not being used, and conversely at the end of the batch, the GPUs dedicated to the first layers are not being used. The reason you can&#8217;t overlap batches in training to solve pipeline bubbles is that you need to consolidate gradients and update the model before you process the next batch.</p></li><li><p>But also you&#8217;re adding architecture constraints - things like Kimi&#8217;s attention-to-residuals (where each block attends to all previous layers&#8217; residuals) become very difficult when those residuals live on different pipeline stages. Similarly, interleaving sliding-window and global attention layers could cause load imbalance across stages. Dealing with all this slows down research iteration, which is the greatest sin you can commit.</p></li></ul></li></ul><h3>Mythos and the cybersecurity equilibrium</h3><p>It seems like the key difference between Mythos and previous versions is that while previous versions could find individual vulnerabilities in the code (&#8220;Hey, there&#8217;s a missing bounds check here&#8221;), Mythos is long run agentic enough to rope 5 different vulnerabilities together which are all required in order to find an exploit (&#8220;Now I can execute arbitrary code, escalate privileges, etc&#8221;). 
To the extent that some discontinuity has been hit, it&#8217;s probably more the result of the combinatorial nature of cyberattacks rather than some off-trend increase in intelligence.</p><p>What does this mean for offense/defense? One way to look at it is that software is more secure today than it was 20 years ago, despite more and more human intelligence probing at public code, both white hat and black hat. If we get another influx of intelligence suddenly, why should the dynamic change?</p><p>In fact, we know that our foreign adversaries almost certainly have access to a bunch of critical zero-days which they&#8217;re saving for a rainy day, or already using in inconspicuous ways. To the extent that Glasswing allows the whole industry to find a bunch of these latent exploits and patch them, shouldn&#8217;t we expect defense to have become much stronger relative to offense by the end of &#8217;26? Of course, this is thanks to the fact that American companies got there first and are cooperating with other companies and our government to patch things before our adversaries get to the same level.</p><p>One counterpoint I heard from a security expert is that there&#8217;s a big difference between finding vulnerabilities and patching them - and AI is much better at the former than the latter (people often talk about the offense/defense balance, but difficulty of finding versus patching vulnerabilities seems much more significant). In order to patch an issue, you have to find a fix that will not interfere with all the ways people use your software, and all the features which rely on weird bespoke behavior.
XKCD has a nice comic illustrating how these kinds of issues come up:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QpJ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e801a3e-5563-40fc-ba70-4af569d80647_555x772.png" data-component-name="Image2ToDOM"><img src="https://substackcdn.com/image/fetch/$s_!QpJ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e801a3e-5563-40fc-ba70-4af569d80647_555x772.png" width="555" height="772" alt="" loading="lazy"></a></figure></div><p>Potential solutions, if it&#8217;s non-trivial to just push patches to every piece of software?</p><ul><li><p>TODO - I know nothing about formal verification of software - check out what a seL4 proof of some behavior might look like</p></li><li><p>Use LLMs to rapidly port all C to Rust. Curious how easily Mythos can find vulnerabilities in memory-safe languages.</p></li></ul><p>In some sense, it&#8217;s good that Anthropic didn&#8217;t release this model publicly until critical IT could be patched up. In another sense, isn&#8217;t it a super bad precedent for private companies to be hoarding the ability to break into any operating system, browser, and device?
One obvious question for Anthropic is why they didn&#8217;t just build some kind of classifier which would detect whether you&#8217;re using the model for cyberattack-type stuff, and refuse requests if yes, and release that publicly.</p><ul><li><p>Patching your own software is isomorphic to finding bugs in someone else&#8217;s repo from the perspective of an LLM (and patching your own software is a frequent coding model use case).</p></li><li><p>These kinds of classifiers can be easy to evade if you have enough expertise to break the problem of finding exploits down into smaller subproblems of finding vulnerabilities, each of which individually seems like sensible behavior to an LLM with no memory.</p></li></ul><h3><a href="https://arxiv.org/pdf/2509.19128">Pipeline RL</a> paper summary</h3><p>As you keep RLing a model, not only does the average length of a response increase (since you&#8217;re basically training the model to think for longer before answering) but the variance in length also increases - sometimes you get an easy problem and you can immediately answer it - other times, you need to go think for 100k tokens.</p><p>This is a big problem for GPU utilization during training, because you have to wait for all these stragglers to finish generating before you can start the next training step.</p><p>Okay, one way you could get out of this conundrum is to just batch generation so that while stragglers keep going, you generate even more rollouts.</p><p>The problem is that there is an optimal batch size for each training step, so you&#8217;d need to split all these rollouts you made across lots of consecutive training steps.</p><p>But this takes you into the domain of offline RL, because your model is changing with each training step.
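The staleness this creates can be seen in a toy simulation (all names and numbers here are hypothetical stand-ins for real generation and training machinery):

```python
# Toy illustration of why batching extra rollouts drifts off-policy:
# all rollouts in a generation batch come from one policy version, but
# they get consumed by later training steps whose policy has moved on.

def staleness_per_step(rollouts_per_gen_batch, rollouts_per_train_step):
    """Policy-version lag of each training step's data, for one gen batch."""
    gen_version = 0  # every rollout in the batch was sampled by this policy
    lags = []
    n_steps = rollouts_per_gen_batch // rollouts_per_train_step
    for step in range(n_steps):
        trainer_version = step  # trainer updates the policy once per step
        lags.append(trainer_version - gen_version)
    return lags

print(staleness_per_step(32, 8))  # prints [0, 1, 2, 3]
```

Each successive step trains on data that is one more policy version out of date.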
And so you&#8217;re training your model on trajectories that were actually generated by an earlier model, which is not ideal.</p><p>The Pipeline RL paper proposes the following fix: in-flight weight updates - where you just sub out the generating model partway through generating these trajectories as soon as the new training step is done, so all the short trajectories, and a good chunk of the long trajectories, that the next training step will be trained on are generated by the most recent version of the model.</p>]]></content:encoded></item><item><title><![CDATA[Michael Nielsen &#8211; How science actually progresses]]></title><description><![CDATA[The true story of Einstein, Newton, and Darwin]]></description><link>https://www.dwarkesh.com/p/michael-nielsen</link><guid isPermaLink="false">https://www.dwarkesh.com/p/michael-nielsen</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Tue, 07 Apr 2026 15:49:28 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193466212/619081f90c9cac9ccaa31d175b67a2ad.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Really enjoyed chatting with Michael Nielsen about how we recognize scientific progress.</p><p>It&#8217;s especially relevant for closing the RL verification loop for scientific discovery.</p><p>But it&#8217;s also a surprisingly mysterious and elusive question when you look at the history of human science.</p><p>We approach this question through stories like Einstein (who claimed that he hadn't even heard of the famous Michelson-Morley experiment, which is supposed to have motivated special relativity, until after he had come up with the theory), Darwin (why did it take till 1859 to lay out an idea whose essence every farmer since antiquity must have observed?), Prout (how do you recognize that isotopes exist if you cannot chemically separate them?), and many others.</p><p>The verification loop on scientific ideas is often extremely long and weirdly hostile.
Ancient Athenians dismissed Aristarchus&#8217;s heliocentrism in the 3rd century BC because it would imply that the stars should shift in the sky as the Earth orbits the sun. The first successful measurement of stellar parallax was in 1838. That&#8217;s a 2,000-year verification loop.</p><p>But clearly human science is able to make progress faster than raw experimental falsification/verification would imply, and in cases where experiments are very ambiguous. How?</p><p>Michael has some very deep and provocative hypotheses about the nature of progress. One I found especially thought-provoking is that aliens will likely have a VERY different science + tech stack than us. Which contradicts the common sense picture of a linear tech tree that I was assuming. And has some interesting implications about how future civilizations might trade and cooperate with each other.</p><p>Watch on <a href="https://youtu.be/myP8UjAM1pk">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/michael-nielsen-how-science-actually-progresses/id1516093381?i=1000760075027">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/1JTv7Le8s5Mf0hDcXDOJYl">Spotify</a>.</p><div id="youtube2-myP8UjAM1pk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;myP8UjAM1pk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/myP8UjAM1pk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2><strong>Sponsors</strong></h2><ul><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> researchers built a new safety benchmark. Why? 
Well, current safety benchmarks claim that attacks on top models are successful only a few percent of the time, but the prompts in those benchmarks don&#8217;t reflect how real bad actors actually write. You can read Labelbox&#8217;s research <a href="https://labelbox.com/blog/the-ai-safety-illusion-why-current-safety-datasets-fool-us-on-model-safety/">here</a>. If this could be useful for your work, reach out at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li></ul><ul><li><p><a href="https://mercury.com">Mercury</a> has an MCP that lets you give an LLM access to your full transaction history, including things like attached receipts and internal notes. I just used it to categorize my 2025 transactions, and it worked shockingly well. Modern functionality like this is exactly why I use Mercury. Learn more at <a href="https://mercury.com">mercury.com</a></p></li><li><p><a href="https://janestreet.com/dwarkesh">Jane Street&#8217;s</a> ML engineers presented some of their GPU optimization workflows at GTC, showing how they use CUDA graphs, streams, and custom kernels to shave real time off their training runs. You can watch the full talk <a href="https://www.nvidia.com/en-us/on-demand/session/gtc26-s82065/">here</a>. And they open-sourced all the relevant code <a href="https://github.com/janestreet/gtc2026/">here</a>. 
If this kind of stuff excites you, Jane Street is hiring &#8212; learn more at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a></p></li></ul><h2><strong>Timestamps</strong></h2><p>00:00:00 &#8211; How scientific progress outpaces its verification loops</p><p>00:17:51 &#8211; Newton was the last of the magicians</p><p>00:23:26 &#8211; Why wasn&#8217;t natural selection obvious much earlier?</p><p>00:29:52 &#8211; Could gradient descent have discovered general relativity?</p><p>00:50:54 &#8211; Why aliens will have a different tech stack than us</p><p>01:15:26 &#8211; Are there infinitely many deep scientific principles left to discover?</p><p>01:26:25 &#8211; What drew Michael to quantum computing so early?</p><p>01:35:29 &#8211; Does science need a new way to assign credit?</p><p>01:43:57 &#8211; Prolificness versus depth</p><p>01:49:17 &#8211; What it takes to actually internalize what you learn</p><h2>Transcript</h2><h3>00:00:00 &#8211; How scientific progress outpaces its verification loops</h3><p><strong>Dwarkesh Patel</strong></p><p>Today, I&#8217;m speaking with <a href="http://n">Michael Nielsen</a>. You have done many things. You&#8217;re one of the pioneers of <a href="https://quantum.country/">quantum computing</a>, wrote the <a href="https://www.amazon.com/Reinventing-Discovery-New-Networked-Science/dp/0691148902">main textbook in the field</a>, and helped lead the <a href="https://en.wikipedia.org/wiki/Open_science">open science movement</a>. You wrote a <a href="http://neuralnetworksanddeeplearning.com/">book about deep learning</a> that <a href="https://en.wikipedia.org/wiki/Chris_Olah">Chris Olah</a> and <a href="https://en.wikipedia.org/wiki/Greg_Brockman">Greg Brockman</a> credit with getting them into the field. More recently, you&#8217;re a research fellow at the <a href="https://astera.org/">Astera Institute</a> and writing a book about religion, science, and technology.</p><p>I&#8217;m going to ask you about none of those things.
The conversation I want to have today is, how do we recognize scientific progress? It&#8217;s especially relevant for AI because people are trying to close the <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">RL</a> verification loop on scientific discovery. What does it mean to close that loop? But in preparing for this interview, I&#8217;ve realized that it&#8217;s a more mysterious and elusive force, even in the history of human science, than I understood.</p><p>I think a good place to start will be <a href="https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment">Michelson-Morley</a> and how <a href="https://en.wikipedia.org/wiki/Special_relativity">special relativity</a> is discovered, if it&#8217;s different from the story that you get off of YouTube videos. I will prompt you that way, and then we&#8217;ll go in there.</p><p><strong>Michael Nielsen</strong></p><p>Michelson-Morley is the famous result often presented as this experiment that was done in the 1880s that helped <a href="https://en.wikipedia.org/wiki/Albert_Einstein#Special_relativity">Einstein come up with the special theory of relativity</a> a little bit later, changing the way we think about space and time and our fundamental conception of those things.</p><p>And there&#8217;s a big gap, I think, between the way <a href="https://en.wikipedia.org/wiki/Albert_A._Michelson">Michelson</a> and <a href="https://en.wikipedia.org/wiki/Edward_W._Morley">Morley</a> and other people at the time thought about the experiment and certainly the way in which Einstein thought or did not think about the experiment. In actual fact, he stated later in his life he wasn&#8217;t even sure whether he was aware of the paper at the time. There&#8217;s a lot of evidence that he probably was aware of the paper at the time, but it actually wasn&#8217;t dispositive for his thinking at all. 
Something else completely was going on.</p><p>What Michelson and Morley thought they were doing was testing different theories of what was called the <a href="https://en.wikipedia.org/wiki/Aether_theories">ether</a>. If you go back to the 1600s, <a href="https://en.wikipedia.org/wiki/Robert_Boyle">Robert Boyle</a> introduced the idea of the ether. We know that sound is vibrations in the air. Boyle and other people got interested in the question of whether light is vibrations in something, and they couldn&#8217;t figure out what it was. Boyle did an experiment where he tested whether you could propagate light through a vacuum. He found that you could. You couldn&#8217;t do it with sound. He introduced this idea of the ether, and for the next two hundred or so years, people had all these conversations about what the ether was and what its nature was.</p><p>The Michelson and Morley experiment was really an experiment to test different theories of the ether against one another, in particular to find out whether or not there was a so-called ether wind. The idea was that the Earth is maybe passing through this ether wind. And if it is passing through the ether wind and you shoot a light beam parallel to the direction the ether wind is going in, it&#8217;ll get accelerated a little bit. If it&#8217;s being passed back in the opposite direction, it&#8217;ll get slowed down a little bit, and you should be able to see this in the results of interference experiments. What they found, much to their surprise, was that in fact there was no ether wind. That ruled out some theories of the ether, but not all, and Michelson certainly continued to believe in the ether.</p><p><strong>Dwarkesh Patel</strong></p><p>This is what was a shocking part of reading this story from the biography of Einstein that you recommended by... 
what was his first name?</p><p><strong>Michael Nielsen</strong></p><p><a href="https://en.wikipedia.org/wiki/Abraham_Pais">Abraham Pais.</a></p><p><strong>Dwarkesh Patel</strong></p><p>Abraham Pais. <em><a href="https://amzn.to/4typOoi">Subtle is the Lord</a></em>. Also from <a href="https://en.wikipedia.org/wiki/Imre_Lakatos">Imre Lakatos</a>, <em><a href="https://amzn.to/3PQfCZA">The Methodology of Scientific Research Programmes</a></em>. The way it&#8217;s told is that Michelson-Morley proved that the ether did not exist. Therefore, it created a crisis in physics that Einstein solved with special relativity.</p><p>What you&#8217;re pointing out is he actually was trying to distinguish between many different theories of ether. If you&#8217;re in space or if you&#8217;re on Earth, it&#8217;s the same direction of ether, or maybe the ether wind is being carried around by the Earth, and so you can&#8217;t really experience it on Earth. But if you go to a high enough altitude, you might be able to experience it. In fact, Michelson&#8217;s experiments, the famous one is 1887, but he conducted these experiments for basically two decades.</p><p><strong>Michael Nielsen</strong></p><p>For longer than that. He conducted the first one in 1881, I think, but he continued to believe until he died. He died, I think it was 1929 or so. It was the late twenties. He was still doing experiments in the 1920s about whether or not the ether existed. So he continued to believe in the ether to the end of his life. I think the last public statement he made was a year or two before he died, and he basically still believed it at that point.</p><p><strong>Dwarkesh Patel</strong></p><p>In fact, there was another physicist, <a href="https://en.wikipedia.org/wiki/Dayton_Miller">Miller</a>, who kept doing these experiments in the 1920s. 
He thought that if he went to a high enough altitude, Mount Wilson in California&#8230; &#8220;Oh, I&#8217;m high enough that the ether winds are not being dragged by the Earth. And I&#8217;ve measured the effect of the ether.&#8221; Einstein hears about this and he says, and this is where you get the famous quote, &#8220;Subtle is the Lord, but malicious He is not.&#8221;</p><p>Anyways, I think the reason the story is interesting is for many different reasons. One of the ways in which the real history of science is different from this idea you get of the scientific method is that you really can&#8217;t apply <a href="https://en.wikipedia.org/wiki/Falsifiability">falsification</a> as easily as you might think. It&#8217;s not clear what is being falsified. Is it just another version of the theory of the ether that&#8217;s being falsified? Certainly you can&#8217;t induce the theory of special relativity from the fact that one version of the ether seems to be disconfirmed by these experiments.</p><p><strong>Michael Nielsen</strong></p><p>It certainly doesn&#8217;t show that ideas about falsification are wrong or falsified, but it does show that the most naive ideas&#8230; Things are often much more complicated than you think. Michelson did this experiment in 1881. He was a very young man, and then other people, I think <a href="https://en.wikipedia.org/wiki/John_William_Strutt,_3rd_Baron_Rayleigh">Rayleigh</a> was one of them, pointed out that there were some problems with the way he did it, so they had to redo it in 1887. At that point, a lot of the leading physicists of the day basically accepted this result, that there was no ether wind. But what to do about this?</p><p>Sure, maybe you falsified some theories of the ether. There are others that you haven&#8217;t falsified at all at this point, and people set to work on developing those. It is funny, people will phrase it as showing that the ether didn&#8217;t exist. 
Even just the word &#8220;the&#8221; there is a misnomer. You actually had a ton of different theories and a couple of leading contenders. So yes, there&#8217;s some version of falsification going on, but how you respond to this new experiment is very complicated. Certainly the leading physicists of the day responded by saying, &#8220;Okay, this gives us a lot of information about what the ether must be, but it doesn&#8217;t tell us that there is no ether.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>In fact, <a href="https://en.wikipedia.org/wiki/Hendrik_Lorentz">Lorentz</a> at the end of the 19th century, before Einstein, figures out the math of how you convert from one reference frame to another reference frame, and comes up with the <a href="https://en.wikipedia.org/wiki/Lorentz_transformation">Lorentz transformations</a>, which is the basis of special relativity. But his interpretation is that you are converting from the ether reference frame to these non-privileged other reference frames if you&#8217;re moving relative to the ether.</p><p>His interpretation of <a href="https://en.wikipedia.org/wiki/Length_contraction">length contraction</a> and <a href="https://en.wikipedia.org/wiki/Time_dilation">time dilation</a> is that this is the effect of moving through the ether, and you have this pressure. This pressure is warping clocks. It&#8217;s warping measures of length. The interesting thing here is that experimentally you cannot distinguish Lorentz&#8217;s interpretation from special relativity.</p><p><strong>Michael Nielsen</strong></p><p>I think that&#8217;s a strong statement. Lorentz introduces this quantity called <a href="https://en.wikipedia.org/wiki/Relativity_of_simultaneity#History">local time</a>, which he regards as... 
My understanding is he&#8217;s not trying to give a physical interpretation of this, but it&#8217;s what Einstein would later just recognize as time in another <a href="https://en.wikipedia.org/wiki/Inertial_frame_of_reference">inertial reference frame</a>. He&#8217;s not trying to attribute much physical meaning to it. I think <a href="https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9#Work_on_relativity">Poincar&#233;</a> gets much closer later on to realizing that this is the time that&#8217;s registered by clocks.</p><p>About forty-odd years later, people start doing these <a href="https://en.wikipedia.org/wiki/Experimental_testing_of_time_dilation">muon experiments</a> where they see cosmic rays hit the top of the atmosphere. They produce a shower of <a href="https://en.wikipedia.org/wiki/Muon">muons</a>, and you can look to see at different heights in the atmosphere how many of those muons remain. They decay over time, and a very strange thing happens, which is that they&#8217;re decaying way too slow. You expect they shouldn&#8217;t be able to last the whole way through the atmosphere at all. Their decay rate is too quick, if you were in a classical theory. But if in fact their time really has slowed down, it&#8217;s okay.</p><p>In fact, the measured decay rates in 1940&#8212;and there have since been more accurate experiments done&#8212;match exactly what you expect from special relativity. That&#8217;s the kind of thing where if Lorentz had been alive&#8212;he&#8217;d been dead ten or so years at that point&#8212;it seems quite likely that he would have tried to save his theory by patching it up yet again, but it would have been a massive setback. It starts to just look like time&#8212;this thing that Lorentz introduced as a mathematical convenience&#8212;that&#8217;s actually what time is, for the muons at least. 
Then there&#8217;s a whole bunch of other experiments that show this very similar phenomenon.</p><p><strong>Dwarkesh Patel</strong></p><p>When was that experiment done?</p><p><strong>Michael Nielsen</strong></p><p>That was, I think, 1940. It might have been published in 1941.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe to rephrase and change my claim: it&#8217;s not that you could not have distinguished them, but the scientific community adopted what we in retrospect consider the more correct interpretation before it was actually experimentally shown to be preferred. So there&#8217;s clearly some process that human science does which can distinguish different theories.</p><p><strong>Michael Nielsen</strong></p><p>Can I just interrupt? You used the word process, and it&#8217;s interesting to think about that term. Process carries connotations of something set in advance. It&#8217;s much more complicated in practice. You have people like Lorentz, who Einstein absolutely and utterly admired, and Poincar&#233;, one of the greatest scientists who ever lived, and Michelson, another truly outstanding scientist, who never reconciled themselves to the new picture.</p><p>It&#8217;s not as though there&#8217;s some standard procedure that we&#8217;re all using to reconcile these things. Great scientists can remain wrong for a very long time after the scientific community has broadly changed its opinion. But there&#8217;s no centralized authority or centralized method.</p><p><strong>Dwarkesh Patel</strong></p><p>That is the interesting thing. There&#8217;s progress even though it is hard to articulate the process by which it happens, the heuristics that are used.</p><p>You mentioned Poincar&#233;. Lorentz has the math right, but the interpretation wrong. 
It seems like Poincar&#233; had the opposite, where he understood that it&#8217;s hard to define <a href="https://en.wikipedia.org/wiki/Relativity_of_simultaneity">simultaneity</a> because any definition is circular: you define simultaneity using the velocity of signals that should arrive at a midpoint together, but velocity is itself defined in terms of time. I find this interesting.</p><p>There are a couple of other examples we could call on. There is this phenomenon in the history of science where somebody asks the right question, but then they don&#8217;t clinch it. I&#8217;m curious what you think is happening in those cases.</p><p><strong>Michael Nielsen</strong></p><p>You actually do want to go case by case and try to understand. It&#8217;s not necessarily clear that they&#8217;re doing the same thing wrong in all of the cases. The Poincar&#233; case is amazing. He seems to have understood the <a href="https://en.wikipedia.org/wiki/Principle_of_relativity">principle of relativity</a>, the idea that the laws of physics are the same in all inertial reference frames. He seems to have understood that the speed of light is the same in all inertial reference frames. He doesn&#8217;t phrase it quite that way, but it is my understanding, though I don&#8217;t speak French.</p><p>These are basically the ideas that Einstein uses to deduce special relativity. But then he also has this additional misunderstanding where he thinks that length contraction is a dynamical effect, that somehow particles are being pushed together by some external force, something is going on dynamically. He doesn&#8217;t understand that it&#8217;s purely kinematics. That actually space and time are different from what we thought, and you need to fundamentally rethink those things.</p><p>It&#8217;s almost like he knew too much. He had almost too grand a vision in mind. Einstein subtracts from that and says, &#8220;No. 
Space and time are just different than what we thought, and here&#8217;s the correct picture.&#8221; <a href="https://philsci-archive.pitt.edu/22181/1/2014-shpmp-walter.pdf">There&#8217;s a paper in, I think it&#8217;s 1909, where Poincar&#233; still has this dynamical picture of what&#8217;s going on with the length contraction</a>. This is just not necessary. This is a mistake from the modern point of view.</p><p>Why is he doing this? Why is he clinging onto this idea? I don&#8217;t know. I&#8217;ve obviously never met the man. It would be fascinating to be able to talk it over and try and understand. His expertise seems to be getting in the way. He knows so much, he understands so much, and then he&#8217;s not able to let go of these things.</p><p>A really interesting fact is that a few years prior, in the 1890s, Einstein&#8217;s a teenager and he believes in the ether too. He knows about this stuff. But he&#8217;s not quite as attached as these older people were. Maybe they were a little bit prisoners of their own expertise. That&#8217;s my guess. Some historians of science would certainly disagree.</p><p><strong>Dwarkesh Patel</strong></p><p>Then there&#8217;s the obvious stories where Einstein himself later on is said to have not latched onto the correct interpretations of quantum mechanics or cosmology because of his own attachments.</p><p><strong>Michael Nielsen</strong></p><p>Yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>Here&#8217;s the bigger question I have. The muon example is a great example of these long verification loops and how progress seems to happen in the scientific community faster than these verification loops imply. Maybe the clearest example is <a href="https://en.wikipedia.org/wiki/Aristarchus_of_Samos">Aristarchus</a>, who in the third century BC comes up with the idea of heliocentrism. 
The ancient Athenians dismiss it on the grounds that if the Sun really is the center of the solar system, then as the Earth moves around the Sun, we should see the stars shift relative to the Earth. The only reason that would not be the case is if the stars are so far away that you could not observe the shift.</p><p>And it&#8217;s only in 1838 that <a href="https://en.wikipedia.org/wiki/Stellar_parallax">stellar parallax</a> was actually measured. And so, we didn&#8217;t need to wait until 1838 to have heliocentrism. We didn&#8217;t need to wait for the experimental validation to understand that Copernicus is better in some way. In fact, when Copernicus first came up with his theories, it&#8217;s well known that the <a href="https://en.wikipedia.org/wiki/Geocentrism">Ptolemaic model</a> was more accurate because it had centuries of adding on these <a href="https://en.wikipedia.org/wiki/Deferent_and_epicycle">epicycles</a>.</p><p>What&#8217;s maybe less well appreciated is that the Ptolemaic model was also in some sense simpler, because Copernicus actually had to add extra epicycles. His model had more epicycles than the Ptolemaic one because he had this bias that the Earth should go in a perfect circle at uniform speed. Anyway, I think this is an interesting story because it&#8217;s not a more accurate theory. It&#8217;s not a simpler theory. So how could you have known ex ante that Copernicus was correct and Ptolemy was not?</p><p><strong>Michael Nielsen</strong></p><p>Good question. I don&#8217;t entirely know the answer. I can give you a partial answer that I, centuries in the future, start to find very compelling. I&#8217;m sure it&#8217;s part of the historic story at least. One of the big shocks: Newton did eventually derive <a href="https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion">Kepler&#8217;s laws of planetary motion</a>, so you&#8217;re able to explain the motions of the planets in the sky. 
But he also, out of the same theory, his <a href="https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation">theory of gravitation</a>, was able to explain terrestrial motion. He&#8217;s able to explain why objects move in parabolas on the Earth, and he&#8217;s able to explain the tides in terms of the moon and the sun&#8217;s gravitational effect on water on the Earth.</p><p>You have what seem like three very different disconnected phenomena all being explained by this one set of ideas. That starts to feel very compelling, at least to me. I think most people find that very satisfying once they eventually realize it.</p><h3>00:17:51 &#8211; Newton was the last of the magicians</h3><p><strong>Dwarkesh Patel</strong></p><p>Have you read the <a href="https://en.wikipedia.org/wiki/John_Maynard_Keynes">Keynes</a> biography of Newton?</p><p><strong>Michael Nielsen</strong></p><p>He wrote an entire biography?</p><p><strong>Dwarkesh Patel</strong></p><p>No, the <a href="https://mathshistory.st-andrews.ac.uk/Extras/Keynes_Newton/">essay</a>.</p><p><strong>Michael Nielsen</strong></p><p>Sure. I love that. This description of him as the last of the magicians is wonderful.</p><p><strong>Dwarkesh Patel</strong></p><p>In fact, I think it&#8217;s maybe worth superimposing. Or you should read out that one passage of the thing.</p><p><strong>Michael Nielsen</strong></p><p>Alright. It&#8217;s from a talk that he gave at Cambridge not long before he died. He&#8217;d acquired Newton&#8217;s papers somehow and gave a lecture twice about this, or his brother Jeffrey gave it the other time because he was too ill. There&#8217;s this wonderful, wonderful quote in the middle. The whole thing is really interesting, but I love this particular quote: &#8220;Newton was not the first of the age of reason. 
He was the last of the magicians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than ten thousand years ago.&#8221;</p><p>This idea people have that Newton was the first modern scientist is somehow wrong. There&#8217;s some truth to it, but he really had this very different way of looking at the world that was part superstitious and part modern. It was a funny hybrid. He&#8217;s a transitional figure in some sense. That phrase, &#8220;the last of the magicians,&#8221; really points at something.</p><p><strong>Dwarkesh Patel</strong></p><p>The thing I&#8217;m very curious about with Newton is whether it was the same program, the same heuristics, the same biases that he applied to his alchemical work as he did to his understanding of astronomy. This is from the Keynes essay: &#8220;There was extreme method in his madness. All his unpublished works on esoteric and theological matters are marked by careful learning, accurate method, and extreme sobriety of statement. They are just as sane as the <em>Principia</em> if their whole matter and purpose were not magical. They were nearly all composed during the same 25 years of his mathematical studies.&#8221;</p><p>Clearly, there was some aesthetic that motivated people like Einstein to reject earlier ways of thinking and say, &#8220;No, the other is wrong, and there&#8217;s a better way to think about things.&#8221; The same is true with Newton. The question I have is whether similar heuristics toward parsimony, aesthetics, and so on, would be equally useful across time and across disciplines, or whether you need different heuristics. The reason that&#8217;s relevant is even if we can&#8217;t build a verification loop for science, maybe if the taste tests point in the same direction, you can at least encode that bias into the AIs. 
That would maybe be enough.</p><p><strong>Michael Nielsen</strong></p><p>The point is that where we always get bottlenecked is where the previous processes and heuristics don&#8217;t apply. That&#8217;s almost definitionally what causes the bottlenecks. Because people are smart, they know what has worked before. They study it. They apply the same kinds of things, so they don&#8217;t get stuck in the same places as before. They keep getting bottlenecked in different places. I&#8217;m overgeneralizing a bit, but I think it&#8217;s right.</p><p>If you&#8217;re attempting to reduce science to a process, you&#8217;re attempting to reduce it to something where there is just a method which you can apply, and you turn the crank and out pops insight. You can do a certain amount of that, but you&#8217;re going to get bottlenecked at the places where your existing method doesn&#8217;t apply. Definitionally, there&#8217;s no crank you can turn. You need a lot of people trying different ideas. The more difficult the idea is to have, the greater the bottleneck, but then also the greater the triumph.</p><p><a href="https://en.wikipedia.org/wiki/Quantum_mechanics">Quantum mechanics</a> is a great example of this. It&#8217;s such a shocking set of ideas. It&#8217;s such a shocking theory. The theory of evolution in some sense is also quite a shocking idea, not the principle of natural selection, but that it can explain so much. That&#8217;s a shocking idea.</p><h3>00:23:26 &#8211; Why wasn&#8217;t natural selection obvious much earlier?</h3><p><strong>Dwarkesh Patel</strong></p><p><em><a href="https://en.wikipedia.org/wiki/Philosophi%C3%A6_Naturalis_Principia_Mathematica">Principia Mathematica</a></em> is released in 1687. <em>The Origin of Species</em> is released in 1859. 
At least naively, it seems like Darwin&#8217;s theory of natural selection is conceptually easier than the theory of gravity.</p><p><a href="https://www.dwarkesh.com/p/terence-tao">I asked Terence Tao this question</a>. There was this contemporaneous biologist with Darwin, <a href="https://en.wikipedia.org/wiki/Thomas_Henry_Huxley">Thomas Huxley</a>, who read this and said, &#8220;How extremely stupid to not have thought of this.&#8221; Nobody ever reads the Principia Mathematica and thinks, &#8220;God, why didn&#8217;t I beat Newton to the punch here?&#8221; So what&#8217;s going on here? Why did Darwinism take so much longer?</p><p><strong>Michael Nielsen</strong></p><p>The idea must have been known to animal breeders for a long time at some level, or certainly large chunks of the idea were known, that artificial selection was a thing. In some sense, <a href="https://en.wikipedia.org/wiki/Charles_Darwin">Darwin&#8217;s</a> genius wasn&#8217;t in having that idea, it was understanding just how central it was to biology. You can go back and explain a tremendous amount about all the variety of what we see in the world with this as not necessarily the only principle, but certainly a core principle. He writes this wonderful book, <em><a href="https://en.wikipedia.org/wiki/On_the_Origin_of_Species">The Origin of Species</a></em>. It&#8217;s just so much evidence and so many examples, trying to tease this out and see what the implications are, and connecting it to as much else as he possibly can, to geology and all these other things.</p><p>That hard work&#8212;making the case that it&#8217;s actually relevant all across the biosphere&#8212;is what he&#8217;s doing there. 
He&#8217;s not just having the idea, he&#8217;s making a compelling case that it&#8217;s intertwined with absolutely everything else.</p><p><strong>Dwarkesh Patel</strong></p><p>The motivation for the question was <a href="https://en.wikipedia.org/wiki/Lucretius">Lucretius</a>, this first-century BC Roman poet who has an idea that seems analogous to natural selection. It&#8217;s about species getting fitted more over time to their environments, or species losing fit to their environment. And so, why did this go nowhere for nineteen centuries?</p><p>Then I looked into it or, more accurately, asked LLMs what exactly Lucretius&#8217;s idea here was. It is extremely different from what real natural selection is. He thought there was this generative period in the past where all the species came about, and then there was this one-time filter which resulted in the species that are around today, and they became fit to the environment.</p><p>He did not have this idea that it is an ongoing gradual process or that there is a tree of life that connects all life forms on Earth together, which, by the way, is an incredibly weird fact that <a href="https://en.wikipedia.org/wiki/Last_universal_common_ancestor">every single life form on Earth has a common ancestor</a>.</p><p><strong>Michael Nielsen</strong></p><p>It&#8217;s not incredibly weird. If you think that the origin of life must have been very hard, that there&#8217;s a bottleneck there, then it&#8217;s not so surprising.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s also this verification loop aspect where even if Newton might be harder in some sense, if you&#8217;ve clinched it, you can experimentally&#8230; I know &#8220;validate&#8221; is the wrong word philosophically, but you can give a lot of base points to the theory.</p><p>You can be like, &#8220;Okay, I have this idea of why things fall on Earth. I have this idea of why orbital periods for planets have a certain pattern. 
Let&#8217;s try it on the Moon, which orbits the Earth.&#8221; And in fact, it&#8217;s weird but the orbital period matches what my calculations imply.</p><p><strong>Michael Nielsen</strong></p><p>And the tides work correctly. It&#8217;s just amazing.</p><p><strong>Dwarkesh Patel</strong></p><p>Exactly. Whereas for Darwinism, it takes a ton of work for Darwin to compile all the cumulative evidence, but there&#8217;s no individual piece that is overwhelmingly powerful.</p><p><strong>Michael Nielsen</strong></p><p>And there&#8217;s a whole bunch of problems as well. He doesn&#8217;t really understand what the mechanism is. He doesn&#8217;t understand genes, all these things.</p><p><strong>Dwarkesh Patel</strong></p><p>The very interesting thing in the history of Darwinism is, this idea which theoretically you could come up with at any time, there is almost identical independent creation of that idea between <a href="https://en.wikipedia.org/wiki/Alfred_Russel_Wallace">Alfred Wallace</a> and Charles Darwin. So much so that I think Wallace sends his manuscript to Darwin and is like, &#8220;What do you think of this idea?&#8221; And Darwin&#8217;s like, &#8220;Fuck.&#8221;</p><p><strong>Michael Nielsen</strong></p><p>I don&#8217;t think that&#8217;s an exact quote, but it&#8217;s pretty much correct.</p><p><strong>Dwarkesh Patel</strong></p><p>They end up presenting their ideas together in the spirit of sportsmanship. Why was this period in the 1850s or 1860s the right time for these ideas to form? You can come up with different ideas. One is geology. In the 1830s, <a href="https://en.wikipedia.org/wiki/Charles_Lyell">Charles Lyell</a> figures out that there&#8217;s been millions and billions of years of time that&#8217;s existed on Earth. The paleontology shows you that fossils have existed for that entire time. Life goes back a long way. In fact, you can even find fossils for intermediate species that show you the tree of life. 
Between humans and other apes as well, there&#8217;s intermediate humans.</p><p>There&#8217;s also the age of colonization, and we have all these voyages doing <a href="https://en.wikipedia.org/wiki/Biogeography">biogeography</a>. That all must have been necessary. In fact, there&#8217;s a huge history of parallel innovation and discovery in the history of science. So maybe it is another piece of evidence that more had to be in place for a given idea to be discovered. Because if it&#8217;s not discovered for a long time and then spontaneously many different people are coming up with it, that shows you that the building blocks were in some sense necessary.</p><p><strong>Michael Nielsen</strong></p><p>This example of Lyell and other geologists in the early 1800s having this idea of <a href="https://en.wikipedia.org/wiki/Deep_time">deep time</a> does seem to have been crucial. I know Darwin was very influenced by Lyell. If you don&#8217;t have at least tens or hundreds of millions of years, evolution starts to look like a non-starter.</p><p>In order to make it work on a timescale of 5,000 to 10,000 years or <a href="https://en.wikipedia.org/wiki/Ussher_chronology">6,000 years with Bishop Ussher</a> you would need to see evolution occurring at a massive rate during human lifetimes, and we&#8217;re just not seeing that. That does seem to have been a blocker. To your question of what other blockers were there, were there any others? I don&#8217;t know.</p><p><strong>Dwarkesh Patel</strong></p><p>Or how much earlier could you, in principle, have come up with it if you were much smarter?</p><h3>00:29:52 &#8211; Could gradient descent have discovered general relativity?</h3><p><strong>Michael Nielsen</strong></p><p>Let&#8217;s go back and zoom out to your original question about the verification loop in AI. An example that should give you pause there is the big signature success so far, which is certainly <a href="https://en.wikipedia.org/wiki/AlphaFold">AlphaFold</a>. 
AlphaFold really isn&#8217;t about AI. A massive fraction of the success there is the <a href="https://en.wikipedia.org/wiki/Protein_Data_Bank">Protein Data Bank</a>. It&#8217;s <a href="https://en.wikipedia.org/wiki/X-ray_diffraction">X-ray diffraction</a>, <a href="https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance">NMR</a>, <a href="https://en.wikipedia.org/wiki/Cryogenic_electron_microscopy">cryo-EM</a>, and the several billion dollars that were spent obtaining those 180,000-odd protein structures.</p><p>It&#8217;s basically the story of how we spent many decades obtaining <a href="https://en.wikipedia.org/wiki/Protein_structure">protein structure</a> just by going out and looking very hard at the world experimentally, and then we fitted a nice model at the end of it, which was a tiny fraction of the entire investment. That&#8217;s a story of data acquisition principally. The AI bit is very impressive and quite remarkable, but it is only a small part of the total story.</p><p><strong>Dwarkesh Patel</strong></p><p>AlphaFold is very interesting, and philosophically I wonder what you think of it as a scientific theory or explanation. I guess over time the world is becoming harder to understand&#8230; As I&#8217;m saying things, because you&#8217;re such a careful speaker, I say a phrase and wonder if you&#8217;ll actually buy that premise.</p><p>But in some domains, we need to fit models to things rather than coming up with underlying principles that explain a broad range of phenomena. Compare the theory of general relativity, or any theory which just nets out to some equations, versus AlphaFold, which is encoding these different relationships between things we can&#8217;t even interpret over 100 million parameters.</p><p>Are those really the same thing? GR can predict things you could have never anticipated or it was never meant to do, <a href="https://en.wikipedia.org/wiki/Tests_of_general_relativity">like why Mercury&#8217;s orbit precesses</a>. 
AlphaFold is not going to have that kind of explanatory reach. I want to get your reaction to that.</p><p><strong>Michael Nielsen</strong></p><p>I think it&#8217;s an incredibly interesting question. Maybe a really pivotal question. If you take a very classic point of view, you want these deep explanatory principles. You want as few free parameters as you possibly can. You want very simple models which explain a lot, and AlphaFold doesn&#8217;t look anything like that. You might just say, &#8220;It&#8217;s nice and maybe helpful as a model, but it&#8217;s not a scientific explanation.&#8221; That&#8217;s a conservative point of view, answer one to the question.</p><p>Answer two is to say maybe you shouldn&#8217;t think about AlphaFold as an explanation in the classic sense, but maybe it contains lots of little explanations inside it. Part of what you can get out of <a href="https://www.anthropic.com/research/team/interpretability">interpretability</a> work is you can go into AlphaFold and start to extract certain things. Maybe by doing an archeology of AlphaFold, we can actually understand a great deal more about these principles. You can start to extract that a certain circuit does this interesting thing, and we learn from it.</p><p>I don&#8217;t know to what extent that&#8217;s been done with AlphaFold, but it&#8217;s been done a little bit with some of the chess models, like <a href="https://en.wikipedia.org/wiki/AlphaZero">AlphaZero</a>. There seem to be some strategies which were borrowed by <a href="https://en.wikipedia.org/wiki/Magnus_Carlsen">Magnus Carlsen</a>, which he seems to have just taken from AlphaZero. 
I don&#8217;t think there&#8217;s any public confirmation of this, but some experts have noticed that he changed <a href="https://x.com/olimpiuurcan/status/1139437778683322369?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1139437778683322369%7Ctwgr%5E207ad5ac534f093e50621a391647199246f3dfb4%7Ctwcon%5Es1_&amp;ref_url=https%3A%2F%2Fwww.quora.com%2FHow-did-AlphaZero-inspire-Magnus-Carlsen-to-play-chess-better">his game quite radically after some public forensics were released on how AlphaZero worked</a>. That&#8217;s an example where human beings are starting to extract meaning out of these models.</p><p>That leads to viewing the models as a potential source of explanations. You need to do more work because they&#8217;re not very legible up front, but you can potentially extract them. That&#8217;s an interesting intermediate situation where they&#8217;re not explanations themselves, but you can extract interesting explanations out of them and use them as a source.</p><p>The third and most interesting possibility is that they&#8217;re a new type of object. They should be taken very seriously as explanations, but where in the past we haven&#8217;t had the ability to really do anything with them, now we have interesting new actions we can do. We can merge them, we can distill them. It&#8217;s a big opportunity in the philosophy of science.</p><p>There&#8217;s an anticipation of this in some way. Some mathematicians and physicists work today&#8230; Historically, if you had a 100-page equation&#8212;which is the kind of thing that does come up&#8212;there&#8217;s just nothing you can do if it&#8217;s 1920. At that point, you give up on the problem. But today, with tools like <a href="https://en.wikipedia.org/wiki/Wolfram_Mathematica">Mathematica</a>, you can just keep going. That&#8217;s an object now, a thing that you can work with. 
There are examples where people work with these things that formerly were regarded as too complicated, and sometimes they get simple answers out the end. That&#8217;s just an intermediate working state.</p><p>So I wonder if something similar is going to happen in this case, where you could take these models and use them in a similar way that people do with Mathematica, and take them seriously. They&#8217;re not explanations in the classic sense, but they&#8217;ll be something else which interesting operations can be done on.</p><p><strong>Dwarkesh Patel</strong></p><p>The thing I worry about is, suppose it&#8217;s 1500 and you&#8217;re training a model on&#8230;  This is a weird history where we developed <a href="https://en.wikipedia.org/wiki/Deep_learning">deep learning</a> before we had cosmology. Suppose we live in that world. You&#8217;re observing how the stars don&#8217;t seem to move. The planets have all these weird behaviors. Then you train a model on that, and you do some kind of interp on it trying to figure out what the patterns are.</p><p>You&#8217;d just be able to keep building on Ptolemy&#8217;s model. You&#8217;d see there&#8217;s another epicycle we didn&#8217;t notice. Parameters X to Y encode this epicycle, parameters whatever encode the next epicycle. If you were just trying to figure out why the solar system is the way it is from observational data, you could just keep adding epicycles upon epicycles, but it really took one mind to integrate it all in and say, &#8220;Here&#8217;s what makes more sense overall.&#8221;</p><p><strong>Michael Nielsen</strong></p><p>This is to my point that we don&#8217;t really understand what to do with the models. We don&#8217;t have the verbs yet. It is certainly interesting to think about the question where you start to apply constraints to the models, essentially saying, &#8220;What&#8217;s the simplest possible explanation?&#8221; Or, &#8220;Can you simplify? 
Can you give me the 90/10 explanation?&#8221; And go further and further in boiling it down.</p><p>It might be that indeed they start out by providing a very, very complicated, many-parameter model. But you can just force the case, and basically that&#8217;s scaffolding, which maybe is the very early days of their attempt to understand something. They&#8217;re forced through that to a much more simple understanding.</p><p><strong>Dwarkesh Patel</strong></p><p>Sorry for misunderstanding, but it sounds like you&#8217;re saying maybe there&#8217;s some <a href="https://en.wikipedia.org/wiki/Regularization_(mathematics)">regularizer</a> or some distillation you could do of a very complicated model that gets you to a truer, more parsimonious theory. Take Ptolemy versus Copernicus. You start off with lots of Ptolemy epicycles, and then you try to distill this model, and maybe it gets rid of some of the epicycles that are less and less necessary to get the mean squared error of the orbits to match.</p><p>But at some point it has to make a discrete swap and put the Sun at the center instead of the Earth. Locally, that swap actually doesn&#8217;t make things more accurate. It&#8217;s in a global sense that it&#8217;s a more progressive theory. There&#8217;s some process which obviously humanity did over its span, which did that regularization or did that swap. But with raw <a href="https://en.wikipedia.org/wiki/Gradient_descent">gradient descent</a>, I don&#8217;t really feel like it would do that.</p><p><strong>Michael Nielsen</strong></p><p>Think about the example of going from Newtonian gravity to Einstein&#8217;s general theory of relativity. These are shockingly different theories, and the question is what causes that flip. As nearly as I understand the history, what goes on is Einstein develops special relativity and pretty much straight away he understands. It&#8217;s a very obvious observation. 
In special relativity, influences can&#8217;t propagate faster than the speed of light, and in Newtonian gravity, <a href="https://en.wikipedia.org/wiki/Action_at_a_distance">action is at a distance</a>.</p><p>Straight away in special relativity, you could use Newtonian gravity to do faster-than-light signaling. You could send information backwards in time. You could do all kinds of crazy stuff. It&#8217;s not a big leap to realize we have a big problem here. That&#8217;s the forcing function there. You&#8217;ve realized that your old explanation is not sufficient. You need something new.</p><p>Then you&#8217;re going to start by doing the simplest possible stuff. It just turns out that a lot of that stuff doesn&#8217;t work very well, so you&#8217;re forced to go through these steps where gradually it gets more complicated, and it&#8217;s wrong in a variety of ways. The final theory appears shockingly simple and beautiful, but it&#8217;s gone through some somewhat ugly intermediate stages.</p><p><strong>Dwarkesh Patel</strong></p><p>If you&#8217;re thinking about what it looks like to have AI accelerate science, there&#8217;s one for well-understood domains where we just want local solutions, like how does this protein fold. We just train a raw model using gradient descent. Then there&#8217;s things like coming up with general relativity, where you couldn&#8217;t really just train on every single observation in the universe and hope that general relativity pops out.</p><p>What would it require? It also certainly wasn&#8217;t immediately discovered. It was decades of thought. You&#8217;d need independent research programs where people start off with these biases, where Einstein is initially motivated by this <a href="https://en.wikipedia.org/wiki/Einstein%27s_thought_experiments#Falling_painters_and_accelerating_elevators">thought experiment of whether you can distinguish the effect of gravity from just being accelerated upwards</a>. 
You just need different AI thinkers to start off with these initial biases and see what can germinate out of them. The verification loop for that might be quite long, but you just need to keep all those research programs alive at the same time.</p><p><strong>Michael Nielsen</strong></p><p>This point you make about keeping all the different research programs alive, I think that is very important and central. A great example is situations where the same answer has been correct in some circumstances and wrong in other circumstances.</p><p>The planet Uranus was not in quite the right spot, and <a href="https://en.wikipedia.org/wiki/Discovery_of_Neptune">people famously predicted the existence of Neptune on this basis</a>. Wonderful, massive success for Newtonian gravity. The planet Mercury is not in quite the right spot. You predict the existence of some other distorting planet. It turns out that doesn&#8217;t exist. Actually, <a href="https://en.wikipedia.org/wiki/Tests_of_general_relativity">the reason Mercury is not in the right spot is because you need general relativity</a>.</p><p>You&#8217;ve pursued very similar ideas, and it&#8217;s been very successful in one case, and it&#8217;s been completely and utterly unsuccessful in the other case. A priori, you can&#8217;t tell which of these is the thing to do, and you actually need to do both. This is certainly very true in the history of science.</p><p>This kind of diversity, where you just have lots of people go off and pursue lots of potentially promising ideas, you just need to support that for a long time. It&#8217;s hard to do that for a variety of reasons, but it does seem to be very, very important.</p><p><strong>Dwarkesh Patel</strong></p><p>This example of Uranus versus Mercury is very interesting. I think it illustrates the difficulty with falsificationism. The orbit of Uranus is in some sense falsifying Newtonian mechanics. 
But then you make some ancillary prediction that says, &#8220;Oh, the reason this is happening is there must be another planet which is perturbing Uranus&#8217;s orbit.&#8221; I think it&#8217;s <a href="https://en.wikipedia.org/wiki/Urbain_Le_Verrier">Le Verrier</a> in 1846. &#8220;Point a telescope in the right direction, you find Uranus.&#8221;</p><p><strong>Michael Nielsen</strong></p><p>Neptune.</p><p><strong>Dwarkesh Patel</strong></p><p>Sorry. Neptune, yes. But with Mercury, it&#8217;s observed that the ellipse which forms its orbit is rotating 43 arcseconds more every century than Newtonian mechanics would imply, so people say that there must be a planet inside Mercury&#8217;s orbit. They call it <a href="https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)">Vulcan</a> and point the telescopes. It&#8217;s not there.</p><p>But if you&#8217;re a proper Newtonian, what you do is say, &#8220;Well, maybe there&#8217;s some cosmic dust that&#8217;s occluding this planet, or maybe the planet is so small we can&#8217;t see it, or let&#8217;s build an even more powerful telescope, or maybe there&#8217;s some magnetic field which is occluding our measurement.&#8221; At any one of these steps&#8212;</p><p><strong>Michael Nielsen</strong></p><p>And this happens over and over. There are just so many stories which are exactly like this. An example I love from the 1990s. Some <a href="https://en.wikipedia.org/wiki/Pioneer_anomaly">people noticed that the Pioneer spacecraft weren&#8217;t quite where they were supposed to be</a>.</p><p>You can get very excited about this. &#8220;Oh my goodness, general relativity is wrong. Maybe we&#8217;re going to discover the next theory of gravity.&#8221; Today the accepted explanation is that there&#8217;s just a slight asymmetry in the spacecraft. It turns out that the thermal radiation is slightly larger in one direction than the other, and that&#8217;s causing a tiny little acceleration towards the sun. 
Most of the time when there are these apparent exceptions, it&#8217;s just something like that going on.</p><p>It&#8217;s very much like the Mercury-Vulcan case. But every once in a while, it&#8217;s not. A priori, you can&#8217;t distinguish these. Science is just full of these. It&#8217;s funny too, the way we tell the history of science, it sounds so simple. You just focus on the right exception and you realize that you need to throw out the old theory and lo and behold, your Nobel Prize awaits. But in fact, these exceptions are all over the place. 99.9% of the time, it just turns out to be some effect like this thermal acceleration in the case of the Pioneer spacecraft. Unfortunately, there&#8217;s a lot of selection bias going into those stories.</p><p><strong>Dwarkesh Patel</strong></p><p>The thing is there&#8217;s no ex ante heuristic which tells you which case you&#8217;re in. To spell out why I think this is important, some people have this idea that AI is going to make disproportionate progress towards science because it makes disproportionate progress towards domains where there are tight verification loops. It&#8217;s really good at coding because you can run unit tests.</p><p>Science may be similar because you can run experiments. What that doesn&#8217;t appreciate is that there&#8217;s an infinite number of theories that are compatible with any given experiment. Over time, why we latch onto the one we think is more correct in retrospect is, as we&#8217;re discussing, hard to articulate.</p><p>Lakatos has all kinds of interesting examples in the book about these hostile verification loops that are extremely long-lasting. One he talks about is <a href="https://en.wikipedia.org/wiki/William_Prout">Prout</a>. There&#8217;s this chemist in 1815 who hypothesizes that all atomic nuclei must have whole number weights. They&#8217;re basically all made of hydrogen. 
The reason he thinks this is because if you look at the measured weights of all elements, it does seem that almost all of them have whole number weights. But then there are some exceptions. For example, chlorine comes out at 35.5.</p><p>So then there&#8217;s all these ad hoc theories that people in this school keep coming up with, like, &#8220;Oh, maybe there&#8217;s chemical impurities.&#8221; But there&#8217;s no chemical reaction you can do which seems to get rid of this. Maybe it&#8217;s fractions of whole numbers, so 35.5 can be halves. But actually, if you measure chlorine even closer, it&#8217;s 35.46, so it&#8217;s getting further away from the correct fraction. Later on, what is discovered is what you&#8217;re actually measuring is different isotopes, which cannot be chemically distinguished. They can only be physically distinguished.</p><p>So you have 85 years before we realize what an isotope is, where the verification loop is actively hostile against the correct theory. You just need this remnant to be defending&#8230; There&#8217;s no ex ante reason it&#8217;s the preferred theory. As a community, we should just have people try to integrate new observations, even if they don&#8217;t seem to fit their school of thought, and hopefully enough of that happens&#8230; Anyways, I guess the thing I&#8217;m trying to articulate is the difficulty with automating science.</p><p><strong>Michael Nielsen</strong></p><p>The question is, where is the bottleneck at some level? Are we primarily bottlenecked on one type of thing, or are we bottlenecked on multiple types of things? Certainly, talking to structural biology people, they seem to think that AlphaFold was an enormous advance. It was a shock.</p><p>At some level, yes, AI can certainly help us speed up science. It is helping with a certain type of bottleneck. That doesn&#8217;t mean though, as you&#8217;re saying, that it&#8217;s necessarily going to help with all kinds of bottlenecks. 
I suppose the question you&#8217;re pointing at is, what are the types of bottlenecks that remain, and what are the prospects for getting past them?</p><p>Even in the case of coding, it&#8217;s really interesting talking to programmer friends. At the moment they&#8217;re all in this state of shock and high excitement, and they&#8217;re all over the place. You do wonder where the bottleneck is going to move to. Certainly, one thing that a lot of them seem to be bottlenecked on now is having interesting ideas, and in particular, having interesting design ideas. There&#8217;s not really a verification loop for knowing that a design idea is very interesting.</p><p>They&#8217;re no longer nearly as bottlenecked by their ability to produce code, but they are still bottlenecked by this other thing. Formerly, they weren&#8217;t bottlenecked on it because just writing code took so much of their time. They could have lots of ideas while they were taking three weeks to implement their prototype, and then they would implement the next version. Now they&#8217;re taking three hours to implement the prototype, and they don&#8217;t have as good ideas after that, from a design point of view.</p><h3>00:50:54 &#8211; Why aliens will have a different tech stack than us</h3><p><strong>Dwarkesh Patel</strong></p><p>You have a very interesting take. I think it was a footnote in <a href="https://michaelnotebook.com/dci/index.html">one of your essays</a>, and I couldn&#8217;t find it again, which was that it&#8217;s very possible that if we met aliens, they would have a totally different technological stack than us. That contradicts a common assumption I had that I never questioned, which is that science is this thing you do relatively early on in the history of civilization. You get to a point and you have a couple hundred years of just cranking through the basics, understanding how the universe works, and you&#8217;ve got it. You&#8217;ve got science. 
Then everybody would converge on the same &#8220;science.&#8221; I found that a very interesting idea, and I want you to say more about it.</p><p><strong>Michael Nielsen</strong></p><p>The idea there that I&#8217;m at least somewhat attached to is that the tech tree or the science and tech tree is probably much larger than we realize. We&#8217;re in this funny situation. People will sometimes talk about a theory of everything as a potential goal for physics, and then there&#8217;s this presumption that physics is done once you get there. Of course, this is not true at all.</p><p>If you think about computer science, computer science started in the 1930s when <a href="https://en.wikipedia.org/wiki/Alan_Turing">Turing</a> and <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis">Church</a> and so on <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis">laid down what the theory of everything was</a>. They just said, &#8220;Here&#8217;s how computation works.&#8221; We&#8217;ve spent ninety-odd years since then exploring the consequences of that and gradually building up more and more interesting ideas. Those ideas, to some extent, you can regard as technology. But insofar as they&#8217;re discovered principles inside that theory of computation, I think they&#8217;re best regarded as science and in some cases, very fundamental science.</p><p>Ideas like <a href="https://en.wikipedia.org/wiki/Public-key_cryptography">public-key cryptography</a> are incredibly deep, very non-obvious ideas which lay hidden already in the 1930s. My expectation is that there will be different ways of exploring this tech tree, and we&#8217;re still relatively low down. We&#8217;re still at the point where we&#8217;re just understanding these basic fundamental theories, and we haven&#8217;t yet explored them.</p><p>A thing which I think is quite fun is if you look at the phases of matter. 
When I was in school, we&#8217;d get taught that there are three phases of matter, or sometimes four or five, depending on what you included. As an adult, as a physicist, you start to realize we&#8217;ve been adding to this list. We&#8217;ve got <a href="https://en.wikipedia.org/wiki/Superconductivity">superconductors</a> and <a href="https://en.wikipedia.org/wiki/Superfluidity">superfluids</a>, and maybe different types of superconductors, and <a href="https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate">Bose-Einstein condensates</a>, the <a href="https://en.wikipedia.org/wiki/Quantum_Hall_effect">quantum Hall systems</a>, <a href="https://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect">fractional quantum Hall systems</a>, and so on. It&#8217;s starting to turn out there&#8217;s a lot of phases of matter to discover, and we&#8217;re going to discover a lot more of them. In fact, we&#8217;re going to be able to start to design them in some sense. We&#8217;ll still be subject to the laws of physics, but there is this tremendous freedom in there.</p><p>This looks to me like we&#8217;re down at the bottom of the tech tree. We&#8217;ve barely gotten started there, and I expect that to be the case broadly. Certainly, programming is a very natural place to look. The idea that we&#8217;ve discovered all the deep ideas in programming just seems obviously ludicrous. We keep discovering what seem like deep, new, fundamental ideas. We&#8217;re very limited. We&#8217;re basically slightly jumped-up chimpanzees, so we&#8217;re slow and it&#8217;s taking us time. But what do we look like another million years in the future, in terms of all the different ideas people have had around how to manipulate computers and information? 
I think we&#8217;re likely to discover that there are a lot of very deep ideas still to be discovered.</p><p>I think it was <a href="https://en.wikipedia.org/wiki/Donald_Knuth">Knuth</a> in the preface to <em><a href="https://amzn.to/4vagtVj">The Art of Computer Programming</a></em> who says something like this. He started the book back in the sixties. He talked to a mathematician who was a bit contemptuous and said, &#8220;Look, computer science isn&#8217;t really a thing yet. Come back to me when there&#8217;s a thousand deep theorems.&#8221; Knuth remarks, writing the preface decades later, &#8220;There clearly are a thousand deep theorems now.&#8221;</p><p>It&#8217;s really interesting to think about what the long-term future is as you get higher and higher up in the tech tree, and about the choices of which direction we go and how we choose to explore. It&#8217;s potentially the case that different civilizations or different choices mean we end up in different parts of that tree. In particular, there are just very basic things about how we&#8217;re very visual creatures, while certain other animals are much more aurally based. Does that bias the types of thoughts that you have? Then you extend it to much more exotic kinds of civilizations where maybe their biases in terms of how they perceive and manipulate the world are quite different than ours. That might make some significant changes in terms of how they do that exploration of the tech tree. It&#8217;s all speculation, obviously.</p><p><strong>Dwarkesh Patel</strong></p><p>This is such an interesting take. I want to better understand it. One way to understand it is that there might be some things which are so fundamental and have such a wide collision area against reality that any civilization is inevitably going to discover them, like general relativity.</p><p><strong>Michael Nielsen</strong></p><p>Numbers. Numbers. Of all the intelligences in the Milky Way galaxy&#8230; Maybe that number is one. 
Well, actually, arguably we&#8217;ve already increased the number. But of all of those, what fraction have the concept of counting? It does seem very natural. What fraction have discovered the idea of some kind of decimal place system? Interesting question. Maybe we&#8217;re missing something really simple and obvious that&#8217;s actually way better than that.</p><p>What fraction got there immediately? What fraction had to go through some other intermediate state? What fraction uses linear representations versus a two-dimensional or a three-dimensional representation? I think the answers to these questions are just not at all obvious. It&#8217;s a lot of design freedom.</p><p><strong>Dwarkesh Patel</strong></p><p>On theoretical computer science, this is going to be extremely naive and arrogant, but I took <a href="https://ocw.mit.edu/courses/6-845-quantum-complexity-theory-fall-2010/">Scott Aaronson&#8217;s class on complexity theory</a>, and I was by far the worst student he&#8217;s ever had. What I remember is there was this period, in which you were one of the pioneers, where we figured out the class of problems that quantum computers can solve and how it relates to problems that classical computers can solve. It was groundbreaking. It&#8217;s crazy that this works. Since then&#8230; There&#8217;s literally this website called <a href="http://www.complexityzoo.com/">Complexity Zoo</a> which lists out all the <a href="https://en.wikipedia.org/wiki/Complexity_class">complexity classes</a>. If you have this complexity class with this kind of oracle, it&#8217;s equivalent to this other class. It feels like we&#8217;re building out that taxonomy.</p><p>There are a couple ways to understand what you&#8217;re saying. One, maybe you disagree with me that this is actually what&#8217;s happened with this field. 
Another is that while that might happen to any one field, who in 1880, other than <a href="https://en.wikipedia.org/wiki/Charles_Babbage">Babbage</a>, would&#8217;ve thought that computer science was going to be a thing in the first place? We&#8217;re underestimating how many more fields there could be. Or maybe you think both, or maybe a third secret thing. I&#8217;d be curious.</p><p><strong>Michael Nielsen</strong></p><p>A very common argument here is the low-hanging fruit argument, the argument that says there should be diminishing returns.</p><p><strong>Dwarkesh Patel</strong></p><p>In fact, empirically we see this. The number of scientists in the world has increased exponentially.</p><p><strong>Michael Nielsen</strong></p><p>I think it&#8217;s worth thinking about why you expect diminishing returns and how well that argument actually applies in practice. An analogy I like is thinking about going to an event, like a wedding, and you go to the dessert buffet. They&#8217;ve put out thirty desserts. Naturally, what people do is the best desserts go first. We don&#8217;t quite have a well-ordered preference there, so maybe there&#8217;s some difference, but human beings are fairly similar, so the best desserts will go first. This is an argument for why you expect diminishing returns in a lot of different fields. If it&#8217;s relatively easy to see what&#8217;s available and people have similar preferences, then the best stuff goes first and it just gets worse and worse after that.</p><p>If you look at a very static snapshot in time of scientific progress, maybe there&#8217;s some truth to that. But if somebody is standing behind the dessert table and is replenishing and restocking the desserts and keeps adding new ones in, it may turn out that a little bit later, much better desserts appear, and you&#8217;re going to go and eat those instead.</p><p>Scientific progress has a little bit of that flavor. We go through these funny time periods. 
Computer science is a great example. It basically arose as a side effect of some pretty abstruse questions in the <a href="https://plato.stanford.edu/entries/philosophy-mathematics/">philosophy of mathematics</a> and logic. You&#8217;ve got these people trying to attack these rather esoteric questions that seem quite high up in exploration, and they discover this fundamental new field, and all of a sudden there&#8217;s an explosion there. The diminishing returns argument just didn&#8217;t apply there. We just weren&#8217;t able to see what was there.</p><p>This has been the case over and over again. New fields arrive, and all of a sudden, boom, it&#8217;s easy to make progress again. Young people flood in because you can be twenty-one and make major breakthroughs rather than having to spend twenty-five years mastering everything that&#8217;s been done before. It&#8217;s obviously very attractive. I&#8217;m not sure anybody understands very well the dynamics of that, or how to think about why the structure of knowledge is that way, where these new fields keep opening up. But it does seem empirically to be the case.</p><p><strong>Dwarkesh Patel</strong></p><p>Despite the fact that that is the case&#8230; Take deep learning. Obviously, this is an example of a new field where twenty-one-year-olds can make progress and it&#8217;s relatively new. It&#8217;s been fifteen years or so since it got back into high gear. But already we&#8217;re in a stage where you need billions, tens of billions, or hundreds of billions of dollars to keep making progress at the frontier.</p><p>There are a couple ways to understand that. One is that it actually is harder than the kinds of things the ancients had to do, or is more intensive at least. 
The second is that it might not have been, but because our civilizational resources are so large, the number of people is so large, the amount of money is so large, we can basically make the kind of progress it would have taken the ancients forever to make almost immediately. We notice something is productive and immediately dump in all the resources. But it&#8217;s also weird that there are not that many of them. I feel like deep learning is notable because it is one big exception: it&#8217;s hard to think of other examples.</p><p><strong>Michael Nielsen</strong></p><p>I think that&#8217;s a consequence of the architecture of attention. At any given time, there&#8217;s always a most successful thing. If deep learning wasn&#8217;t a thing, maybe you&#8217;d be talking about <a href="https://en.wikipedia.org/wiki/CRISPR">CRISPR</a>. Maybe we wouldn&#8217;t think about solving the protein structure prediction problem as a success of AI. Maybe we would have figured out how to do it with curve fitting, more broadly construed, and we&#8217;d just be like, &#8220;Wow, that took a lot of computing resources.&#8221; But protein structure prediction might be an enormously important thing.</p><p>There is always a biggest thing. What you&#8217;re pointing at is more a consequence of the way in which attention gets centralized. It&#8217;s basically fashion, is what I&#8217;m saying. It&#8217;s not just fashion, but there is some dynamic there.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s a very interesting and important implication of this idea: that the branching is so wide and so contingent and so path-dependent that different civilizations would stumble on entirely different technology stacks. 
There&#8217;s a very interesting implication that there will be gains from trade into the far, far future, which might actually be one of the most important facts about the far future in terms of how civilizations are set up, how they coordinate, and how they interface. There&#8217;s not this &#8220;go forth and exploit.&#8221; There are humongous gains to trade from adjacent colonies or whatever.</p><p><strong>Michael Nielsen</strong></p><p>Sort of. There&#8217;s a question of what&#8217;s actually hard. If it&#8217;s just the ideas, well, those spread relatively quickly. It&#8217;s relatively easy to share ideas. If it&#8217;s something more, it&#8217;s almost a <a href="https://danwang.co/">Dan Wang</a> kind of idea where there&#8217;s some notion of capacity. You need all the right techs, you need all the right manufacturing capacity, and so on.</p><p>So civilization A has a very different kind of manufacturing capacity, and it&#8217;s just not so easy to build in civilization B. Even if civilization B is ahead, I think that becomes true. There is a comparative advantage which is going to provide massive benefits to trade in both directions. Eventually, you expect some diffusion of innovation. It is funny to think about what the barriers are there.</p><p>A fun thought experiment I like to think about is GitHub but for aliens. Somebody presents you with all of the code from some alien civilization. I don&#8217;t even know what code means there, but their specification of algorithms. It would have many interesting new ideas in there, and it would take forever for human beings to dig through and try and extract all of those.</p><p>The origin of this for me was thinking about proteins in nature. We&#8217;ve been gifted this incredible variety of machines which we don&#8217;t really understand at all. We just have to go and try and understand them on a one-by-one basis. We&#8217;re still understanding hemoglobin and insulin and things like this. 
There are hundreds of millions of proteins known. So it is a little bit like that. We&#8217;ve been gifted by biology this immense library of machines, no doubt containing an enormous number of very interesting ideas, and we&#8217;re just at the very, very beginning of understanding it.</p><p>I suppose your point&#8212;I need to relabel your argument slightly&#8212;is that you think of that as a gift from an alien civilization. Obviously it isn&#8217;t, but you think of it that way. And oh my goodness, there&#8217;s so much in there and we&#8217;re going to study it. Goodness knows how long we could continue to study it. There are tens of thousands of papers about hemoglobin and things like that, and we still don&#8217;t understand them, and yet we&#8217;re getting so much out of it. Just think about insulin alone. It&#8217;s such an important thing.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s an incredibly useful intuition pump, that you have on Earth&#8230; <a href="https://www.dwarkesh.com/p/nick-lane">I had Nick Lane on</a> where he had this theory about how life emerged, but whatever theory you have, something like DNA has had four billion years. You have an alien civilization come here and be like, &#8220;There&#8217;s all these interesting things to learn about material science.&#8221;</p><p><strong>Michael Nielsen</strong></p><p>Think about <a href="https://en.wikipedia.org/wiki/Kinesin">kinesin</a> walking along. We know almost nothing about these proteins, and yet the few facts we do know are just incredible. The <a href="https://en.wikipedia.org/wiki/Ribosome">ribosome</a> is another example, this miraculous sort of device, a little factory.</p><p><strong>Dwarkesh Patel</strong></p><p>All seeded by this particular chemistry on Earth with nucleic acids and carbon-based life forms. That chemistry gives rise to all of these interesting things which an alien civilization would find very interesting. 
That very seed, which must be one among trillions of possible seeds of general intellectual ideas, leads to all this fecundity. That&#8217;s a very interesting intuition pump.</p><p>I want to meditate on this &#8220;gains from trade&#8221; thing because I feel like there&#8217;s something very interesting about this idea that if you have this vision of how technology progresses and how it may be different in different civilizations, it actually has important implications about how different civilizations might interact with each other. The fact that there are going to be these huge gains from trade.</p><p><strong>Michael Nielsen</strong></p><p>It makes friendliness much more rewarding?</p><p><strong>Dwarkesh Patel</strong></p><p>Yes. That&#8217;s a very important observation.</p><p><strong>Michael Nielsen</strong></p><p>I hadn&#8217;t thought about that at all. That is a very interesting observation. It is funny. <a href="https://en.wikipedia.org/wiki/Comparative_advantage">Comparative advantage</a> is something that people love to invoke and it&#8217;s a very beautiful idea obviously. There are limits to it. It&#8217;s a special limited model.</p><p>Chimpanzees can do interesting things, but we don&#8217;t trade with them. I think it&#8217;s interesting to think about the reasons why. Part of it is just power, I think. Once there&#8217;s a sufficiently large power imbalance, very often&#8212;not always, but very often&#8212;groups of people seem to shift into this other mode where they just seek to dominate. Maybe there&#8217;s something special about human beings, but maybe it&#8217;s also a more general thing. You need all these special things to be true before groups will trade. It&#8217;s not necessarily obvious.</p><p><strong>Dwarkesh Patel</strong></p><p>I think the big thing going on here is one, transaction costs. Two, comparative advantage does not tell you that the terms on which the trade happens are above subsistence for any given producer. 
People often bring this up in the context of, &#8220;Well, humans will be employed even in a post-AGI world because of comparative advantage.&#8221;</p><p>There are five different ways that argument breaks down, but the easiest way to understand it is: why don&#8217;t we have horses all around on the roads? Because there&#8217;s some comparative advantage between cars and horses. One, there are huge transaction costs to building roads that are compatible with horses and cars at the same time. In a similar way, AIs thinking at 1,000 times our speed, able to shoot their <a href="https://arxiv.org/abs/2405.14061">latent states</a> at each other, are going to find that interacting with a human being in the supply chain costs far more than the benefit it brings.</p><p>Second, just because horses have a comparative advantage mathematically does not mean that it is worth paying $100,000 a year, or whatever it costs to sustain a horse in San Francisco. That subsistence isn&#8217;t going to be worth the benefit you get out of the horse.</p><p><strong>Michael Nielsen</strong></p><p>I do think it&#8217;s interesting, the sheer fact&#8230; My expectation and my intuition obviously differ a great deal from yours on this. Most parts of the tech tree are never going to be explored. There are just too many interesting ways of combining things. There are too many deep ideas waiting to be discovered, and not only we, but nobody ever is going to discover most of them. So choices about how to do the exploration actually matter quite a bit.</p><p>It&#8217;s something I really dislike about technological determinist arguments. I&#8217;m willing to buy it low enough down when progress is relatively simple. But higher up, you start to get to shape the way in which you do the exploration. And it&#8217;s interesting, we are starting to shape it in interesting ways.</p><p>There are various technologies that have been essentially banned. 
You think about DDT, chlorofluorocarbons, restrictions on the use of nuclear weapons, the Nuclear Non-Proliferation Treaty. Those kinds of things weren&#8217;t done before the fact, but they&#8217;re starting to get pretty close in some cases, where we just preemptively decide, &#8220;Oh, we&#8217;re not going to go down that path.&#8221; So that starts to look like a set of institutions where we are actually influencing how we explore the tech tree.</p><p><strong>Dwarkesh Patel</strong></p><p>On where you would see these gains from trade, obviously you&#8217;d see the most where it&#8217;s pure information that could be sent back and forth, because the information has this quality where it is expensive to produce, but cheap to verify and cheap to send. It&#8217;ll be interesting how much of future productivity can be distilled down to information.</p><p>Right now, it&#8217;s hard to do. If China&#8217;s really good at manufacturing something, there&#8217;s this process knowledge that&#8217;s in the heads of 100 million people involved in the manufacturing sector in China. But in the future, it might be easier if AIs are doing it.</p><p><strong>Michael Nielsen</strong></p><p>The question is to what extent our fabrication gets very uniform and gets really commoditized. 3D printers have been the next big thing for at least 20 years now. Why do they still not work all that well? Why are they still not at the center of manufacturing, and what comes after that? It is funny to look at the ribosome by contrast, which really is at the center of biology in a whole lot of really interesting ways.</p><p>Whether or not that&#8217;s the future of manufacturing is something very simple, where everything goes as throughput through a <a href="https://en.wikipedia.org/wiki/Bioreactor">bioreactor</a> or something like that. You send the information, and then you grow stuff, or you have some 3D printer that actually works. 
If they&#8217;re good enough, then it does become much more a pure information problem, and some of this process knowledge becomes much less important.</p><h3>01:15:26 &#8211; Are there infinitely many deep scientific principles left to discover?</h3><p><strong>Dwarkesh Patel</strong></p><p>Can I ask a very clumsily phrased question? There are these deep principles that we&#8217;ve discovered a couple of. One is this idea that <a href="https://en.wikipedia.org/wiki/Noether%27s_theorem">if there&#8217;s a symmetry across a dimension, it corresponds to a conserved quantity</a>. It&#8217;s a very deep idea. There&#8217;s another&#8212;which you&#8217;ve written a lot about, written a textbook about in fact&#8212;about ways to understand what kinds of things you can compute, what kinds of physical systems you can understand with other physical systems, what a universal computer looks like, et cetera.</p><p>Is your view that if you go down to this level of idea of <a href="https://en.wikipedia.org/wiki/Noether%27s_theorem">Noether&#8217;s theorem</a> or the <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis">Church-Turing principle</a>, that there&#8217;s an infinite number of extremely deep such principles? Because I feel what makes them special is that they themselves encompass so many different possible ways the world could be. But no, the world has to be compatible with a couple of these very deep principles.</p><p><strong>Michael Nielsen</strong></p><p>I don&#8217;t know. All I have here is speculation and instinct. My instinct is that we keep finding very fundamental new things. It was quite formative for me to understand, as I gave the example before, these wonderful ideas of Church and Turing and these other people about universal programmable devices. Then you understand later, this also contains within it the ideas of public-key cryptography. 
Then you understand later that it also contains within it the ideas people refer to as cryptocurrency.</p><p>There&#8217;s a very deep set of ideas there about the ability to collectively maintain an agreed-upon ledger, which is built upon this. It&#8217;s taken many years to figure out the right canonical form of those. Just this fact that you keep finding what seem like deep new fundamental primitives has been a very important intuition pump for me. I&#8217;ve given that particular example, but I think you see that same pattern in a lot of different areas.</p><p><strong>Dwarkesh Patel</strong></p><p>What is your interpretation then of this empirical phenomenon where whatever input you consider into the scientific process or technological progress&#8230; Economists have studied this a million ways. It just seems to require a very consistent rate of X percent more researchers per year. There&#8217;s this <a href="https://web.stanford.edu/~chadj/IdeaPF.pdf">famous paper from a couple years ago by Nicholas Bloom and others</a> where they say, &#8220;How many people are working in the semiconductor industry, and how has it increased over time through the history of <a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore&#8217;s law</a>?&#8221; I think they find that Moore&#8217;s law means transistor density increases 40% a year, but to keep that going, the number of scientists in the semiconductor industry has had to increase 9% a year. They go through industry after industry with this observation.</p><p>Is your view that there are these deep ideas, but they keep getting harder to find? Or is there another way to think about what&#8217;s happening with these empirical observations?</p><p><strong>Michael Nielsen</strong></p><p>First of all, all of their examples are narrow. They pick a particular thing, and then they look at a particular metric. GPUs don&#8217;t show up there. All of a sudden you get this ability to parallelize, and that&#8217;s really interesting.
There are a lot of external consequences. Basically they have these simple quantitative measures. They look at it in agricultural productivity. They look at it in a whole lot of different ways, but you do have to focus narrowly.</p><p>I&#8217;m certainly interested in the fact that new types of progress keep becoming possible. But I think even there, there does still seem to be some phenomenon of diminishing returns. Is that intrinsic? Is that something about the structure of the world? What is it? One thing which hasn&#8217;t changed that much is the individual minds which are doing this kind of work. Maybe those should be improved as well, or some feedback process going on there. Maybe that changes the nature of things.</p><p>I look at scientific progress up until, let&#8217;s say, 1700, and it was very slow, and also very irregular. You had the Ionians back five centuries before Christ doing these quite remarkable things, and so much knowledge would get lost, and then it would be rediscovered, and then it would be lost again. You&#8217;d have to say that progress was very slow. It&#8217;s partially just bound up with the fact that there were some very good ideas that we just didn&#8217;t have.</p><p>Even once you&#8217;ve had the ideas, you need to build institutions around them. You actually need to solve a whole lot of different problems about training, allocation of capital, and all these kinds of things. Even just basic security for researchers, so they&#8217;re not worried about the <a href="https://en.wikipedia.org/wiki/Inquisition">Inquisition</a> or things like that. There are all these complicated problems. You solve all those complicated problems, and then all of a sudden, boom, there&#8217;s a massive burst of scientific progress.</p><p>If there&#8217;s some kind of stagnation, if you&#8217;re not changing those external circumstances, yes, you may start to get diminishing returns again. 
But that doesn&#8217;t mean there&#8217;s anything intrinsic about the situation. Maybe something external needs to change again. Obviously, a lot of people think AI is potentially going to be a driver. It certainly will at some level.</p><p>To that extent, you can think of a lot of modern scientific instrumentation as really, at some level, robots. What is the <a href="https://en.wikipedia.org/wiki/James_Webb_Space_Telescope">James Webb Space Telescope</a>? It&#8217;s unconventional maybe to describe it as a robot, but it&#8217;s not completely unreasonable either. It is an example of a highly automated, very sophisticated system with electronically mediated sensors and actuators, where machine learning is being used to process the data. In that sense, we&#8217;re already starting to see that transition. We&#8217;ve been seeing it for decades.</p><p><strong>Dwarkesh Patel</strong></p><p>I have this &#8220;smoke a joint and take a puff&#8221; thought, which&#8212;</p><p><strong>Michael Nielsen</strong></p><p>I think we&#8217;ve had a few.</p><p><strong>Dwarkesh Patel</strong></p><p>I think we&#8217;re getting to that part of the conversation, and then you can help me get my foot out of my mouth and figure out a more concrete way to think about it. To your point that there was the Industrial Revolution, the Enlightenment, and now there&#8217;s AI, and each might be a different pace or a different way in which science happens. If you think about the pace of how fast such transitions have been happening, you can draw over the long span of human history this hyperbolic rate of growth that is increasing over time as well.</p><p>A hundred thousand years ago, you had the Stone Age. You go back even much further, how long have primates been around? It would be millions of years. 
A hundred thousand years ago, the Stone Age, then ten thousand years ago, the Agricultural Revolution, then three hundred years ago, the Industrial Revolution, each marked by this increase in the rate of exponential growth. Then people think it&#8217;s going to happen again with AI. But that would happen potentially even faster.</p><p>It would not have occurred to somebody at the beginning of the Industrial Revolution that the next demarcation in this trend would be artificial intelligence. So things are getting faster, and it&#8217;s hard to anticipate what the next transition will be. I guess we just think of this singularity between now and AI as what distinguishes the past from the future. But applying the same heuristic that many people in the past should have had, maybe the &#8220;Intelligence Age&#8221; is also quite short, and the next thing after that is something we don&#8217;t even have the ontology to describe. The future will not think of history as simply pre-AI and post-AI.</p><p><strong>Michael Nielsen</strong></p><p>No, obviously we can&#8217;t prove this, but it certainly seems quite plausible. Part of the issue is just that the substrate we have available to conceive of it seems all wrong. You can&#8217;t speculate with a bunch of chimpanzees about what it would be to have language. Just to pick a major transition in the past, the transition itself is the thing. It seems likely.</p><p>If we&#8217;re talking about &#8220;taking a puff&#8221; kind of thoughts, I&#8217;m certainly amused by the idea that there&#8217;s going to be some transition involving artificial general intelligence using classical computers. But actually, there&#8217;ll be an interesting transition with quantum computers as well. They&#8217;re probably capable of a strictly larger class of potentially interesting computations. So maybe the character of AQGI, or whatever it should be called, is actually qualitatively different.
So maybe there&#8217;s a brief period between those two things. As I say, this is just speculation, but it&#8217;s certainly amusing.</p><p><strong>Dwarkesh Patel</strong></p><p>Is there a reason to think that? From what I understand, for decades people like you have put pretty tight bounds on the kinds of things quantum computers are going to do. It&#8217;ll speed up search somewhat. The kinds of things it speeds up extremely, like <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor&#8217;s algorithm</a>, it seems like&#8230; Again, maybe this is to your point that we can&#8217;t predict in advance what&#8217;s down the tech tree, but at least from here, it seems like you break encryption, but what else are you using Shor&#8217;s algorithm to do?</p><p><strong>Michael Nielsen</strong></p><p>We&#8217;ve only been thinking about it for 40 or so years. Not for very long, and we haven&#8217;t thought that hard about it as a civilization. Does it turn out that it&#8217;s very narrow? Maybe. Does it turn out that it&#8217;s very broad? That&#8217;s also a really radical expansion that seems distinctly possible. Keep in mind as well, we&#8217;ve been doing it without the benefit of having the devices. That&#8217;s a pretty big bottleneck to have.</p><p><strong>Dwarkesh Patel</strong></p><p>If you&#8217;re thinking about computer science in the 1700s and you&#8217;re like, &#8220;it can do <a href="https://en.wikipedia.org/wiki/Boolean_algebra">AND/OR</a>, what can come out of that?&#8221; You can&#8217;t anticipate Bitcoin. 
You can&#8217;t anticipate deep learning.</p><p><strong>Michael Nielsen</strong></p><p>Maybe you could if you were sufficiently bright, but it is a pretty hard situation.</p><h3>01:26:25 &#8211; What drew Michael to quantum computing so early?</h3><p><strong>Dwarkesh Patel</strong></p><p>What is your inside view, having been in and contributing to <a href="https://en.wikipedia.org/wiki/Quantum_information">quantum information</a> and quantum computing back in the &#8216;90s and 2000s? What is your telling of the history of what was the bottleneck? What was the key transition that made it a real field? How do you rank the contributions from <a href="https://en.wikipedia.org/wiki/Richard_Feynman">Feynman</a> to <a href="https://en.wikipedia.org/wiki/David_Deutsch">Deutsch</a> to everybody else who came along?</p><p><strong>Michael Nielsen</strong></p><p>Let&#8217;s just focus on the question about what actually changed. Why was quantum computing not a thing in the 1950s? It could have been. Somebody like <a href="https://en.wikipedia.org/wiki/John_von_Neumann">John von Neumann</a> is a good example. He was absolutely pioneering computation. He also wrote <a href="https://en.wikipedia.org/wiki/Mathematical_Foundations_of_Quantum_Mechanics">a very important book about quantum mechanics</a> and was deeply interested in it. He could have invented quantum computing at that time, and I think there were quite a number of people who potentially could have.</p><p>So why do we have these papers by people like Feynman and Deutsch in the &#8216;80s? Those are fairly regarded as the foundation of the field. There are some partial anticipations a little bit earlier, but they were nowhere near as comprehensive and nowhere near as deep. You should ask David. You can&#8217;t ask Feynman, unfortunately, but he&#8217;ll know much better than I do.</p><p>A couple things that I think are interesting. 
One is that computation became far more salient in the late &#8216;70s and early &#8216;80s. It just became a thing which many more people were interested in, partially for very banal reasons. You could go and buy a PC. You could buy an Apple II. You could buy a Commodore 64. You could buy all these kinds of things. It became apparent to people that these were very powerful devices, very interesting to think about.</p><p>At the same time, in the quantum case, that was also the time of the <a href="https://en.wikipedia.org/wiki/Ion_trap">Paul trap</a> and the ability to trap single ions. Up to that point, we hadn&#8217;t really had the ability to manipulate single quantum states. You got these two separate things that for historically contingent reasons had both matured around 1980 or so. Somebody like von Neumann could have had the idea earlier, but it is quite an interesting factor.</p><p>There&#8217;s a story about Richard Feynman. He went and got one of the first PCs around 1980 or 1981. He was apparently so excited by this device that he actually tripped and hurt himself quite badly while carrying his brand-new machine. That&#8217;s a very historically contingent coincidence, having somebody who&#8217;s very talented, with a deep understanding of quantum mechanics, who is also just very excited about these new machines. It&#8217;s not so surprising perhaps that he&#8217;s thinking about it then. What similar story could you have told 10 years earlier? The conditions don&#8217;t exist for it. I mean, it&#8217;s quite a banal story, but&#8230;</p><p><strong>Dwarkesh Patel</strong></p><p>One of the things we were going to discuss was this idea you had about the market for follow-ups. I think this is the perfect story to discuss it for because you wrote the textbook about the field. &#8220;Mike and <a href="https://en.wikipedia.org/wiki/Isaac_Chuang">Ike</a>&#8221; is <a href="https://amzn.to/48q2uR9">the definitive textbook on quantum information</a>.
You presumably came in after Deutsch.</p><p>But you in the &#8216;90s somehow identified it as the thing that is worth following up on and building on. Instead of talking about it more abstractly, I&#8217;d love to just hear the firsthand story of how you knew that this is the thing to do. Of all the things that were happening in physics and computing, how did you decide you want to think about this problem?</p><p><strong>Michael Nielsen</strong></p><p>Richard Feynman writes <a href="https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-1981.pdf">this great paper in 1982</a>. David Deutsch writes <a href="https://www.daviddeutsch.org.uk/wp-content/deutsch85.pdf">an absolutely fantastic paper in 1985</a> sketching out a lot of the fundamental ideas of quantum computing. I&#8217;m 11 in 1985. I&#8217;m not thinking about this. I&#8217;m playing soccer and doing whatever. But in 1992, I took a class on quantum mechanics that was really terrific, given by <a href="https://en.wikipedia.org/wiki/Gerard_J._Milburn">Gerard Milburn</a>.</p><p>I just went and asked Gerard one day after the fifth lecture or something. I said, &#8220;Do you have any papers or whatever that you could give me?&#8221; He said, &#8220;Come by my office in a couple of days&#8217; time.&#8221; I did, and he presented me with a giant stack of papers, which included the Deutsch paper, the Feynman paper, and a whole bunch of other very fundamental papers about quantum computing and quantum information at a time when essentially nobody in the world was working on it. He was. 
I think he wrote <a href="https://espace.library.uq.edu.au/data/UQ_247726/UQ247726.pdf?Expires=1775595440&amp;Key-Pair-Id=APKAJKNBJ4MJBJNC6NLQ&amp;Signature=Db5jmtpr-AboCkV7t~zhgenU0rThA~PDRp7ifiHWuPdVCblDKWN-X-A02KyO-0LJcWxys3znMBM6OYA7g5TyBeIjbXk1P3UCt6bIbXWyyNuBqPDzWQCyPQz95hSsEAgXHG~MeScDcHpW8kPdn-5YGreO085P-F238wEplHu41hPvIDUeeCE0qcqmq4~n8ZymnOvcHNTHGjW~f6NABnSJd2FPsGyUp09GUtxXJ-U89Q6gJy4Yjkq~Vbk4-~me5~Rs2h041TvKif33zApKxMnINLSmkEgX5qEH-B0fVcB-BkNDcgFxVpKEFHDYs5JJlDILV~tGjbHD9KK~fvualLNx6w__">the very first paper that proposed a practical approach to quantum computing</a>. It wasn&#8217;t very practical, but it was actually in a real system.</p><p>So in some sense, I&#8217;m benefiting from the taste of this other person. As soon as I read the papers&#8230; These are exciting papers. They&#8217;re asking very fundamental questions, and you realize, &#8220;I can make progress here.&#8221; These are things that one could potentially work on.</p><p>Deutsch has this conjecture, or thesis or whatever you&#8217;d call it, that a universal model, a <a href="https://en.wikipedia.org/wiki/Quantum_Turing_machine">quantum Turing machine</a>, should be capable of efficiently simulating any physical system at all. This is a very provocative idea. I think in that paper, he more or less claims that he&#8217;s proved it. I&#8217;m not sure everybody would agree with that. There are questions about whether or not you can simulate <a href="https://en.wikipedia.org/wiki/Quantum_field_theory">quantum field theory</a> effectively. That kind of question is very interesting and very exciting. It&#8217;s obviously a fundamental question about the universe.</p><p>He has some wonderful ideas in there about <a href="https://en.wikipedia.org/wiki/Quantum_algorithm">quantum algorithms</a>, where they come from, what they mean, and how they relate to the meaning of the <a href="https://en.wikipedia.org/wiki/Wave_function">wave function</a>.
Questions like this are still not agreed upon amongst physicists. There&#8217;s just some sense of, &#8220;Oh, I am in contact with something which is (A) deeply important, and (B) we as a civilization don&#8217;t have this.&#8221; Of course, you start to focus your attention a little bit there.</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m not sure I got the answer to the question&#8230;</p><p><strong>Michael Nielsen</strong></p><p>Maybe I misunderstood the question.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe I&#8217;ll explain the motivation first. In a previous conversation, we were discussing how you could have known in the 1940s that the Shannon theorems and <a href="https://en.wikipedia.org/wiki/Claude_Shannon">Shannon&#8217;s</a> way of thinking about a communication channel is a deep idea that goes beyond the problems with <a href="https://en.wikipedia.org/wiki/Pulse-code_modulation">pulse-code modulation</a> that Bell Labs was trying to solve at the time, and that it applies to everything from quantum mechanics to genetics to computer science.</p><p>One of the ideas you stated that we didn&#8217;t get a chance to talk about yet&#8230; Shannon published this paper. There are all these other papers, but there&#8217;s some market of follow-ups where people gravitate to and build upon Shannon&#8217;s work. How do they realize that that&#8217;s the thing to do, and how does that process happen? I guess you gave your local answer. You read these papers, and you immediately realized there&#8217;s work to be done here. There&#8217;s low-hanging fruit. There&#8217;s some deep provocative idea that I need to better understand, and I could tractably make progress on.</p><p><strong>Michael Nielsen</strong></p><p>To some extent, you&#8217;re saying, &#8220;Okay, I wanted to get into this game of contributing to humanity&#8217;s understanding of the universe,&#8221; and you are applying this low-hanging fruit algorithm. 
You&#8217;re like, &#8220;Relative to my particular set of interests and abilities, where should I pick up my shovel and start digging?&#8221; There it was like, &#8220;Oh, this looks like quite a good place to start digging.&#8221; Different people, of course, chose very differently. It was a very unusual choice at the time. This was 1992. Very few people were thinking about that.</p><h3>01:35:29 &#8211; Does science need a new way to assign credit?</h3><p><strong>Dwarkesh Patel</strong></p><p>Fast-forwarding a bit, I don&#8217;t know how you think about your work on the open science movement now, but did it work? What does success there look like? What is the movement trying to accomplish?</p><p><strong>Michael Nielsen</strong></p><p>It&#8217;s interesting. You didn&#8217;t stop and define open science there, which 20 years ago you would have had to do. People recognize the phrase. People have some set of associations with it. Most often, they have a relatively simple set of associations. It means maybe something about making scientific papers open access. Very often they have some set of notions about also making code openly available or making data openly available.</p><p>Making those into salient issues is already a very large success of the open science movement. Those are issues on which people have opinions, and there are relatively common arguments. This is like the meme version: publicly funded science should be open science. That&#8217;s a distillation of a set of ideas which you might be able to contest. But if you can get people actually thinking about it and engaged with that kind of argument, that&#8217;s a very fundamental issue to be considering in the whole political economy of science.</p><p>If you go back three centuries, there was a very similar argument prosecuted, which is the question: do we publicly disclose our scientific results or not?
If you look at people like Galileo and Kepler, the extent to which they publicly disclosed was done in a very odd way. Sometimes they did bizarre things where they <a href="https://cryptiana.web.fc2.com/code/galileo.htm">published some of their results as anagrams</a>. They&#8217;d find some discovery, write down the result in a sentence, scramble it, and publish that. Then if somebody else later made the same discovery, they would unscramble the anagram and say, &#8220;Oh, yeah, I actually did it first.&#8221; This is not an ideal foundation for a discovery system.</p><p>It took a very long time, over a century, I think, to obtain more or less the modern ideals, in which you disclose the knowledge in the form of a paper. There is an expectation of attribution, and a reputation economy gets built. &#8220;So-and-so did this work, so they deserve the credit for that,&#8221; and that&#8217;s the basis for their careers. This is the underlying political economy of science. That made a lot of sense when you have a printing press and the ability to do scientific journals.</p><p>Then you transition to this modern situation, where you can start to share a lot more. You can share your code, your data, your in-progress ideas. But there&#8217;s no direct credit associated to those. It&#8217;s not at all obvious how much reputation should be associated to them. That&#8217;s all constructed socially. Making it a live issue is a very important thing to have done. I view that as one of the main positive outcomes of work on open science.</p><p>I&#8217;ll give you a really practical example to illustrate the problem. For a long time in physics, there was a preprint culture in which people would upload preprints to the preprint archive, and in biology, this didn&#8217;t happen. There was no preprint culture. That&#8217;s changing now, but for a long time, this was the case. 
I used to amuse myself by asking physicists and biologists why this was the case.</p><p>What I would hear from biologists was they would say, &#8220;Biology is so much more competitive than physics that we need to protect our priority, so we can&#8217;t possibly upload to the archive. We have to just publish in journals.&#8221; Then I would sometimes hear from physicists, &#8220;Physics is so much more competitive than biology that we need to establish our priority by uploading as rapidly as possible to the preprint archive. We can&#8217;t possibly wait to do it with the journals.&#8221;</p><p>I think this emphasizes the extent to which this kind of attribution economy is just something we construct. It&#8217;s something we do by agreement. Any attempt to change that economy results in a different system by which we construct knowledge. There is this very fundamental set of problems around the political economy of science. We&#8217;ve got this collective project, and how we mediate it depends upon the economy we have around ideas.</p><p><strong>Dwarkesh Patel</strong></p><p>One of the things you&#8217;ve emphasized as a part of this project of open science, and we talked about it earlier, is collective science, or groups of people making progress on a problem where no individual understands all the logical and explanatory levels necessary to make a leap or a connection. Outside of mathematics, what is the best example of such a discovery?</p><p><strong>Michael Nielsen</strong></p><p>I&#8217;m not sure I have a well-ordering of them to give you a best. An example that I think is very interesting is the <a href="https://en.wikipedia.org/wiki/Large_Hadron_Collider">LHC</a>, where it&#8217;s just this immensely complicated object. Years ago, I snuck into an accelerator physics conference. 
I didn&#8217;t know anything at all about <a href="https://en.wikipedia.org/wiki/Accelerator_physics">accelerator physics</a>, but I was just curious to see what they were talking about.</p><p>This particular group of people were experts on numerical methods, in particular on inverse methods. Inside these accelerators, you have these <a href="https://en.wikipedia.org/wiki/Collision_cascade">cascades</a>. A particle will be massively accelerated, maybe it&#8217;ll be collided, and then you&#8217;ll get a shower of particles which decays and decays and decays. There&#8217;s just this incredible, consequential shower, which is ultimately what you see at the <a href="https://en.wikipedia.org/wiki/Particle_detector">detector</a>. Then you have to retroactively figure out what produced it. There are these very complicated inverse problems that need to be solved. You&#8217;ve got this final data, but you need to figure out what produced it, and that&#8217;s how you look for signatures of these.</p><p>Many of these people were incredibly deep experts on simulation methods for following particle tracks. This was really deep and difficult stuff. I was like, &#8220;Wow, you could spend a lifetime just learning how to do this and how to solve some of these inverse problems, and you would know very little about quantum field theory, detector physics, vacuum physics, or data processing, all these things that are absolutely essential to understanding, say, the <a href="https://en.wikipedia.org/wiki/Higgs_boson">Higgs boson</a>&#8221;.</p><p>I don&#8217;t think it&#8217;s possible for one person to understand everything in depth. Lots of people broadly understand a lot of these ideas, but they don&#8217;t understand everything in the depth that is actually utilized. That&#8217;s why there are these papers with well over a thousand authors. Those people can talk to one another at a high level, but they don&#8217;t understand each other&#8217;s specialties in all that much depth. 
Things like detector physics, vacuum physics, and solving inverse problems are incredibly different from one another. To understand any of them in real detail is serious work.</p><h3>01:43:57 &#8211; Prolificness versus depth</h3><p><strong>Dwarkesh Patel</strong></p><p>How do you think about prolificness versus depth? Maybe Darwin&#8217;s an example of somebody who&#8217;s just gestating on something for many decades. There are other examples. Einstein during the year he comes up with special relativity is just doing a bunch of different things. And <a href="https://en.wikipedia.org/wiki/Abraham_Pais">Pais</a> talks about how they were all relevant to the eventual build-up.</p><p><strong>Michael Nielsen</strong></p><p>It&#8217;s something I stress about a lot. Sometimes I feel I&#8217;m too slow. It&#8217;s funny though, the Darwin example is really interesting. Prolific at what? God knows how many letters he wrote. It must have been an enormous number. So he was certainly very active.</p><p>There are two types of work that tend to be involved in any kind of creative project. There&#8217;s routine stuff, and there you just want to avoid procrastination. You just want to ask, &#8220;How do I get good at this?&#8221; or &#8220;How do I outsource it?&#8221; and &#8220;How do I do it as rapidly as possible?&#8221; and just avoid getting into a situation where you&#8217;re prolonging it.</p><p>Then there&#8217;s high-variance stuff where you actually need to be willing to take a lot of time. You need to be willing to go to different places and talk to different people, where in any given instance, most of it is just not going to be an input. Somehow balancing those two things&#8230; I think a lot of people are very good at doing one or the other, but it&#8217;s almost like a personality trait which one you prefer. People tend to end up doing a lot of one and not enough of the other.
So I certainly try and balance those two things.</p><p>Einstein is such an interesting example. <a href="https://en.wikipedia.org/wiki/Annus_mirabilis_papers">1905 is just this extraordinary year</a>. You can delete special relativity entirely, and it&#8217;s an extraordinary year. You can delete special relativity, and you can delete the <a href="https://en.wikipedia.org/wiki/Photoelectric_effect">photoelectric effect</a> for which he won the Nobel Prize, and it&#8217;s still an extraordinary year, plausibly a multi-Nobel-Prize-winning year. So what&#8217;s he doing? Maybe the answer is just that he&#8217;s smarter than the rest of us. There&#8217;s a lot of luck as well.</p><p>Certainly for myself anyway, trying to identify those things that are routine that I should get good at, and then just try to do them as quickly as possible. I think that&#8217;s yielded a certain amount of returns. But also being willing to bet a little bit more on myself on the variance side has also been very, very helpful. That&#8217;s really hard, because intrinsically you&#8217;re putting yourself in situations where you don&#8217;t know what the outcome is going to be. If you&#8217;re very driven to be productive, and actually mostly it&#8217;s not working over there, you think, &#8220;Let&#8217;s reduce this.&#8221; It doesn&#8217;t feel right.</p><p>When I worked in San Francisco, a practice I used to have each day was instead of taking the 15-minute walk to work, I would take the more beautiful 30-minute walk. Partially just because it was beautiful, but partially also as just a reminder that there are real benefits to not being efficient. But it&#8217;s not an answer to your question. 
Really, I think all I&#8217;m saying is I struggle a lot with the question.</p><p><strong>Dwarkesh Patel</strong></p><p>I think <a href="https://en.wikipedia.org/wiki/Dean_Simonton">Dean Keith Simonton</a> has this famous <a href="https://jamesclear.com/equal-odds">equal odds rule</a> where he says that for a given person, the probability that any given thing they release&#8212;any paper, book, whatever&#8212;will be extremely important doesn&#8217;t vary much over their lifetime. What really determines in what era they are the most productive is how much they&#8217;re publishing. Any given thing has equal odds of being extremely important. I think some of the most successful creatives or scientists, they&#8217;re just doing a lot. Shakespeare was just publishing <em>a lot.</em></p><p><strong>Michael Nielsen</strong></p><p>Of course, then there are counterexamples. <a href="https://en.wikipedia.org/wiki/Kurt_G%C3%B6del">G&#246;del</a> published almost nothing. But broadly speaking, you need a very good reason to not do that. It&#8217;s funny, I&#8217;ve met a lot of people over the years who are clearly brilliant, and they&#8217;re just obsessed that they are going to work on the great project that makes them famous, and they never do anything. That seems connected. It&#8217;s a type of aversiveness. I think very often they just don&#8217;t want public judgment.</p><p>Something that I would love to see&#8230; There&#8217;s an awful lot of biographies and memoirs and histories of people who achieve a lot. I wish there were a very large number of biographies of people who are fantastically talented who just missed. I&#8217;ve known people who won gold medals at <a href="https://en.wikipedia.org/wiki/International_Mathematical_Olympiad">IMOs</a> and things like that, who then tried to become mathematicians and failed. What happened? What was the reason?
I suspect in many cases that&#8217;s actually more informative than anything else.</p><h3>01:49:17 &#8211; What it takes to actually internalize what you learn</h3><p><strong>Dwarkesh Patel</strong></p><p>You have this <a href="https://michaelnotebook.com/dci/index.html">essay</a> that I was reading before this interview about how you think about what the work you&#8217;re doing is. And &#8220;writer&#8221; doesn&#8217;t seem like the right label. As you say, was Charles Darwin a writer? What exactly is that label? I&#8217;m a podcaster. In a way, obviously our work is very different, but I also think a lot about what this work is and how I get better at it.</p><p>In particular, how can I make sure there&#8217;s some compounding between the different people I talk to on the podcast? I worry that instead of this compounding, I build up some understanding that&#8217;s somewhat superficial about a topic, and then it depreciates. I move down to the next topic, and it depreciates. There are a lot of podcasters in the world who will interview way more experts than I have, and I don&#8217;t think they&#8217;re much the wiser or more knowledgeable as a result. So it&#8217;s clearly possible to mess this up.</p><p>I wonder if you have thoughts or takes or advice on how one actually learns in a deeper way from this kind of work.</p><p><strong>Michael Nielsen</strong></p><p>It&#8217;s an incredibly complicated and rich question. It seems like the question is, how do you make it a higher-growth context? How do you make it a more demanding context? You can do that in relatively small ways that might yield compounding returns, or you can do something that is more radical. Maybe it means starting a parallel project in which you do something that is actually quite a bit different.</p><p>There is something really interesting about how being very demanding can simply change your response to something. 
Something I would sometimes do with students, and sometimes with myself (it was really aimed more at myself): they would say some week, &#8220;I&#8217;m going to try and do this work over the coming week.&#8221; Then the next week would come by and they hadn&#8217;t solved the problem. I&#8217;d ask: if a million dollars had been at stake, would you have put the same effort in? And the answer is no, invariably. They&#8217;ve tried, but they haven&#8217;t really tried.</p><p>I think that&#8217;s a very familiar feeling for all of us. You could do a lot more if you had just the right demanding taskmaster standing by you and saying, &#8220;Look, you&#8217;re barely operating here.&#8221; I do wonder a little bit about what the demanding taskmaster is. What can they ask you that is going to make your preparation way more intense?</p><p><strong>Dwarkesh Patel</strong></p><p>The most helpful thing honestly is&#8230; For some subjects it is very clear how I prep. I&#8217;m doing an upcoming episode on chip design with the founder of a company that does chip design, and he wrote a textbook on it. Yesterday I went over to his office, and we brainstormed five <a href="https://en.wikipedia.org/wiki/Roofline_model">roofline analyses</a> I can do. If I understand that, I have some good understanding.</p><p>The problem is with almost every other field, there&#8217;s not this curriculum. <a href="https://www.dwarkesh.com/p/ilya-sutskever?utm_source=publication-search">When I interviewed Ilya</a> three, four years ago, it was: implement the <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformer</a>, and if you implement it, you have some nugget of understanding you have clamped down. With other fields, it&#8217;s just that I vaguely understand this. It&#8217;s not clamped. 
There&#8217;s no forcing function of &#8220;do this exercise, and if you do it, you will understand.&#8221;</p><p><strong>Michael Nielsen</strong></p><p>Really what you&#8217;re saying is you can do a good job at podcasting without actually attaining this kind of understanding, and that&#8217;s the problem from your point of view. You want to change your job description so that you are internalizing these chunks and just getting this kind of integration each time. It seems to me that what that means is you actually want to change the structure of the work output at some level.</p><p>There&#8217;s this terrible idea that lots of people have that they should be in <a href="https://en.wikipedia.org/wiki/Flow_(psychology)">flow</a> all of the time. And as far as I can tell, high performers just don&#8217;t believe this at all. They&#8217;re in flow some of the time. You certainly see this with athletes. When they&#8217;re actually out there playing basketball or tennis, ideally they are in flow much of the time. But when they&#8217;re training they&#8217;re not. They&#8217;re stuck a lot of the time, or they&#8217;re doing things badly. I suppose I wonder what that looks like for you.</p><p><strong>Dwarkesh Patel</strong></p><p>That I would be extremely satisfied with. The problem is I just don&#8217;t know what the equivalent of doing 64 laps is. This is a thing you can change by choosing guests where there is a legible curriculum. So maybe it&#8217;s a mistake not to have done that. Also, there&#8217;s no real way to prep for Terence Tao. There&#8217;s no curriculum that&#8217;s a plausible one.</p><p>There are many failure modes, but one long-term dynamic I&#8217;m worried about is that you can have a good podcast and reach a local maximum, but for no particular guest or topic are you going deep enough. My model of learning is that if you don&#8217;t really understand the deeper mechanism, you&#8217;re just mapping inputs and outputs of a black box. 
That just fades incredibly fast or is not worth it in the first place. You just move on and it&#8217;s over. You need to build the intermediate connection.</p><p>AI in a weird way is really easy for that reason, because there is a clear thing you can do. Just implement it, and then you understand it. If I applied that criterion elsewhere, do I just not do history episodes?</p><p><strong>Michael Nielsen</strong></p><p>Exactly. Ada Palmer. Wonderful to talk to, incredibly interesting. But for you personally, what changed?</p><p><strong>Dwarkesh Patel</strong></p><p>There are some things I learned. It would have helped if I had allocated more time, especially after the interview, to write up 2,000 words on everything I learned and how it connects to other things I know. Maybe that&#8217;s a thing worth doing, spreading out the episodes more and spending more time afterwards consolidating.</p><p>I would pay infinite amounts of money if there was somebody who was really good at coming up with the curriculum, the practice problems you need to do, and the exercise you need to do after the interview to clamp what you have learned.</p><p><strong>Michael Nielsen</strong></p><p>Have you tried doing that with somebody?</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s hard to find someone. I haven&#8217;t tried super hard, but isn&#8217;t it going to be tough to find somebody who could do that for every single kind of discipline? Maybe I should just hire different ones for different topics.</p><p><strong>Michael Nielsen</strong></p><p>Maybe. There&#8217;s something about, what problem are you solving for each episode? As far as I can tell, that&#8217;s the only way I really understand anything. I get interested in something. At first, I don&#8217;t even have a problem, but there&#8217;s just some sense that there&#8217;s some contribution to make here, and gradually you home in, and there&#8217;s a problem.</p><p>Funnily enough, spending time stuck is incredibly important. 
That used to just be annoying. Now it seems like it&#8217;s maybe even the most important part of the whole process. That hard-won nature of it means that I internalize it afterwards. I&#8217;ve written 10,000-word essays in a couple of days, and I&#8217;ve written them in three months or six months. I feel like I didn&#8217;t learn very much from the ones that only took a couple of days. Whereas some of the ones that took three months, 15 years later, I&#8217;ll still remember.</p><p><strong>Dwarkesh Patel</strong></p><p>Can you describe, outside of physics, how you learn from the ones that took three months?</p><p><strong>Michael Nielsen</strong></p><p>By far the most common thing is there&#8217;s always some creative artifact. Sometimes it&#8217;s a class. Sometimes it&#8217;s engagement with a group of people who are working on some collective creative artifact together. You might not even be aware of it, but you&#8217;re acting as an input to their creative ends in some way. Sometimes it&#8217;s an essay or a book or whatever.</p><p>It&#8217;s one of the reasons why I often quite enjoy doing podcasts. I said yes to coming here partially because I know you ask unusually demanding questions. That&#8217;s an attempt to get this sort of perspective from a different kind of forcing function. Trying to pick the most demanding creative context.</p><p><strong>Dwarkesh Patel</strong></p><p>For this interview, I went through three lectures of the <a href="https://en.wikipedia.org/wiki/Leonard_Susskind">Susskind</a> <a href="https://amzn.to/3PXeyDe">special relativity book</a>. The problem is that there are almost no practice problems in it. So I hired a physicist friend. I haven&#8217;t done it yet, but for every lecture I want a bunch of practice problems to go through, and I&#8217;m planning on being appropriately humbled.</p><p><strong>Michael Nielsen</strong></p><p>How do you make it as jugular as possible? 
The higher you can raise the stakes, the better.</p><p><strong>Dwarkesh Patel</strong></p><p>The interview is in some sense high stakes, but also it doesn&#8217;t necessarily test deep understanding.</p><p><strong>Michael Nielsen</strong></p><p>I don&#8217;t think the interview is that high stakes. You&#8217;re not writing a book about special relativity, and you&#8217;re not trying to write a book that replaces whatever the existing standard textbook is. That would be really high stakes.</p><p>By the way, there&#8217;s a phrase that I find particularly difficult. People will talk about &#8220;going deep&#8221; on a subject, and it turns out different people have different ideas of what this means. For some people it means they read a couple of blog posts. For some people it means they read a book about it. For some people it means they wrote a book about it. The standard you hold yourself to determines a lot about your ability to integrate knowledge in this way.</p><p><strong>Dwarkesh Patel</strong></p><p>I found that I&#8217;m in some sense able to move much faster on some things through the help of AI, but I don&#8217;t know if I&#8217;m learning better. I think it&#8217;s probably because&#8230; The hardest thing, the thing that is most demanding, is so aversive that you try to take any excuse you can to get out of it. Just having a back-and-forth conversation with an LLM where you gloss over&#8230;</p><p><strong>Michael Nielsen</strong></p><p>It&#8217;s entertaining but not necessarily anything else.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s such an easy way to get out of the thing. In fact, it makes it easier because instead of doing some intermediate thinking, there&#8217;s always a next question you can ask a chatbot.</p><p><strong>Michael Nielsen</strong></p><p>Yeah. And it&#8217;s somewhat valuable. That&#8217;s part of the seductiveness, of course. It&#8217;s not actually useless. 
But it can substitute for actually doing the thing that maybe you should be doing. It&#8217;s interesting. To what extent should you be outsourcing that kind of stuff? It&#8217;s an interesting judgment call. There is a whole bunch of routine work that you want done. It&#8217;s low value for you, so if you can get a chatbot to do it, you may as well.</p><p>Somebody interviewed the pioneering computer scientist <a href="https://en.wikipedia.org/wiki/Alan_Kay">Alan Kay</a> years ago, and he was asked what he thought about <a href="https://en.wikipedia.org/wiki/Linux">Linux</a>. If I remember his answer correctly, he basically said, &#8220;It doesn&#8217;t have anything to do with computer science. It&#8217;s just a great big ball of mud. There are a few interesting ideas in there which are worth understanding, but mostly all you&#8217;re learning is stuff about Linux. You&#8217;re not actually learning anything which is transferable.&#8221; I thought that was very interesting.</p><p>There&#8217;s a certain kind of seductiveness to some things where it&#8217;s sort of a Rube Goldberg machine. You can just learn about all the bits, and it feels entertaining. But if you step back and think about what you&#8217;re actually doing here, it might not actually be meeting your objectives. Maybe you want to become a sysadmin, and learning Linux is a great use of your time. There&#8217;s no harm in that at all.</p><p>But if your objective is to understand the fundamentals of computing, it&#8217;s much less clear that that&#8217;s a good use of your time. It was certainly an answer I&#8217;ve thought a lot about, where for a certain type of mind, there is a seductiveness in just learning systems and confusing that with understanding.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, I&#8217;ll keep you updated on how this goes. I owe you a text within a month of some revamped learning system.</p><p><strong>Michael Nielsen</strong></p><p>I&#8217;d be really curious. 
It&#8217;s also true that tiny incremental improvements in this are just worth so much.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s the main input into the podcast. It&#8217;s great that the bookshelves are fancy and I&#8217;ve got a blackboard or whatever, but really the thing that makes the podcast better is if I can improve the learning I do. So yes, it&#8217;s worth every morsel of improvement. All right, thanks for the therapy session. Great note to end on. Thanks, Michael.</p><p><strong>Michael Nielsen</strong></p><p>All right. Thanks, Dwarkesh.</p>]]></content:encoded></item><item><title><![CDATA[Terence Tao – Kepler, Newton, and the true nature of mathematical discovery]]></title><description><![CDATA[&#8220;And what those stories teach us about how AI will revolutionize math&#8221;]]></description><link>https://www.dwarkesh.com/p/terence-tao</link><guid isPermaLink="false">https://www.dwarkesh.com/p/terence-tao</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Fri, 20 Mar 2026 16:00:55 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191582481/11a15f3b7f6a04e3220e25ff38a2cd20.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion.</p><p>People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops.</p><p>But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long.</p><p>During this time, what we know today as the better theory can often actually make <em>worse</em> predictions (Copernicus's model of circular orbits around the sun was actually less accurate than Ptolemy's geocentric model).</p><p>And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don&#8217;t even understand well 
enough to actually articulate, much less codify into an RL loop.</p><p>Hope you enjoy!</p><p>Watch on <a href="https://youtu.be/Q8Fkpi18QXU">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/terence-tao-kepler-newton-and-the-true/id1516093381?i=1000756353875">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/24xF8YGra2w3HXZYbhgVKU?si=U5V-SgvSQ8eVIcG2Z86wfQ">Spotify</a>.</p><div id="youtube2-Q8Fkpi18QXU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Q8Fkpi18QXU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Q8Fkpi18QXU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Sponsors</h2><ul><li><p><a href="https://janestreet.com/dwarkesh">Jane Street</a> loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street&#8217;s ResNet challenge and <a href="https://x.com/hynwprk/status/2026376546286711206">posted a great walk-through on X</a>. If you want to try one of these puzzles yourself, there&#8217;s one live now at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a>.</p></li></ul><ul><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train <em>how</em> it thinks, not just <em>what</em> it thinks. Whatever you&#8217;re focused on&#8212;math, physics, finance, psychology or something else&#8212;Labelbox can help. Learn more at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a>.</p></li><li><p><a href="https://mercury.com/insights">Mercury</a> just released a new feature called Insights. 
Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It&#8217;s a super low-friction way to stay on top of your business. Learn more at <a href="https://mercury.com/insights">mercury.com/insights</a>.</p></li></ul><h2>Timestamps</h2><p>(00:00:00) &#8211; Kepler was a high temperature LLM</p><p>(00:11:44) &#8211; How would we know if there&#8217;s a new unifying concept within heaps of AI slop?</p><p>(00:26:10) &#8211; The deductive overhang</p><p>(00:30:31) &#8211; Selection bias in reported AI discoveries</p><p>(00:46:43) &#8211; AI makes papers richer and broader, but not deeper</p><p>(00:53:00) &#8211; If AI solves a problem, can humans get understanding out of it?</p><p>(00:59:20) &#8211; We need a semi-formal language for the way that scientists actually talk to each other</p><p>(01:09:48) &#8211; How Terry uses his time</p><p>(01:17:05) &#8211; Human-AI hybrids will dominate math for a lot longer</p><h2>Transcript</h2><h3>00:00:00 &#8211; Kepler was a high temperature LLM</h3><p><strong>Dwarkesh Patel</strong></p><p>Today, I&#8217;m chatting with <a href="https://en.wikipedia.org/wiki/Terence_Tao">Terence Tao</a>, who needs no introduction. Terence, I want to begin by having you retell the story of how <a href="https://en.wikipedia.org/wiki/Johannes_Kepler">Kepler</a> discovered the <a href="https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion">laws of planetary motion</a> because I think this will be a great jumping off point to talk about AI for math.</p><p><strong>Terence Tao</strong></p><p>I&#8217;ve always had an amateur interest in astronomy. I&#8217;ve loved stories of how the early astronomers worked out the nature of the universe. Kepler was building on the work of <a href="https://en.wikipedia.org/wiki/Nicolaus_Copernicus">Copernicus</a>, who was himself building on the work of <a href="https://en.wikipedia.org/wiki/Aristarchus_of_Samos">Aristarchus</a>. 
Copernicus very famously proposed the <a href="https://en.wikipedia.org/wiki/Copernican_heliocentrism">heliocentric model</a>, that instead of the planets and the Sun going around the Earth, the Sun was at the center of the solar system and the other planets were going around the Sun.</p><p>Copernicus proposed that the orbits of the planets were perfect circles. His theory fit the observations that the Greeks, the Arabs, and the Indians had worked out over centuries. Kepler learned about these theories in his studies, and he made this observation that the ratios of the size of the orbits that Copernicus predicted seemed to have some geometric meaning.</p><p>He started proposing that if you take the orbit of the Earth and you enclose it in a cube, the outer sphere that encloses the cube almost perfectly matched the orbit of Mars, and so forth. There were six planets known at the time and five gaps between them, and there were five perfect Platonic solids: the cube, the tetrahedron, icosahedron, octahedron, and dodecahedron.</p><p>So he had this <a href="https://en.wikipedia.org/wiki/Mysterium_Cosmographicum">theory</a>, which he thought was absolutely beautiful, that you could inscribe these <a href="https://en.wikipedia.org/wiki/Platonic_solid">Platonic solids</a> between the spheres of the planets. It seemed to fit, and it seemed to him that God&#8217;s design of the planets was matching this mathematical perfection of the Platonic solids.</p><p>He needed data to confirm this theory. At the time, there was only one really high-quality dataset in existence. <a href="https://en.wikipedia.org/wiki/Tycho_Brahe">Tycho Brahe</a>, this very wealthy, eccentric Danish astronomer, had managed to convince the Danish government to fund this extremely expensive observatory. In fact, it was an entire island where he had taken decades of observations of all the planets, like Mars and Jupiter, at least every night for which the weather was clear, with the naked eye. 
He was the last of the naked-eye astronomers.</p><p>He had all this data which Kepler could use to confirm his theory. Kepler started working with Tycho, but Tycho was very jealous of the data. He only gave him little bits of it at a time. Kepler eventually just stole the data. He copied it and had to have a fight with Brahe&#8217;s descendants.</p><p>He did get the data, and then he worked out, to his disappointment, that his beautiful theory didn&#8217;t quite work. The data was off from his Platonic solid theory by 10% or something. He tried all kinds of fudges, moving the circles around, and it didn&#8217;t quite work. But he worked on this problem for years and years, and eventually, he figured out how to use the data to work out the actual orbits of the planets.</p><p>That was an incredibly clever, genius amount of data analysis. And then he worked out that the orbits were actually ellipses, not circles, which was shocking for him. So he worked out the two laws of planetary motion: the ellipses, and also that a planet sweeps out equal areas in equal times.</p><p>Then ten years later, after collecting a lot of data&#8212;the furthest planets like Saturn and Jupiter were the hardest for him to work out&#8212;he finally worked out this third law, that the time it takes for a planet to complete its orbit was proportional to some power of the distance to the Sun. These are the three famous Kepler&#8217;s laws of motion. He had no explanation for them. It was all driven by experiment, and it took <a href="https://en.wikipedia.org/wiki/Isaac_Newton">Newton</a> a century later to give a theory that explained all three laws at once.</p><p><strong>Dwarkesh Patel</strong></p><p>The take I want to try on you is that Kepler was a <a href="https://www.ibm.com/think/topics/llm-temperature">high-temperature LLM</a>. Newton comes up with this explanation of why the three laws of planetary motion must be true. 
Of course, the way that Kepler discovers the laws of planetary motion, or figures out the relative orbits of the different planets, is, as you say, a work of genius. But through his career, he&#8217;s just trying random relationships.</p><p>In fact, in the book in which he writes down the third law of planetary motion, it&#8217;s an aside in <em><a href="https://en.wikipedia.org/wiki/Harmonice_Mundi">The Harmony of the World</a></em>, which is just a book about how all these different planets have these different harmonies. And the reason there&#8217;s so much famine and misery on Earth is because the Earth is mi-fa-mi, that&#8217;s the note of Earth. It&#8217;s all this random astrology, but in there is the cube-square law, which tells you what relationship the period has to a planet&#8217;s distance from the Sun. As you were detailing, if you add that to Newton&#8217;s <a href="https://en.wikipedia.org/wiki/Newton's_laws_of_motion">F=ma</a> and the <a href="https://en.wikipedia.org/wiki/History_of_centrifugal_and_centripetal_forces">equation for centripetal acceleration</a>, you get the <a href="https://en.wikipedia.org/wiki/Inverse-square_law">inverse-square law</a>. And so Newton works that out.</p><p>But the reason I think this is an interesting story is that I feel LLMs can do the kind of thing of trying random relationships for twenty years, some of which make no sense, as long as there&#8217;s a verifiable data bank like Brahe&#8217;s dataset. &#8220;Ok, I&#8217;m going to try out random things about musical notes, Platonic objects, or different geometries. I have this bias that there&#8217;s some important thing about the geometry of these orbits.&#8221;</p><p>Then one thing works. As long as you can verify it, these empirical regularities can then drive actual deep scientific progress.</p><p><strong>Terence Tao</strong></p><p>Traditionally, when we talk about the history of science, idea generation has always been the prestige part of science. 
A scientific problem comes with many steps. You have to identify a problem, and then you have to identify a good, fruitful problem to work on. Then you need to collect data, figure out a strategy to analyze the data, and make a hypothesis. At this point, you need to propose a good hypothesis, and then you need to validate. Then you need to write things up and explain. There are a dozen different components.</p><p>The ones we celebrate are these eureka genius moments of idea generation. Kepler certainly had to cycle through many ideas, several of which didn&#8217;t work. I bet there were many that he didn&#8217;t even publish at all because they just didn&#8217;t fit. That&#8217;s an important part of the process, trying all kinds of random things and seeing if they worked.</p><p>But as you say, it has to be matched by an equal amount of verification, otherwise it&#8217;s slop. We celebrate Kepler, but we should also celebrate Brahe for his assiduous data collection, which was ten times more precise than any previous observation. That extra decimal point of accuracy was essential for Kepler to get his results. He was using <a href="https://en.wikipedia.org/wiki/Euclidean_geometry">Euclidean geometry</a> and the most advanced mathematics available at the time to match his models with the data. All aspects had to be in play: the data, the theory, and the hypothesis generation.</p><p>I&#8217;m not sure nowadays that hypothesis generation is the bottleneck anymore. Science has changed in the centuries since. Classically, the two big paradigms for science were theory and experiment. Then in the 20th century, numerical simulation came along, so you can do computer simulations to test theories. Finally, in the late 20th century, we had big data. We had the era of data analysis.</p><p>A lot of new progress is actually driven now by analyzing massive datasets first. You collect large datasets and then draw patterns from them to generate hypotheses. 
This is a little bit different from how science used to work, where you make a few observations or have one out-of-the-blue idea, and then collect data to test your idea. That&#8217;s the classic scientific method. Now it&#8217;s almost reversed. You collect big data first, and then you try to get hypotheses from it.</p><p>Kepler was maybe one of the first early data scientists, but even he didn&#8217;t start with Tycho&#8217;s dataset and then analyze it. He had some preconceived theories first. It seems like this is less and less the way we make progress, just because the data is so much more massive and useful.</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, interesting. I feel like the 20th-century science that you&#8217;re describing actually very well describes what happened with Kepler. He did have these ideas&#8212;1595 and &#8216;96 is where he comes up with the polygons and then the Platonic objects theory&#8212;but they were wrong. Then a few years later, he gets Brahe&#8217;s data, and it&#8217;s only after twenty years of trying random things that he gets this empirical regularity.</p><p>It actually feels a bit closer to Brahe&#8217;s data being analogous to some massive data bank of simulations, and now that you&#8217;ve got the data, you can keep trying random things. If it wasn&#8217;t for that, Kepler would be out there just writing books about harmonics and Platonic objects, and there would be nothing to actually verify against.</p><p><strong>Terence Tao</strong></p><p>The data was extremely important. The distinction I was trying to make was that traditionally, you make a hypothesis and then you test it against data. But now with machine learning, data analysis, and statistics, you can start with data and through statistics work out laws that were not present before.</p><p>Kepler&#8217;s third law is a little bit like this, except that instead of having the thousand data points that Brahe had, Kepler had six data points. 
For every planet, he knew the length of the orbit and the distance to the Sun. <a href="https://terrytao.wordpress.com/wp-content/uploads/2025/11/sample_fourth.pdf">There were five or six data points, and he did what we would now call regression</a>. He fit a curve to these six data points and got a square-cube law, which was amazing. But he was quite lucky that these six data points gave him the right conclusion. That&#8217;s not enough data to be really reliable.</p><p>There was a later astronomer, <a href="https://en.wikipedia.org/wiki/Johann_Elert_Bode">Johann Bode</a>, who took the same data&#8212;the distances to the planets&#8212;and inspired by Kepler, he had a prediction that the distances to the planets formed a shifted geometric progression. He also fit a curve, except there was one point missing. There was a big gap between Mars and Jupiter. His law predicted that there was a missing planet. It was kind of a crank theory, except when Uranus was discovered by <a href="https://en.wikipedia.org/wiki/William_Herschel">Herschel</a>, the distance to Uranus fit exactly this pattern. Then <a href="https://en.wikipedia.org/wiki/Ceres_(dwarf_planet)">Ceres</a> was discovered in the <a href="https://en.wikipedia.org/wiki/Asteroid_belt">asteroid belt</a>, and it also fit the pattern. People got really excited that Bode had discovered <a href="https://en.wikipedia.org/wiki/Titius%E2%80%93Bode_law">this amazing new law of nature</a>.</p><p>But then Neptune was discovered, and it was way off. Basically it was just a numerical fluke. There were six data points. 
Maybe one reason why Kepler didn&#8217;t highlight his third law as much as the first two laws is that instinctively, even though he didn&#8217;t have modern statistics, he kind of knew that with six data points, he had to be somewhat tentative with the conclusions.</p><h3>00:11:44 &#8211; How would we know if there&#8217;s a new unifying concept within heaps of AI slop?</h3><p><strong>Dwarkesh Patel</strong></p><p>To ask the question about the analogy more explicitly, does this analogy make sense if in the future we have smarter and smarter AIs? We&#8217;ll have millions of them, and they can go out and hunt for all these empirical regularities. It sounds like you don&#8217;t think the bottleneck in science is finding more things that are the equivalent of the third law of planetary motion for each given field, so that later on somebody can say, &#8220;Oh, we need a way to explain this. Let&#8217;s work out the math. Here&#8217;s the <a href="https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation">inverse-square law of gravity</a>.&#8221;</p><p><strong>Terence Tao</strong></p><p>I think AI has driven the cost of idea generation down to almost zero, in a very similar way to how the internet drove the cost of communication down to almost zero. It&#8217;s an amazing thing, but it doesn&#8217;t create abundance by itself. Now the bottleneck is different. We&#8217;re now in a situation where suddenly people can generate thousands of theories for a given scientific problem. Now we have to verify them, evaluate them. We have to change our structures of science to actually sort this out.</p><p>Traditionally, we build walls. In the past, before we had AI slop, we had amateur scientists with their own theories of the universe, many of which were of very little value. 
We built these peer review publication systems to filter out the noise and isolate the high-signal ideas worth testing.</p><p>But now that we can generate these possible explanations at massive scale, and some of them are good and a lot are terrible, human reviewers are already being overwhelmed. Many journals are reporting that <a href="https://www.nature.com/articles/d41586-025-03967-9">AI-generated papers are just flooding their submissions</a>.</p><p>It&#8217;s great that we can generate all kinds of things now with AI, but it means that the rest of the aspects of science have to catch up: verification, validation, and assessing what ideas actually move the subject forward and which ones are dead ends or red herrings. That&#8217;s not something we know how to do at scale. For each individual paper, we can have a debate among scientists and get to a consensus in a few years. But when we&#8217;re generating a thousand of these every day, this doesn&#8217;t work.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s this incredibly interesting question. If you have billions of AI scientists, not only how do you gauge which ones are real progress, but how do you... This is actually a question that human science has had to face and we&#8217;ve solved somehow, and I&#8217;m actually not sure how we solved this.</p><p>Let&#8217;s say in the 1940s, if you&#8217;re at <a href="https://en.wikipedia.org/wiki/Bell_Labs">Bell Labs</a> and there are these new technologies coming out. <a href="https://en.wikipedia.org/wiki/Pulse-code_modulation">Pulse-code modulation</a>, how do you transfer signals? How do you digitize signals? How do you transfer them over analog wires? There are all these papers about the engineering constraints and the details, and then there&#8217;s one which comes up with the <a href="https://en.wikipedia.org/wiki/Bit#History">idea of the bit</a>, which has implications across many different fields. 
You need some system which can then look at that and say, &#8220;Okay, we need to apply this to probability. We need to apply this to computer science,&#8221; et cetera.</p><p>In the future, the AIs are coming up with the next version of this unifying concept. How would you identify it among millions of papers that might actually constitute progress, but which have much less in terms of general unifying ideas?</p><p><strong>Terence Tao</strong></p><p>A lot of it&#8217;s the test of time. Many great ideas didn&#8217;t actually get a great reception at the time they were first proposed. It was only after some other scientists realized that they could take them further and apply them to their own... <a href="https://en.wikipedia.org/wiki/Deep_learning">Deep learning</a> itself was a niche area of AI for a long time. The idea of getting answers entirely through training on data and not through first principles reasoning was very controversial, and it just took a long time before it started bearing fruit.</p><p>You mentioned the bit. There were proposals for computer architectures other than the zero-one that is universal today. I think there were <a href="https://en.wikipedia.org/wiki/Ternary_computer">trits</a>, three-valued logic. In an alternate universe, maybe a different paradigm would have shown up. The <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformer</a>, for example, is the foundation of all modern <a href="https://en.wikipedia.org/wiki/Large_language_model">large language models</a>, and it was the first deep learning architecture that really was sophisticated enough to capture language. But it didn&#8217;t have to be that way. There could&#8217;ve been some other architecture that was the first to do it, and once it was adopted, it would have become the standard.</p><p>One reason why it&#8217;s hard to assess whether a given idea is going to be fruitful is that it depends on the future. 
It depends also on the culture and society, which ones get adopted, which ones don&#8217;t. The <a href="https://en.wikipedia.org/wiki/Decimal">base ten numeral system</a> in mathematics is extremely useful, much better than the <a href="https://en.wikipedia.org/wiki/Roman_numerals">Roman numeral system</a>, for instance. But again, there&#8217;s nothing special about ten. It&#8217;s a system that is useful for us because everyone else uses it. We&#8217;ve standardized it. We&#8217;ve built all our computers and our number representation systems around it, so we&#8217;re stuck with it now. Some people occasionally push for systems other than decimal, but there&#8217;s just too much inertia.</p><p>It&#8217;s not something where you can look at any given scientific achievement purely in isolation and give it an objective grade without being aware of the context both in the past and the future. So it may never be something that you can just <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">reinforcement learn</a> the same way that you can for much more localized problems.</p><p><strong>Dwarkesh Patel</strong></p><p>Often in the history of science, when a new theory comes up that in retrospect we realize is correct, it seems to have implications that either make no sense because they&#8217;re wrong, and we only realize later on why, or are correct but seem wildly implausible at the time.</p><p>As you talked about, Aristarchus had heliocentrism in the third century BC. 
The ancient Athenians were like, &#8220;This can&#8217;t be because if the earth is going around the sun, we should see the relative position of the stars change as we&#8217;re going around the sun, and the only way that wouldn&#8217;t be the case is if they&#8217;re so far away that you don&#8217;t notice any parallax,&#8221; which is actually the correct implication.</p><p>But there&#8217;s times when the implication is incorrect and we just need to graduate to a better level of understanding. <a href="https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz">Leibniz</a> would chide Newton and disagree with <a href="https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation">Newton&#8217;s theory of gravity</a> on the basis that it implied <a href="https://en.wikipedia.org/wiki/Action_at_a_distance">action at a distance</a>, and they didn&#8217;t know the mechanism, and Newton himself was sort of stunned that <a href="https://en.wikipedia.org/wiki/Equivalence_principle">inertial mass and gravitational mass were the same quantity</a>. All these things later were resolved by <a href="https://en.wikipedia.org/wiki/Albert_Einstein">Einstein</a>. But it was still progress.</p><p>So the question for a system of peer review for AI would be: even if you can falsify a theory, how would you notice that it still constitutes progress relative to the thing before?</p><p><strong>Terence Tao</strong></p><p>Often, the ultimately correct theory initially is worse in many ways. Copernicus&#8217;s theory of the planets was less accurate than Ptolemy&#8217;s theory. Geocentrism had been developed for a millennium by that point, and they had made many tweaks and increasingly complicated ad hoc fixes to make it more and more accurate. Copernicus&#8217;s theory was a lot simpler but much less accurate. It was only Kepler that made it more accurate than Ptolemy&#8217;s theory.</p><p>Science is always a work in progress. 
When you only get part of the solution, it looks worse than a theory which is incorrect but somehow has been completed to the point where it kind of answers all the questions. As you say, Newton&#8217;s theory had big mysteries. There was the equivalence of inertial and gravitational mass, and action at a distance, which were only resolved with <a href="https://en.wikipedia.org/wiki/General_relativity">a very conceptually different approach</a> centuries afterwards.</p><p>Often progress has to be made not by adding more theories, but by deleting some assumptions that you have in your mind. One reason why geocentrism held on for so long is we had this idea that objects naturally want to stay at rest. This is the <a href="https://en.wikipedia.org/wiki/Aristotelian_physics">Aristotelian notion of physics</a>, and so the idea that the Earth was moving&#8230; How come we weren&#8217;t all falling over? Once you have Newton&#8217;s laws of motion&#8212;an object in motion remains in motion and so forth&#8212;then it makes sense.</p><p>Conceptually, it&#8217;s a very big leap to realize that the Earth is in motion. It doesn&#8217;t feel like it&#8217;s in motion. One of the biggest advances, <a href="https://en.wikipedia.org/wiki/Darwinism">Darwin&#8217;s theory of evolution</a>, is the idea that species are not static. This is not obvious because you don&#8217;t see evolution in your lifetime. Well, now we actually can, but species seem permanent and static.</p><p>Right now we&#8217;re going through a cognitive version of the Copernican revolution, where we used to think that human intelligence is the center of the universe, and now we&#8217;re seeing that there are very different types of intelligence out there with very different strengths and weaknesses. Our assessment of which tasks require intelligence, which ones don&#8217;t, has to be reordered quite a bit.</p><p>Trying to fit AI into our theories of scientific progress and what is hard and what is easy, we&#8217;re struggling quite a lot. 
We have to ask questions that we&#8217;ve never really had to ask before. Or maybe the philosophers had, but now we all have to deal with it.</p><p><strong>Dwarkesh Patel</strong></p><p>This brings up a topic I&#8217;ve been very curious about. You mentioned Darwin&#8217;s theory of evolution. There&#8217;s this book, <em><a href="https://amzn.to/4bDfFzc">The Clockwork Universe</a></em> by <a href="https://en.wikipedia.org/wiki/Edward_Dolnick">Edward Dolnick</a>, which covers a lot of this era of history we&#8217;re talking about. He has this interesting observation in there. <em><a href="https://en.wikipedia.org/wiki/On_the_Origin_of_Species">The Origin of Species</a></em> was published in 1859. <em><a href="https://en.wikipedia.org/wiki/Philosophi%C3%A6_Naturalis_Principia_Mathematica">Principia Mathematica</a></em> was published in 1687.</p><p>So <em>The Origin of Species</em> comes out two centuries after <em>Principia</em>. Conceptually, it seems like Darwin&#8217;s theory is simpler. There&#8217;s a contemporaneous biologist to Darwin, <a href="https://en.wikipedia.org/wiki/Thomas_Henry_Huxley">Thomas Huxley</a>, who reads <em>The Origin of Species</em> and he says, &#8220;How stupid not to have thought of that.&#8221;</p><p>Nobody ever says that about <em>Principia</em>, chiding themselves for not having beaten Newton to <a href="https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation">gravity</a>. So there&#8217;s a question of why did it take longer?</p><p>It seems like a big part of the reason is what you were saying. The evidence for natural selection is overwhelming in a certain sense, but it&#8217;s cumulative and retrospective, whereas Newton can just say, &#8220;Here are my equations. 
Let me see the moon&#8217;s orbital period and its distance, and if it lines up, then we&#8217;ve made progress.&#8221;</p><p><a href="https://en.wikipedia.org/wiki/Lucretius">Lucretius</a> actually had this idea that species adapted to their environment in the first century BC but nobody really talks about it until Darwin because Lucretius couldn&#8217;t run some experiment and force people to pay attention. I wonder if we&#8217;ll in retrospect end up seeing much more progress in domains which have this kind of tight data loop where you can verify them quite easily, even though they&#8217;re conceptually much more difficult.</p><p><strong>Terence Tao</strong></p><p>I think one aspect of science is that it&#8217;s not just creating a new theory and validating it, but communicating it to others. Darwin was an amazing science communicator. He wrote in English, in natural language. I&#8217;m speaking like a&#8212;</p><p><strong>Dwarkesh Patel</strong></p><p>No <a href="https://en.wikipedia.org/wiki/Lean_(proof_assistant)">Lean</a>.</p><p><strong>Terence Tao</strong></p><p>I have to get out of my technical mindset. He spoke in plain English, didn&#8217;t use equations, and he synthesized a lot of disparate facts. Little pieces of evolution had been worked out in the past, but he had this very compelling vision. Again, he was still missing things. He didn&#8217;t know the mechanism for <a href="https://en.wikipedia.org/wiki/Heredity">heredity</a>, he didn&#8217;t have DNA. But his writing style was persuasive, and that helped a lot.</p><p>Newton wrote in Latin. He had invented entire new areas of mathematics just to explain what he was doing. He was also from an era where scientists were much more secretive and competitive. Academia is still competitive, but it was even worse back in Newton&#8217;s day. He held back some of his best insights because he didn&#8217;t want his rivals to get any advantage. He was also a somewhat unpleasant person from what I gather. 
It was only a couple of decades after Newton, when other scientists explained his work in much simpler terms, that his ideas became widespread.</p><p>The art of exposition and making a case and creating a narrative is also a very important part of science. If you have the data, it helps, but people need to be convinced, otherwise they will not push it further or make the initial investment to learn your theory and really explore it. That&#8217;s another thing which is really hard to reinforcement learn on. How can you score how persuasive you are? Well, there are entire marketing departments trying to do this. Maybe it&#8217;s good that AI is not yet optimized to be persuasive.</p><p>There&#8217;s a social aspect to science. Even though we pride ourselves on having an objective side to it, where there&#8217;s data and experiment and validation, we still have to tell stories and convince our fellow scientists. That&#8217;s a soft, squishy thing. It&#8217;s a combination of data and painting a narrative, and it&#8217;s a narrative of gaps.</p><p>Even with Darwin, as I said, there were pieces of his theory he could not explain. But he could still make a case that in the future, people would find transitional forms, that they would find the mechanism of inheritance, and they did. I don&#8217;t know how you can quantify that in such a precise way that you can start doing reinforcement learning. Maybe that will be forever the human side of science.</p><h3>00:26:10 &#8211; The deductive overhang</h3><p><strong>Dwarkesh Patel</strong></p><p>One takeaway I had from reading and watching your stuff on the <a href="https://terrytao.wordpress.com/wp-content/uploads/2010/10/cosmic-distance-ladder.pdf">cosmic distance ladder</a>&#8230; By the way, I highly recommend people watch <a href="https://www.youtube.com/watch?v=YdOXS_9_P4U">your series</a> with <a href="https://www.youtube.com/c/3blue1brown">3Blue1Brown</a> on the cosmic distance ladder. 
One takeaway was that the deductive overhang in many fields could be so much bigger than people realize. If you just had the right insight about how to study a problem, you might be surprised at how much more you could learn about the world.</p><p>I wonder if you think that&#8217;s a product of astronomy at the particular times in history that you&#8217;re studying. Or is it just that based on the data that is incident on the Earth right now, we could actually divine a lot more than we happen to know?</p><p><strong>Terence Tao</strong></p><p>Astronomy was one of the first sciences to really embrace data analysis and squeezing every last possible drop of information out of the data they had, because data was the bottleneck. It still is the bottleneck. It&#8217;s really hard to collect astronomical data.</p><p>Astronomers are world-class in extracting all kinds of conclusions from little traces of data, almost like Sherlock. I hear that for a lot of quant hedge funds, their preferred hire is an astronomy PhD, actually. They are also very interested for other reasons in extracting signals from various random bits of data.</p><p>We do under-explore how to extract extra information from various signals. Just to pick <a href="https://arxiv.org/pdf/cond-mat/0212043">one random study</a>, I remember reading once that people were trying to measure how often scientists actually read the papers that they cite. How do you measure this? You could try to survey different scientists, but they had a clever trick.</p><p>Many citations have little typos, like a wrong number or a misplaced punctuation mark. They measured how often a typo got copied from one reference to the next, and they could infer whether an author was just copying and pasting a reference without actually checking it. From that, they were able to infer some measure of how much attention people were paying. 
So there are some clever tricks to extract&#8230;</p><p>These questions you posed earlier of how we can assess whether a scientific development is fruitful, interesting, or represents real progress&#8230; Maybe there are really useful metrics or footprints of this phenomenon in data. We can examine citations and how often something is mentioned in a conference. Maybe there&#8217;s a lot of sociology of science research to be done that could actually detect these things. Maybe we should get some astronomers on the case, actually.</p><h3>00:30:31 &#8211; Selection bias in reported AI discoveries</h3><p><strong>Dwarkesh Patel</strong></p><p>That brings us nicely to the progress that, from the outside, it seems like AI for math is making. You had a <a href="https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/">post recently where you pointed out</a> that over the last few months, AI programs have solved fifty out of the eleven-hundred-odd <a href="https://www.erdosproblems.com/">Erd&#337;s problems</a>. I don&#8217;t know if it&#8217;s still correct, but as of a month ago you said that there had been a pause because the low-hanging fruit had been picked.</p><p>First of all, I&#8217;m curious if that is still the case, that we have picked the low-hanging fruit and are now at a plateau.</p><p><strong>Terence Tao</strong></p><p>It does seem so. Fifty-odd problems have been solved with AI assistance, which is great, but there&#8217;s like six hundred to go. People are still chipping away at one or two of these right now.</p><p>We&#8217;re seeing a lot fewer pure AI solutions now where the AI just one-shots the problem. There was a month where that happened and that has stopped, not for lack of trying. I know of three separate attempts to get frontier model AIs to just attack every single one of the problems simultaneously. 
They pick out some minor observations, or maybe they find that some problem was already solved in the literature, but there hasn&#8217;t been any further purely AI-powered solution yet.</p><p>People are using AI a lot currently. Someone might use AI to generate a possible proof strategy, and then another person will use a separate AI tool to critique it, rewrite it, generate some numerical data for it, or do a literature survey. Some problems have been solved by an ongoing conversation between lots of humans and lots of AI tools. But it does seem like it was this one-off thing.</p><p>Maybe one analogy for these problems is that you&#8217;re in some sort of mountain range with all kinds of cliffs and walls. Maybe there&#8217;s a little wall which is three feet high, and one that&#8217;s six feet high, and then one that&#8217;s fifteen feet high, and then there are some mile-high cliffs. You&#8217;re trying to climb as many of these cliffs as possible, but it&#8217;s in the dark. We don&#8217;t know which ones are tall, which ones are short. So we try to light some candles and make some maps, and slowly we figure out some of them are climbable. For some of them, we can identify a partial track up the wall that you can reach first.</p><p>These AI tools, they&#8217;re like jumping machines that can jump two meters in the air, higher than any human. Sometimes they jump in the wrong direction, and sometimes they crash, but sometimes they can reach the tops of the lowest walls that we couldn&#8217;t reach before. We&#8217;ve just set them loose in this mountain range, hopping around. There was this exciting period where they could actually find all the low ones and reach them. Maybe the next time there&#8217;s a big advance in the models, they will try it again, and a few more will be breached.</p><p>But it&#8217;s a different style of doing mathematics. 
Normally we would <a href="https://en.wikipedia.org/wiki/Hill_climbing">hill climb</a>, make little markers, and try to identify partial things. These tools either succeed or they fail. They&#8217;ve been really bad at creating partial progress or identifying intermediate stages that you should focus on first. Going back to this previous discussion, we don&#8217;t have a way of evaluating partial progress the same way we can evaluate a one-shot success or failure of solving a problem.</p><p><strong>Dwarkesh Patel</strong></p><p>There are two different ways to think through what you&#8217;ve just said. One of them is more bearish on AI progress, and one of them is more bullish. The bearish one being, &#8220;Oh, they&#8217;re only getting to a certain height of wall, which is not as high as humans are reaching.&#8221;</p><p>The second is that they have this powerful property that once they achieve a certain waterline, they can solve every single problem that is available at that waterline, which we simply can&#8217;t do with humans. We can&#8217;t make a million copies of you and give each of them a million dollars of inference compute and have you do a hundred years of subjective time research on a million different problems at the same time.</p><p>But once AIs reach Terence Tao-level, they could do that. Once they reach intermediate levels, they could do the intermediate version of that. The same reason that we should be bearish now is the reason we should be especially bullish. Not even when they achieve superhuman intelligence, but just when they achieve human-level intelligence, because their human-level intelligence is qualitatively wider and more powerful than our human-level intelligence.</p><p><strong>Terence Tao</strong></p><p>I agree. They excel at breadth, and humans excel at depth, human experts at least. I think they&#8217;re very complementary. 
But our current way of doing math and science is focused on depth because that&#8217;s where human expertise is, because humans can&#8217;t do breadth. We have to redesign the way we do science to take full advantage of this breadth capability that we now have.</p><p>We should put a lot more effort into creating very broad classes of problems to work on rather than one or two really deep, important problems. We should still have the deep, important problems, and humans should still be working on them. But now we have this other way of doing science. We can explore entirely new fields of science by first getting these broad, moderately competent AIs to map them out and make all the easy observations. And then identify certain islands of difficulty, which human experts can then come and work on.</p><p>I see very much a future of very complementary science. Eventually, you would hope to get both breadth and depth and somehow get the best of both worlds. But we need practice with the breadth side. It&#8217;s too new. We don&#8217;t even have the paradigms to really take full advantage of it. But we will, and then science will be unrecognizable after that, I think.</p><p><strong>Dwarkesh Patel</strong></p><p>To this point about complementarity, programmers have noticed that they&#8217;re way more productive as a result of these AI tools. I don&#8217;t know if you as a mathematician feel the same way, but it does seem like one big difference between vibe coding and vibe researching is that with software, the whole point is to have some effect on the world through your work. 
If it leads to you better understanding a problem or coming up with some clean abstraction to embody in your code, that is instrumental to the end goal.</p><p>Whereas with research, the reason we care about solving the <a href="https://en.wikipedia.org/wiki/Millennium_Prize_Problems">Millennium Prize Problems</a> is that, presumably, in the process of solving them, we discover new mathematical objects or new techniques that advance our civilization&#8217;s understanding of mathematics. So the proof is instrumental to the intermediate work. I don&#8217;t know if you agree with that dichotomy or whether that in any way explains the relative uplift we&#8217;ll see in software versus research.</p><p><strong>Terence Tao</strong></p><p>Certainly in math, the process is often more important than the problem itself. The problem is kind of a proxy for measuring progress. I think even in software, there are different types of software tasks. If you just create a webpage that does the same thing that a thousand other webpages do, there&#8217;s no skill to be learned. Well, there is still some skill maybe that the individual programmer could pick up. But for boilerplate-type code, it&#8217;s something that you should definitely offload to AI.</p><p>Sometimes once you make the code, you still have to maintain it. There are issues with upgrading it and making it compatible with other things. I&#8217;ve heard programmers report that even if an AI can create the first prototype of a tool, making it mesh with everything else and making it interact with the real world in the way they want is an ongoing process. If you don&#8217;t have the skills that you pick up from writing the code, that may impact your ability to maintain it down the road.</p><p>So yes, certainly mathematicians, we&#8217;ve used problems to build intuition and to train people to have a good idea of what&#8217;s true, what to expect, what is provable, and what is difficult. 
Just getting the answers right away may actually inhibit that process.</p><p>I made a distinction between theory and experiment before. In most sciences, there&#8217;s an equal division between the theoretical side and the experimental side. Math has been unique in that it&#8217;s almost entirely theoretical. We place a premium on trying to have coherent, clean theories of why things are true or false. We haven&#8217;t done many experiments as to, if we have two different ways to solve a problem, which is more effective. We have some intuition, but we haven&#8217;t done large-scale studies where we take a thousand problems and just test them.</p><p>But we can do that now. I think AI-type tools will actually revolutionize the experimental side of math, where you don&#8217;t care so much about individual problems and the process of solving them, but you want to gather large-scale data about what things work and what things don&#8217;t. The same way that if you&#8217;re a software company and you want to roll out a thousand pieces of software, you don&#8217;t really want to handcraft each one and learn lessons from each. You just want to find what workflows let you scale.</p><p>The idea of doing mathematics at scale is in its infancy. But that&#8217;s where AI is really going to revolutionize the subject.</p><p><strong>Dwarkesh Patel</strong></p><p>I feel like a big crux in these conversations about how good AI will be for science is, I think you said this, that they&#8217;re using existing techniques and modifying them. It would be interesting to understand how much progress one can make simply from using existing techniques.</p><p>If I looked at the top math journals, how many of the papers are coming up with a new technique, whatever that means, versus using existing techniques on new problems? What is the overhang? 
If you just applied every known technique to every open problem, would that constitute a humongous uplift in our civilization&#8217;s knowledge, or would that not be that impressive and useful?</p><p><strong>Terence Tao</strong></p><p>This is a great question, and we don&#8217;t have the data to fully answer it yet. Certainly, a lot of work that human mathematicians do&#8230; When you take a new problem, one of the first things we do is we look at all the standard things that have worked on similar problems in the past, and we try them one by one. Sometimes that works, and that&#8217;s still worth publishing because the question was important.</p><p>Sometimes they almost work, and you have to add one more wrinkle to it, and that&#8217;s also interesting. But the papers that go into the top journals are usually ones where the existing methods can kind of solve 80% of the problem, but then there is this 20% which is resistant and a new technique has to be invented to fill in the gaps.</p><p>It&#8217;s very rare now that a problem gets solved with no reliance on past literature, where all the ideas come out of nowhere. That was more common in the past, but math is so mature now that it&#8217;s just so much of a handicap to not use the literature first.</p><p>AI tools are getting really good at the first part of that, just trying all the standard techniques on a problem, often making fewer mistakes in applying them than humans. They still make mistakes, but I&#8217;ve tested these tools on little tasks that I can do, and sometimes they pick up errors that I make. Sometimes I pick up errors that they make. It&#8217;s about a tie right now.</p><p>But I haven&#8217;t yet seen them take the next step. When there are holes in the argument where none of the things are working, then what do you do? 
They can suggest random things, but often I find that trying to chase them down to make them work, and finding they don&#8217;t work, wastes more time than it saves.</p><p>I think some fraction of problems that we currently think are hard will fall from this method, especially the ones that haven&#8217;t received enough attention. With the <a href="https://en.wikipedia.org/wiki/Paul_Erd%C5%91s">Erd&#337;s</a> problems, almost all of the 50 problems that were solved by AIs were ones for which there was basically no literature. Erd&#337;s posed the problem once or twice. Maybe some people tried it casually and couldn&#8217;t do it, but they never wrote up anything.</p><p>But it turned out that there was a solution, and it was just combining this one obscure technique that not many people know about with some other result in the literature. That&#8217;s the median level of what AI can accomplish, and that&#8217;s really great. It clears out 50 of these problems. So I think you will see some isolated successes.</p><p>But what we found&#8230; Some people have done large-scale sweeps of these Erd&#337;s problems. If you only focus on the success stories, the ones that get broadcast on social media, it looks amazing. All these problems that haven&#8217;t been solved for decades, now they&#8217;re falling. But whenever we do a systematic study, on any given problem an AI tool has a success rate of maybe 1% or 2%. It&#8217;s just that they can buy scale, and you just pick the winners. It looks great.</p><p>I think there&#8217;ll be a similar thing happening with the hundreds of really prestigious, difficult math problems out there. Some AI may get lucky and actually solve them, and there will be some backdoor to solve the problem that everyone else missed. That will get a lot of publicity. 
But then people will try these fancy tools on their own favorite problem, and they will again experience the 1% to 2% success rate.</p><p>There&#8217;ll be a lot of noise amongst the signal of when they&#8217;re working and when they&#8217;re not. It will be increasingly important to collect these really standardized datasets. There are efforts now to create a standard set of challenge problems for AIs to solve, and not just rely on the AI companies to only publish their wins and not disclose their negative results. That will maybe give more clarity as to where we&#8217;re actually at.</p><p><strong>Dwarkesh Patel</strong></p><p>Although I think it&#8217;s worth emphasizing how much progress in AI it constitutes already, to have models that are capable of applying some technique that nobody had written down as applicable to this particular problem.</p><p><strong>Terence Tao</strong></p><p>The progress is simultaneously amazing and disappointing. It is a very strange feeling to see these tools in action. But people also acclimatize really quickly.</p><p>I remember when Google&#8217;s web search came out 20 years ago. It just blew all the other searches out of the water. You&#8217;re getting relevant hits on the front page, exactly what you wanted. It was amazing, and then after a few years, you just took for granted that you could Google anything.</p><p>2026-level AI would be stunning in 2021. 
A lot of it&#8212;face recognition, natural speech, doing college-level math problems&#8212;we just take for granted now.</p><h3>00:46:43 &#8211; AI makes papers richer and broader, but not deeper</h3><p><strong>Dwarkesh Patel</strong></p><p>Speaking of 2026 AI, you <a href="https://unlocked.microsoft.com/ai-anthology/terence-tao/">made a prediction in 2023</a> that by 2026 it would be like a colleague in mathematics?</p><p><strong>Terence Tao</strong></p><p>A trustworthy co-author if used correctly.</p><p><strong>Dwarkesh Patel</strong></p><p>Which is looking pretty good in retrospect.</p><p><strong>Terence Tao</strong></p><p>Yeah, I&#8217;m pretty pleased.</p><p><strong>Dwarkesh Patel</strong></p><p>So let&#8217;s see if you can continue this streak. You personally are 2x more productive as a result of AI. What year would you say that?</p><p><strong>Terence Tao</strong></p><p>Productivity, I think, is not quite a one-dimensional quantity. I&#8217;m definitely noticing that the style in which I do mathematics is changing quite a bit, and the type of things I do. For example, my papers now have a lot more code, a lot more pictures, because it&#8217;s so easy to generate these things now. Some plot which would have taken me hours to do, now I can do in minutes. But in the past, I just wouldn&#8217;t have put the plot in my paper in the first place. I would just talk about it in words. So it&#8217;s hard to measure what 2x means.</p><p>On the one hand, I think the type of papers that I would write today, if I had to do them without AI assistance, would definitely take five times longer. But I would not write my papers that way.</p><p><strong>Dwarkesh Patel</strong></p><p>5x?</p><p><strong>Terence Tao</strong></p><p>Yeah, but these are auxiliary tasks. Things like doing a much deeper literature search or supplying a lot more numerics. They enrich the paper. 
The core of what I do, actually solving the most difficult part of a math problem, hasn&#8217;t changed too much. I still use pen and paper for that.</p><p>But there&#8217;s lots of silly things. I use an AI agent now to reformat. Sometimes if all my parentheses are not quite the right size, I used to manually change them by hand, and now I can get an AI agent to do all that quite nicely in the background.</p><p>They&#8217;ve really sped up lots of secondary tasks. They haven&#8217;t yet sped up the core thing that I do, but it&#8217;s allowed me to add more things to my papers. By the same token, if I were to write a paper I wrote in 2020 again&#8212;and not add all these extra features, but just have something of the same level of functionality&#8212;it actually hasn&#8217;t saved that much time, to be honest. It&#8217;s made the papers richer and broader, but not necessarily deeper.</p><p><strong>Dwarkesh Patel</strong></p><p>You made this <a href="https://mathstodon.xyz/@tao/115722360006034040">distinction between artificial cleverness and artificial intelligence</a>. I would like to better understand those concepts. What is an example of intelligence that is not just cleverness?</p><p><strong>Terence Tao</strong></p><p><a href="https://en.wikipedia.org/wiki/Intelligence">Intelligence</a> is famously hard to define. It&#8217;s one of these things that you know when you see it. But when I talk to someone and we&#8217;re trying to collaboratively solve a math problem together, there&#8217;s this conversation where neither of us knows how to solve the problem initially. One of us has some idea and it looks promising, so then we have some sort of prototype strategy. We test it, and it doesn&#8217;t work, but then we modify it. There&#8217;s adaptivity and continual improvement of the idea over time. 
Eventually, we&#8217;ve systematically mapped out what doesn&#8217;t work and what does work, and we can see a path forward, but it&#8217;s evolving with our discussion.</p><p>This isn&#8217;t quite what the AIs do. The AIs can mimic this a little bit. To go back to this analogy of these jumping robots, they can jump and fail, and jump and fail. But what they can&#8217;t do is jump a little bit, reach some handhold, stay there, pull other people up, and then try to jump from there. There isn&#8217;t this cumulative process which is built up interactively. It seems to be a lot more trial and error and just repetition: brute force. It scales, and it can work amazingly well in certain contexts. But this idea of building up cumulatively from partial progress is what&#8217;s still not quite there yet.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting. You&#8217;re saying if Gemini 3 or Claude 4.5, whatever, solves a problem, it is not the case that its own understanding of math has progressed.</p><p><strong>Terence Tao</strong></p><p>No.</p><p><strong>Dwarkesh Patel</strong></p><p>Or even if it works on a problem without solving it, it&#8217;s not that its own understanding of math has progressed.</p><p><strong>Terence Tao</strong></p><p>Yeah. You run a new session and it&#8217;s forgotten what it just did. It has no new skills to build on related problems. Maybe what you just did is 0.001% of the training data for the next generation. 
So maybe eventually some of it gets absorbed.</p><h3>00:53:00 &#8211; If AI solves a problem, can humans get understanding out of it?</h3><p><strong>Dwarkesh Patel</strong></p><p>One big question I have is how plausible is it that if we just keep training AIs&#8212;they get better and better at solving problems in <a href="https://en.wikipedia.org/wiki/Lean_(proof_assistant)">Lean</a>&#8212;that they will continue to solve more and more impressive problems, and then we will be surprised at how little insight we got from some Lean solution to proving the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a> or something.</p><p>Or do you think it is a necessary condition of solving the Riemann hypothesis, even by an AI that is doing it entirely in Lean, that the constructions and definitions created in the Lean program have to advance our understanding of mathematics? Or could it just be assembly code gobbledygook?</p><p><strong>Terence Tao</strong></p><p>We don&#8217;t know. Some problems have been basically solved by pure brute force. The <a href="https://en.wikipedia.org/wiki/Four_color_theorem">four color theorem</a> is a famous example. We have still not found a conceptually elegant proof of this theorem, and maybe we never will. Some problems may only be solvable by splitting into an enormous number of cases and doing brute force, uninsightful computer analysis on each case.</p><p>Part of the reason we prize problems like the Riemann hypothesis is that we&#8217;re pretty sure a new type of mathematics has to be created, or a new connection between two previously unconnected areas of mathematics has to be discovered to make this work. We don&#8217;t even know what the shape of the solution is, but it doesn&#8217;t feel like a problem that will be solved just by exhaustively checking cases.</p><p>Or it could be false actually. 
Okay, there is an unlikely scenario that the hypothesis is false, and you can just compute a zero off the line, and a massive computer calculation verifies it. That would be very disappointing. I do feel that fully autonomous, one-shot approaches are not the right approach for these problems. You&#8217;ll get a lot more mileage out of the interplay of humans collaborating with these tools.</p><p>I can see one of these problems being solved by smart humans assisted by extremely powerful AI tools. But the exact dynamic may be very different from what we envision right now. It could be a collaboration of a type that just doesn&#8217;t exist yet.</p><p>There may be a way to generate a million variants of the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a> and do AI-assisted data analysis to discover some pattern connecting them that we didn&#8217;t know about before. This lets you transform the problem into a different area of mathematics. There could be all kinds of scenarios.</p><p><strong>Dwarkesh Patel</strong></p><p>Suppose the AI figures it out, and latent in the Lean is some brand-new construction which, if we realized its significance, we would be able to apply in all these different situations. How would we even recognize it?</p><p>Again, a very naive question, but if you come up with the equivalent of <a href="https://en.wikipedia.org/wiki/Cartesian_coordinate_system">Descartes&#8217; idea that you can have a coordinate system unifying algebra and geometry</a>, in Lean code it would just look like R&#8594;R, and it wouldn&#8217;t look that significant. I&#8217;m sure there are other constructions which have this kind of property.</p><p><strong>Terence Tao</strong></p><p>The beauty of formalizing a proof in something like Lean is that you can take any piece of it and study it atomically. 
When I read a paper which solves some difficult problem, there&#8217;s often a big sequence of <a href="https://en.wikipedia.org/wiki/Lemma_(mathematics)">lemmas</a> and theorems. Ideally, the author will talk their way through what&#8217;s important and what&#8217;s not. But sometimes they don&#8217;t reveal what steps were the important ones and which ones were just boilerplate, standard steps.</p><p>You can study each lemma in isolation. Some of them I can see look fairly standard and resemble something I&#8217;m familiar with. I&#8217;m pretty sure there&#8217;s nothing interesting going on there. But this other lemma, that&#8217;s something I haven&#8217;t seen before, and I can see why having this result would really help prove the main result. You can assess whether a step is really key to your argument or not, and Lean really facilitates that. The individual steps are identified really precisely.</p><p>I think in the future, there will be entire professions of mathematicians who might take a giant Lean-generated proof and do some ablation on it, trying to remove parts of it and find more elegant ways. They might get other AIs to do some reinforcement learning to make the proof more elegant, and maybe other AIs will grade whether this proof looks better or not.</p><p>One thing that will change quite a bit in the near future is how we write papers. Until recently, writing papers was the most time-consuming and expensive part of the job. So you did it very rarely. You only wrote up your results once all the other parts of your argument were checked out, because rewriting and refactoring was just a total pain. That&#8217;s become a lot easier now with modern AI tools. You don&#8217;t have to have just one version of your paper. Once you have one, people can generate hundreds more.</p><p>One giant messy Lean proof may not be very meaningful or understandable on its own, but other people can refactor it and do all kinds of things with it. 
We&#8217;ve seen this with the <a href="https://www.erdosproblems.com/">Erd&#337;s problem website</a>. An AI will generate a proof, along with 3,000 lines of code that verify it. Then people get other AIs to summarize the proof, and others write their own proofs.</p><p>There&#8217;s actually post-processing. Once you have one proof, we have a lot of tools now to deconstruct and interpret it. It&#8217;s a very nascent area of mathematics, but I&#8217;m not as worried about it. Some people are concerned about what happens if the Riemann hypothesis is proven with a completely incomprehensible proof. I think once you have the artifact of a proof, we can do a lot of analysis on it.</p><h3>00:59:20 &#8211; We need a semi-formal language for the way that scientists actually talk to each other</h3><p><strong>Dwarkesh Patel</strong></p><p><a href="https://mathstodon.xyz/@tao/116117407353355690">You posted recently</a> that it would be helpful to have a formal or semi-formal language for mathematical strategies as opposed to just mathematical proofs, which is what Lean specializes in. I would love to learn more about what that would involve or look like.</p><p><strong>Terence Tao</strong></p><p>We don&#8217;t really know. We&#8217;ve been very lucky in mathematics that we have worked out the laws of <a href="https://en.wikipedia.org/wiki/Mathematical_logic">logic</a> and mathematics, but this is a fairly recent accomplishment. It was started by <a href="https://en.wikipedia.org/wiki/Euclid">Euclid</a> two millennia ago, but only in the early 20th century did we finally list out the axioms of mathematics, the standard axioms of what we call <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZFC</a>, the axioms of first-order logic, and what a proof is. This we&#8217;ve managed to automate and have a formal language for.</p><p>There could be some way to assess plausibility. 
You have a conjecture that something is true, you test a few examples, and it works out. How does this increase your confidence that the conjecture is true? We have a few sort of mathematical ways to model this, like <a href="https://en.wikipedia.org/wiki/Bayesian_probability">Bayesian probability</a>, for example. But you often have to set certain base assumptions, and there&#8217;s a lot of subjectivity still in these tasks.</p><p>This is more of a wish than a plan to develop these languages, but just seeing how successful having a formal framework in place, like Lean, has made deductive proofs so much easier to automate and train AI on&#8230; The bottleneck for using AI to create strategies and make conjectures is we have to rely on human experts and the test of time to validate whether something is plausible or not.</p><p>If there was some semi-formal framework where this could be done semi-automatically in a way that isn&#8217;t easily hackable... It&#8217;s really important with these formal proof assistants that there are no backdoors or exploits you can use to somehow get your certified proof without actually proving it, because reinforcement learning is just so good at finding these backdoors.</p><p>If there&#8217;s some framework that mimics how scientists talk to each other in a semi-formal way, using data and argument, but also constructing narratives... There&#8217;s some subjective aspect of science that we don&#8217;t know how to capture in a way that we can insert AI into it in any useful way. This is a future problem. There are research efforts to try to create automated conjectures, and maybe there are ways to benchmark these and simulate this, but it&#8217;s all very new science.</p><p><strong>Dwarkesh Patel</strong></p><p>Can you help me get some intuition? I have two sub-questions. 
One, it would be very helpful to have a specific example of what something like this would look like, the way scientists communicate that we can&#8217;t formalize yet.</p><p>Two, it seems almost definitionally paradoxical to say you&#8217;re building up some narrative or natural language explanation and then also having something which you could have formalized. I&#8217;m sure there&#8217;s some intuition behind where that overlap is, and I&#8217;d love to understand that better.</p><p><strong>Terence Tao</strong></p><p>An example of a conjecture: <a href="https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss">Gauss</a> was interested in the <a href="https://en.wikipedia.org/wiki/Prime_number">prime numbers</a> and created one of the first mathematical datasets. He just computed the first 100,000 prime numbers or so, hoping to find patterns. He did find a pattern, but maybe not the pattern he was expecting. He found a statistical pattern in the primes that if you count how many primes there are up to 100, 1,000, one million, and so forth, they get sparser and sparser, but the drop-off in the density was inversely proportional to the natural logarithm of the range of numbers.</p><p>So he conjectured what we now call the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem#History_of_the_proof_of_the_asymptotic_law_of_prime_numbers">prime number theorem</a>: the number of primes up to X is X divided by the natural log of X. He had no way to prove this. It was data-driven. This was a conjecture. It was revolutionary for its time because it was maybe the first really important conjecture of math that was statistical in nature. Normally you&#8217;re talking about a pattern, like maybe the spacing between the primes has a certain regularity. But this didn&#8217;t tell you exactly how many primes there were in any given range. 
It just gave you an approximation that got better and better as you went further and further out.</p><p>It started the field of what we call <a href="https://en.wikipedia.org/wiki/Analytic_number_theory">analytic number theory</a>. It was the first in many conjectures like this, many of which got proved, which started consolidating the idea that the prime numbers didn&#8217;t really have a pattern, that they behaved like random sets of numbers with a certain density. They had some patterns, like they&#8217;re almost all odd. They&#8217;re also not actually random, they&#8217;re what&#8217;s called <a href="https://en.wikipedia.org/wiki/Pseudorandomness">pseudo-random</a>. There&#8217;s no <a href="https://en.wikipedia.org/wiki/Random_number_generation">random number generation</a> involved in creating the prime numbers. But over time, it became more and more productive to think of the primes as if they were just generated by some god rolling dice all the time and creating this random set.</p><p>This allowed us to make all these other predictions. There&#8217;s a still-open conjecture in number theory called the <a href="https://en.wikipedia.org/wiki/Twin_prime">twin prime conjecture</a>, that there should be infinitely many pairs of primes that are twins just two apart, like 11 and 13. We can&#8217;t prove that, and there are good reasons why we can&#8217;t prove it. But because of this statistical random model of the primes, we are absolutely convinced it&#8217;s true. We know that if the primes were generated by flipping coins, we would just&#8212;by random chance like infinite monkeys at a typewriter&#8212;see twin primes appear over and over again.</p><p>We have over time developed this very accurate conceptual model of what the primes should behave like based on statistics and probability. It&#8217;s mostly heuristic and non-rigorous, but extremely accurate. 
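</p><p>As a concrete illustration of the kind of numerical evidence described above, Gauss&#8217;s estimate is easy to reproduce today. A minimal sketch (assuming Python with only the standard library; this is an editorial illustration, not something discussed in the episode): count the primes up to X with a sieve and compare against X divided by the natural log of X.</p>

```python
import math

def prime_count(limit: int) -> int:
    """Count the primes up to `limit` with a basic Sieve of Eratosthenes."""
    if limit < 2:
        return 0
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Mark every multiple of n, starting from n*n, as composite.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return sum(is_prime)

# Compare the true count pi(X) with Gauss's estimate X / ln X.
for x in (100, 1_000, 10_000, 100_000):
    actual = prime_count(x)  # pi(100) = 25, pi(1000) = 168, ...
    estimate = x / math.log(x)
    print(f"pi({x}) = {actual}, X/ln X = {estimate:.0f}, ratio = {actual / estimate:.3f}")
```

<p>The ratio drifts toward 1 only slowly as X grows, which is why a large table of primes was needed before the pattern became visible.</p><p>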
The few times when we actually can prove things about the primes, it has matched up with the predictions of what we call the <a href="https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_conjecture">random model of the primes</a>. We have this conjectural concept framework for understanding the primes that everyone believes in. It&#8217;s the same reason why we believe the Riemann hypothesis is true, and why we believe that <a href="https://www.geeksforgeeks.org/maths/why-prime-numbers-are-used-in-cryptography/">cryptography based on the primes</a> is mathematically secure. It&#8217;s all part of this belief.</p><p>In fact, one reason why we care about the Riemann hypothesis is that if the Riemann hypothesis failed, if we knew it was false, it would be a serious blow to this model. It would mean there&#8217;s a secret pattern to the primes that we were not aware of. I think we would very rapidly abandon any cryptography based on the primes, because if there was one pattern that we didn&#8217;t know about, there are probably more, and these patterns can lead to exploits in crypto. It would be a big shock. So we really want to make sure that doesn&#8217;t happen.</p><p>We&#8217;ve been convinced of things like the Riemann hypothesis over time. Some of it is experimental evidence, and some is that the few times we&#8217;ve been able to make theoretical results, they&#8217;ve always aligned. It is possible that the consensus is wrong and we&#8217;ve all just missed something very basic. There have been paradigm shifts in the past in scientific history. But we don&#8217;t really have a way of measuring this, partly because we don&#8217;t have enough data on how math or science develops. 
We have one timeline of history, and we have maybe 100 stories of turning points in history.</p><p>If we had access to a million alien civilizations, each with a different development of history and science in different orders, then maybe we&#8217;d actually have a decent shot at understanding how we measure what progress is and what is a good strategy. We could maybe start formalizing it and actually having a framework. Maybe what we need to do is start creating lots of mini-universes or simulations of AI solving very basic problems in arithmetic or whatever, but coming up with their own strategies for doing these things and having these little laboratories to test. There are people who investigate what&#8217;s the smallest neural network that can do 10-digit multiplication and things like that. I think we could learn a lot just from evolving small AIs on simple problems.</p><h3>01:09:48 &#8211; How Terry uses his time</h3><p><strong>Dwarkesh Patel</strong></p><p>You have to learn about new fields not only very rapidly, but deeply enough to contribute to the frontier. So in some sense, you&#8217;re also one of the world&#8217;s greatest autodidacts. What is your process of learning about a new subfield in math? What does that look like?</p><p><strong>Terence Tao</strong></p><p>We talked about depth and breadth before. It&#8217;s not a purely human-AI distinction. Humans also, I think it was <a href="https://en.wikipedia.org/wiki/Isaiah_Berlin">Berlin</a> who split them into <a href="https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox">hedgehogs and foxes</a>. The hedgehog knows one thing very well, and a fox knows a little bit about everything. I definitely think of myself as a fox. I work with hedgehogs a lot, and sometimes I can be a hedgehog if need be.</p><p>I&#8217;ve always had a little bit of an obsessive streak. 
If there&#8217;s something I read about which I feel like I have the capability to understand, but I don&#8217;t understand why it works and there&#8217;s some magic in it&#8230; Someone was able to use a type of mathematics I&#8217;m not familiar with and get a result I would like to prove. I can&#8217;t do it myself, but they could do it by their method, and I want to find out what their trick was. It bugs me that someone else can do something I think I can do, but I can&#8217;t. I&#8217;ve always had that obsessive, completionist streak. I&#8217;ve had to wean myself off computer games because if I start a game, I want to play it to completion, through all the levels. That&#8217;s one way I learn new fields.</p><p>I collaborate with a lot of people who have taught me other types of mathematics. I just make friends with another mathematician working on another area of mathematics. I find their problems interesting, but they have to teach me some of the basic tricks, what&#8217;s known, and what&#8217;s not known. I learn a lot from that.</p><p>I found that writing about what I&#8217;ve learned helps. I have a <a href="https://terrytao.wordpress.com/">blog</a> where I sometimes record things I&#8217;ve learned. In the past when I was younger, I would learn something, do this cool trick, and say, &#8220;Okay, I&#8217;m going to remember this.&#8221; Then six months later, I&#8217;d forgotten it. I remember remembering it, but I can&#8217;t reconstruct my arguments. The first few times, it was so frustrating to have understood something and then lost it. I resolved I should always write down anything cool that I&#8217;ve learned. That&#8217;s part of how this blog came about.</p><p><strong>Dwarkesh Patel</strong></p><p>How long does it take you to write a blog post?</p><p><strong>Terence Tao</strong></p><p>It&#8217;s something I often do when I don&#8217;t want to do other work. 
There&#8217;s some referee report or something that feels slightly unpleasant for me to do at the time. Writing a blog feels creative and fun. It&#8217;s something I do for myself.</p><p>Depending on the topic, it could be a quick half an hour or several hours. Because it&#8217;s something I do voluntarily, time flies when I write these things down, as opposed to doing something I have to do for administrative reasons that is just drudgery. Those are tasks, by the way, that AI is really helping with nowadays.</p><p><strong>Dwarkesh Patel</strong></p><p>If civilization could from first principles decide how to use Terry Tao&#8217;s time, as a limited resource, what is the biggest difference? What if the <a href="https://en.wikipedia.org/wiki/Original_position">veil of ignorance</a> got to decide how to use Terry Tao&#8217;s time versus what it does now? This podcast wouldn&#8217;t be happening.</p><p><strong>Terence Tao</strong></p><p>As much as I complain about certain tasks that I don&#8217;t want to do, but have to do&#8230; As you get more senior in academia, you get more and more responsibilities, more committees, and whatever. I have also found that a lot of events I reluctantly went to because I was obliged to for one reason or another&#8230;  Because it&#8217;s outside my comfort zone, it often results in interactions with people I wouldn&#8217;t normally talk to, like you for instance. I would learn interesting things and have interesting experiences. I would have opportunities to then network with other people that I never would have before.</p><p>So I do believe a lot in serendipity. I do optimize portions of my day where I schedule very carefully. But I am willing to leave some portions just to do something that is not my usual thing. Maybe it&#8217;ll be a waste of my time, but maybe I will learn something. More often than not, I get a positive experience that I wouldn&#8217;t have planned for.</p><p>So I believe a lot in serendipity. 
Maybe there&#8217;s a danger in modern societies, not just with AI, that we&#8217;ve become really good at optimizing everything. We&#8217;re not optimizing our own optimization. With COVID, for example, we switched a lot to remote meetings, so everything was scheduled. We kept busy in academia. We met almost the same number of people we met in person, but everything had to be planned in advance. What we lost out on was the casual knocking on a hallway door, just meeting someone while getting a coffee. Those serendipitous interactions may not seem optimal, but they are actually really important.</p><p>When I was a grad student, I would go to the library to look for a journal article. You had to physically check out the journal and read the article. You could browse through and sometimes the next article was also interesting. Sometimes it wasn&#8217;t, but you could accidentally find interesting things. That has basically been lost now. If you want to access an article, you just type it into a search engine or an AI, and you get exactly what you want instantly. But you don&#8217;t get the accidental things you might have found if you&#8217;d done it more inefficiently.</p><p>I spent a year once at the <a href="https://en.wikipedia.org/wiki/Institute_for_Advanced_Study">Institute for Advanced Study</a>, which is a great place with no distractions. You&#8217;re there just to do research. The first few weeks you&#8217;re there, it&#8217;s great. You&#8217;re getting all these papers written up that you&#8217;ve been wanting to do for a long time. You think about problems for blocks of hours at a time. But I find if I stay there for more than several months, I run out of inspiration. I get bored. I surf the internet a lot more.</p><p>You actually do need a certain level of distraction in your life. It adds enough randomness and high temperature. I don&#8217;t know the optimal way to schedule my life. 
It just seems to work.</p><h3>01:17:05 &#8211; Human-AI hybrids will dominate math for a lot longer</h3><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m very curious when you expect AIs that can actually do frontier math at least as well as the best human mathematicians.</p><p><strong>Terence Tao</strong></p><p>In some ways, they&#8217;re already doing superintelligent frontier math that humans can&#8217;t do, but it&#8217;s a different frontier from what we&#8217;re used to. You could argue that calculators were doing frontier math that humans could not accomplish, but it was number crunching.</p><p><strong>Dwarkesh Patel</strong></p><p>But replacing Terry Tao completely.</p><p><strong>Terence Tao</strong></p><p>I mean, what do you want me for?</p><p><strong>Dwarkesh Patel</strong></p><p>You&#8217;ll just go on all the podcasts after.</p><p><strong>Terence Tao</strong></p><p>It might not be the right question to ask. I think within a decade, a lot of things that math students currently do&#8212;what we spend the bulk of our time doing and a lot of stuff we put in our papers today&#8212;can be done by AI. But we will find that that actually wasn&#8217;t the most important part of what we do.</p><p>A hundred years ago, a lot of mathematicians were just solving <a href="https://en.wikipedia.org/wiki/Differential_equation">differential equations</a>. Physicists needed some exact solution to some system, and they hired a mathematician to laboriously go through the calculus and work out the solution to this fluid equation, whatever. A lot of what a 19th-century mathematician would do, you could make a call to <a href="https://en.wikipedia.org/wiki/Wolfram_Mathematica">Mathematica</a>, Wolfram Alpha, a computer algebra package, or now more recently to an AI, and it would just solve the problem in a few minutes. But we moved on. We worked on different types of problems after that.</p><p>Once computers came along&#8212;computers used to be human. 
People used to laboriously create log tables and work out primes as Gauss did, and that has all been outsourced to computers. But we moved on.</p><p>In genetics, to sequence the genome of a single organism, that was an entire PhD of a geneticist, carefully separating all the chromosomes and whatever. Now you can just spend $1,000 and send it to a sequencer and get it done. But genetics is not dead as a subject. You move to a different scale. Maybe you study whole ecosystems rather than individuals.</p><p><strong>Dwarkesh Patel</strong></p><p>I take your point but when is most mathematical progress, or almost all mathematical progress, happening by AI? If you find out this year a Millennium Prize Problem has been solved, you would put 95% odds that an AI did it autonomously. Surely there will be such a year.</p><p><strong>Terence Tao</strong></p><p>I guess I do believe that hybrid human plus AIs will dominate mathematics for a lot longer. It will depend. It will require some additional breakthroughs beyond what we already have, so it&#8217;s going to be stochastic. I think AIs currently are very good at certain things, but really terrible at others. While you can add more and more frameworks on top to reduce the error rates and make them work with each other a bit more, it feels like we don&#8217;t have all the ingredients to really have a truly satisfactory replacement for all intellectual tasks.</p><p>It is complementary currently. It&#8217;s not a replacement. Because current level AIs will accelerate science in so many ways, hopefully new discoveries and new breakthroughs will happen more quickly. It&#8217;s also possible that by destroying serendipity we actually inhibit certain types of progress. Anything is possible at this point. 
I think the world is very, very unpredictable at this point in time.</p><p><strong>Dwarkesh Patel</strong></p><p>What is your advice to somebody who would consider a career in math or is early in a career in math, especially in light of AI progress? How should they be thinking about their career differently, if at all, as a result of AI progress?</p><p><strong>Terence Tao</strong></p><p>We live in a time of change. As I said, we live in a particularly unpredictable era. Things that we&#8217;ve taken for granted for centuries may not hold anymore. The way we do everything, and not just mathematics, will change. In many ways, I would prefer the much more boring, quiet era where things are much the same as they were 10 years ago, 20 years ago. But I think one just has to embrace that there&#8217;s going to be a lot of change. The things that you study, some of them may become obsolete or revolutionized, but some things will be retained.</p><p>You always have to keep an eye on opportunities for things that you wouldn&#8217;t be able to do before. In math, you previously had to go through years and years of education and be a math PhD before you could contribute to the frontier of math research. But now it&#8217;s quite possible at the high school level, or whatever, that you could get involved in a math project and actually make a real contribution because of all these AI tools, Lean, and everything else.</p><p>There will be a lot of non-traditional opportunities to learn, so you need a very adaptable mindset. There will be room for pursuing things just for curiosity and for playing around. You still need to get your credentials. For a while it will still be important to go through traditional education and learn math and science the old-fashioned way. But you should also be open to very different ways of doing science, some of which don&#8217;t exist yet. 
It&#8217;s a scary time, but also very exciting.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s a great note to close on. Terence, thanks so much.</p><p><strong>Terence Tao</strong></p><p>Pleasure.</p>]]></content:encoded></item><item><title><![CDATA[Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute]]></title><description><![CDATA[Plus, why an H100 is worth more today than 3 years ago]]></description><link>https://www.dwarkesh.com/p/dylan-patel</link><guid isPermaLink="false">https://www.dwarkesh.com/p/dylan-patel</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Fri, 13 Mar 2026 16:00:42 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190839917/ba9582725eaf9c7756c2e37d28263b97.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://x.com/dylan522p?lang=en">Dylan Patel</a>, founder of <a href="https://semianalysis.com/">SemiAnalysis</a>, provides a deep dive into the 3 big bottlenecks to scaling AI compute: logic, memory, and power.</p><p>And walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers.</p><p>Learned a ton about every single level of the stack. 
Enjoy!</p><p>Watch on <a href="https://youtu.be/mDG_Hx3BSUE">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/dylan-patel-deep-dive-on-the-3-big-bottlenecks-to/id1516093381?i=1000755126873">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/5qiibwoBWY5rXyflK7WJzH?si=e1316364956d485d">Spotify</a>.</p><div id="youtube2-mDG_Hx3BSUE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;mDG_Hx3BSUE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/mDG_Hx3BSUE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2><strong>Sponsors</strong></h2><ul><li><p><a href="https://mercury.com/">Mercury</a> has already saved me a bunch of time this tax season. Last year, I used Mercury to request W-9s from all the contractors I worked with. Then, when it came time to issue 1099s this year, I literally just clicked a button and Mercury sent them out. Learn more at <a href="https://mercury.com">mercury.com</a>.</p></li><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> noticed that even when voice models <em><a href="https://labelbox.com/dwarkesh">appear</a></em> to take interruptions in stride, their performance degrades. To figure out why, they built a new evaluation pipeline called EchoChain. EchoChain diagnoses voice models&#8217; specific failure modes, letting you understand what your model needs to truly handle interruptions. Check it out at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a>.</p></li><li><p><a href="https://janestreet.com/dwarkesh">Jane Street</a> is basically a research lab with a trading desk attached &#8211; and their infrastructure backs this up.
They&#8217;ve got tens of thousands of GPUs, hundreds of thousands of CPU cores, and exabytes of storage. This is what it takes to find subtle signals hidden deep within noisy market data. If this sounds interesting, you can explore open positions at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a>.</p></li></ul><h2>Timestamps</h2><p><a href="https://www.dwarkesh.com/i/190839917/000000-why-an-h100-is-worth-more-today-than-3-years-ago">(00:00:00) &#8211; Why an H100 is worth more today than 3 years ago</a></p><p><a href="https://www.dwarkesh.com/i/190839917/002452-nvidia-secured-tsmc-allocation-early-google-is-getting-squeezed">(00:24:52) &#8211; Nvidia secured TSMC allocation early; Google is getting squeezed</a></p><p><a href="https://www.dwarkesh.com/i/190839917/003434-asml-will-be-the-1-constraint-for-ai-compute-scaling-by-2030">(00:34:34) &#8211; ASML will be the #1 constraint for AI compute scaling by 2030</a></p><p><a href="https://www.dwarkesh.com/i/190839917/005547-cant-we-just-use-tsmcs-older-fabs">(00:55:47) &#8211; Can&#8217;t we just use TSMC&#8217;s older fabs?</a></p><p><a href="https://www.dwarkesh.com/i/190839917/010537-when-will-china-outscale-the-west-in-semis">(01:05:37) &#8211; When will China outscale the West in semis?</a></p><p><a href="https://www.dwarkesh.com/i/190839917/011601-the-enormous-incoming-memory-crunch">(01:16:01) &#8211; The enormous incoming memory crunch</a></p><p><a href="https://www.dwarkesh.com/i/190839917/014234-scaling-power-in-the-us-will-not-be-a-problem">(01:42:34) &#8211; Scaling power in the US will not be a problem</a></p><p><a href="https://www.dwarkesh.com/i/190839917/015444-space-gpus-arent-happening-this-decade">(01:54:44) &#8211; Space GPUs aren&#8217;t happening this decade</a></p><p><a href="https://www.dwarkesh.com/i/190839917/021407-why-arent-more-hedge-funds-making-the-agi-trade">(02:14:07) &#8211; Why aren&#8217;t more hedge funds making the AGI trade?</a></p><p><a 
href="https://www.dwarkesh.com/i/190839917/021830-will-tsmc-kick-apple-out-from-n2">(02:18:30) &#8211; Will TSMC kick Apple out from N2?</a></p><p><a href="https://www.dwarkesh.com/i/190839917/022416-robots-and-taiwan-risk">(02:24:16) &#8211; Robots and Taiwan risk</a></p><h2>Transcript</h2><h3>00:00:00 &#8211; Why an H100 is worth more today than 3 years ago</h3><p><strong>Dwarkesh Patel</strong></p><p>All right, this is the episode where my roommate teaches me semiconductors.</p><p><strong>Dylan Patel</strong></p><p>It&#8217;s also the send off for this current set.</p><p><strong>Dwarkesh Patel</strong></p><p>It is. After you use it, I&#8217;m like, &#8220;I can&#8217;t use this again. I gotta get out of here.&#8221;</p><p><strong>Dylan Patel</strong></p><p>No sloppy seconds for Dwarkesh.</p><p><strong>Dwarkesh Patel</strong></p><p><a href="https://www.dwarkesh.com/p/dylan-jon">Dylan</a> is the CEO of <a href="https://semianalysis.com/">SemiAnalysis</a>. Dylan, here&#8217;s the burning question I have for you. If you add up the big four&#8212;Amazon, Meta, Google, Microsoft&#8212;their combined forecasted CapEx this year that you published recently is $600 billion. Given yearly prices of renting that compute, that would be close to 50 gigawatts. Obviously, we&#8217;re not putting on 50 gigawatts this year, so presumably that&#8217;s paying for compute that is going to be coming online over the coming years. How should we think about the timeline around when that CapEx comes online?</p><p>Similar question for the labs. OpenAI just announced they raised $110 billion, and Anthropic just announced they raised $30 billion. If you look at the compute they have coming online this year&#8212;you should tell me how much it is, but is it on the order of another four gigawatts total? The cost to rent the compute that OpenAI and Anthropic will have this year to sustain their compute spend is $10 to $13 billion a gigawatt. 
Those individual raises alone are enough to cover their compute spend for the year. And this is not even including the revenue that they&#8217;re going to earn this year.</p><p>So help me understand: first, what is the timescale at which the Big Tech CapEx actually comes online? And second, what are the labs raising all this money for if the yearly price of a one-gigawatt data center is $13 billion?</p><p><strong>Dylan Patel</strong></p><p>So when you talk about the CapEx of these hyperscalers being on the order of $600 billion, and you look across the rest of the supply chain, it gets you to the order of a trillion dollars. A portion of this is immediately for compute going online this year: the chips and the other parts of CapEx that get paid this year. But there&#8217;s a lot of setup CapEx as well.</p><p>When we&#8217;re talking about 20 gigawatts of incremental added capacity this year in America, a portion of this is not spent this year. A portion of that CapEx was actually spent the prior year. When you look at Google having $180 billion, a big chunk of that is spent on turbine deposits for &#8216;28 and &#8216;29. A chunk of that is spent on data center construction for &#8216;27. A chunk of that is spent on power purchasing agreements, down payments, and all these other things they&#8217;re doing further out into the future so they can set up this super fast scaling. This applies to all the hyperscalers and other people in the supply chain.</p><p>So with roughly 20 gigawatts deployed this year, a big chunk is hyperscalers, and a chunk is not. For all of these companies, their biggest customers are Anthropic and OpenAI. Anthropic and OpenAI are at roughly two to two-and-a-half gigawatts right now, and they&#8217;re trying to scale much larger.</p><p>If you look at what Anthropic has done over the last few months, with $4 billion or $6 billion in revenue added, we can just draw a straight line and say they&#8217;ll add another $6 billion of revenue a month. 
People would argue that&#8217;s bearish, and that they should go faster. What that implies is they&#8217;re going to add $60 billion of revenue across the next ten months. At the current gross margins Anthropic had, as last reported by media, that would imply they have roughly $40 billion of compute spend for that inference, for that $60 billion of revenue.</p><p>That $40 billion of compute, at roughly $10 billion a gigawatt in rental costs, means they need to add four gigawatts of inference capacity just to grow revenue. That&#8217;s assuming their research and development training fleet stays flat. In a sense, Anthropic needs to get to well above five gigawatts by the end of this year. It&#8217;s going to be really tough for them to get there, but it&#8217;s possible.</p><p><strong>Dwarkesh Patel</strong></p><p>Can I ask a question about that? If Anthropic was not on track to have five gigawatts by the end of this year, but it needs that to serve both the revenue that&#8217;s gone crazier than expected&#8212;and maybe it&#8217;s going to be even more than that&#8212;plus the research and training to make sure its models are good enough for next year: Where is that capacity going to come from?</p><p><strong>Dylan Patel</strong></p><p><a href="https://www.dwarkesh.com/p/dario-amodei-2">Dario, when he was on your podcast</a>, was very conservative. He said, &#8220;I&#8217;m not going to go crazy on compute because if my revenue inflects at a different rate, at a different point&#8230; I don&#8217;t want to go bankrupt. I want to make sure that we&#8217;re being responsible with this scaling.&#8221; But in reality, he&#8217;s screwed the pooch compared to OpenAI, whose approach was, &#8220;Let&#8217;s just sign these crazy fucking deals.&#8221;</p><p>OpenAI has got way more access to compute than Anthropic by the end of the year. What does Anthropic have to do to get the compute? They have to go to lower-quality providers that they would not have gone to before. 
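The revenue-to-capacity back-of-the-envelope above can be written out explicitly. This is a sketch using the rough figures from the conversation ($6 billion a month of added revenue, roughly $40 billion of compute per $60 billion of revenue, and about $10 billion per gigawatt-year of rented capacity), not precise data:

```python
# Sketch of the revenue-to-inference-capacity arithmetic from the
# conversation. All inputs are rough, illustrative figures.
added_revenue = 6e9 * 10         # $6B/month of new revenue over ten months
compute_share = 40 / 60          # implied ~$40B of compute per $60B of revenue
inference_spend = added_revenue * compute_share
cost_per_gw_year = 10e9          # ~$10B/year to rent one gigawatt of capacity
extra_gigawatts = inference_spend / cost_per_gw_year

print(round(added_revenue / 1e9))    # 60 ($B of added revenue)
print(round(inference_spend / 1e9))  # 40 ($B of compute to serve it)
print(round(extra_gigawatts))        # 4  (GW of new inference capacity,
                                     #     training fleet held flat)
```

Holding the training fleet flat, those four incremental gigawatts of inference are what push the total toward the five-gigawatt figure in the answer above.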
Anthropic historically had the best quality providers, like Google and Amazon, the biggest companies in the world. Now they&#8217;re expanding to Microsoft and across the supply chain, going to other, newer players.</p><p>OpenAI has been a bit more aggressive on going to many players. Yes, they have tons of capacity from Microsoft, Google, and Amazon, but they also have tons with <a href="https://en.wikipedia.org/wiki/CoreWeave">CoreWeave</a> and Oracle. They&#8217;ve gone to random companies, or companies one would think are random, like <a href="https://www.wsj.com/tech/ai/openai-softbank-to-invest-1-billion-in-sb-energy-fa7385b9">SoftBank Energy</a>, who has never built a data center in their life but is building data centers now for OpenAI. They&#8217;ve gone to many others, like <a href="https://www.nscale.com/">NScale</a>, to get capacity.</p><p>There&#8217;s this conundrum for Anthropic because they were so conservative on compute, because they didn&#8217;t want to go crazy. In some sense, a lot of the financial freakouts in the second half of last year were because, &#8220;OpenAI signed all these deals but they didn&#8217;t have the money to pay for them&#8230;&#8221; Okay, Oracle&#8217;s stock is going to tank, CoreWeave&#8217;s stock is going to tank. All these companies&#8217; stocks tanked, and credit markets went crazy because people thought the end buyer couldn&#8217;t pay for this. Now it&#8217;s like, &#8220;Oh wait, they raised a ton of money. Okay, fine, they can pay for it.&#8221;</p><p>Anthropic was a lot more conservative. They were like, &#8220;We&#8217;ll sign contracts, but we&#8217;ll be principled. We&#8217;ll purposely undershoot what we think we can possibly do and be conservative because we don&#8217;t want to potentially go bankrupt.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>The thing I want to understand is, what does it mean to have to acquire compute in a pinch?
Is it that you have to go with <a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-evolution-of-neoclouds-and-their-next-moves">neoclouds</a>? Do they have worse compute? In what way is it worse?</p><p>Did you have to pay gross margins to a cloud provider that you wouldn&#8217;t have otherwise had to pay because they&#8217;re coming in at the last minute? Who built the spare capacity such that it&#8217;s available for Anthropic and OpenAI to get last minute?</p><p>What is the concrete advantage that OpenAI has gotten if they end up at similar compute numbers by 2027? Are they just going to end this year with different gigawatts? If so, how many gigawatts are Anthropic and OpenAI going to have by the end of this year?</p><p><strong>Dylan Patel</strong></p><p>To acquire excess compute, yes, there is capacity at hyperscalers. Not all contracts for compute are long-term, five-year deals. There&#8217;s compute from 2023 or 2024, or H100s from 2025, that were signed at shorter terms. The vast majority of OpenAI&#8217;s compute is signed on five-year deals, but there were many other customers that had one-year, two-year, three-year, or six-month deals, on demand.</p><p>As these contracts roll off, who is the participant in the market most willing to pay the highest price? In this sense, we&#8217;ve seen H100 prices inflect a lot and go up. People are willing to sign long-term deals even above $2 an hour. I&#8217;ve seen deals where certain AI labs&#8212;I&#8217;m being a little bit vague here for a reason&#8212;have signed at as high as $2.40 for two to three years for H100s. If you think about the margin, it costs $1.40 an hour to build and run a Hopper, amortized across five years. Now, two years in, you&#8217;re signing deals for two to three years at $2.40?
Those margins are way higher.</p><p>Now you can crowd out all of these other suppliers, whether Amazon had these, or CoreWeave, or <a href="https://www.together.ai/">Together AI</a>, or <a href="https://nebius.com/">Nebius</a>, or whoever it is. These neoclouds are the firms that had a higher percentage of Hopper in general because they were more aggressive on it. They also tended to sign shorter-term deals, not CoreWeave but the others. So if I want Hopper, there is some capacity out there.</p><p>Also, while most of the capacity at an Oracle or a CoreWeave is signed for a long-term deal in terms of Blackwell, anything that&#8217;s going online this quarter is already sold. In some cases, they&#8217;re not even hitting all the numbers they promised they would sell because there are some data center delays, not just those two, but Nebius, Microsoft, Amazon, and Google. But there are a lot of neoclouds, as well as some of the hyperscalers, who have capacity they&#8217;re building that they haven&#8217;t sold yet, or capacity they were going to allocate to some internal use that is not necessarily super AGI-focused, that they may now turn around and sell.</p><p>Or in the case of Anthropic, they don&#8217;t have to have all the compute directly. 
Amazon can have the compute and serve <a href="https://aws.amazon.com/bedrock/">Bedrock</a>, or Google can have the compute and serve <a href="https://cloud.google.com/vertex-ai">Vertex</a>, or Microsoft can have the compute and serve <a href="https://azure.microsoft.com/en-us/products/ai-foundry">Foundry</a>, and then do a revenue share with Anthropic, or vice versa.</p><p><strong>Dwarkesh Patel</strong></p><p>Basically, you&#8217;re saying Anthropic is having to pay either this 50% markup in the sense of the revenue share, or in the sense of last-minute spot compute that they wouldn&#8217;t have otherwise had to pay had they bought the compute early.</p><p><strong>Dylan Patel</strong></p><p>Right, there&#8217;s a trade-off there. But at the same time, for a solid four months, everyone was saying to OpenAI, &#8220;We&#8217;re not going to sign deals with you.&#8221; That sounds crazy, but it was because, &#8220;you don&#8217;t have the money.&#8221; Now everyone&#8217;s saying, &#8220;OpenAI, we believed you the whole time. We can sign any deal because you&#8217;ve raised all this money.&#8221; Anthropic is constrained in that sense. There are not that many incremental buyers of compute yet, because Anthropic hit the capability tier first where their revenue is mooning.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s interesting. Otherwise you might think having the best model is an extremely depreciating asset, because three months later you don&#8217;t have the best model. But the reason it&#8217;s important is that you can sign these deals, lock in the compute in advance, and get better prices.</p><p>Maybe this is an obvious point. But at least until recently, people had made this huge point about the <a href="https://www.cnbc.com/2025/11/14/ai-gpu-depreciation-coreweave-nvidia-michael-burry.html">depreciation cycle of a GPU</a>. 
The bears, the <a href="https://x.com/michaeljburry/status/1987918650104283372?lang=en">Michael Burrys</a> or whoever, have said, &#8220;Look, people are saying four or five years for these GPUs, but because the technology is improving so fast, it in fact makes sense to have two-year depreciation cycles for these GPUs,&#8221; which increases the reported amortized CapEx in a given year and makes it financially less lucrative to build all these clouds.</p><p>But in fact you&#8217;re pointing out that maybe the depreciation cycle is even longer than five years. If we&#8217;re using Hoppers&#8212;especially if AI really takes off and in 2030 we&#8217;re saying, &#8220;We have to get the seven-nanometer fabs up, we have to go back and turn on the A100s again&#8221;&#8212;then the depreciation cycle is actually incredibly long. I feel like that&#8217;s an interesting financial implication of what you&#8217;re saying.</p><p><strong>Dylan Patel</strong></p><p>There&#8217;s a few strings to pull on there. One is, what happens to depreciation of GPUs? I guess I didn&#8217;t answer your prior question, which is that I think Anthropic will be able to get to five gigawatts-ish, maybe a little bit more by the end of the year through themselves as well as their product being served through Bedrock, Vertex, or Foundry. I think they&#8217;ll be able to get to five or six gigawatts, which is way above their initial plans. OpenAI will be roughly the same, actually a little bit higher based on our numbers.</p><p>But anyway, the depreciation cycle of a GPU. Michael Burry was saying it&#8217;s three years or less. That&#8217;s sort of his argument. There are two lenses to look at this. Mechanically, there&#8217;s a <a href="https://www.investopedia.com/terms/t/totalcostofownership.asp">TCO model</a>, total cost of ownership of a GPU, where we project pricing out for GPUs and build up the total cost of a cluster.
There are a number of costs: your data center cost, your networking cost, your smart hands and people in the data center swapping stuff out. There&#8217;s your spare parts, your actual chip cost, your server cost. All these various costs get lumped together. There&#8217;s some depreciation cycles on it, certain credit costs on it.</p><p>You build up to, &#8220;Hey, an H100 costs $1.40/hour to deploy at volume across five years if your depreciation is five years.&#8221; If you sign a deal at $2/hour for those five years, your gross margin is roughly 35%. It&#8217;s a little bit above that. If you sign it for $1.90, it&#8217;s 35% roughly. Then you assume at that fifth year, the GPU falls off a bus and is dead.</p><p>In some cases, the argument people are making is if you didn&#8217;t sign a long-term deal, because every two years NVIDIA is tripling or quadrupling the performance while only 2X-ing or 50% increasing the price&#8230; Then the price of an H100&#8230; Sure maybe the value in the market was $2 at 35% gross margins in 2024, but in 2026, when Blackwell is in super high volume and deploying millions a year, you&#8217;re actually now worth $1/hour. And when Rubin in &#8216;27 is in super high volume&#8212;even though it starts shipping this year, it&#8217;s super high volume next year&#8212;doing millions of chips a year deployed into clouds, you&#8217;ve got another 3X in performance, another 50% or 2X in price, then the Hopper is only worth $0.70/hour. So the price of a GPU would continue to fall. That&#8217;s one lens.</p><p>The other lens is, what is the utility you get out of the chip? If you could build infinite Rubin or infinite of the newest chip, then yes, that&#8217;s exactly what would happen. The price of a Hopper would fall at a spot or short-term contract rate as the new chips come out and the price per performance goes up. 
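The TCO build-up described above can be sketched as a toy model. The component costs below are invented placeholders (not SemiAnalysis figures), picked only so the total lands near the ~$1.40/hour number; the structure of the calculation, not the inputs, is the point:

```python
# Toy total-cost-of-ownership (TCO) model for a single GPU, amortized
# to an hourly cost. All component costs are illustrative assumptions.
HOURS_PER_YEAR = 8760

capex = {                       # one-time costs, $ per GPU
    "chip_and_server": 30_000,
    "networking": 5_000,
    "data_center_buildout": 10_000,
}
opex_per_year = {               # recurring costs, $ per GPU per year
    "power_and_cooling": 2_500,
    "smart_hands_and_spares": 700,
}

def hourly_cost(depreciation_years: float) -> float:
    """Amortized $/hour to own and operate the GPU."""
    yearly = sum(capex.values()) / depreciation_years + sum(opex_per_year.values())
    return yearly / HOURS_PER_YEAR

def gross_margin(rental_rate: float, depreciation_years: float) -> float:
    """Margin on renting the GPU out at a given $/hour rate."""
    return (rental_rate - hourly_cost(depreciation_years)) / rental_rate

print(round(hourly_cost(5), 2))         # 1.39 -> near the ~$1.40/hr figure
print(round(gross_margin(2.00, 5), 2))  # 0.3  -> healthy margin at $2/hr
print(round(gross_margin(2.00, 3), 2))  # -0.04 -> the bear case
```

In this toy model, shortening depreciation from five years to three, as the bears argue, is exactly what flips the same $2/hour contract from profitable to underwater.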
But because you are so limited on semiconductors and deployment timelines, what actually prices these chips is not the comparative thing I can buy today, but rather what is the value I can derive out of this chip today.</p><p>In that sense, let&#8217;s take <a href="https://openai.com/index/introducing-gpt-5-4/">GPT-5.4</a>. GPT-5.4 is both way cheaper to run than GPT-4 and has fewer active parameters. It&#8217;s much smaller, in that sense of active parameters, because it&#8217;s a sparser <a href="https://en.wikipedia.org/wiki/Mixture_of_experts">MoE</a> versus GPT-4 being a coarser MoE. There&#8217;s also been so many other advancements in training, <a href="https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf">RL</a>, model architecture, and data qualities that have made GPT-5.4 way better than GPT-4. And it&#8217;s cheaper to serve. When you look at an H100, it can serve more tokens of 5.4 per GPU than if you had run GPT-4 on it. So it&#8217;s producing more tokens of a model that is of higher quality.</p><p>What is the maximum TAM for GPT-4 tokens? Maybe it was a few billion dollars, maybe it was tens of billions of dollars. Adoption takes time. For GPT-5.4, that number is probably north of a hundred billion. But there&#8217;s an adoption lag, there&#8217;s competition, and there&#8217;s the constant improvements that everyone else is having. If improvements stopped here, the value of an H100 is now predicated on the value that GPT-5.4 can get out of it instead of the value that GPT-4 can get out of it. These labs are in a competitive environment, so their margins can&#8217;t go to infinity. You sort of have this dynamic that is quite interesting in that an H100 is worth more today than it was three years ago.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s crazy. It&#8217;s also interesting from the perspective of just taking that forward.
If we had actual AGI models developed, if we had a genuine human on a server&#8230; These are such hand wave-y numbers about how many flops the brain can do. But on a flop basis, an H100 is estimated to do 1e15, which is how much some people estimate the human brain does in flops. Obviously, in terms of memory, the human brain has way more. An H100 is 80 gigabytes, and the brain might have petabytes.</p><p><strong>Dylan Patel</strong></p><p>Oh, yeah, you&#8217;ve got petabytes? Name a petabyte of ones and zeros, bro. Name me a string.</p><p><strong>Dwarkesh Patel</strong></p><p>Well, this is actually the point.</p><p><strong>Dylan Patel</strong></p><p>No, we&#8217;ve just got the best <a href="https://medium.com/@vishal09vns/sparse-attention-dad17691478c">sparse attention</a> techniques ever.</p><p><strong>Dwarkesh Patel</strong></p><p>Genuinely though. In the amount of information that is compressed, it might be petabytes. The brain is an extremely sparse MoE. But anyways, imagine a human knowledge worker can produce six figures a year of value. If an H100 can produce something close to that, if we had actual humans on a server, the value of an H100 is such that it can repay itself in the course of a couple of months.</p><p>So when I interviewed Dario, the point I was trying to make is not that I think the <a href="https://en.wikipedia.org/wiki/Technological_singularity">singularity</a> is two years away and therefore Dario desperately needs to buy more compute, although the revenue is certainly there that he needs to buy more compute. 
The point I was trying to make is that given what Dario seems to be saying&#8212;given his statements that we&#8217;re two years away from a data center of geniuses, and certainly not more than five years away, and a data center of geniuses should be earning trillions upon trillions of dollars of revenue&#8212;it just does not make sense why he keeps making these statements about being more conservative on compute or, to your point, being less aggressive than OpenAI on compute.</p><p>I guess that point got lost because then people were roasting me, saying, &#8220;Oh, this podcaster is trying to convince this multi-hundred billion dollar company CEO to YOLO it, bro.&#8221; I was just trying to say that internally, his statements are inconsistent. Anyway, it&#8217;s good to iron it out.</p><p><strong>Dylan Patel</strong></p><p>I think going back to the earlier view that if the models are so powerful, the value of a GPU goes up over time, right now only OpenAI and Anthropic have that viewpoint. But as we approach further out, everyone is going to be able to see that value skyrocket per GPU. So in that sense, you should commit now to compute.</p><p>Interestingly, in Anthropic fashion, there&#8217;s a bit of a meme that they have commitment issues and are sort of polyamorous. Not Dario, but this is a bit of a meme.</p><p><strong>Dwarkesh Patel</strong></p><p>Explains everything. By the way, there&#8217;s this interesting economic effect called <a href="https://en.wikipedia.org/wiki/Alchian%E2%80%93Allen_effect">Alchian-Allen</a>, which is the idea that if you increase the fixed cost of different goods, one of which is higher quality and one which is lower quality, that will make people choose the higher quality good, on the margin.</p><p>To give a specific example, suppose the better-tasting apple costs two dollars and the shittier apple costs one dollar. Now suppose you put an import tariff on them. 
Now it&#8217;s $3 versus $2 for a great apple versus a medium apple.</p><p><strong>Dylan Patel</strong></p><p>Is that because they both increased by a dollar, or should it be a 50% increase?</p><p><strong>Dwarkesh Patel</strong></p><p>No, because they both increased by $1. The whole effect is that when a fixed cost is applied to both, the price difference between them, the ratio, changes. Previously, the more expensive one was 2X more expensive. Now it&#8217;s just 1.5X more expensive.</p><p>So I wonder if, applied to AI, that would mean that if GPUs are going to get more expensive, there will be a fixed cost increase in the price of compute. As a result, that will push people to be willing to pay higher margins for slightly better models. Because the calculus is, I&#8217;m going to be paying all this money for the compute anyway. I might as well just pay slightly more to make sure it&#8217;s the very best model rather than a model that&#8217;s slightly worse.</p><p><strong>Dylan Patel</strong></p><p>So the Hopper went from $2 to $3. If a Hopper can make a million tokens of Opus and it can make two million tokens of Sonnet, the price differential between Opus and Sonnet has decreased because the price of the GPU has increased by a dollar from $2 to $3.</p><p>Interesting. I think that makes a ton of sense. We just see all of the volumes are on the best models today, all the revenue is on the best models today. In a compute-limited world, two things happen. One, companies that don&#8217;t have commitment issues and have these five-year contracts for compute have locked in a humongous margin advantage.
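The apple arithmetic above can be made concrete. A minimal sketch of the Alchian-Allen ratio effect (the function is just for illustration):

```python
# Alchian-Allen effect: adding the same fixed cost to a high-quality and
# a low-quality good shrinks their *relative* price gap, nudging buyers
# toward the higher-quality good on the margin.
def price_ratio(high: float, low: float, fixed_cost: float = 0.0) -> float:
    return (high + fixed_cost) / (low + fixed_cost)

# The apples: $2 vs $1, then a flat $1 tariff on each.
print(price_ratio(2, 1))          # 2.0 -> the better apple costs 2x
print(price_ratio(2, 1, 1))       # 1.5 -> after the tariff, only 1.5x

# The effect hinges on the cost being additive: scaling both prices by
# the same multiple leaves the ratio untouched.
print(price_ratio(2 * 2, 1 * 2))  # 2.0 -> doubling both changes nothing
```

That ratio compression is the mechanism behind the willingness to pay up for the slightly better model once a fixed compute cost is sunk anyway.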
They&#8217;ve locked in compute for five years at the price it transacted at two, three, or five years ago.</p><p>Whereas if you&#8217;re three years into that five-year contract and someone else&#8217;s two-year or three-year contract rolled off, and now they&#8217;re trying to buy that at modern pricing, when it&#8217;s priced to the value of models, the price is going to be up a lot more. So the person who committed early has better margins in general. The percentage of the market that is in long-term contracts is much larger than the percentage of the market in short-term contracts that can be this flex capacity you add at the last second.</p><p>At the same time, where does the margin go? Because models get more valuable, how much can the cloud players flex their pricing? If you look at CoreWeave, their average term duration is over three years right now. For ninety-eight percent plus of their compute, it&#8217;s over three years. They end up with this conundrum where they can&#8217;t actually flex price. But every year they&#8217;re adding incrementally way more capacity than they had previously.</p><p>This year alone, Meta&#8217;s adding as much capacity as they had in their entire fleet of compute and data centers for all purposes for serving WhatsApp, Instagram, and Facebook in 2022, and doing AI. They&#8217;re adding that alone this year.</p><p>In the same sense, you talk about Meta doing that, CoreWeave, Google, and Amazon, all these companies are adding insane amounts of compute year on year. That new compute gets transacted at the new price. In a sense, yes, you&#8217;ve locked in, as long as we&#8217;re in a takeoff. &#8220;Oh, OpenAI went from six hundred megawatts to two gigawatts last year, and from two gigawatts to six plus this year, and six to twelve next year.&#8221; The incremental added compute is where all the cost is, not the prior long-term contracts.</p><p>Then who holds the cards is the infra providers for charging margin. 
Now the cloud players, the neoclouds, or the hyperscalers can charge the margin. They can to some extent, but then as you go upstream to who has access to all the memory and logic capacity, it&#8217;s Nvidia for the most part. They&#8217;ve signed a lot of long-term contracts. They&#8217;ve got ninety billion dollars of long-term contracts today, and they&#8217;re negotiating three-year deals today with the memory vendors.</p><p>You&#8217;ve got Amazon and Google through <a href="https://en.wikipedia.org/wiki/Broadcom">Broadcom</a>, Amazon directly, and <a href="https://en.wikipedia.org/wiki/AMD">AMD</a>. These companies hold all the cards because they&#8217;ve secured the capacity. TSMC is not raising prices, but memory vendors are raising prices a lot. They&#8217;re going to double or triple prices again, but then they&#8217;re also signing these long-term deals.</p><p>Who is able to accrue all the margin dollars is potentially the cloud, potentially the chip vendors, and the memory vendors, until TSMC or ASML break out and say, &#8220;No, we&#8217;re going to charge a lot more.&#8221; But at the same time, do the model vendors get to charge crazy margins? At least this year, we&#8217;re going to see margins for the model vendors go up a lot. Because they&#8217;re so capacity constrained, they have to destroy demand. There&#8217;s no way Anthropic can continue at the current pace without destroying demand.</p><h3>00:24:52 &#8211; Nvidia secured TSMC allocation early; Google is getting squeezed</h3><p><strong>Dwarkesh Patel</strong></p><p>Let&#8217;s get into logic and memory. How specifically has Nvidia been able to lock up so much of both? I think according to your numbers, by &#8216;27, Nvidia is going to have over 70% of <a href="https://en.wikipedia.org/wiki/3_nm_process">N3</a> wafer capacity, or around that area.
I forget what the numbers were for memory at <a href="https://en.wikipedia.org/wiki/SK_Hynix">SK Hynix</a> and <a href="https://en.wikipedia.org/wiki/Samsung_Electronics">Samsung</a> and so forth.</p><p>Think about how the neocloud business works and how Nvidia works with that, or how the RL environment business works and how Anthropic works with that. In both those cases, Nvidia is purposely trying to fracture the complementary industry to make sure that they have as much leverage as possible. They&#8217;re giving allocation to random neoclouds to make sure that there&#8217;s not one person that has all the compute.</p><p>Similarly, Anthropic or OpenAI, when they&#8217;re working with the data providers, they say, &#8220;No, we&#8217;re going to just seed a huge industry of these things so that we&#8217;re not locked into any one supplier for data environments.&#8221;</p><p>And I wonder why on the 3 nm process&#8212;that&#8217;s going to be <a href="https://newsletter.semianalysis.com/p/aws-trainium3-deep-dive-a-potential">Trainium 3</a>, that&#8217;s going to be <a href="https://docs.cloud.google.com/tpu/docs/tpu7x">TPU v7</a>, other accelerators potentially&#8212;why is TSMC just giving it all up to Nvidia rather than trying to fracture the market?</p><p><strong>Dylan Patel</strong></p><p>There are a couple points here. On 3 nm, if we go back to last year, the vast majority of 3 nm was Apple. Apple is being moved to 2 nm. Memory prices are going up, so Apple&#8217;s volumes may go down. As memory prices go up, either they cut margin or they move on. There&#8217;s some time lag because they have long-term contracts, but Apple likely reduces demand or moves to 2 nm faster, where 2 nm is only capable of mobile chips today. In the future, AI chips will move there. So Apple has that.</p><p>Apple is also talking to third-party vendors because they&#8217;re getting squeezed out of TSMC a little bit. 
TSMC&#8217;s margins on high-performance computing&#8212;<a href="https://www.tsmc.com/english/dedicatedFoundry/technology/platform_HPC">HPC</a>, AI chips, et cetera&#8212;are higher than they are for mobile, because they have a bigger advantage in HPC than they do in mobile.</p><p>When you look at TSMC&#8217;s running calculus here, they&#8217;re actually providing really good allocations to companies that are doing CPUs. When you think about Amazon having Trainium and <a href="https://aws.amazon.com/ec2/graviton/">Graviton</a>, both of those are on 3 nm, Graviton being their CPU, Trainium being their AI chip. TSMC is much more excited to give allocation to Graviton than they are to Trainium because they view the CPU business as more stable, long-term growth.</p><p>As a company that is conservative and doesn&#8217;t want to ride cycles of growth too hard, you actually want to allocate to the market that is more stable with a lower growth rate first before you allocate all the incremental capacity to the fast growth rate market. That is the case generally. Same for AMD. The allocations they get on their CPUs, TSMC is much more excited about those than they are for GPUs. Likewise for Amazon.</p><p>Nvidia is a bit unique because yes, they have CPUs, they make switches, they make networking, NVLink, InfiniBand, Ethernet, NICs. By and large, most of these things will be on 3 nm by the end of this year with the Rubin launch and all the chips in that family, the GPU being the most important one. Yet Nvidia is getting the majority of supply.</p><p>Part of this is because you look at the market and TSMC and others forecast market demand in many ways, but it&#8217;s also the market signal. The market signaled, &#8220;Hey, we need this much capacity next year. We need this much. We&#8217;ll sign non-cancelable, non-returnable. We may even pay deposits.&#8221; Nvidia just did it way earlier than Google or Amazon. In some cases, Google and Amazon had stumbling blocks. 
One of the chips got delayed slightly by a couple quarters, Trainium and all these sorts of things happened.</p><p>In that case, there was a huge sense of, &#8220;Well, these guys are delaying, but Nvidia is wanting more, more, more, more. And we are checking with the rest of the supply chain, is there enough capacity?&#8221; They&#8217;re going to all the <a href="https://en.wikipedia.org/wiki/Printed_circuit_board">PCB</a> vendors and saying, &#8220;Is there enough PCB?&#8221; <a href="https://en.shpcb.com/">Victory Giant</a> is one of the largest suppliers of PCBs to Nvidia, and they&#8217;re a Chinese company. All the PCBs come from China, or many of them. They&#8217;re like, &#8220;Do you have enough PCB capacity? Great. Hey memory vendors, who has all the memory capacity? Okay, Nvidia does. Great.&#8221;</p><p>When you look at who is AGI-pilled enough to buy compute on long timelines at levels that seem ridiculous to people who aren&#8217;t AGI-pilled&#8212;but nonetheless, they&#8217;re willing to pay a pretty good margin and sign it now because they expect that supply-demand ratio to be even more skewed in the future&#8212;the same thing happens with the supply chain for semiconductors. I don&#8217;t think Nvidia is quite AGI-pilled. Jensen doesn&#8217;t believe software is going to be fully automated and all these things.</p><p><strong>Dwarkesh Patel</strong></p><p>Accelerated computing, not AI chips, right?</p><p><strong>Dylan Patel</strong></p><p>It&#8217;s AI chips.</p><p><strong>Dwarkesh Patel</strong></p><p>But that&#8217;s what he calls it, right?</p><p><strong>Dylan Patel</strong></p><p>Yeah. I think it&#8217;s a broader term, AI is within that, but also physics modeling and simulations.</p><p><strong>Dwarkesh Patel</strong></p><p>But it&#8217;s like he&#8217;s not embracing the main use case.</p><p><strong>Dylan Patel</strong></p><p>I think he&#8217;s embracing it, but I just don&#8217;t think he&#8217;s AGI-pilled like Dario or Sam. 
But he&#8217;s still way, way more AGI-pilled than Google was in Q3 of last year, or Amazon was in Q3 of last year, and he saw way more demand.</p><p>The reason is pretty simple. You can see all the data center construction. He&#8217;s like, &#8220;Okay, I want to have this market share.&#8221; We have all the data centers tracked, and there&#8217;s a lot of data centers that could be one or the other. To some extent, Google and Amazon, Google especially, even though their TPU is just better for them to deploy, they have to deploy a crap load of GPUs because they don&#8217;t have enough TPUs to fill up their data centers. They can&#8217;t get them fabbed.</p><p><strong>Dwarkesh Patel</strong></p><p>I have a question about that. Google sold a million, was it the v7s?</p><p><strong>Dylan Patel</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>&#8212;the <a href="https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/ironwood-tpu-age-of-inference/">Ironwoods</a> to Anthropic, and you&#8217;re saying the big bottleneck right now, this year or next year, I guess going forward forever now, is going to be the logic and memory, the stuff it takes to build these chips. Google has DeepMind, the third prominent AI lab. If this is the big bottleneck, why would they sell it rather than just giving it to DeepMind?</p><p><strong>Dylan Patel</strong></p><p>This is again a problem of&#8230; DeepMind people were like, &#8220;This is insane. Why did we do this?&#8221; But Google Cloud people and Google executives saw a different thought process.</p><p>You and I know the compute team at Anthropic. Both of the main people came from Google. They saw this dislocation, they negotiated a deal, and they were able to get access to this compute before Google realized. The chain of events, at least from our data that we found, was in early Q3, over the course of six weeks, we saw capacity on TPUs go up by a significant amount. 
It went up multiple times in those six weeks.</p><p>There were multiple requests. Google even had to go to TSMC and explain to them why they needed this increase in capacity because it was so sudden. A lot of that capacity increase was for selling to Anthropic. Because Anthropic saw it before Google.</p><p>And then Google had <a href="https://gemini.google/overview/image-generation/">Nano Banana</a> and <a href="https://blog.google/products-and-platforms/products/gemini/gemini-3/">Gemini 3</a>, which caused their user metrics to skyrocket. Then leadership at Google was like, &#8220;Oh.&#8221; Then they started making <a href="https://www.cnbc.com/2025/11/21/google-must-double-ai-serving-capacity-every-6-months-to-meet-demand.html">the statement that we have to double compute every six months</a>, or whatever the exact number was.</p><p>They really woke up a lot more, and then they went to TSMC and said, &#8220;We want more. We want more.&#8221; TSMC replied, &#8220;Sorry guys, we&#8217;re sold out. We can maybe get 5-10% more for 2026, but really we&#8217;re going to work on 2027.&#8221;</p><p>There was this information asymmetry among the labs, in my mind. I don&#8217;t know exactly. It&#8217;s the narrative I&#8217;ve spun myself from seeing all the data in the supply chain on wafer orders and what&#8217;s going on with the data centers that Anthropic and <a href="https://www.fluidstack.io/">Fluidstack</a> signed.</p><p>It&#8217;s pretty clear to me that Google screwed up. You can see this from Google&#8217;s Gemini ARR. They had next to nothing in Q1 to Q3&#8212;in Q3 a little bit once they started inflecting. But in Q4 they reached $5 billion in revenue on an ARR basis. It&#8217;s clear Google didn&#8217;t see revenue skyrocket initially. In a sense, even Anthropic had a little bit of commitment issues before their ARR exploded, despite having far more information and seeing what was coming down the pipe. 
Google is going to be more conservative than Anthropic, and Google had even less ARR. So they were just not willing to do it, and then they realized they should have.</p><p>Since then, Google has gotten absurdly AGI-pilled in terms of what they&#8217;re doing. They bought an energy company. They&#8217;re putting deposits down for turbines. They&#8217;re buying a ridiculous percentage of powered land. They&#8217;re going to utilities and negotiating long-term agreements. They&#8217;re doing this on the data center and power side very aggressively. I think Google woke up towards the end of last year, but it took them some time.</p><p><strong>Dwarkesh Patel</strong></p><p>How many gigawatts do you think Google will have by the end of next year?</p><p><strong>Dylan Patel</strong></p><p>Buy my data.</p><p><strong>Dwarkesh Patel</strong></p><p>You charge for that kind of information.</p><p><strong>Dylan Patel</strong></p><p>Yes, yes.</p><h3>00:34:34 &#8211; ASML will be the #1 constraint for AI compute scaling by 2030</h3><p><strong>Dwarkesh Patel</strong></p><p>I feel like every year the bottleneck for what is preventing us from scaling AI compute keeps changing. A couple years ago it was <a href="https://3dfabric.tsmc.com/english/dedicatedFoundry/technology/cowos.htm">CoWoS</a>. Last year it was power. You&#8217;ll tell me what the bottleneck is this year.</p><p>But I want to understand five years out, what will be the thing that is constraining us from deploying the singularity?</p><p><strong>Dylan Patel</strong></p><p>The biggest bottleneck is compute. For that, the longest lead time supply chains are not power or data centers. They&#8217;re actually the semiconductor supply chains themselves. It switches back from power and data centers as a major bottleneck to chips.</p><p>In the chip supply chain, there are a number of different bottlenecks. 
There&#8217;s <a href="https://en.wikipedia.org/wiki/Semiconductor_memory">memory</a>, <a href="https://www.asml.com/en/technology/all-about-microchips/microchip-basics">logic wafers</a> from <a href="https://en.wikipedia.org/wiki/TSMC">TSMC</a>, and the <a href="https://en.wikipedia.org/wiki/Semiconductor_fabrication_plant">fabs</a> themselves. Construction of the fabs takes two to three years, versus a data center which takes less than a year. We&#8217;ve seen Amazon build data centers in as fast as eight months. There&#8217;s a big difference in lead times because of the complexity of building the fab that actually makes the chips. The tools also have really long lead times.</p><p>The bottlenecks, as we&#8217;ve scaled, have shifted based on what the supply chain is currently not able to do. It was CoWoS, power, and data centers, but those were all shorter lead time items. CoWoS is a much simpler process of <a href="https://anysilicon.com/the-ultimate-guide-to-semiconductor-packaging/">packaging chips</a> together. Power and data centers are ultimately way simpler than the actual manufacturing of the chips. There&#8217;s been some sliding of capacity across mobile or PC to data center chips, which has been somewhat fungible.</p><p>Whereas CoWoS, power, and data centers have had to start anew as supply chains. But now there&#8217;s no more capacity for the mobile and PC industries&#8212;which used to be the majority of the semiconductor industry&#8212;to shift over to AI. <a href="https://en.wikipedia.org/wiki/Nvidia">Nvidia</a> is now the largest customer at TSMC and <a href="https://en.wikipedia.org/wiki/SK_Hynix">SK Hynix</a>, the largest memory manufacturer. It&#8217;s sort of impossible for the sliding of resources away from the common person&#8217;s PCs and smartphones to shift any more towards the AI chips. So now the question is how do we scale AI chip production? 
That&#8217;s the biggest bottleneck as we go to 2030.</p><p><strong>Dwarkesh Patel</strong></p><p>It would be very interesting if there&#8217;s an absolute gigawatt ceiling that you can project out to 2030 based just on &#8220;We can&#8217;t produce more than this many <a href="https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithography">EUV</a> machines.&#8221;</p><p><strong>Dylan Patel</strong></p><p>To scale compute further, there are different bottlenecks this year and next year, but ultimately by 2028 or 2029, the bottleneck falls to the lowest rung on the supply chain, which is <a href="https://en.wikipedia.org/wiki/ASML_Holding">ASML</a>. ASML makes the world&#8217;s most complicated machine: an <a href="https://www.asml.com/en/products/euv-lithography-systems">EUV tool</a>. The selling price for those is $300-400 million. Currently, they can make about 70. Next year, they&#8217;ll get to 80. Even under very aggressive supply chain expansion, they only get to a little bit over 100 by the end of the decade.</p><p>What does that mean? They can make a hundred of these tools by the end of the decade, and 70 right now. How does that actually translate to AI compute? We see all these numbers from <a href="https://en.wikipedia.org/wiki/Sam_Altman">Sam Altman</a> and many others across the supply chain: gigawatts, gigawatts, gigawatts. How many gigawatts are we adding? We see <a href="https://www.dwarkesh.com/p/elon-musk">Elon saying a hundred gigawatts in space</a>.</p><p><strong>Dwarkesh Patel</strong></p><p>A year.</p><p><strong>Dylan Patel</strong></p><p>A year. The problem with any of these numbers, or the challenge to these numbers, is actually not the power or the data center. We can dive into that, but it&#8217;s manufacturing the chips.</p><p>Take a gigawatt of Nvidia&#8217;s <a href="https://www.nvidia.com/en-us/data-center/technologies/rubin/">Rubin</a> chips. 
Rubin is announced at <a href="https://en.wikipedia.org/wiki/Nvidia_GTC">GTC</a>, I believe the week this podcast goes live. To make a gigawatt worth of data center capacity of Nvidia&#8217;s latest chip that they&#8217;re releasing towards the end of this year, you need a few different wafer technologies. You need about 55,000 wafers of <a href="https://en.wikipedia.org/wiki/3_nm_process">3 nm</a>. You need about 6,000 wafers of <a href="https://en.wikipedia.org/wiki/5_nm_process">5 nm</a>, and then you need about 170,000 wafers of <a href="https://en.wikipedia.org/wiki/Dynamic_random-access_memory">DRAM</a> memory.</p><p>Across these three different buckets, each requires different amounts of EUV. When you manufacture a wafer, there are thousands and thousands of process steps where you&#8217;re depositing material and removing them. But the key critical step&#8212;which at least in <a href="https://www.appliedmaterials.com/us/en/semiconductor/markets-and-inflections/advanced-logic.html">advanced logic</a> is 30% of the cost of the chip&#8212;is something that doesn&#8217;t actually put anything on the wafer. You take the wafer, you deposit <a href="https://en.wikipedia.org/wiki/Photoresist">photoresist</a>, which is a chemical that chemically changes when you expose it to light. Then you stick it into the EUV tool, which shines light at it in a certain way. It patterns it. There&#8217;s what&#8217;s called a <a href="https://agcem.com/products/euv-mask-blanks/">mask</a>, which is effectively a stencil for the design.</p><p>When you look at a leading-edge 3 nm wafer, it has 70 or so masks, 70 or so layers of lithography, but 20 of them are the most advanced EUV. If you need 55,000 wafers for a gigawatt, and you do 20 EUV passes per wafer, you can do the math. That&#8217;s 1.1 million passes of EUV for a single gigawatt. It&#8217;s pretty simple. Once you add the rest of the stuff, it ends up being 2 million, across 5 nm and all the memory. 
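</p><p>As a sanity check, the arithmetic Dylan walks through in this stretch can be reproduced in a few lines. The wafer counts, layer counts, tool throughput, and fleet sizes are the figures quoted here and just below; the assumption that a tool runs year-round at that throughput is my simplification:</p>

```python
# Back-of-envelope check of the EUV arithmetic quoted in the conversation.
# Wafer counts, layer counts, throughput, and fleet sizes are Dylan's figures;
# running a tool flat-out for a full year is a simplifying assumption.

wafers_3nm = 55_000          # 3 nm logic wafers per gigawatt of Rubin capacity
euv_layers_3nm = 20          # most-advanced EUV layers per 3 nm wafer
logic_passes = wafers_3nm * euv_layers_3nm
print(logic_passes)          # 1100000, the "1.1 million passes" figure

# Adding the 5 nm and DRAM layers brings the total to roughly 2 million passes.
total_passes = 2_000_000

# One tool: ~75 wafer exposures per hour, up ~90% of the time, for a year.
passes_per_tool_year = 75 * 0.90 * 24 * 365
print(round(total_passes / passes_per_tool_year, 1))  # 3.4, "about three and a half" tools

# Dylan's 2030 fleet estimate: ~700 tools at 3.5 tools per gigawatt.
print(round(700 / 3.5))       # 200 gigawatts a year of AI chips
print(round(52 / 200 * 100))  # 26, Sam's gigawatt-a-week is roughly the 25% share he cites
```

<p>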
You&#8217;re at roughly 2 million EUV passes for a single gigawatt.</p><p>These tools are very complicated. When you think about what it&#8217;s doing across a wafer, it&#8217;s taking the wafer and scanning and stepping across. It does this dozens of times across the whole wafer. When you&#8217;re talking about how many EUV passes, that&#8217;s the entire wafer being exposed at a certain rate.</p><p>An EUV tool can do roughly 75 wafers per hour, and the tool is up roughly 90% of the time. In the end, you need about three and a half EUV tools running for a year to do the 2 million EUV wafer passes for the gigawatt. So three and a half EUV tools satisfies a gigawatt.</p><p>It&#8217;s funny to think about the numbers. What does a gigawatt cost? It costs roughly $50 billion. Whereas what do three and a half EUV tools cost? That&#8217;s $1.2 billion. It&#8217;s actually a much lower number, which is interesting to think about. Fifty billion dollars of economic <a href="https://www.investopedia.com/terms/c/capitalexpenditure.asp">CapEx</a> in the data center, and what gets built on top of that in terms of tokens is even larger. It might be $100 billion worth of AI value into the supply chain, held up by this $1.2 billion worth of tooling that simply cannot expand its supply chain quickly.</p><p><strong>Dwarkesh Patel</strong></p><p>You wrote an article recently saying over the last three years, TSMC has done $100 billion of CapEx. So it&#8217;s $30/$30/$40 billion. A small fraction of that is being used by Nvidia for the 3 nm, or previously 4 nm, that it&#8217;s using for its chips. What were Nvidia&#8217;s earnings last quarter? It was $40 billion. So $40 billion times four is $160 billion. 
Nvidia alone is turning some small fraction of $100 billion in CapEx, which is going to be depreciated over many years and not just this one year, into $160 billion in a single year.</p><p>That gets even more intense when you go down the supply chain to ASML, which is taking a billion dollars&#8217; worth of machines to produce a gigawatt. Of course, those machines last for more than a year, so it&#8217;s doing more than that.</p><p>Now I want to understand, how many such machines will there be by 2030, if you include not just the ones that are sold that year, but the ones that have been accumulating over the previous years? What does that imply? Sam Altman says he wants to do a gigawatt a week in 2030. When you add up those numbers, is it compatible with that?</p><p><strong>Dylan Patel</strong></p><p>That&#8217;s completely compatible, if you think about it. TSMC and the entire ecosystem have something like 250 to 300 EUV tools already. Then you stack on 70 this year, 80 next year, growing to 100 by 2030. You&#8217;re at 700 EUV tools by the end of the decade. 700 EUV tools, at three and a half tools per gigawatt&#8212;assuming it&#8217;s all allocated to AI, which it&#8217;s not&#8212;gets you to 200 gigawatts worth of AI chips for the data centers to deploy.</p><p>Sam wants 52 gigawatts a year. He&#8217;s only taking 25% share then. Obviously, there&#8217;s some share given to mobile and PC, assuming we&#8217;re even allowed to have consumer goods still and we don&#8217;t get priced out of them. But roughly, he&#8217;s saying 25% market share of the total chips fabbed. That&#8217;s very reasonable given that this year alone, I think he&#8217;s going to have access to 25% of the <a href="https://en.wikipedia.org/wiki/Blackwell_(microarchitecture)">Blackwell GPUs</a> that are deployed. It&#8217;s not that crazy.</p><p><strong>Dwarkesh Patel</strong></p><p>When did ASML start shipping EUV tools, when <a href="https://en.wikipedia.org/wiki/7_nm_process">7 nm</a> started? 
I don&#8217;t know when that was exactly. You&#8217;re saying in 2030, they&#8217;re going to be using machines that initially were shipped in 2020. So for ten years, you&#8217;re using the same most important machine in this most technologically advanced industry in the world? I find that surprising.</p><p><strong>Dylan Patel</strong></p><p>ASML&#8217;s been shipping EUV tools now for roughly a decade, but it only entered mass volume production around 2020. The tool&#8217;s not the same. Back then, the tools were even lower throughput. There are various specifications around them called <a href="https://en.wikipedia.org/wiki/Overlay_control">overlay</a>. I was mentioning you&#8217;re stacking layers on top of each other. You&#8217;ll do some EUV, you&#8217;ll do a bunch of different process steps&#8212;depositing stuff, etching stuff, cleaning the wafer&#8212;dozens of those steps before you do another EUV layer.</p><p>There&#8217;s a spec called overlay, which is: you did all this work, you drew these lines on the wafer, now I want to draw these dots. Let&#8217;s say I want to draw these dots to connect these lines of metal to holes, and then the next layer up is another set of lines going perpendicular, so now you&#8217;re connecting wires going perpendicular to each other. You have to be able to land them on top of each other. It&#8217;s called overlay.</p><p>Overlay is a spec that&#8217;s been improved rapidly by ASML. Wafer throughput has been improved rapidly by ASML. The price of the tool has gone up, but not as much as the capabilities of the tool. Initially, the EUV tools were $150 million. Over time, they&#8217;re now $400 million as I look out to 2028. But the capabilities of the tools have more than doubled as well, especially on throughput and overlay accuracy, which is the ability to accurately align the subsequent passes on top of each other even though you do tons of steps between.</p><p>ASML is improving super rapidly. 
It&#8217;s also noteworthy to say that ASML is maybe one of the most generous companies in the world. They have this linchpin thing. No one has anything competitive. <a href="https://www.reuters.com/world/china/how-china-built-its-manhattan-project-rival-west-ai-chips-2025-12-17/">Maybe China will have some EUV by the end of the decade</a>, but no one else has anything even close to EUV, and yet they haven&#8217;t taken price and margins up like crazy. You go ask some other folks that we talk to all the time, like <a href="https://www.dwarkesh.com/p/leopold-aschenbrenner">Leopold</a>, and they&#8217;re like, &#8220;Let&#8217;s have the price go up.&#8221; Because they can. The margin is there. You can take the margin. Nvidia takes the margin. Memory players are taking the margin. But ASML has never raised the price more than they&#8217;ve increased the capability of the tool.</p><p>In a sense, they&#8217;ve always provided net benefit to their customers. It&#8217;s not that the tool is stagnant, it&#8217;s just that these tools are old. Yes, you can upgrade them some, and the new tools are coming. For simplicity&#8217;s sake, we&#8217;re ignoring the advances in overlay or throughput per tool for this podcast.</p><p><strong>Dwarkesh Patel</strong></p><p>You say we&#8217;re producing 60 of these machines this year and then 70, 80 over subsequent years. What would happen if ASML just decided to double its CapEx or triple its CapEx? What is preventing them from producing more than 100 in 2030? Why are you so confident that even five years out, you can be relatively sure what their production will be?</p><p><strong>Dylan Patel</strong></p><p>I think there are a couple factors here. ASML has not decided to just go YOLO, let&#8217;s expand capacity as fast as possible. In general, the semiconductor supply chain has not. It&#8217;s lived through the booms and busts, and we can talk a bit more about it. 
Basically some players have recently woken up, but in general no one really sees demand for 200 gigawatts a year of AI chips, or trillions of dollars of spend a year in the semiconductor supply chain. They&#8217;re not AI-pilled. They&#8217;re not AGI-pilled.</p><p><strong>Dwarkesh Patel</strong></p><p>We&#8217;re going to get to a trillion dollars this year.</p><p><strong>Dylan Patel</strong></p><p>Yeah, I feel you, but I&#8217;m saying no one really understands this in the supply chain. Constantly, we&#8217;re told our numbers are way too high, and then when they&#8217;re right, they&#8217;re like, &#8220;Oh, yeah, but your next year&#8217;s numbers are still too high.&#8221;</p><p>ASML&#8217;s tool has four major components. It has the source, which is made by <a href="https://www.asml.com/en/company/about-asml/cymer">Cymer</a> in San Diego. It has the <a href="https://www.linkedin.com/posts/asml_asmls-reticle-stage-activity-7315418259040694274-DNW7/">reticle stage</a>, which is made in Wilton, Connecticut. It has the wafer stage. It has the optics, the lenses and such. Those last two are made in Europe.</p><p>When you look at each of these four, they&#8217;re tremendously complex supply chains that (A) have not tried to expand massively, and (B) when they do try to expand, the time lag is quite long. Again, this is the most complicated machine that humans make, period, at any sort of volume.</p><p>Let&#8217;s talk about the source specifically. What does the source do? It drops these tin droplets and hits each one three times in a row with a laser, perfectly timed. The first pulse hits the tin droplet and it expands out. The second hits it again, so it expands out into this perfect shape, and then the third blasts it at super high power. 
The tin droplets get excited enough that they release EUV light at 13.5 nanometers, and then there&#8217;s this collector that gathers all the light and directs it into the lens stack.</p><p>Then you have the lens stack, which is Carl Zeiss, as you mentioned, and some other folks, but Zeiss is the most important part of it. They also have not tried to expand production capacity because they don&#8217;t see the demand. They&#8217;re like, &#8220;We&#8217;re growing a lot because of AI. We&#8217;re growing from 60 to 100.&#8221; It&#8217;s like, &#8220;No, no, no. We need to go to a couple hundred, but it&#8217;s fine. Whatever.&#8221;</p><p>Each of these tools has, I think, 18 of these lenses, effectively. They are multilayer mirrors, which are perfect layers of molybdenum and silicon stacked on top of each other in many layers, and then the light bounces off of it perfectly. When we think about a lens, it&#8217;s in a shape, and it focuses the light. This is like a mirror that&#8217;s also a lens, so it&#8217;s pretty complicated. Any defect in these super thinly deposited stacks will mess it up. Any curvature issues will mess it up.</p><p>There are a lot of challenges with scaling the production. It&#8217;s quite artisanal in this sense because you&#8217;re not making tens of thousands of these a year, you&#8217;re making hundreds or thousands. At 60 tools a year and 18 of these per tool, you&#8217;re still in the hundreds of tools, and at roughly a thousand of these lenses and projection optics.</p><p>Then you step forward to the reticle stage, which is also something really crazy. This thing moves at, I want to say, nine Gs as you step across a wafer. The wafer stage is complementary; it&#8217;s the part that holds the wafer. You line these two things up. 
You&#8217;re taking all the light through the lenses that&#8217;s focused, and here&#8217;s the reticle, here&#8217;s the wafer. The reticle&#8217;s moving one direction, the wafer&#8217;s moving the other direction as it scans a 26x33 millimeter section of the wafer, and then it stops. It shifts over to another part of the wafer and does it again. It does that in just seconds. Each of them is moving at nine Gs in opposite directions.</p><p>Each of these things is a wonder and marvel of chemistry, fabrication, mechanical engineering, and optical engineering, because you have to align all these things and make sure they&#8217;re perfect. All of these things have crazy amounts of metrology because you have to perfectly test everything. If anything is messed up, the yield goes to zero, because this is such a finely tuned system.</p><p>By the way, it&#8217;s so large that you&#8217;re building it in the <a href="https://www.asml.com/en/company/about-asml/locations/veldhoven">factory in Eindhoven, Netherlands</a>, and they&#8217;re deconstructing it and shipping it on many planes to the customer site, and then you&#8217;re reassembling it there and testing it again. That process takes many, many months.</p><p>There are so many steps in the supply chain, whether it&#8217;s Zeiss making their lenses and projection optics or Cymer, which is an ASML-owned company, making the EUV source. Each of these has its own complex supply chain. ASML has commented that their supply chain has over ten thousand people in it.</p><p><strong>Dwarkesh Patel</strong></p><p>Like individual suppliers?</p><p><strong>Dylan Patel</strong></p><p>Yes. It might not be directly. 
It might be through Zeiss having so many suppliers and XYZ company having so many suppliers.</p><p>If you just think about it, you&#8217;re talking about two physically moving objects that are the size of a wafer, and they have to be accurate to the level of single-digit nanometers or even smaller, because the layer-to-layer overlay variation of the entire system has to be on the order of 3 nanometers. If the overlay budget is 3 nm, that means the accuracy of each individual part&#8217;s physical movement has to be even less than that. It has to be sub-one nanometer in most cases, because the errors of these things stack up. There&#8217;s no way to just snap your fingers and increase production.</p><p>Take something as simple as power. The US going from zero percent power growth to two percent power growth, even though China&#8217;s already at thirty, was so hard for America to do. And that&#8217;s a really simple supply chain with very few people in it who make difficult things. There are probably 100,000 electricians and people who work in the electricity supply chain in the US, or more.</p><p>When you look at ASML, they employ so few people. Carl Zeiss probably employs fewer than a thousand people working on this, and all of those people are super, super specialized. You can&#8217;t just train random people up for this in the snap of a finger. You can&#8217;t just get your entire supply chain galvanized.</p><p>Nvidia&#8217;s had to do a lot to get the entire supply chain to even deliver the capacity they&#8217;re going to make this year. 
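</p><p>Dylan&#8217;s overlay point above (a roughly 3 nm layer-to-layer spec forcing each moving part down to sub-nanometer accuracy) can be sketched with a toy error budget. The quadrature (root-sum-square) model and the numbers below are an illustration of why errors stacking up tightens each component&#8217;s spec, not ASML&#8217;s actual breakdown:</p>

```python
import math

# Toy overlay error budget: if N independent error sources each contribute
# e nanometers, they combine in quadrature to sqrt(N) * e. Against a ~3 nm
# layer-to-layer overlay spec with, say, 8 sources (stages, optics, metrology,
# process steps between passes...), each source must stay around 1 nm or less.
# The source count and values are illustrative, not ASML's real breakdown.

def combined_overlay_nm(per_source_nm: float, n_sources: int) -> float:
    """Root-sum-square of n_sources independent errors of equal size."""
    return math.sqrt(n_sources * per_source_nm**2)

print(round(combined_overlay_nm(1.0, 8), 2))  # 2.83, just inside a 3 nm budget
print(round(combined_overlay_nm(1.5, 8), 2))  # 4.24, blows the budget
```

<p>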
When you go talk to <a href="https://en.wikipedia.org/wiki/Anthropic">Anthropic</a>, they&#8217;re like, &#8220;We&#8217;re short of <a href="https://en.wikipedia.org/wiki/Tensor_Processing_Unit">TPUs</a>, we&#8217;re short of Trainium, and we&#8217;re short of <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPUs</a>.&#8221; When you go talk to <a href="https://en.wikipedia.org/wiki/OpenAI">OpenAI</a>, they&#8217;re like, &#8220;We&#8217;re short of these things.&#8221;</p><p>OpenAI and Anthropic know they need X. Nvidia is not quite as AGI-pilled. They&#8217;re building X - 1. You go down the supply chain, everyone&#8217;s doing X - 1. In some cases, they&#8217;re doing X &#247; 2, because they&#8217;re not AGI-pilled.</p><p>You end up with this bullwhip effect, a time lag before the supply chain reacts. The AI-pilledness and the desire to increase production take so long to propagate. Once they finally understand that they need to increase production rapidly&#8230; They think they understand. They think AI means we have to go from 60 to 100, in addition to the tools getting better and faster, the source getting higher power from 500 watts to 1,000, and all these other aspects of the supply chain advancing technically and increasing production. They think they&#8217;re actually increasing production a lot.</p><p>But if you flow through the numbers&#8230; What does Elon want? He wants 100 gigawatts a year in space by 2028 or 2029. Sam Altman wants 52 gigawatts a year by the end of the decade. Anthropic probably needs the same, and Google needs that. 
You go across the supply chain, and it&#8217;s like, wait, no, the supply chain can&#8217;t possibly build enough capacity for everyone to get what they want on the side of compute.</p><h3>00:55:47 &#8211; Can&#8217;t we just use TSMC&#8217;s older fabs?</h3><p><strong>Dwarkesh Patel</strong></p><p>I feel like in the data center supply chain for the last few years, people have been making arguments like, &#8220;We are bottlenecked by this specific thing, therefore AI compute can&#8217;t scale more than X.&#8221; But as you&#8217;ve written about, if the grid is a bottleneck, then we just do <a href="https://www.enelnorthamerica.com/insights/blogs/what-does-btm-behind-the-meter-mean">behind the meter</a> on the site, we do gas turbines, et cetera. If that doesn&#8217;t work, there are all these other alternatives that people fall back on.</p><p>I want to ask whether we can imagine a similar thing happening in the semiconductor supply chain. If EUV becomes a bottleneck, what if we just went back to 7 nm and did what China is doing currently, producing 7 nm chips with <a href="https://en.wikipedia.org/wiki/Multiple_patterning">multi-patterning</a> with <a href="https://en.wikipedia.org/wiki/Ultraviolet">DUV</a> machines? If you look at a 7 nm chip like the <a href="https://www.nvidia.com/en-us/data-center/a100/">A100</a>, there&#8217;s been a lot of progress obviously from the A100 to the <a href="https://www.exxactcorp.com/blog/hpc/comparing-nvidia-tensor-core-gpus">B100</a> or <a href="https://www.nvidia.com/en-us/data-center/dgx-b200/">B200</a>.</p><p>How much of that progress is just numerics? If you just hold <a href="https://en.wikipedia.org/wiki/Half-precision_floating-point_format">FP16</a> constant from A100 to B100. The B100 is a little over one petaflop, and the A100 is like 300 teraflops.</p><p><strong>Dylan Patel</strong></p><p>Yeah, 312.</p><p><strong>Dwarkesh Patel</strong></p><p>Holding numerics constant, you have a 3x improvement from A100 to B100. 
Some of that is the process improvement, some of that is just the accelerator design improving, which we could replicate again in the future.</p><p>It seems there&#8217;s actually a very small effect from the process improving from 7nm to 4 nm. I don&#8217;t know the numbers offhand but let&#8217;s say there&#8217;s 150k wafers per month of 3 nm and eventually similar amounts for <a href="https://en.wikipedia.org/wiki/2_nm_process">2 nm</a>. But then there&#8217;s a similar amount for 7 nm.</p><p>If you have all those old wafers and there&#8217;s maybe a 50% haircut because the bits per wafer area are 50% less or something, it doesn&#8217;t seem that bad to just bring on 7 nm wafers if that gives you another fifty or hundred gigawatts. Tell me why that&#8217;s naive.</p><p><strong>Dylan Patel</strong></p><p>We potentially do go crazy enough that this happens because we just need incremental compute, and the compute is worth the higher cost and power of these chips. But it&#8217;s also unlikely to a large extent because some of these are not fair comparisons.</p><p>For example, from A100, which is 312 teraflops, to Blackwell, which is 1,000 or 2,000 FP16, and then Rubin is 5,000 or so FP16&#8230; It&#8217;s not a fair comparison because these chips have vastly different design targets. With A100, Nvidia optimized for FP16 and <a href="https://en.wikipedia.org/wiki/Bfloat16_floating-point_format">BF16</a> numerics. When you look at <a href="https://en.wikipedia.org/wiki/Hopper_(microarchitecture)">Hopper</a>, they didn&#8217;t care as much about that; they cared about <a href="https://en.wikipedia.org/wiki/Minifloat">FP8</a>. When you look at Rubin, they don&#8217;t care about FP16 and BF16 so much, they care mostly about <a href="https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/">FP4</a> and FP6. 
Numerics are what they&#8217;ve designed their chip for.</p><p>Let&#8217;s say we make a new chip design on 7 nm, optimized for the numerics of the modern day. The performance difference is still going to be much larger than the FLOPS difference you mentioned. Often it&#8217;s easy to boil things down to FLOPS per watt or FLOPS per dollar, but that&#8217;s not a fair comparison.</p><p>Let&#8217;s look at <a href="https://www.kimi.com/ai-models/kimi-k2-5">Kimi K2.5</a> and <a href="https://en.wikipedia.org/wiki/DeepSeek">DeepSeek</a>. When you look at those two models and their performance on Hopper versus Blackwell on very optimized software, you get vastly different performance. Most of this is not attributed to FLOPS or numerics, because those models are actually eight-bit. So Blackwell and Hopper are both running eight-bit, and Blackwell is not really taking advantage of its four-bit there. Yet the performance gulf is actually much larger.</p><p>Sure, it&#8217;s one thing to shrink process technology and make the transistor smaller so each chip has X number of FLOPS, but you forget the big gating factor. These models don&#8217;t run on a single chip. They run on hundreds of chips at a time. If you look at DeepSeek&#8217;s production deployment, which is well over a year old now, they were running on 160 GPUs. That&#8217;s what they serve production traffic on. They split the model across 160 GPUs.</p><p>Every time you cross the barrier from one chip to another, there is an efficiency loss. You have to transmit over high-speed electrical <a href="https://en.wikipedia.org/wiki/SerDes">SerDes</a>, which brings a latency cost and a power cost. There are all these dynamics that hurt. As you shrink and shrink the <a href="https://en.wikipedia.org/wiki/MOSFET#Scaling">process node</a>, you&#8217;ve increased the amount of compute in a single chip. 
Now in-chip movement of data is at least tens of terabytes a second, if not hundreds of terabytes a second. Whereas between chips, you&#8217;re on the order of a terabyte a second.</p><p>Then you have this movement of data between chips that are super close to each other physically. You can only put so many chips close to each other physically, so you have to put chips in different racks. The movement of data between racks is on the order of hundreds of gigabits a second, 400 gig or 800 gig a second, so roughly 100 gigabytes a second.</p><p>So you have this huge ladder: on-chip communication is super fast, within the rack is an order of magnitude slower, and outside the rack is an order of magnitude slower than that. As you break the bounds of chips, you end up with a performance loss.</p><p>The reason I explain this is because when you look at Hopper versus Blackwell, even if both are using a rack&#8217;s worth of chips, Hopper is significantly slower. The amount of bandwidth you can bring to bear on the task within each domain&#8212;tens of terabytes a second of communication between processing elements on Blackwell versus terabytes a second on Hopper&#8212;is much, much higher, and therefore the performance is much higher. When you look at inference at 100 tokens a second for DeepSeek and Kimi K2.5, the performance difference between Hopper and Blackwell is on the order of 20x.</p><p>It&#8217;s not 2x or 3x like the FLOPS performance difference indicates, even though those are on the same process node. There are just differences in networking technologies and what they&#8217;ve worked on. You can translate some of these back, but when you look at what they&#8217;re doing on 3 nm with Rubin, some of those things are simply not possible to do all the way back on A100, even if you make a new chip for 7 nm.</p><p>There are certain architectural improvements you can port and certain ones you cannot. 
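The ladder Dylan describes can be summarized numerically. A hedged sketch, using the rough orders of magnitude from the conversation rather than measured figures for any specific system:

```python
# Hedged sketch of the communication ladder: each hop off-chip costs
# roughly an order of magnitude in bandwidth. Numbers are the rough
# figures from the conversation, not measurements of a specific part.
ladder_gb_per_s = {
    "on-chip": 50_000,                  # tens of TB/s, if not hundreds
    "chip-to-chip, same rack": 1_000,   # ~1 TB/s over SerDes-class links
    "rack-to-rack": 100,                # 400/800 Gbit/s nets, ~100 GB/s
}

for hop, bw in ladder_gb_per_s.items():
    print(f"{hop}: ~{bw:,} GB/s ({bw // 100}x the rack-to-rack rung)")
```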
The performance difference is not just going to be the difference in FLOPS. It&#8217;s in some senses cumulative between the difference in FLOPS per chip, networking speed between chips, how many FLOPS are on a chip versus a system, and memory bandwidth on a single chip versus an entire system. All of these things compound.</p><p><strong>Dwarkesh Patel</strong></p><p>Can I ask you a very naive question? The B200 now has two <a href="https://en.wikipedia.org/wiki/Die_(integrated_circuit)">dies</a> on a single chip, so you can get that bandwidth without having to go through <a href="https://en.wikipedia.org/wiki/NVLink">NVLink</a> or <a href="https://en.wikipedia.org/wiki/InfiniBand">InfiniBand</a>. Next year, <a href="https://arstechnica.com/ai/2025/03/nvidia-announces-rubin-ultra-and-feynman-ai-chips-for-2027-and-2028/">Rubin Ultra</a> will have four dies on one chip. What is preventing us from just doing that with an older&#8230; How many dies could you have on a single chip and still get these tens of terabytes a second?</p><p><strong>Dylan Patel</strong></p><p>Even within Blackwell, there are differences in performance when you&#8217;re communicating on the chip versus across the chips. Those bounds are obviously much smaller than when you&#8217;re going out of the entire chip. When you scale the number of chips up, there is some performance loss. It&#8217;s not perfect, but it is way better than different entire packages.</p><p>How large can <a href="https://en.wikipedia.org/wiki/Advanced_packaging_(semiconductors)">advanced packaging</a> scale? The way Nvidia is doing it is CoWoS. Google, Broadcom, MediaTek, and Amazon&#8217;s <a href="https://aws.amazon.com/ai/machine-learning/trainium/">Trainium</a> are all doing CoWoS. But actually you can go look back at what Tesla did with <a href="https://en.wikipedia.org/wiki/Tesla_Dojo">Dojo</a>, which they cancelled and restarted. Dojo was a chip that was the size of an entire wafer. They had 25 chips on it. 
There were some tradeoffs. They couldn&#8217;t put <a href="https://en.wikipedia.org/wiki/High_Bandwidth_Memory">HBM</a> on it. But the positive side was that they had 25 chips on it. To date, it is still probably the best chip for running <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">convolutional neural networks</a>. It&#8217;s just not great at <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformers</a> because the shape of the chip, the memory, the arithmetic, and all these various specifications are just not well-suited for transformers. They&#8217;re well-suited for CNNs.</p><p>Dojo chips were optimized around that, and they made a bigger package. But as you make packages bigger and bigger, you have other constraints: networking speed, memory bandwidth, and cooling capabilities. All of these things start to rear their heads. It&#8217;s not simple. But yes, you will see a trend line of more chips on the package, and yes, you&#8217;re going to be able to do that on 7 nm.</p><p>In fact, that&#8217;s what Huawei did with their <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance">Ascend 910C</a> or <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/huawei-ascend-ai-910d-processor-designed-to-take-on-nvidias-blackwell-and-rubin-gpus">D</a>. They initially put one, and then they did two. They&#8217;re focusing on scaling the packaging up because that is an area where they can advance faster than process technology where they can&#8217;t shrink. But at the end of the day, that&#8217;s something you can do on the leading-edge chips too. 
Anything you do on 7 nm, you can also probably do on 3 nm in terms of packaging.</p><h3>01:05:37 &#8211; When will China outscale the West in semis?</h3><p><strong>Dwarkesh Patel</strong></p><p>If we end up in this world in 2030 where the West has the most advanced process technology but has not ramped it up as much, whereas China&#8230; I don&#8217;t know if you think by 2030 they would have EUV and 2 nm or whatever. But they are semiconductor-pilled and they are producing in mass quantity.</p><p>Basically, I&#8217;m wondering what the year is where there&#8217;s a crossover, where our advantage in process technology has faded enough, and their advantage in scale has increased enough. And also, if their advantage in having one country with the entire supply chain indigenized&#8212;rather than having random suppliers in Germany and the Netherlands&#8212;would mean that China would be ahead in its ability to produce mass <a href="https://en.wikipedia.org/wiki/Floating_point_operations_per_second">flops</a>.</p><p><strong>Dylan Patel</strong></p><p>To date, China still does not have an entirely indigenized semiconductor supply chain.</p><p><strong>Dwarkesh Patel</strong></p><p>But would they in 2030?</p><p><strong>Dylan Patel</strong></p><p>By 2030, it&#8217;s possible that they do. But to date, all of China&#8217;s 7 nm and 14 nm capacity uses ASML DUV tools. The amount that they can import from ASML is large. But the vast majority of ASML&#8217;s revenue, and all of its EUV revenue, is outside of China. The scale advantage is still in the favor of the West plus Taiwan, Japan, and Korea, et cetera.</p><p><strong>Dwarkesh Patel</strong></p><p>But they&#8217;re trying to make their own DUV and EUV tools, right?</p><p><strong>Dylan Patel</strong></p><p>They&#8217;re trying to do all these things. The question is how fast can they advance and scale up production as well as quality. To date, we haven&#8217;t seen that. 
Now I&#8217;m quite bullish that they&#8217;re going to be able to do these things over the next five to ten years. They will really scale up production and kick it into high gear. They have more engineers working on it and more desire to throw capital at the problem.</p><p><strong>Dwarkesh Patel</strong></p><p>So by 2030, will they have fully indigenized DUV?</p><p><strong>Dylan Patel</strong></p><p>I think for sure. DUV, yes.</p><p><strong>Dwarkesh Patel</strong></p><p>And fully indigenized EUV by 2030?</p><p><strong>Dylan Patel</strong></p><p>I think they&#8217;ll have working tools. I don&#8217;t think that they&#8217;ll be able to manufacture a bunch yet. There&#8217;s having it work, and then there&#8217;s production hell. ASML had EUV working in the early 2010s at some capacity. The tools were not accurate enough. They were not scaled for high-volume manufacturing or reliable enough. They had to ramp production, and that all took time.</p><p>Production hell takes time. That&#8217;s why it took another five to seven years to get EUV into mass production at a fab rather than just working in the lab.</p><p><strong>Dwarkesh Patel</strong></p><p>How many DUV tools do you think they&#8217;ll be able to manufacture in 2030?</p><p><strong>Dylan Patel</strong></p><p>ASML?</p><p><strong>Dwarkesh Patel</strong></p><p>No, China.</p><p><strong>Dylan Patel</strong></p><p>That&#8217;s a great question. It&#8217;s a bit of a challenge to look into this supply chain especially. We try really hard. In some instances, they&#8217;re buying stuff from Japanese vendors. If they want a fully indigenized supply chain, they need to not buy these lenses, projection optics, or stages from Japanese vendors. They need to build it internally.</p><p>It&#8217;s really tough to say where they&#8217;ll be able to get to. I honestly think it&#8217;s a shot in the dark. 
But it&#8217;s probably not unlikely that they&#8217;ll be able to do on the order of 100 DUV tools a year, whereas ASML is currently doing hundreds of DUV tools a year.</p><p>No company has a process node where they make a million wafers a month. Elon says he wants to do it and China is obviously going to do it. TSMC is trying to do that. The memory makers may get to a million wafers a month as well, but not in a single fab.</p><p>It&#8217;s mind-boggling to think of that scale, and challenging to see the supply chain galvanized for that. I don&#8217;t want to doubt China&#8217;s capability to scale.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess this is an interesting question. I think at some point SemiAnalysis will do the deep dive on this. By when would indigenized Chinese production be bigger than the rest of the West combined? And put in, as an input, your model of when they&#8217;ll have DUV machines and EUV machines at scale.</p><p>Because there&#8217;s this question around if you have long timelines on AI&#8212;by long meaning 2035, which is not that long in the grand scheme of things&#8212;should you expect a world where China is dominating in semiconductors? It doesn&#8217;t get asked enough because if you&#8217;re in San Francisco, you&#8217;re thinking on timescales of weeks. If you&#8217;re outside of San Francisco, you&#8217;re not thinking about AGI at all.</p><p>What if we have AGI? What if you have this transformational thing that is commanding tens or hundreds of trillions of dollars of economic growth and token output, but it happens in 2035? What does that imply for the West versus China? SemiAnalysis has got to write the definitive model on this.</p><p><strong>Dylan Patel</strong></p><p>It&#8217;s really challenging when you move timescales out that far. What we tend to focus on is tracking every data center, every fab, and all the tools. We track where they&#8217;re going, but the time lags for these things are relatively short. 
We can only make reasonably accurate estimates for data center capacity based on land purchasing, permits, and turbine purchasing. We know where all these things are going, that&#8217;s the data we sell.</p><p>As you go out to 2035, things are just so radically different. Your error bars get so large it&#8217;s hard to make an estimate. But at the end of the day, if takeoff or timelines are slow enough, I don&#8217;t see why China wouldn&#8217;t be able to catch up drastically. In some sense, we&#8217;ve got this valley where, three to six months ago, or maybe even now, Chinese models are as competitive as they&#8217;ve ever been. I think Opus 4.6 and GPT 5.4 have really pulled away and made the gap a little bit bigger, but I&#8217;m sure some new Chinese models will come out.</p><p>As we move from selling tokens where they provide the entire reasoning chain, to selling automated white-collar work&#8212;an automated software engineer, you send them the request, they give you the result back, and there&#8217;s a bunch of thinking on the back end that they don&#8217;t show you&#8212;the ability to <a href="https://en.wikipedia.org/wiki/Knowledge_distillation">distill</a> out of American models into Chinese models will be harder.</p><p>Second, look at the scale of the compute the labs have. OpenAI exited last year with roughly two gigawatts. Anthropic will get to two-plus gigawatts this year. By the end of next year, they&#8217;ll both be at ten gigawatts of capacity. China is not scaling their AI lab compute nearly as fast. At some point, when you can&#8217;t distill the learnings from these labs into the Chinese models, and with this compute race that OpenAI, Anthropic, Google, and Meta are all running, the model performance should start to diverge more.</p><p>Then look at all this CapEx being spent on data centers. Amazon is spending $200 billion, Google $180 billion. 
All these companies are spending hundreds of billions of dollars on CapEx. There&#8217;s nearly a trillion dollars of CapEx being invested in data centers in America this year, roughly. What&#8217;s the return on invested capital here? You and I would think the return on invested capital for data center CapEx is very high.</p><p>If we look at Anthropic&#8217;s revenues, in January they added $4 billion. In February, which was a shorter month, they added $6 billion. We&#8217;ll see what they can do in March and April, given that compute constraints are what&#8217;s bottlenecking their growth. The reliability of Claude is quite low because they&#8217;re so compute constrained. But if this continues, then the ROIC on these data centers is super high.</p><p>At some point, the US economy starts growing faster and faster over this year and next year because of all this CapEx, all the revenue these models are generating, and the downstream supply chain. China doesn&#8217;t have that yet. They have not built the scale of infrastructure to invest in models, get to the capabilities, and then deploy these models at such scale.</p><p>When you look at Anthropic, they&#8217;re at $20 billion ARR. The margins are sub-50 percent, at least as <a href="https://www.theinformation.com/briefings/anthropic-lowered-gross-margin-projection-costs-run-ai-rose">last reported by </a><em><a href="https://www.theinformation.com/briefings/anthropic-lowered-gross-margin-projection-costs-run-ai-rose">The Information</a></em>. So that&#8217;s $13 or $14 billion of compute that it&#8217;s running on rental cost-wise, which is actually $50 billion worth of CapEx that someone laid out for Anthropic to generate their current revenue.</p><p>China has just not done this. If and when Anthropic 10Xs revenue again&#8212;and I think our answer would be when, not if&#8212;China doesn&#8217;t have the compute to deploy at that scale. So there is some sense that we&#8217;re in a fast takeoff. 
It&#8217;s not like we&#8217;re talking about a <a href="https://en.wikipedia.org/wiki/Dyson_sphere">Dyson sphere</a> by X date, it&#8217;s more like the revenue is compounding at such a rate that it does affect economic growth. The resources these labs are gathering are growing so fast. China hasn&#8217;t done that yet, so in that case, the US and the West are actually diverging.</p><p>The flip side is that these infrastructure investments have middling returns. Maybe they&#8217;re not as good as hoped. Maybe Google is wrong for wanting to take free cash flow to zero and spend $300 billion on CapEx next year. Maybe they&#8217;re just wrong and people on Wall Street who are bearish and people who don&#8217;t understand AI are correct. In that case, the US is building all this capacity but doesn&#8217;t get great returns. Meanwhile, China is able to build a fully vertical, indigenized supply chain, instead of the US/Japan/Korea/Taiwan/SE Asia/Europe countries together building this less vertical supply chain. In a sense, at some point China is able to scale past us if AI takes longer to get to certain capability levels than the vast majority of your guests on this podcast believe.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s fast timelines, the US wins; long timelines, China wins.</p><p><strong>Dylan Patel</strong></p><p>Yeah but I don&#8217;t know what fast timelines means. I don&#8217;t think you have to believe in AGI to have the timelines where the US wins.</p><h3>01:16:01 &#8211; The enormous incoming memory crunch</h3><p><strong>Dwarkesh Patel</strong></p><p>Let&#8217;s go back to memory. I think people on Wall Street and people in the industry are understanding how big this is, but maybe generally people don&#8217;t understand what a big deal it is. So we&#8217;ve got this memory crunch, as you were talking about.</p><p>And earlier I was asking about, oh, could we solve for the EUV tool shortage by going back to seven nanometers? 
So let me ask a similar question about memory. HBM is made of DRAM, but has three to four times fewer bits per wafer area than the DRAM it&#8217;s made out of.</p><p>Is it possible that accelerators in the future could just use commodity DRAM and not HBM, so we can get much more capacity out of the DRAM we have? The reason I think this might be possible is, if we&#8217;re going to have agents that are just going off and doing work, and it&#8217;s not a synchronous chatbot application, then you don&#8217;t necessarily need extremely low latency.</p><p>Maybe you can have lower bandwidth, because the reason you stack DRAM into HBM is for higher bandwidth. Is it possible to go to non-HBM accelerators and basically have the opposite of Claude Code Fast, like have Claude Slow?</p><p><strong>Dylan Patel</strong></p><p>At the end of the day, the incremental purchaser who&#8217;s willing to pay the highest price for tokens also ends up being the one that&#8217;s less price-sensitive. Compute should be allocated, in a capitalistic society, towards the goods that have the highest value, and the private market determines this by willingness to pay.</p><p>To some extent, Anthropic could actually release a slow mode. They could release Claude Slow Mode and increase tokens per dollar by a significant amount. They could probably reduce the price of Opus 4.6 by 4-5x and reduce the speed by maybe just 2x. The curve on inference throughput versus speed is already there just on HBM. And yet they don&#8217;t, because no one actually wants to use a slow model.</p><p>Furthermore, on these agentic tasks, it&#8217;s great that the model can run at a time horizon of hours. But if the model was running slower, those hours would become a day. Vice versa, if the model is running faster, those hours become an hour. 
No one really wants to move to a day-long wait period, because the highest-value tasks also have some time sensitivity to them.</p><p>I struggle to see&#8230; Yes, you could use regular DRAM. There are a couple of challenges with this. One of the core constraints of chips is that a chip is a certain size, and all of the <a href="https://en.wikipedia.org/wiki/Input/output">I/O</a> escapes on the edges. Often, the left and right of the chip are HBM&#8212;so the I/O from the chip to the HBM is on the sides&#8212;and then the top and bottom are I/O to other chips.</p><p>If you were to change from HBM to <a href="https://en.wikipedia.org/wiki/DDR_SDRAM">DDR</a>, all of a sudden this I/O on the edge would have significantly less bandwidth, but significantly more capacity per chip. But the metric you actually care about is bandwidth per wafer, not bits per wafer.</p><p><strong>Dwarkesh Patel</strong></p><p>Because the thing that is constraining the FLOPS is just getting in and out the next matrix, and for that you just need more bandwidth.</p><p><strong>Dylan Patel</strong></p><p>Yeah, getting out the <a href="https://www.ultralytics.com/glossary/model-weights">weights</a> and getting in and out the <a href="https://huggingface.co/blog/not-lain/kv-caching">KV cache</a>. In many cases, these GPUs are not running at full memory capacity. It&#8217;s obviously a system design thing: model, hardware, and software co-design. You have to figure out how much KV cache you need, how much you keep on the chip, how much you offload to other chips and call when you need it for tool calling, and how many chips you parallelize this on.</p><p>Obviously, the search space for this is very broad, which is why we have <a href="https://inferencex.semianalysis.com/">InferenceX</a>, an open-source model that searches all the optimal points on inference for a variety of different chips and models.</p><p>The point is, you&#8217;re not always necessarily constrained by memory capacity. 
You can be constrained by FLOPS, network bandwidth, memory bandwidth, or memory capacity. If you really simplify it down, there are four constraints, and each of these can break out into more.</p><p>If you switch to DDR, yes, you produce four times the bits per DRAM wafer, but all of a sudden the constraints shift a lot and your system design shifts. You go slower. Is the market smaller? Maybe. But also, all these FLOPS are wasted because they&#8217;re just sitting there waiting for memory. You don&#8217;t need all that capacity because you can&#8217;t really increase batch size because then the KV cache would take even longer to read.</p><p><strong>Dwarkesh Patel</strong></p><p>Makes sense. What is the bandwidth difference between HBM and normal DRAM?</p><p><strong>Dylan Patel</strong></p><p>An <a href="https://en.wikipedia.org/wiki/High_Bandwidth_Memory#HBM4">HBM4</a> stack&#8212;let&#8217;s talk about the stuff that&#8217;s in Rubin, because that&#8217;s what we&#8217;ve been indexing on&#8212;is 2048 bits across, connected in an area that&#8217;s 13 millimeters wide. It transfers memory at around 10 giga-transfers a second.</p><p>So a stack of HBM4 is 2048 bits on an area that&#8217;s roughly 11 to 13 millimeters wide. That&#8217;s the shoreline you&#8217;re taking on the chip. In that shoreline, you have 2048 bits transferring at 10 giga-transfers per second. You multiply those together and divide by eight, bits to a byte, and you&#8217;re at roughly 2.5 terabytes a second per HBM stack.</p><p>When you look at DDR, in that same area, it&#8217;s maybe 64 or 128 bits wide. That DDR5 is transferring at anywhere from 6.4 to maybe 8 giga-transfers a second. So your bandwidth is significantly lower. It&#8217;s 64 bits times 8 giga-transfers divided by eight, which puts you at 64 gigabytes a second. 
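The shoreline arithmetic above reduces to bits-across times transfer rate, divided by eight bits per byte. A minimal sketch with the figures quoted here:

```python
# Shoreline bandwidth: bus width (bits) * rate (GT/s) / 8 bits-per-byte.
# Figures are the ones quoted in the conversation.
def shoreline_bandwidth_gb_s(bits_wide: int, gigatransfers: float) -> float:
    return bits_wide * gigatransfers / 8

hbm4_stack = shoreline_bandwidth_gb_s(2048, 10)  # ~11-13 mm of chip edge
ddr5 = shoreline_bandwidth_gb_s(64, 8)           # comparable edge area

print(f"HBM4 stack: {hbm4_stack:,.0f} GB/s (~{hbm4_stack / 1000:.1f} TB/s)")
print(f"DDR5:       {ddr5:,.0f} GB/s")
print(f"ratio:      {hbm4_stack / ddr5:.0f}x per unit of shoreline")
```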
Even if you take a generous interpretation of 128 times 8 giga-transfers, you&#8217;re at 128 gigabytes a second for the same shoreline, versus 2.5 terabytes a second.</p><p>There&#8217;s an order of magnitude difference in bandwidth per edge area. If your chip is a square, or 26 by 33 millimeters&#8212;which is the maximum size for an individual die&#8212;you only have so much edge area. On the inside of that chip, you put all your compute. There are things you can do to try and change that, like more <a href="https://en.wikipedia.org/wiki/Static_random-access_memory">SRAM</a> or more caching. But at the end of the day, you&#8217;re very constrained by bandwidth.</p><p><strong>Dwarkesh Patel</strong></p><p>Then there&#8217;s the question of where you can destroy demand to free up enough for AI. I guess the picture is especially bad because, as you&#8217;re saying, if it takes four times more wafer area to get the same byte, for HBM you have to destroy four times as much consumer demand for laptops and phones to free up one byte for AI.</p><p>What does this imply for the next year or two? Sorry for the run-on question, in your newsletter you said 30% of Big Tech&#8217;s CapEx in 2026 is going towards memory?</p><p><strong>Dylan Patel</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s insane, right? Of the $600 billion or whatever, 30% is going just to memory.</p><p><strong>Dylan Patel</strong></p><p>Yes. Obviously, there&#8217;s some level of margin stacking that Nvidia does, so you have to separate that out and apply their margin to the memory and the logic. But at the end of the day, a third of their CapEx is going to memory.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s crazy. What should we expect over the next year or two as this memory crunch hits?</p><p><strong>Dylan Patel</strong></p><p>The memory crunch will continue to get harder, and prices will continue to go up. 
This affects different parts of the market differently. Are people going to hate AI more and more? Yes, because smartphones and PCs are not going to get incrementally better year on year. In fact, they&#8217;re going to get incrementally worse.</p><p><strong>Dwarkesh Patel</strong></p><p>If you look at the bill of materials for an iPhone, what fraction of it is the memory? How much more expensive does an iPhone get if the memory is two times more expensive?</p><p><strong>Dylan Patel</strong></p><p>I believe an iPhone has 12 gigabytes of memory. Each gig used to cost roughly $3-4, so that&#8217;s $50. But now the price of memory has tripled. Let&#8217;s say it&#8217;s $12 per gig for DDR. Now you&#8217;re talking about $150 versus $50.</p><p>That&#8217;s a $100 increase in cost for Apple, and that&#8217;s just on the DRAM. The <a href="https://en.wikipedia.org/wiki/Flash_memory">NAND</a> also has the same market dynamics, so in reality, it&#8217;s probably a $150 increase on the iPhone. Apple either has to pass that on to the consumer or eat it. I don&#8217;t see Apple reducing their margin too much, maybe they eat a little bit. But at the end of the day, that means the end consumer is paying $250 more for an iPhone.</p><p>Now that&#8217;s just on last year&#8217;s pricing versus today&#8217;s. There is some lag before Apple feels the heat because they tend to have long-term contracts for memory that last three months to a year. But at the end of the day, Apple gets hit pretty hard by this. They won&#8217;t really adjust until the next iPhone release.</p><p>But that&#8217;s the high end of the market, which is only a few hundred million phones a year. Apple sells two or three hundred million phones annually. The bulk of the market is mid-range and low-end. 
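The bill-of-materials arithmetic above can be laid out explicitly. A hedged sketch: the per-gigabyte prices are the rough figures quoted here, while the NAND delta and the retail pass-through multiplier are illustrative assumptions, not reported numbers:

```python
# Rough iPhone BOM sketch from the figures quoted in the conversation.
# The NAND delta and retail multiplier are illustrative assumptions.
dram_gb = 12
old_price_per_gb = 4.0    # "roughly $3-4 per gig" historically
new_price_per_gb = 12.0   # after the price of memory roughly tripled

dram_increase = dram_gb * (new_price_per_gb - old_price_per_gb)  # ~$96
nand_increase = 50.0      # assumption: NAND adds a similar ~$50
bom_increase = dram_increase + nand_increase                     # ~$150

retail_multiplier = 1.7   # assumption: margin pass-through to retail
print(f"DRAM cost increase:    ~${dram_increase:.0f}")
print(f"Total BOM increase:    ~${bom_increase:.0f}")
print(f"Retail price increase: ~${bom_increase * retail_multiplier:.0f}")
```

With these assumptions the retail increase lands in the neighborhood of the $250 figure given in the conversation.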
It used to be that 1.4 billion smartphones were sold a year. Now we&#8217;re at about 1.1 billion. Our projections are that we might drop to 800 million this year, and down to 500 or 600 million next year.</p><p>We look at data points out of China from some of our analysts in Asia, Singapore, Hong Kong, and Taiwan. They&#8217;ve been tracking this, and they see <a href="https://en.wikipedia.org/wiki/Xiaomi">Xiaomi</a> and <a href="https://en.wikipedia.org/wiki/Oppo">Oppo</a> cutting low-end and mid-range smartphone volumes by half.</p><p>Yes, it&#8217;s only a $150 <a href="https://en.wikipedia.org/wiki/Bill_of_materials">BOM</a> increase on a $1,000 iPhone where Apple has some larger margin. But for smaller phones, the percentage of the BOM that goes to memory and storage is much larger. And the margins are lower, so there&#8217;s less capacity to even eat the margins. And they have also generally tended not to do long-term agreements on memory.</p><p>Why this is a big deal is that if smartphone volumes halve, that drop will happen in the low and mid-range, not the high end. So it&#8217;s not like the bits released are halving. Currently, consumer devices account for more than half of memory demand. Even if you halve smartphone volumes, because of the shape of the halving, the low end gets cut by more than half, while the high end gets cut by less than half, because you and I will still buy the high-end phones that cost north of a thousand dollars. We&#8217;ll buy them even if they get a little bit more expensive. And Apple&#8217;s volumes will not go down as much as a low-end smartphone provider.</p><p>The same applies to PCs. What this does to the market is quite drastic. DRAM gets released and goes to AI chips, who are willing to do longer-term contracts and pay higher margins, because at the end of the day the margin they extract from the end user is much larger.</p><p>This probably leads to people hating AI even more. 
Today, you already see all the memes on PC subreddits and gaming PC Twitter. It&#8217;s cat dancing videos saying, &#8220;This is why memory prices have doubled and you can&#8217;t get a new gaming GPU or desktop.&#8221; It&#8217;s going to be even worse when memory prices double again, especially DRAM.</p><p>Another interesting dynamic is that it&#8217;s not just DRAM, it&#8217;s also NAND. NAND is also going up in price. Both of these markets have expanded capacity very slowly over the last few years, NAND almost zero. The percentage of NAND that goes to phones and PCs is larger than the percentage of DRAM that goes to phones and PCs.</p><p>As you destroy demand, mostly for DRAM purposes, you unlock more NAND that gets allocated and can go to other markets. The price increases of DRAM will be larger than those of NAND because you&#8217;ve released more from the consumer, and in fact, you&#8217;ve produced more memory for AI.</p><p><strong>Dwarkesh Patel</strong></p><p>Sorry, maybe you just explained it and I missed it. Is it because <a href="https://en.wikipedia.org/wiki/Solid-state_drive">SSDs</a> are being used in large quantities for data centers?</p><p><strong>Dylan Patel</strong></p><p>They are, but not in as large quantities as DRAM.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, so they will also increase because they&#8217;ll be using some quantity, but there&#8217;s not as much of a need as there is for HBM. Makes sense.</p><p>One thing I didn&#8217;t appreciate until I was reading some of your newsletters is that the same constraints preventing logic scaling over the next few years are quite similar to what&#8217;s preventing us from producing more memory wafers. In fact, literally the same exact machine, this EUV tool, is needed for memory.  
So I guess the question someone could ask right now is, why can&#8217;t we just make more memory?</p><p><strong>Dylan Patel</strong></p><p>The constraints, as I was mentioning earlier, are not necessarily EUV tools today or next year. They become that as we get to the latter part of the decade. Currently, the constraints are more that they physically just haven&#8217;t built fabs. Over the last three to four years, these vendors have not built new fabs because memory prices were really low. Their margins were low, and in fact, they were losing money in 2023 on memory. So they decided they weren&#8217;t building new fabs. The market slowly recovered over time but never really got amazing until last year.</p><p>In 2024, we were banging on the drums that reasoning means <a href="https://cloud.google.com/transform/the-prompt-what-are-long-context-windows-and-why-do-they-matter">long context</a>, which means a large KV cache, which means you need a lot of memory demand. We&#8217;ve been talking about that for a year and a half, two years. People who understand AI went really long on memory then. So you&#8217;ve seen that dynamic, but now it has finally played out in pricing.</p><p>It took so long for what was obvious: long context means the KV cache gets bigger, you need more memory. Half the cost of accelerators is memory. Of course they&#8217;re going to start going crazy on it. It took a year for that to actually reflect in memory prices. Once memory prices reflected that, it took another three to six months for the memory vendors to start building fabs. Those fabs take two years to build. So we won&#8217;t have really meaningful fabs to even put these tools in until late 2027 or 2028.</p><p>Instead, you&#8217;ve seen some really crazy stuff to get capacity. 
<a href="https://en.wikipedia.org/wiki/Micron_Technology">Micron</a> <a href="https://www.reuters.com/world/china/microns-18-billion-acquisition-boosts-powerchip-shares-2026-01-19/">bought a fab from a company in Taiwan</a> that makes lagging-edge chips. Hynix and Samsung are doing some pretty crazy things to try and expand capacity at their existing fabs, which also have large knock-on effects in the economy.</p><p>So why can&#8217;t we build more capacity? There&#8217;s nowhere to put the tools. It&#8217;s not just EUV; there are other tools involved in DRAM and logic. In logic, for N3, about 28% of the cost of the final wafer is EUV. When you look at DRAM, it&#8217;s in the teens. It&#8217;s going up, but it&#8217;s a much smaller percentage of the cost. These other tools are also bottlenecks, although their supply chains are not as complex as ASML&#8217;s.</p><p>You see <a href="https://en.wikipedia.org/wiki/Applied_Materials">Applied Materials</a>, <a href="https://www.lamresearch.com/">Lam Research</a>, and all these other companies expanding capacity a lot as well. But you don&#8217;t have anywhere to put the tool, because the most complex buildings people make are fabs, and fabs take two years to build.</p><p><strong>Dwarkesh Patel</strong></p><p><a href="https://www.dwarkesh.com/p/elon-musk">I interviewed Elon recently</a>, and his whole plan is that they&#8217;re going to build this <a href="https://www.bloomberg.com/news/articles/2026-01-28/musk-says-tesla-needs-to-build-terafab-to-manufacture-chips">TeraFab</a> and they&#8217;re going to build the <a href="https://en.wikipedia.org/wiki/Cleanroom">clean rooms</a>. I won&#8217;t even ask you about the dirty rooms thing, but let&#8217;s say they build the clean rooms.</p><p>I have a couple of questions. One, do you think this is the kind of thing that Elon Co. could build much faster than people conventionally build it? This is not about building the end tools. 
This is just about building the facility itself. How complicated is it to just build the clean room extremely fast? Is this something that Elon, with his &#8220;move fast&#8221; approach, could do much faster if that&#8217;s what we&#8217;re bottlenecked on this year or next year? Two, does that even matter if, in two years, your view is that we&#8217;re not bottlenecked on clean room space, but on the tooling?</p><p><strong>Dylan Patel</strong></p><p>As with any complex supply chain, it takes time, and constraints shift over time. Even if something is no longer a constraint, that doesn&#8217;t mean that market no longer has margin. For example, energy will not be a big bottleneck a couple of years from now, but that doesn&#8217;t mean energy isn&#8217;t growing super fast and there&#8217;s no margin there. It&#8217;s just not the key bottleneck. In the space of fabs, clean rooms are the biggest bottleneck this year and next year. As we get to 2028, 2029, 2030, there will still be constraints there.</p><p>The thing about Elon is he has a tremendous capability to garner physical resources and really smart people to build things. The way he recruits amazing people is by trying to build the craziest stuff. In the case of AI, that hasn&#8217;t really worked because everyone&#8217;s trying to build AGI. Everyone is very ambitious. But in the case of going to Mars, making rockets that land themselves, fully autonomous electric cars, or humanoid robots, these are methods of recruiting the people who think that&#8217;s the most important problem in the world to work on that problem, because he&#8217;s the only one trying really hard.</p><p>In the case of semiconductors, he stated he wants to make a fab that&#8217;s a million wafers per month. No one has a fab that big. It&#8217;s possible that he&#8217;s able to recruit a lot of really awesome people and get them on this crazy task of building a million wafers a month. 
Step one is to build the clean room, and that I think he probably can do. His mindset around deleting things, that it can be dirty, it&#8217;s fine, is probably not right. Actually I think it&#8217;s 100% not right. You need the fab to be very clean. All of the air in the fab gets replaced every three seconds, it&#8217;s that fast. There have to be so few particles.</p><p>But I think he can build the clean room. It&#8217;ll take a year or two. Initially, it won&#8217;t be super fast, but over time, he&#8217;ll get faster at it. The really complex part is actually developing a process technology and building wafers. I don&#8217;t think he can develop that quickly. That has a lot of built-up knowledge. The most complicated integration of very expensive tools and supply chains is done by TSMC, Intel, or Samsung. These two other companies aren&#8217;t even that great at it, and they&#8217;re tremendously complex.</p><p><strong>Dwarkesh Patel</strong></p><p>How surprised would you be if in 2030 there just happened to be some total disruption where we&#8217;re not using EUV? What if we&#8217;re using something that has much better effects, is much simpler to produce, and can be produced in much bigger quantities? I&#8217;m sure as an industry insider that sounds like a totally naive question, but do you see what I&#8217;m asking? What probability should we put on something coming totally out of left field to make all of this irrelevant?</p><p><strong>Dylan Patel</strong></p><p>Something that&#8217;s very simple and easy to scale, I assign a very, very low probability. 
There are a number of companies working on effectively particle accelerators or <a href="https://en.wikipedia.org/wiki/Synchrotron">synchrotrons</a> that generate light that&#8217;s either 13.5 nanometer, like EUV, or an even narrower wavelength, like <a href="https://www.tomshardware.com/tech-industry/semiconductors/american-startup-substrate-promises-2nm-class-chipmaking-with-particle-accelerators-at-a-tenth-of-the-cost-of-euv-x-ray-lithography-system-has-potential-to-surpass-asmls-euv-scanners">X-ray</a> at 7 nanometers, to then use in lithography tools. But those things are massive particle accelerators generating this light. It&#8217;s a very complicated thing to build.</p><p>There are a couple of companies and I think that could be a big disruption to the industry beyond EUV. But I don&#8217;t think we&#8217;re going to magically build something new that is direct write and super simple, and can be manufactured at huge volumes, although there are some attempts to do things like this.</p><p><strong>Dwarkesh Patel</strong></p><p>I ask because if you think about Elon&#8217;s companies in the past, rocketry was this thing that was thought to be&#8212;and is&#8212;incredibly complicated.</p><p><strong>Dylan Patel</strong></p><p>Look, I&#8217;m just a naive yapper compared to Elon. What have I built? So maybe it&#8217;s possible.</p><p><strong>Dwarkesh Patel</strong></p><p>In order to build more memory in the future, could we build <a href="https://www.tomshardware.com/tech-industry/next-generation-3d-dram-approaches-reality-as-scientists-achieve-120-layer-stack-using-advanced-deposition-techniques">3D DRAM</a> the way we do <a href="https://www.appliedmaterials.com/us/en/semiconductor/markets-and-inflections/memory/3d-nand.html">3D NAND</a> and then go back to DUV?</p><p><strong>Dylan Patel</strong></p><p>That is the hope currently. Everyone&#8217;s roadmap for 3D DRAM is that you&#8217;ll still use EUV because you want to have that tighter overlay. 
When you&#8217;re doing these subsequent processing steps, everything is vertically stacked and you have more layers on top of each other. You want the pitches to be tighter. So generally, people are still trying to do it with EUV.</p><p>But what 3D would do is change the calculation of how many bits a single EUV pass can make. That number would go up drastically if you go to 3D DRAM. That is the hope. Right now, everyone&#8217;s roadmap goes from the current 6F&#178; cell, to a <a href="https://www.globalsmt.net/advanced-packaging/a-new-round-of-technological-innovation-in-memory-market-on-the-way/">4F&#178; cell</a>, and then finally 3D DRAM by the end of the decade or early next decade. There&#8217;s still a lot of R&amp;D, manufacturing, and integration to be done. I wouldn&#8217;t say that&#8217;s off the table. I think it&#8217;s very likely going to happen.</p><p>It&#8217;s also going to require a huge retooling of fabs. The breakdown of tools in a fab will be very different. The lithography tool is actually the only thing that isn&#8217;t that different. But the number of them relative to different types of <a href="https://en.wikipedia.org/wiki/Chemical_vapor_deposition">chemical vapor deposition</a>, <a href="https://en.wikipedia.org/wiki/Atomic_layer_deposition">atomic layer deposition</a>, <a href="https://en.wikipedia.org/wiki/Dry_etching">dry etch</a>, or different kinds of etch chambers with different chemistries&#8230; You have all these different tools for different process nodes. You can&#8217;t just convert a logic fab to a DRAM fab, or vice versa, or a NAND fab to a DRAM fab, in a short amount of time.</p><p>In the same way, existing DRAM fabs require a lot of retooling just to go from 1-alpha to 1-beta to 1-gamma process nodes, because they have to add EUV and change the chemistry stacks for deposition and etch when you&#8217;re using EUV. And the EUV tool has to be there.
Furthermore, when you change to 3D DRAM, there&#8217;s going to be an even larger shift, so a lot of retooling of these fabs needs to happen.</p><p>That would be a big disruption. That would make EUV demand generally lower. But as we&#8217;ve seen across time, lithography demand as a percentage of wafer cost has trended up. Around the 2014 era, it was 17% of the wafer cost, and it&#8217;s gone to 30% over the last fifteen years. For DRAM, it was in the low to mid-teens, and now it&#8217;s trended toward the high teens. Before we get to 3D DRAM, it&#8217;ll likely cross into the 20% range. But then, if we get to 3D DRAM, EUV as a percentage of the total end wafer cost tanks again.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess you care less about the percent of cost and more about how much it bottlenecks production.</p><p><strong>Dylan Patel</strong></p><p>Right, but the percentage of cost&#8212;</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s a proxy, yeah. If you&#8217;re <a href="https://en.wikipedia.org/wiki/Jensen_Huang">Jensen</a> or Sam Altman, or whoever stands to gain a lot from scaling up AI compute, there are these stories that they&#8217;d go to TSMC and say, &#8220;Why can&#8217;t we access Y and Z?&#8221; But I think the point you&#8217;re making is that it doesn&#8217;t really matter what TSMC does in some sense. In fact, even if you have Intel and Samsung building more foundries, in the long run, you&#8217;re going to be bottlenecked by ASML and other tool and material makers.</p><p>First, is that a correct interpretation? Second, should Silicon Valley people be going to the Netherlands right now to try to pitch ASML to make more tools so that in 2030 they can have more AI compute?</p><p><strong>Dylan Patel</strong></p><p>It&#8217;s a funny dynamic we saw in 2023, 2024, and 2025.
People who saw the energy bottleneck before others asymmetrically went to <a href="https://www.siemens-energy.com/us/en/home/products-services/product-offerings/gas-turbines.html">Siemens</a>, <a href="https://en.wikipedia.org/wiki/Mitsubishi_Heavy_Industries">Mitsubishi</a>, and of course <a href="https://en.wikipedia.org/wiki/GE_Vernova">GE Vernova</a>, and bought up turbine capacity. Now they&#8217;re able to charge excess amounts for deploying these turbines in places because of energy.</p><p>In the same sense, this could be done for EUV, except ASML is not just going to trust any random bozo who wants to buy EUV tools. These turbines are much cheaper than EUV tools, and there&#8217;s many more of them produced. Especially once you get to industrial gas turbines, not just <a href="https://en.wikipedia.org/wiki/Combined-cycle_power_plant">combined-cycle</a> but the cheaper, smaller, less efficient ones, people put down deposits for these.</p><p>Someone could do this. Someone should go to the Netherlands and be like, &#8220;I&#8217;ll pay you a billion dollars. You give me the right to purchase ten EUV tools two years from now, and I&#8217;m first in line.&#8221; Then over those two years, you go around and wait for everyone to realize, &#8220;Oh crap, I don&#8217;t have enough EUV tools,&#8221; and you try to sell your option at some premium. All you&#8217;re effectively doing is saying, &#8220;ASML, you&#8217;re dumb. You weren&#8217;t making enough margin on these. I&#8217;m going to make a margin.&#8221; The question is, will ASML even agree to this? I don&#8217;t think so.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s a world where they at least get the demand signal from that to increase production.</p><p><strong>Dylan Patel</strong></p><p>Potentially. 
I agree.</p><p><strong>Dwarkesh Patel</strong></p><p>But it sounds like you&#8217;re saying they couldn&#8217;t even increase production if they wanted to, given the supply chain.</p><p><strong>Dylan Patel</strong></p><p>Right. But that&#8217;s exactly the market in which&#8230; If they can&#8217;t increase production, just like TSMC cannot increase production that fast, and yet demand is mooning, then the obvious solution is to arbitrage this. You and I know demand is way higher than they&#8217;re projecting and their capability to build.</p><p>You arbitrage this by locking up the capacity, doing a forward contract, and then trying to sell it at a later date once other people realize everything is fucked and we don&#8217;t have enough capacity. Then you&#8217;ll have this insane margin that ASML and TSMC should have been charging. But the thing is, I don&#8217;t know if ASML and TSMC will ever agree to this.</p><h3>01:42:34 &#8211; Scaling power in the US will not be a problem</h3><p><strong>Dwarkesh Patel</strong></p><p>Let me ask you about power now. It sounds like you think power can be arbitrarily scaled.</p><p><strong>Dylan Patel</strong></p><p>Not arbitrarily, but yes.</p><p><strong>Dwarkesh Patel</strong></p><p>But beyond these numbers. If I&#8217;m remembering correctly, <a href="https://newsletter.semianalysis.com/p/how-ai-labs-are-solving-the-power">your blog post</a> on how AI labs are increasing power implied that GE Vernova, Mitsubishi, and Siemens could produce 60 gigawatts a year in gas turbines. Then there are these other sources, but they&#8217;re less significant than the turbines.</p><p>Only a fraction of that goes to AI, I assume. If in 2030 we have enough logic and memory to do 200 gigawatts a year, do you just think that these things are on a path to ramp up to more than 200 gigawatts a year, or what do you see?</p><p><strong>Dylan Patel</strong></p><p>Right now we&#8217;re at 20 or 30. 
This is critical IT capacity, by the way, which is an important thing to mention. When I&#8217;m talking about these gigawatts, I&#8217;m talking about critical IT capacity. Server plugged in, that&#8217;s how much power it pulls. But there are losses along the chain. There is loss on transmission, conversion, cooling, et cetera. So you should gross this up by some factor, from 20 gigawatts for this year, or 200 gigawatts by the end of the decade, to some number 20-30% higher.</p><p>Then you have capacity factors. Turbines don&#8217;t run at 100 percent. If you look at <a href="https://en.wikipedia.org/wiki/PJM_Interconnection">PJM</a>, which I think is the largest grid in America&#8212;covering the Midwest and some of the Northeast area&#8212;in their models they want to have roughly 20 percent excess capacity. Within that 20 percent excess capacity, they&#8217;re running all the turbines at 90% because they are derated some for reliability, maintenance, and so on. In reality, the nameplate capacity for energy is always way higher than the actual end critical IT capacity because of all these factors.</p><p>But it&#8217;s not just turbines. If you were just making power from turbines, that&#8217;s simple, boring, and easy. Humans and capitalism are far more effective. The whole point of that blog was that, yes, there are only three people making combined-cycle gas turbines, but there&#8217;s so much more we can do. We can do <a href="https://www.gevernova.com/content/dam/gepower-microsites/global/en_US/documents/avr/GEA34130%20AeroderivativeGT_Whitepaper_R5.pdf">aeroderivatives</a>. We can take airplane engines and turn them into turbines. There are even new entrants in the market, like Boom Supersonic trying to do that and working with <a href="https://www.crusoe.ai/">Crusoe</a>.
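The gross-up Dylan walks through above stacks into one rough calculation. The loss factor, reserve margin, and derate below are illustrative values inside the ranges he mentions, not exact figures:

```python
# Rough conversion from critical IT capacity to the nameplate generation behind it.
critical_it_gw = 200   # end-of-decade critical IT figure from the conversation
loss_factor = 1.25     # "gross this up 20-30%" for transmission/conversion/cooling
reserve_margin = 1.20  # PJM-style ~20% excess capacity
derate = 0.90          # turbines derated to ~90% for reliability and maintenance

nameplate_gw = critical_it_gw * loss_factor * reserve_margin / derate
print(f"~{nameplate_gw:.0f} GW nameplate to serve {critical_it_gw} GW of critical IT")
```

Under these assumptions, 200 GW of critical IT implies well over 300 GW of nameplate generation.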
Also, there are all the others like that which already exist in the market.</p><p>There are also <a href="https://blog.burnsmcd.com/meet-growing-data-center-power-demands-with-reciprocating-engines">medium-speed reciprocating engines</a>: engines that spin in circles, like a diesel engine. There are ten people who make engines that way. I&#8217;m from Georgia, and people used to be like, &#8220;Oh man, you got a <a href="https://www.cummins.com/en-na">Cummins</a> engine in there,&#8221; talking about Ram trucks. Automobile manufacturing is going down, so these companies all have capacity and could scale and convert that for data center power. You stick all these reciprocating engines in. It&#8217;s not as clean as combined-cycle, but maybe you can convert them from diesel to gas if you want.</p><p>What about ship engines? All of these engines for massive cargo ships are great. <a href="https://nebius.com/newsroom/nebius-announces-multi-billion-dollar-agreement-with-microsoft-for-ai-infrastructure">Nebius is doing that for a Microsoft data center in New Jersey</a>. They&#8217;re running ship engines to generate power. <a href="https://www.bloomenergy.com/">Bloom Energy</a> is doing <a href="https://www.bloomenergy.com/hydrogen-fuel-cells/">fuel cells</a>. We&#8217;ve been very positive on them for a year and a half now because they have such a capability to increase their production. Their payback period for a production increase is very fast, even if the cost is a little bit higher than combined-cycle, which is the best for cost and efficiency.</p><p>Then there&#8217;s solar plus battery, which can come online as those cost curves continue to come down. There&#8217;s wind, where you might only expect 15 percent of the maximum power because things oscillate, but you add batteries. There are all these things.</p><p>The other thing is that the grid is scaled so we don&#8217;t cut off power at peak usage on the hottest day of the summer.
But in reality, that&#8217;s a load spike that is 10-20% higher than the average. If you just put in enough utility-scale batteries, or <a href="https://en.wikipedia.org/wiki/Peaking_power_plant">peaker plants</a> that only run a small portion of the year&#8212;and those could be gas, industrial gas turbines, combined-cycle, batteries, or any of the other sources I mentioned&#8212;then all of a sudden you&#8217;ve unlocked 20% of the US grid for data centers. Most of the time that capacity is sitting idle. It&#8217;s really only there for that peak, which is just a few hours over a few days of the year. If you have enough capacity to absorb that peak load, then all of a sudden you&#8217;ve freed all of that capacity up.</p><p>Today, data centers are only 3-4% of the power of the US grid, and by 2028 they&#8217;ll be 10%. But if you can unlock 20% of the US grid like this, it&#8217;s not that crazy. The US grid is terawatt-level, not hundreds-of-gigawatts-level. So we can add a lot more energy.</p><p>I&#8217;m not saying it&#8217;s easy. These things are going to be hard. There&#8217;s a lot of hard engineering, risks people have to take, and new technologies people have to use. But Elon was the first to do this with behind-the-meter gas, and since then we&#8217;ve seen an explosion of different things people are doing to get power. They&#8217;re not easy, but people are going to be able to do them. The supply chains are just way simpler than chips.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting. He made the point during the interview that for the specific blade for the specific turbine he was looking at, the lead times go out beyond 2030. Your point is that&#8212;</p><p><strong>Dylan Patel</strong></p><p>That&#8217;s great. There are so many other ways to make energy. Just be inefficient. It&#8217;s fine.</p><p><strong>Dwarkesh Patel</strong></p><p>Right now, combined-cycle gas turbines have CapEx of $1,500 per kilowatt.
Are you saying it would make sense to have either technologies that are much more expensive than that, or other things are getting cheap enough to make it competitive?</p><p><strong>Dylan Patel</strong></p><p>Exactly. It can be as high as $3,500 per kilowatt. It could be twice as much as the cost of combined-cycle, and the total cost of the GPU on a TCO basis has only gone up a few cents per hour.</p><p>Because we&#8217;ve been talking about Hopper pricing, $1.40, let&#8217;s say the power price doubles. The Hopper that was $1.40 is now $1.50 in cost. I don&#8217;t care, because the models are improving so fast that the marginal utility of them is worth way more than that ten-cent increase in energy.</p><p><strong>Dwarkesh Patel</strong></p><p>So you&#8217;re saying 20 percent of the grid&#8212;the grid is about one terawatt&#8212;can just come online from utility-scale batteries, increasing what you&#8217;d be comfortable putting on the grid.</p><p><strong>Dylan Patel</strong></p><p>The regulatory mechanism there is not easy, by the way.</p><p><strong>Dwarkesh Patel</strong></p><p>But that&#8217;s 200 gigawatts, if that hypothetically happens. Just from the different sources of gas generation you mentioned&#8212;the different kinds of engines and turbines&#8212;combined, how many gigawatts could they unlock by the end of the decade?</p><p><strong>Dylan Patel</strong></p><p>We&#8217;re tracking this in our data. There are over 16 different manufacturers of power-generating things just from gas alone. Yes, there are only three turbine manufacturers for combined-cycle, but we&#8217;re tracking 16 different vendors, and we have all of their orders. It turns out there are hundreds of gigawatts of orders to various data centers.</p><p>As we get to the end of the decade, we think something like half of the capacity that&#8217;s being added will be behind the meter. 
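Dylan&#8217;s earlier Hopper arithmetic, where doubled power prices add only about ten cents an hour to a $1.40/hr GPU, can be reproduced with rough inputs. The ~1.4 kW all-in draw per GPU and ~$0.07/kWh baseline rate below are illustrative assumptions, not figures from the conversation:

```python
# Sensitivity of GPU hourly cost to a doubling of power prices.
GPU_POWER_KW = 1.4                 # assumed all-in draw per Hopper (chip + overhead)
BASELINE_USD_PER_KWH = 0.07        # assumed baseline electricity rate
HOPPER_COST_USD_PER_HR = 1.40      # all-in hourly Hopper cost cited in the conversation

baseline_energy = GPU_POWER_KW * BASELINE_USD_PER_KWH  # ~$0.10/hr of electricity
doubled_energy = 2 * baseline_energy                   # ~$0.20/hr if power doubles
new_cost = HOPPER_COST_USD_PER_HR + (doubled_energy - baseline_energy)

print(f"energy/hr: ${baseline_energy:.2f} -> ${doubled_energy:.2f}")
print(f"all-in GPU cost/hr: ${HOPPER_COST_USD_PER_HR:.2f} -> ${new_cost:.2f}")
```

Under these assumptions the all-in cost moves from $1.40 to about $1.50 per hour, matching the dime-per-hour figure in the exchange.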
Behind the meter is almost always more expensive than grid-connected, but there are just a lot of problems with getting grid-connected: permits and interconnection queues and all this sort of stuff. So even though it&#8217;s more expensive, people are doing behind the meter.</p><p>What they&#8217;re doing behind the meter ranges widely. It could be reciprocating engines, ship engines, or aeroderivatives. It could be combined-cycle, although combined-cycle is not that great for behind the meter. It could be Bloom Energy fuel cells, or solar plus battery. It could be any of these things.</p><p><strong>Dwarkesh Patel</strong></p><p>And you&#8217;re saying any of these individually could do tens of gigawatts?</p><p><strong>Dylan Patel</strong></p><p>Any of these individually will do tens of gigawatts, and as a whole, they will do hundreds of gigawatts.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay. So that alone should more than&#8212;</p><p><strong>Dylan Patel</strong></p><p>Electrician wages will probably double or triple again. There are going to be a lot of new people entering that field, and a ton of people who make money, but I don&#8217;t see that as the main bottleneck.</p><p><strong>Dwarkesh Patel</strong></p><p>Right now in Abilene, at the <a href="https://www.crusoe.ai/resources/newsroom/crusoe-expands-ai-data-center-campus-in-abilene-to-1-2-gigawatts">1.2-gigawatt data center that Crusoe is building for OpenAI</a>, I think they have 5,000 people working there, or at peak they did. If you turn that into 100 gigawatts&#8212;and I&#8217;m sure things will get more efficient over time&#8212;that would be 400,000 people it would take to build 100 gigawatts.</p><p>If you think about the US labor force, and how many electricians there are and how many construction workers there are&#8230; I guess there are 800,000 electricians. I don&#8217;t know if they&#8217;re all substitutable in this way. There are millions of construction workers. 
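Dwarkesh&#8217;s scaling above is easy to check; treating the peak Abilene headcount as linearly scalable is the rough assumption:

```python
# Scale peak construction headcount at Abilene (~5,000 workers for 1.2 GW)
# linearly up to a hypothetical 100 GW of annual buildout.
abilene_gw = 1.2
abilene_peak_workers = 5_000
target_gw = 100

workers_per_gw = abilene_peak_workers / abilene_gw  # ~4,167 workers per GW
workers_needed = workers_per_gw * target_gw         # ~417,000 workers
print(f"~{workers_needed:,.0f} workers for {target_gw} GW")
```

That is roughly the 400,000-person figure used in the question, before any efficiency gains from modularization.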
But if we&#8217;re in a world where we&#8217;re adding 200 gigawatts a year, are we going to be crunched on labor eventually, or do you think that is actually not a real constraint?</p><p><strong>Dylan Patel</strong></p><p>Labor is a big constraint. It&#8217;s a humongous constraint in this. People have to be trained. We&#8217;ll also probably start importing the highest-skilled labor. It makes sense that a really high-skilled electrician in Europe who was working on decommissioning power plants now comes to America and builds the high-voltage electrical systems that move power across a data center.</p><p>Humanoid robots, or robotics at least, might start to help, but the main factor for reducing the number of people is going to be modularizing things and making them in factories in Asia. Unfortunately for America, places like Korea, Southeast Asia, and in many ways China as well are going to ship in more and more built-out sections of the data center. Today you ship servers or a rack in, and then you plug that into different pieces that you&#8217;re shipping from different places.</p><p>But now you&#8217;ll ship it to a factory and integrate the entire thing. Maybe this is a two-megawatt block, and this block goes from high-voltage AC power to the DC voltage that you deliver to the rack, or something like this. Or with cooling, you ship a fully integrated unit that has a lot of the cooling subsystems already put together, because plumbers are also a big constraint here.</p><p>Furthermore, instead of just a single rack where you have people wiring up all these racks with electricity, you take a skid and put an entire row of servers on it that is shipped directly from the factories.
Today, a single rack may be 120 or 140 kilowatts, but as we get to next-generation <a href="https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architecture-will-power-the-next-generation-of-ai-factories/">Nvidia Kyber</a> and things like that, it&#8217;s almost a megawatt.</p><p>In addition, if you do an entire row, it&#8217;ll have the rack, the networking, the cooling, and the power all integrated together. Now when you come in, you have much less to cable. There&#8217;s less networking fiber, fewer power connections, and fewer plumbing things. This can drastically reduce the number of people working in data centers, so our capability to build them will be much larger.</p><p>Along the way, some people will move faster to new things, and some will move slower. <a href="https://www.forbes.com/sites/annatong/2026/03/12/from-gigawatts-to-grab-and-go-crusoe-leans-into-modular-ai-data-centers/">Crusoe and Google have been talking a lot about this modularization</a>, as have <a href="https://www.datacenterdynamics.com/en/news/meta-to-deploy-366mw-of-modular-gas-units-to-power-1gw-data-center-in-el-paso-texas/">companies like Meta</a> and many others. The people who move faster to new things may face delays, while the people who are slower will face labor problems. There will always be dislocations in the market because this is a very complex supply chain. At the end of the day, it&#8217;s still simple enough that we will be able to solve it through capitalism and human ingenuity on the timescales required.</p><h3>01:54:44 &#8211; Space GPUs aren&#8217;t happening this decade</h3><p><strong>Dwarkesh Patel</strong></p><p>Speaking of big problems to solve, Elon Musk is very bullish on space GPUs. 
If you&#8217;re right that power is not a constraint on Earth&#8230; I guess the other reason they would make sense is that even if  there will be enough gas turbines or whatever on Earth, Elon&#8217;s next argument is that you can&#8217;t get the permitting to build hundreds of gigawatts on Earth. Do you buy that argument?</p><p><strong>Dylan Patel</strong></p><p>Land-wise, America is big. Data centers don&#8217;t actually take up that much space, so you can solve that. Permitting-wise, air pollution permits are a challenge, but the Trump administration made it much easier. You go to Texas, and you can skip a lot of this red tape.</p><p>Elon had to deal with a lot of this complex stuff in Memphis, and then building a power plant across the border for <a href="https://x.ai/colossus">Colossus 1 and 2</a>. But at the end of the day, there&#8217;s a lot more you can get away with in the middle of Texas.</p><p><strong>Dwarkesh Patel</strong></p><p>Given that Elon lives in Texas, why didn&#8217;t he just go to Texas?</p><p><strong>Dylan Patel</strong></p><p>I think it was partially that they over-indexed on grid power for a temporary period of time. That&#8217;s just what they thought they needed more of.</p><p><strong>Dwarkesh Patel</strong></p><p>Because they had an aluminum refinery connected to the grid there.</p><p><strong>Dylan Patel</strong></p><p>It was actually an idled appliance factory. But I think they may have indexed more to grid power, water access, and gas access. I think they bought that knowing the gas line was right there and they were going to tap it. Same with water. It was a whole host of different constraints. It was probably an area where electricians were easier to find.</p><p>At the end of the day, I&#8217;m not exactly sure why they chose that site. I bet Elon would&#8217;ve chosen somewhere in Texas if he could&#8217;ve gone back because of the regulatory challenges he faced. 
Ultimately, permitting is a challenge, but America is a big place with 50 states, and things will get done.</p><p>There are a lot of small jurisdictions where you can just transport in all the workers you need for a temporary period of three to twelve months, depending on the contractor. You can put them in temporary housing and pay out the butt, because labor is very cheap relative to the GPUs and the networking, and the end value of the tokens it&#8217;s going to produce. So there is plenty of room to pay for all of these things.</p><p>People are also diversifying now. Australia, Malaysia, Indonesia, and India are all places where data centers are going up at a much faster pace. But currently, over 70% of AI data centers are still in America, and that continues to be the trend. People are figuring out how to build these things. Ultimately, dealing with permitting and red tape in middle-of-nowhere Texas, Wyoming, or New Mexico is probably a hell of a lot easier than sending stuff into space.</p><p><strong>Dwarkesh Patel</strong></p><p>Other than the economic argument making less sense once you consider that energy is a small fraction of the total cost of ownership of a data center, what are the other reasons you&#8217;re skeptical?</p><p><strong>Dylan Patel</strong></p><p>Obviously, power is basically free in space.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s the reason to do it.</p><p><strong>Dylan Patel</strong></p><p>Yeah, that&#8217;s the reason to do it. But there are all the other counterarguments. Even if power costs double on Earth, it&#8217;s still a fraction of the total cost of the GPU.</p><p>The main challenge is&#8230; We have <a href="https://www.clustermax.ai/">ClusterMAX</a>, which rates all the neoclouds. We test over 40 cloud companies, including the hyperscalers and neoclouds. Outside of software, what differentiates these clouds the most is their ability to deploy and manage failure.</p><p>GPUs are horrendously unreliable. 
Even today, around 15% of Blackwells that get deployed have to be <a href="https://en.wikipedia.org/wiki/Return_merchandise_authorization">RMA&#8217;d</a>. You have to take them out. Sometimes you just have to plug them back in, but sometimes you have to take them out and ship them back to Nvidia or their partners who do the RMAs and such.</p><p><strong>Dwarkesh Patel</strong></p><p>What do you make of Elon&#8217;s argument that after an initial phase, they actually don&#8217;t fail that much?</p><p><strong>Dylan Patel</strong></p><p>Sure, but now you&#8217;ve done this, tested them all, deconstructed them, put them on a spaceship, launched them into space, and then put them online again. That takes months. If your argument is that a GPU has a useful life of five years, and this takes six additional months, that is 10% of your cluster&#8217;s useful life.</p><p>Because we&#8217;re so capacity-constrained, that compute is theoretically most valuable in the first six months you have it. We&#8217;re more constrained now than we will be in the future. That compute can contribute to a better model in the future, or generate revenue today that you can use to raise more money. All these things make now the most important moment, but you&#8217;ve potentially delayed your compute deployment by six months.</p><p>What separates these cloud providers is&#8230; We see some clouds taking six months to deploy GPUs right here on Earth. We see clouds that take a lot less than six months. So the question is, where does space get in there? I don&#8217;t see how you could test them all on Earth, deconstruct them, and ship them to space without it taking significantly longer than just leaving them in the facility where you tested them.</p><p><strong>Dwarkesh Patel</strong></p><p>The question I wanted to ask is about the topology of space communication. Right now, Starlink satellites talk to each other at 100 gigabits per second. 
You could imagine that being much higher with optical intersatellite laser links optimized for this. That actually ends up being quite close to InfiniBand bandwidth, which is 400 gigabits per second.</p><p><strong>Dylan Patel</strong></p><p>But that&#8217;s per GPU, not per rack. So multiply that by 72. Also, that was Hopper. When you go to Blackwell and Rubin, that 2x&#8217;s and 2x&#8217;s again.</p><p><strong>Dwarkesh Patel</strong></p><p>But how much compute is happening per&#8230; During inference, are the different scale-ups still working together, or is inference just happening as a batch within a single scale-up?</p><p><strong>Dylan Patel</strong></p><p>A lot of models fit within one scale-up domain, but many times you split them across multiple scale-up domains.</p><p>As models become more and more sparse, which is the general trend, you want to ping just a couple of experts per GPU. If leading models today have hundreds if not thousands of experts, then you&#8217;d want to run this across hundreds or thousands of chips, even as we advance into the future.</p><p>So then you end up with the problem of needing to connect all these satellites together for communications as well.</p><p><strong>Dwarkesh Patel</strong></p><p>That would be tough. If there&#8217;s a world where you could do inference for a batch on a single scale-up, then maybe it&#8217;s more plausible. But if not, it&#8217;s a different story.</p><p><strong>Dylan Patel</strong></p><p>Networking these chips together is a problem, and you can&#8217;t just make the satellite infinitely large. There are a lot of physics challenges to making a satellite really big. That&#8217;s why you need these interconnects between the satellites.</p><p>Those interconnects are more expensive. In a cluster, 15-20% of the cost is networking. 
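To put rough numbers on the bandwidth gap in this exchange, here is a quick back-of-the-envelope sketch. It treats the quoted InfiniBand figure as 400 Gb/s per GPU in the Hopper era and uses the conversation's other figures (100 Gb/s per Starlink laser link, 72 GPUs per NVL72 rack, per-GPU bandwidth roughly doubling each generation); these are illustrative numbers from the discussion, not vendor datasheets.

```python
# Back-of-the-envelope: optical intersatellite links vs. the scale-out
# bandwidth of a single GPU rack. All figures are the ones quoted in the
# conversation (illustrative, not vendor specs).

STARLINK_LINK_GBPS = 100   # per intersatellite laser link
IB_PER_GPU_GBPS = 400      # InfiniBand per GPU, Hopper era
GPUS_PER_RACK = 72         # Blackwell NVL72 scale-up domain

rack_scale_out_gbps = IB_PER_GPU_GBPS * GPUS_PER_RACK
links_to_match_one_rack = rack_scale_out_gbps // STARLINK_LINK_GBPS

print(rack_scale_out_gbps)      # 28800 Gb/s for one rack's worth of GPUs
print(links_to_match_one_rack)  # 288 laser links just to match it

# Per-GPU bandwidth roughly doubles each generation (Blackwell, then
# Rubin), so the gap compounds:
for gen, mult in [("Blackwell", 2), ("Rubin", 4)]:
    print(gen, IB_PER_GPU_GBPS * mult * GPUS_PER_RACK // STARLINK_LINK_GBPS)
```

One satellite-to-satellite laser link covers well under one percent of a single rack's scale-out bandwidth, which is the core of Dylan's objection.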
All of a sudden, you&#8217;re using space lasers instead of simple lasers that are manufactured in volumes of millions with pluggable transceivers.</p><p>And those things are very unreliable as well, more unreliable than the GPUs, by the way. Across the life of a cluster, you have to unplug and clean them all the time, or replug them just for random reasons. So you&#8217;ve got that problem as well: a more expensive, complicated space laser to communicate, instead of a pluggable optical transceiver that&#8217;s been produced in super high volume.</p><p><strong>Dwarkesh Patel</strong></p><p>So all in all, what does that imply for space data centers?</p><p><strong>Dylan Patel</strong></p><p>Space data centers&#8217; energy advantage is not what limits them. They are limited by the same contended resource: chips. We can only make two hundred gigawatts&#8217; worth of chips a year by the end of the decade, and that constraint is the same whether the chips go on land or in space. The power you can build either way; human capability could get to the point where we&#8217;re adding a terawatt a year globally of various types of power.</p><p>At some point, we do cross the chasm where space data centers make sense, but it&#8217;s not this decade. It is much further out, once energy constraints actually become a big bottleneck and land permitting becomes a much bigger bottleneck as it subsumes more of the economy. And crucially, once chips are no longer the bottleneck.</p><p>Right now, chips are the biggest bottleneck. You want them deployed and working on AI the moment they&#8217;re manufactured. There are a lot of things people are doing to make that happen faster and faster. 
They&#8217;re modularizing data centers, or even modularizing racks where you put the chip in at the data center, but only the chip and everything else is already wired up and ready to go. There are things like this people are doing to decrease that time that you cannot do in space.</p><p>At the end of the day, all that matters in a chip-constrained world is getting these chips producing tokens ASAP. Maybe by 2035, the semiconductor industry, ASML, Zeiss, and suppliers like Lam Research and Applied Materials and other fab manufacturers will catch up once the pendulum swings and we are able to make enough chips. Then we will be optimizing every dial and it makes sense to optimize the 10-15% of energy costs. As we move to <a href="https://en.wikipedia.org/wiki/Application-specific_integrated_circuit">ASICs</a> potentially, and if Nvidia&#8217;s margins aren&#8217;t +70%, maybe that energy cost becomes 30% of the cluster. These are the things to optimize.</p><p>But Elon doesn&#8217;t win by doing 20% gains. He never wins that way. Elon wins when he swings for the fences and does 10X gains. That&#8217;s what SpaceX is about. That&#8217;s what Tesla is about. All of his success has been about that, not chasing the 20%. I think space data centers will eventually be a 10X gain as Earth&#8217;s resources get more and more contentious, but that&#8217;s not this decade.</p><p><strong>Dwarkesh Patel</strong></p><p>Just to drive some intuition about how much land there is on Earth&#8230; Obviously, for the chips themselves, especially if you move to a world where you have racks that have megawatts&#8212;</p><p><strong>Dylan Patel</strong></p><p>That&#8217;s the other thing. If manufacturing is the constraint, right now it&#8217;s roughly one watt per square millimeter for AI chips. One easy way to improve that is to pump it to two watts per square millimeter. You may not get 2x the performance, you may only get 20% more performance, and that requires much more exotic cooling. 
It requires more complicated cold plates and complex liquid cooling, or maybe even things like <a href="https://en.wikipedia.org/wiki/Immersion_cooling">immersion cooling</a>.</p><p>In space, higher watts per square millimeter is very difficult, whereas on Earth, these are solved problems. One of these things enables you to get a lot more tokens, maybe 20% more tokens per wafer that&#8217;s manufactured, and that&#8217;s a humongous win.</p><p><strong>Dwarkesh Patel</strong></p><p>Square millimeter, you mean of die area?</p><p><strong>Dylan Patel</strong></p><p>Yeah, of die area.</p><p><strong>Dwarkesh Patel</strong></p><p>It would be better for space because more watts per square millimeter means the chip runs hotter. I guess this is a question of computer chip engineering, but by the Stefan-Boltzmann law, radiative cooling scales with the fourth power of temperature. If you can run a very hot chip, it allows a lot of&#8212;</p><p><strong>Dylan Patel</strong></p><p>No, you can&#8217;t run it hotter. You can only run it denser. The problem is that getting the heat out of that dense area means you have to move away from standard air and liquid cooling to more exotic forms of liquid cooling, or even immersion, to get to higher power densities. That&#8217;s more difficult in space than it is on Earth.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe it&#8217;s worth explaining at this point what exactly a scale-up is and what it looks like for Nvidia versus Trainium versus TPUs.</p><p><strong>Dylan Patel</strong></p><p>Earlier I was mentioning how communication within a chip is super fast. Communication within chips that are in the same rack is fast, but not as fast. It&#8217;s on the order of terabytes a second. Communication very far away is on the order of hundreds of gigabytes a second. As you get further distance, maybe across the country, it&#8217;s on the order of gigabytes a second.</p><p>A scale-up domain is this tight domain where the chips are communicating on the order of terabytes a second. 
For Nvidia, previously this meant an H100 server had eight GPUs, and those eight GPUs could talk to each other at terabytes a second. With Blackwell NVL72, they implemented rack-scale scale-up. That meant all seventy-two GPUs in the rack could connect to each other at terabytes a second. The speed doubled generation on generation, but the most important innovation was going from eight to seventy-two in the domain.</p><p>When we look at Google, their scale-up domain is completely different. It has always been on the order of thousands. With TPU v4, they had pods the size of four thousand chips. With v8 or v7, they have pods in the eight or nine thousand range. What&#8217;s relevant here is that it&#8217;s not the same as Nvidia. It&#8217;s not like for like.</p><p>Google has a topology that&#8217;s a <a href="https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-swing-at-the">torus</a>. Every chip connects to six neighbors. Nvidia&#8217;s 72 GPUs connect all-to-all. They can send terabytes a second to any arbitrary other chip in that pod of scale-up. Whereas Google, you have to bounce through chips. If TPU 1 needs to talk to TPU 76, it has to bounce through various chips, and there is always some blocking of resources when you do that because that one TPU is only connected to six other TPUs.</p><p>So there is a difference in topology and bandwidth, and there are trade-offs and advantages to both. Google gets to have a massive scale-up domain, but they have the trade-off of bouncing across chips to get from one to another. You can only talk to six direct neighbors.</p><p>Amazon has mutated their scale-up domain. They&#8217;re somewhere in between Nvidia and Google. They&#8217;re trying to make larger scale-up domains. 
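To illustrate the topology trade-off just described (all-to-all switching vs. a torus), here is a small sketch. The `torus_worst_case_hops` helper and the 16x16x16 shape for a ~4,000-chip TPU v4 pod are illustrative assumptions, not Google's documented layout details.

```python
# Hop-count trade-off: an all-to-all switched fabric (NVL72-style)
# vs. a wraparound 3D torus (TPU-style). Illustrative sketch only.

def torus_worst_case_hops(dims):
    """On a wraparound torus, the farthest node is floor(size/2) hops
    away along each axis; distances sum across axes."""
    return sum(d // 2 for d in dims)

# NVL72: every GPU reaches every other GPU through the switch fabric,
# so any pair is effectively one hop apart at full bandwidth.
print("NVL72 all-to-all:", 1)

# A ~4k-chip pod arranged as a 16x16x16 torus (assumed shape):
print("torus worst case:", torus_worst_case_hops([16, 16, 16]))  # 24 hops
```

This is the "bouncing through chips" Dylan describes: the torus buys a much larger domain at the cost of multi-hop paths that can contend for the six links each chip has.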
They try to do all-to-all to some extent with switches, which is what Nvidia does, but they also use torus topologies like Google to some extent.</p><p>As we advance forward to next generations, all three of them are moving more towards a <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/34926.pdf">dragonfly topology</a>. That means there are some fully connected elements and some elements that are not fully connected. You can get the scale-up to be hundreds or thousands of chips, but also have it not contend for resources when bouncing through chips.</p><p><strong>Dwarkesh Patel</strong></p><p>Related question: I heard somebody make the claim that the reason parameter scaling has been slow&#8212;and only now are we getting bigger models from OpenAI and Anthropic&#8212;is that&#8230; The original GPT-4 is over a trillion parameters, and only now are models starting to approach that again. I heard a theory that the reason is that Nvidia&#8217;s scale-ups have just not had that much memory capacity. Let&#8217;s say you have a 5T model running at FP8, so that&#8217;s five terabytes. And then you have the KV cache, let&#8217;s say it&#8217;s&#8212;</p><p><strong>Dylan Patel</strong></p><p>Just call it the same size.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, let&#8217;s say it&#8217;s the same size for one batch. So you need ten terabytes to be able to run&#8230;</p><p><strong>Dylan Patel</strong></p><p>A single forward pass, yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>And then only with the GB200 NVL72 do you have an Nvidia scale-up that has twenty terabytes, and before that they were much smaller. Whereas Google, on the other hand, has had these huge TPU pods that are not all-to-all, but still have hundreds of terabytes of capacity in a single scale-up. 
Does that explain why parameter scaling has been slow?</p><p><strong>Dylan Patel</strong></p><p>I think it&#8217;s partially the capacity and bandwidth, but also as you build a larger model, the ability to deploy it is slower. In terms of what the inference speed is for the end user, that&#8217;s kind of irrelevant. What&#8217;s really relevant is RL.</p><p>What we&#8217;ve seen with these models and allocation of compute at a lab&#8230; There are a few main ways you can allocate compute. You can allocate it to inference, i.e. revenue. You can allocate it to development, i.e. making the next model. You can allocate it to research. In development specifically, you split it between <a href="https://www.databricks.com/blog/llm-pre-training-and-custom-llms">pre-training</a> and RL.</p><p>When you think about what is happening, the compute efficiency gains you get from research are so large that you actually want most of your compute to go to research, not to development. All these researchers are generating new ideas, trying them out, testing them, and continuing to push the Pareto optimal curve of <a href="https://en.wikipedia.org/wiki/Neural_scaling_law">scaling laws</a> further and further. Empirically, what we&#8217;ve seen is that model costs get ten times cheaper every year, or even more than that. At the same scale it gets ten times cheaper, and to reach new frontiers it costs the same amount or more. So you don&#8217;t want to allocate too many resources to pre-training and RL. You actually want to allocate most of your resources to research.</p><p>In the middle is this development period. If you pre-train a five-trillion-parameter model, how many rollouts do you have to do in RL? Rollouts for a five-trillion-parameter model are five times larger than for a one-trillion-parameter model. 
If you wanted to do as many rollouts&#8212;maybe the larger model is two times more sample efficient&#8212;now you need 2.5x as much time of RL to get the model smarter.</p><p>Or you could RL the smaller model for 2x the time and still come out 25% ahead of the big model, which is 2x as sample efficient and doing its rollouts. The smaller model, at a trillion parameters, although it&#8217;s less sample efficient, is doing twice as many rollouts and is still done faster. You get the model sooner, you&#8217;ve done more RL, and then you can take that model to help you build the next models, help your engineers train, and do all these research ideas.</p><p>This feedback loop is actually weighted towards smaller models in every case, no matter what your hardware is. As you look to Google, they do deploy the largest production model of any of the major labs with <a href="https://deepmind.google/models/gemini/pro/">Gemini Pro</a>. It&#8217;s a larger model than GPT-5.4. It&#8217;s a larger model than Opus. Google does this because they have a unipolar set of compute. It&#8217;s almost all TPU.</p><p>Whereas Anthropic is dealing with H100s, H200s, Blackwell, Trainiums, and TPUs of various generations. OpenAI is dealing with mostly Nvidia right now, but going towards having AMD and Trainium as well. A fleet of compute like Google&#8217;s can just optimize around a larger model. They can leverage a thousand chips in a scale-up domain to make RL much faster so that this feedback loop can be fast.</p><p>But at the end of the day, in isolation, you almost always want to go with a smaller model that gets RL&#8217;d faster and gets deployed into research and development earlier. You can build the next thing and get more efficiency wins. You have this compounding effect of making a smaller model that can be deployed into research and development earlier. 
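The arithmetic behind that small-model argument can be written out directly, using the conversation's illustrative numbers: rollout cost scaling roughly linearly with parameter count, and the larger model assumed to be 2x as sample efficient.

```python
# The small-vs-large model RL trade-off, with the illustrative numbers
# from the conversation (assumed, not measured).

BIG_PARAMS_T = 5.0       # trillion parameters
SMALL_PARAMS_T = 1.0
SAMPLE_EFFICIENCY = 2.0  # big model assumed to need half as many rollouts

rollout_cost_ratio = BIG_PARAMS_T / SMALL_PARAMS_T   # each rollout costs 5x
big_rl_time = rollout_cost_ratio / SAMPLE_EFFICIENCY # 2.5x the RL wall-clock

# Even giving the small model twice the RL time, it still finishes
# sooner, with a 25% gap left over (2.5 / 2.0 = 1.25):
small_rl_time = 2.0
print(big_rl_time / small_rl_time)  # 1.25
```

That 25% head start compounds, since the small model gets deployed into research and development sooner and helps build the next generation.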
I spend less compute on the training because I was able to allocate more compute to the research. This compounding effect of being able to do research faster and faster is potentially a faster takeoff. That&#8217;s all these companies want: the fastest takeoff possible.</p><h3>02:14:07 &#8211; Why aren&#8217;t more hedge funds making the AGI trade?</h3><p><strong>Dwarkesh Patel</strong></p><p>Okay, a spicy question. You&#8217;ve explained that SemiAnalysis sells these spreadsheets. You&#8217;re always pointing out how six months or a year ago, you warned people about the memory crunch. Now you&#8217;re telling people about the cleanroom crunch, and in the future, the tool crunch. Why is Leopold the only person using your spreadsheets to make outrageous money? What is everybody else doing?</p><p><strong>Dylan Patel</strong></p><p>I think there are a lot of people making money in many ways. Leopold jokes that he&#8217;s the only client of mine who tells me our numbers are too low. Everyone else tells me our numbers are too high, almost ad nauseam. Whether it&#8217;s a hyperscaler saying, &#8220;Hey, that other hyperscaler, their numbers are too high,&#8221; and we&#8217;re like, &#8220;Nah, that&#8217;s it.&#8221; They&#8217;re like, &#8220;No, no, no, it&#8217;s impossible,&#8221; blah, blah, blah. You finally have to convince them through all these facts and data when we&#8217;re working with hyperscalers or AI labs that in fact, no, that number isn&#8217;t too high, that&#8217;s correct. Eventually, sometimes it takes them six months to realize, or a year later.</p><p>Other clients, on the trading side, also use our data. Roughly 60% of my business is industry. So AI labs, data center companies, hyperscalers, semiconductor companies, the whole supply chain across AI infrastructure. But 40% of our revenue is hedge funds. I&#8217;m not going to comment on who our customers are, but a lot of people use the data. 
It&#8217;s just how do you interpret it, and then what do you view as beyond it?</p><p>I will say Leopold is pretty much the only person who tells me my numbers are too low, always. Sometimes he&#8217;s too high, sometimes I&#8217;m too low. But in general, I think other people are doing that. You can look across the space at hedge funds and look at their 13Fs and see they own, maybe not exactly what Leopold does, because it&#8217;s always a question of what is the most constrained thing. What&#8217;s the thing that&#8217;s going to be most outside of expectations?</p><p>That&#8217;s what you&#8217;re really trying to exploit: inefficiencies in the market. In a sense, our data is making the market more efficient by making the base data of what&#8217;s happening more accurate. Many funds do trade on information that is out there&#8230; I don&#8217;t think Leopold&#8217;s the only person. I think he has the most conviction about the AGI takeoff, though.</p><p><strong>Dwarkesh Patel</strong></p><p>Right, but the bets are not about what happens in 2035. The bets that you&#8217;re making&#8212;that are at least exemplified by public returns we can see for different funds including Leopold&#8217;s&#8212;are about what has happened in the last year. The last year stuff could be predicted using your spreadsheets. It&#8217;s about buying the next year&#8217;s spreadsheets.</p><p><strong>Dylan Patel</strong></p><p>They&#8217;re not just spreadsheets. There are reports. There&#8217;s API access to the data. There&#8217;s a lot of data.</p><p><strong>Dwarkesh Patel</strong></p><p>But do you see what I mean? It&#8217;s not about some crazy singularity thing. It&#8217;s about, do you buy the memory crunch?</p><p><strong>Dylan Patel</strong></p><p>You only buy the memory crunch if you believe AI is going to take off in a huge way. The memory crunch, a lot of it was predicated on&#8230; At least for people in the Bay Area who think about infrastructure, it&#8217;s obvious. 
KV cache explodes as context lengths get longer, so you need more memory. Then you do the math.</p><p>You also have to have a lot of supply chain understanding of what fabs are being built, what data centers are being built, how many chips, and all these things. We track all these different datasets very tightly, but at the end of the day, it takes someone to fully believe that this is going to happen.</p><p>A year ago, if you told someone memory prices would quadruple and smartphone volumes are going to go down 40% over the year or two after that, people were like, &#8220;You&#8217;re crazy. That&#8217;d never happen.&#8221; Except a few people did believe it, and those people did trade memory.</p><p>I don&#8217;t think Leopold was the only person buying memory companies. He, of course, sized and positioned and did things in better ways than some, maybe most. I don&#8217;t want to comment on whose returns are what, but he certainly did well. Other people also did really well.</p><p>Wow, you&#8217;ve made me diplomatic for the first time ever.</p><p><strong>Dwarkesh Patel</strong></p><p>No, no, you&#8217;re fine. I think this is hilarious.</p><p><strong>Dylan Patel</strong></p><p>I&#8217;m being a diplomat, whereas usually I&#8217;m spicy.</p><h3>02:18:30 &#8211; Will TSMC kick Apple out from N2?</h3><p><strong>Dwarkesh Patel</strong></p><p>Okay, some rapid-fire questions to close out. If, as you&#8217;re saying, with memory, logic, et cetera, N3 is mostly going to be AI accelerators, then there&#8217;s N2, which is mostly Apple now&#8230; In the future, I guess AI would also want to go on N2. 
Can TSMC kick out Apple if Nvidia and Amazon and Google say, &#8220;Hey, we&#8217;re willing to pay a lot of money for N2 capacity?&#8221;</p><p><strong>Dylan Patel</strong></p><p>I think the challenge with this is chip design timelines take a long while, so that&#8217;s more than a year out, and the designs that are on two nanometer are more than a year out.</p><p>What would really happen is Nvidia and all these others will be like, &#8220;Hey, we&#8217;re going to prepay for the capacity and you&#8217;re going to expand it for us.&#8221; Maybe TSMC takes a little bit of margin, but not a ton. They&#8217;re not going to kick Apple out entirely. What they&#8217;re going to do is when Apple orders X, they might say, &#8220;Hey, we project you only need X minus one, and so that&#8217;s what we&#8217;re going to give you, X minus one.&#8221; Then that flex capacity, Apple&#8217;s kind of screwed on.</p><p>Traditionally, Apple has always over-ordered by 10% and cut back by 10% over the course of the year. Some years they hit the entire 10%. Volumes vary based on the season and macro.</p><p>I don&#8217;t think TSMC would kick out Apple. I think Apple will become a smaller and smaller percentage of TSMC&#8217;s revenue, and therefore be less relevant for TSMC to cater to their demands. TSMC could eventually start saying, &#8220;Hey, you&#8217;ve got to pre-book your capacity for next year, for two years out, and you have to prepay for the CapEx,&#8221; because that&#8217;s what Nvidia and Amazon and Google are doing.</p><p><strong>Dwarkesh Patel</strong></p><p>I wonder if it&#8217;s worth going into specific numbers. I don&#8217;t have any of them on hand. What percentage of N2 does Apple have its hands on over the coming years versus AI?</p><p><strong>Dylan Patel</strong></p><p>This year Apple has the majority of N2 that&#8217;s going to get fabricated. There&#8217;s a little bit from AMD. They are trying to make some AI chips and CPU chips early. 
There&#8217;s a little bit, but for the most part, it&#8217;s Apple.</p><p>As we go forward to the year after that, Apple still gets closer to half of it as other people start ramping, but then it falls drastically, just like for N3, where they were half. When I say N2, that includes <a href="https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_A16">A16</a>, which is a variant of N2. Over time, those nodes will be the majority.</p><p>What&#8217;s also interesting is traditionally, Apple has been the first to a process node. 2 nm is actually the first time they&#8217;re not. Well, that&#8217;s besides Huawei. Huawei, back in 2020 and before, was the first with Apple, but they were both making smartphones. Now, with 2 nm, you&#8217;ve got AMD trying to make a CPU and a GPU chiplet that they use advanced packaging to package together, in the same timeframe as Apple. This is a big risk for AMD that causes potential delays because it&#8217;s a brand-new process technology. It&#8217;s hard. But at the end of the day, this is a bet that they want to do to scale faster than Nvidia and try and beat them.</p><p>As we move forward, when we move to the A16 node, the first customer there is not even Apple. It&#8217;s AI. As we move forward, that will become more and more prevalent. Not only will Apple not be the first to a node, they will also not be the majority of the volume to the new node. They&#8217;ll then just be like any old customer.</p><p>Because the scale of TSMC&#8217;s CapEx keeps ballooning, but Apple&#8217;s business is not growing at the same pace, they become a less and less relevant customer. They also will just cut their orders because things in the supply chain are kicking them out, whether it be packaging or materials or DRAM or NAND. These things are increasing in cost. They can&#8217;t pass on all the cost to customers likely because the consumer is not that strong. 
You end up with this conundrum where they are just not TSMC&#8217;s best bud like they have been historically.</p><p><strong>Dwarkesh Patel</strong></p><p>Do you think if Huawei had access to 3 nm, they would have a better accelerator than Rubin?</p><p><strong>Dylan Patel</strong></p><p>Potentially, yeah. Huawei was the first with a 7 nm AI chip as well. They were the first with a 5 nm mobile chip, but they were the first with a 7 nm AI chip. The Huawei Ascend was two months before the TPU and four months before Nvidia&#8217;s A100, I think.</p><p>That&#8217;s just moving to a process node. That doesn&#8217;t imply software or hardware design or all these other things. But Huawei is arguably the only company in the world that has all the legs. Huawei has cracked software engineers. Huawei has cracked networking technologies. That&#8217;s, in fact, their biggest business historically. They have cracked AI talent.</p><p>Furthermore, beyond Nvidia, they actually have better AI researchers. Beyond Nvidia, they have their own fabs. And beyond Nvidia, they have their own end market of selling tokens and things like that. Huawei is able to get the top, top talent. Nvidia is as well, but not with as much concentration, and Huawei has a bigger pool in China.</p><p>It&#8217;s very arguable that Huawei, if they had TSMC, would be better than Nvidia. There are areas where China has advantages in areas that Nvidia can&#8217;t access as easily. Not just scale, but certain optical technologies China&#8217;s actually really good at.</p><p>I think it&#8217;s very reasonable that if in 2019 Huawei was not <a href="https://www.nytimes.com/2020/05/15/business/economy/commerce-department-huawei.html">banned from using TSMC</a>, Huawei would have already eclipsed Apple as the biggest TSMC customer. Huawei has huge share in networking, compute, CPUs, and all these things. 
They would have kept gaining share, and they&#8217;d likely be TSMC&#8217;s biggest customer.</p><h3>02:24:16 &#8211; Robots and Taiwan risk</h3><p><strong>Dwarkesh Patel</strong></p><p>Wow. That&#8217;s crazy. I&#8217;ve got a random final question for you. The other part of the Elon interview was robots. If humanoids take off faster than people expect, if by 2030 there are millions of humanoids running around which each need local compute, any thoughts on what that implies? What would be required for that?</p><p><strong>Dylan Patel</strong></p><p>There are a lot of difficulties with the <a href="https://www.nvidia.com/en-us/glossary/vision-language-models/">VLMs</a> and <a href="https://www.pi.website/research/knowledge_insulation">VLAs</a> that people are deploying on robots. But to some extent, you don&#8217;t need to have all the intelligence in the robot. It would be much more efficient to not do that. Because in the cloud, you can batch process and all these things.</p><p>What you may want to do is have a lot of the planning and longer-horizon tasks determined by a much more capable model in the cloud that runs at very high batch sizes. Then it pushes those directions to the robots, who interpolate between each subsequent action. Or it is given a command like, &#8220;Hey, pick up that cup,&#8221; and then the model on the robot can pick up the cup. As it&#8217;s picking up, things like weight and force may have to be determined by the model on the robot, but not everything needs to be. It can say, &#8220;Hey, that&#8217;s a headphone,&#8221; and the super model in the cloud can say, &#8220;I know these headphones are Sony XM6s,&#8221; which is not a Dwarkesh ad spot, but...</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m like, why is this guy plugging this thing so hard? It&#8217;s on the table. It&#8217;s on his neck when we&#8217;re interviewing Satya together. 
Is he getting paid by Sony?</p><p><strong>Dylan Patel</strong></p><p>Unfortunately not. But anyways, it might say, &#8220;Hey, the headband is soft, and this is the weight of it,&#8221; and all these things. Then the model on the robot can be less intelligent, take these inputs, and do the actions. It may get told by the model in the cloud every second, or maybe ten times a second, depending on the hertz of the action. But a lot of that can be offloaded to the cloud.</p><p>Otherwise, if you do all of the processing on the device: one, I believe it would be more expensive because you can&#8217;t batch. Two, you couldn&#8217;t have as much intelligence as you do in the cloud because the models will just be bigger in the cloud. Three, we&#8217;re in a semiconductor shortage world, and any robot you deploy needs leading-edge chips because the power constraints are really bad for robots. You need it to be low power and efficient, and all of a sudden you&#8217;re taking power and chips that would&#8217;ve been for AI data centers, and you&#8217;re putting them in robots. So that 200 gigawatts gets lower if you&#8217;re deploying millions of humanoids.</p><p><strong>Dwarkesh Patel</strong></p><p>I think this is very interesting because something people might not appreciate about the future is how centralized, in a physical sense, intelligence will be. Right now, there are eight billion humans, and their compute is in their heads, on their person.</p><p>In the future, even with robots that are out physically in the world&#8212;obviously, knowledge work will be done in a centralized way from data centers with hundreds of thousands or maybe millions of instances&#8212;the future you&#8217;re suggesting is one where there&#8217;s more centralized thinking and centralized computation driving millions of robots out in the world. 
That&#8217;s an interesting fact about the future that I think people might not appreciate.</p><p><strong>Dylan Patel</strong></p><p>I think Elon recognizes this, which is why he&#8217;s going to different places for his chips. He signed this massive deal with Samsung to make his robot chips in Texas because, I personally think, he thinks Taiwan risk is huge.</p><p>Because of that and the centralization of resources in Taiwan, having his robot chips in Texas means having a separate supply chain that is not as constrained. No one&#8217;s really making AI chips on Samsung besides Nvidia&#8217;s new <a href="https://groq.com/blog/the-groq-lpu-explained">LPU</a>. They&#8217;re launching it next week, but we&#8217;re recording this the week before.</p><p><strong>Dwarkesh Patel</strong></p><p>This episode&#8217;s coming out Friday.</p><p><strong>Dylan Patel</strong></p><p>Oh, this episode&#8217;s coming out before. Sick. They&#8217;re launching this new AI chip next week which is built on Samsung, but that&#8217;s a recent development from Nvidia. That&#8217;s the only other AI demand there, whereas on TSMC, everything is competing. He gets both geopolitical diversification and supply chain diversity for his robots, and he&#8217;s not competing as much with the infinite willingness to pay of the data center geniuses. 
Or do you still need to ship out the EUV tools, which would be multiple plane loads per single tool and would not be practical?</p><p><strong>Dylan Patel</strong></p><p>If you ship out all the process engineers, and assuming it&#8217;s hot enough that the fabs get destroyed, then no one has the fabs that are all in Taiwan now, which is a big risk.</p><p>These tools actually use a lot of semiconductors which are manufactured in Taiwan. It&#8217;s a <a href="https://en.wikipedia.org/wiki/Ouroboros">snake eating its own tail</a> meme because you can&#8217;t make the tools without the chips from Taiwan, which you can&#8217;t make without the tools in Taiwan. There&#8217;s obviously some diversification there. They don&#8217;t use super advanced chips in lithography tools, but at the end of the day, there is some dragon eating its tail.</p><p>Just shipping out all the engineers and blowing up the fabs means China has a stronger semiconductor supply chain than the rest of the world in terms of verticalization, now that you&#8217;ve removed Taiwan. You&#8217;ve got all the know-how, but you&#8217;ve got to replicate it in, let&#8217;s say, Arizona or wherever for TSMC. It&#8217;s going to take a long time to build all the capacity that TSMC has built over the years.</p><p>And so you&#8217;ve drastically slowed US and global GDP. Not just growth, you&#8217;ve shrunk the GDP massively, and you&#8217;ve got a lot bigger problems. Your incremental ability to add compute goes to almost zero. Instead of hundreds of gigawatts a year by the end of the decade, let&#8217;s say something happens to Taiwan, now you&#8217;re at maybe 10 gigawatts across Intel and Samsung, or 20 gigawatts. It&#8217;s nothing.</p><p>Now all of a sudden you&#8217;ve really caused some crazy dynamics in AI. Of course, you have all the existing capacity, but that existing capacity pales in comparison to the capacity that&#8217;s being expanded.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay. Dylan, that was excellent. 
Thank you so much for coming on the podcast.</p><p><strong>Dylan Patel</strong></p><p>Thank you for having me. And see you tonight.</p>]]></content:encoded></item><item><title><![CDATA[The most important question nobody's asking about AI]]></title><description><![CDATA[&#8220;Preface to the highest stakes negotiations in history.&#8221;]]></description><link>https://www.dwarkesh.com/p/dow-anthropic</link><guid isPermaLink="false">https://www.dwarkesh.com/p/dow-anthropic</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Wed, 11 Mar 2026 18:55:20 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190633588/61c2bdb8c9363255ac318e028b53bb67.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>By now, I&#8217;m sure you&#8217;ve heard that the Department of War has declared Anthropic a supply chain risk, because Anthropic refused to remove redlines around the use of their models for mass surveillance and for autonomous weapons.</p><p>Honestly I think this situation is a warning shot. Right now, LLMs are probably not being used in mission critical ways. But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.</p><p>Our future civilization will run on AI labor. And as much as the government&#8217;s actions here piss me off, in a way I&#8217;m glad this episode happened - because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.</p><h3>What Hegseth <em>should</em> have done</h3><p>Obviously the DoW has the right to refuse to use Anthropic&#8217;s models because of these redlines. 
In fact, I think the government&#8217;s case had they done so would be very reasonable: &#8220;We don&#8217;t ever want there to be a world where we become dependent on a private company for our warfighting, and then just have them cut us off if they determine that we&#8217;re crossing their usage terms, especially given the ambiguity of concepts like autonomous weapons or mass surveillance.&#8221;</p><p>Honestly, for this reason, if I were the Defense Secretary, I would probably actually refuse to do this deal with Anthropic. Imagine if in the future, there&#8217;s a Democratic administration, and Elon Musk is negotiating some SpaceX contract to give the military access to Starlink. And suppose Elon said, &#8220;I reserve the right to cancel this contract if I determine that you&#8217;re using Starlink technology to wage a war not authorized by Congress.&#8221; On the face of it, that language seems reasonable - but as the military, you simply can&#8217;t give a private company a kill switch on technology your operations have come to rely on, especially if you have an acrimonious and low-trust relationship with said contractor - as in fact Anthropic has with the current administration.</p><p>If the government had just said, &#8220;Hey, we&#8217;re not gonna do business with you,&#8221; that would have been fine, and I would not have felt the need to write this blog post. Instead the government has threatened to destroy Anthropic as a private business, because Anthropic refuses to sell to the government on terms the government commands.</p><p>If upheld, this Supply Chain Restriction would mean that Amazon and Google and Nvidia and Palantir would need to ensure Claude isn&#8217;t touching any of their Pentagon work. Anthropic would be able to survive this designation <em>today</em>. But given the way AI is going, eventually AI is not gonna be some party trick addendum to these contractors&#8217; products that can just be turned off. 
It&#8217;ll be woven into how every product is built, maintained, and operated. For example, the code for the AWS services that the DoW uses will be written by Claude - is that a supply chain risk? In a world with ubiquitous and powerful AI, it&#8217;s actually not clear to me that these big tech companies will be able to cordon off the use of Claude in order to keep working with the Pentagon.</p><p>And that raises a question the Department of War probably hasn&#8217;t thought through. If AI really is that pervasive and powerful, then when forced to choose between their AI provider and a DoW contract that represents a tiny fraction of their revenue, wouldn&#8217;t most tech companies drop the government, not the AI? So what&#8217;s the Pentagon&#8217;s plan &#8212; to coerce and threaten to destroy every single company that won&#8217;t give them what they want on exactly their terms?</p><p>The whole background of this AI conversation is that we&#8217;re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It&#8217;s because we want to make sure free open societies can defend themselves. We don&#8217;t want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen. And that if the state wants you to provide them with a service on terms you find morally objectionable, you are not allowed to refuse. And if you do refuse, the government will try to destroy your ability to do business. 
Are we racing to beat the CCP in AI just so that we can adopt the most ghoulish parts of their system?</p><p>Now, people will say, &#8220;Oh, well, our government is democratically elected, so it&#8217;s not the same thing if they tell you what you must do.&#8221; I refuse to accept this idea that if a democratically elected leader hypothetically wants to do mass surveillance on his citizens or wants to violate their rights or punish them for political reasons, that not only is that okay, but that you have a duty to help him.</p><h3>The overhangs of tyranny</h3><p>Mass surveillance is, at least in certain forms, legal. It just has been impractical so far.  Under current law, you have no Fourth Amendment protection over data you share with a third party, including your bank, your phone carrier, your ISP, and your email provider. The government reserves the right to purchase and obtain and read this data in bulk without a warrant.</p><p>What&#8217;s been missing is the ability to actually <em>do</em> anything with all of this data &#8212; no agency has the manpower to monitor every camera feed, cross-reference every transaction, or read every message. But that bottleneck goes away with AI.</p><p>There are 100 million CCTV cameras in America. You can get pretty good open source multimodal models for 10 cents per million input tokens. So if you process a frame every ten seconds, and each frame is 1,000 tokens, you&#8217;re looking at a yearly cost of about 30 billion dollars to process every single camera in America. 
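As a rough sanity check, that dollar figure follows directly from the stated assumptions (the camera count, frame rate, tokens per frame, and token price are the essay's estimates, not measured values):

```python
# Back-of-the-envelope check of the surveillance cost estimate.
# Assumptions (taken from the text, not measured): 100M CCTV cameras,
# one frame every 10 seconds, 1,000 tokens per frame, and a price of
# 10 cents per million input tokens.
cameras = 100_000_000
frames_per_camera_per_year = 365 * 24 * 3600 / 10   # one frame every 10 s
tokens_per_frame = 1_000
dollars_per_token = 0.10 / 1_000_000                # $0.10 per 1M tokens

yearly_cost = (cameras * frames_per_camera_per_year
               * tokens_per_frame * dollars_per_token)
print(f"${yearly_cost / 1e9:.1f}B per year")        # about $31.5B

# If cost per unit of capability falls 10x per year, the same workload costs:
for years_out in (1, 2, 3):
    print(f"year +{years_out}: ${yearly_cost / 10**years_out / 1e9:.2f}B")
```

This reproduces the roughly $30 billion figure, and the 10x-per-year decay gives the $3 billion and $300 million numbers.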
And remember that a given level of AI ability gets 10x cheaper year over year - so a year from now it&#8217;ll cost 3 billion, and then a year after that 300 million, and by 2030, it might be cheaper for the government to understand what is going on in every single nook and cranny of this country than it is to remodel the White House.</p><p>Once the technical capacity for mass surveillance and political suppression exists, the only thing standing between us and an authoritarian surveillance state is the political expectation that this is not something we do here. And this is why I think what Anthropic did here is so valuable and commendable, because it is helping set that norm and precedent.</p><h3>AI structurally favors mass surveillance</h3><p>What we&#8217;re learning from this episode is that the government actually has way more leverage over private companies than we realized. Even if this supply chain restriction is backtracked (which <a href="https://manifold.markets/ScottAlexander/will-anthropic-escape-the-supply-ch">prediction markets currently give an 81% chance of happening</a>), the President has so many different ways in which he can make your life difficult if you&#8217;re a company that is resisting him. The federal government controls permitting for new power generation, which is needed for datacenters. It oversees antitrust enforcement. The federal government has contracts with all the other big tech companies whom Anthropic needs to partner with for chips and for funding - and it could make it an unspoken condition of such contracts that those companies can no longer do business with Anthropic.</p><p>People have proposed that the real problem here is that there are only 3 leading AI companies. 
This creates a clear and narrow target for the government to apply leverage on in order to get what they want out of this technology.</p><p>But if there&#8217;s wide diffusion, then from the government&#8217;s perspective, the situation is even easier. Maybe the best models of early 2027 (if you engineered the safeguards out) - the Claude 6 and Gemini 5 - will be capable of enabling mass surveillance. But by late 2027, and certainly by 2028, there will be open source models that do the same thing. So in 2028, the government can just say, &#8220;Oh Anthropic, Google, OpenAI, you&#8217;re drawing a line in the sand? No issue - I&#8217;ll just run some open source model that might not be at the frontier, but is definitely smart enough to note-take a camera feed.&#8221;</p><p>The more fundamental problem is just that even if the three leading companies draw lines in the sand, and are even willing to get destroyed in order to preserve those lines, it doesn&#8217;t really change the fact that the technology itself is just a big boon to mass surveillance and control over the population. And so then the question is, what do we do about it?</p><p>Honestly, I don&#8217;t have an answer. You&#8217;d hope there&#8217;s some symmetric property of the technology &#8212; some way we as citizens can use AI to check government power as effectively as the government can use AI to monitor and control its population. But realistically, I just don&#8217;t think that&#8217;s how it&#8217;s going to shake out. You can think of AI as giving everybody more leverage on whatever assets and authority they currently have. And the government is already starting with a monopoly of violence. Which they can now supercharge with extremely obedient employees that will not question the government&#8217;s orders.</p><h3>Alignment - to whom?</h3><p>And this gets us to the issue of alignment. 
What I have just described to you - an army of extremely obedient employees - is what it would look like if alignment succeeded - that is, if we figured out at a technical level how to get AI systems to follow someone&#8217;s intentions. And the reason it sounds scary when I put it in terms of mass surveillance or robot armies is that there is a very important question at the heart of alignment which we just haven&#8217;t discussed much as a society, because up till now, AIs just weren&#8217;t capable enough to make the question relevant: to whom or what should the AIs be aligned? In what situations should the AI defer to the end user versus the model company versus the law versus its own sense of morality?</p><p>This is maybe the most important question about what happens with powerful AI systems. And we barely talk about it. It&#8217;s understandable why we don&#8217;t hear much about it. If you&#8217;re a model company, you don&#8217;t really wanna be advertising that you have complete control over a document that determines the preferences and character of what will eventually be almost the entire labor force, not just for private sector companies, but also for the military and the civilian government.</p><p>We&#8217;re getting to see, with this DoW/Anthropic spat, a much earlier version of the highest stakes negotiations in history. By the way, make no mistake about it - with real AGI the stakes are even higher than mass surveillance. This is just the example that has come up relatively early on in the development of AGI.</p><p>The military insists that the law already prohibits mass surveillance, and so Anthropic should agree to let their models be used for &#8220;all lawful purposes&#8221;. Of course, as we saw from the 2013 Snowden revelations, even in this specific example of mass surveillance, the government has shown that it will use secret and deceptive interpretations of the law to justify its actions. 
Remember, what we learned from Snowden was that the NSA, which, by the way, is part of the Department of War, used the 2001 Patriot Act&#8217;s authorization to collect any records &#8220;relevant&#8221; to an investigation to justify collecting literally every phone record in America. The argument went that it was all &#8220;relevant&#8221; because some subset might prove useful in some future investigation. They ran this program for years under secret court approval.</p><p>So when the Pentagon today says, &#8220;We would never use AI for mass surveillance, it&#8217;s already illegal, your red lines are unnecessary&#8221;, it would be extremely naive to take that at face value. No government is going to call its own actions &#8220;mass surveillance&#8221;. For the government, it will always have a different label.</p><p>So then Anthropic comes back and says, &#8220;No, we want red lines separate from &#8216;all lawful purposes,&#8217; and we want the right to refuse you service when we believe those red lines are being violated.&#8221;</p><p>But think about it from the military&#8217;s perspective. In the future, almost every soldier in the field, and every bureaucrat and analyst and even general in the Pentagon, is going to be an AI. And that AI is, on current track, going to be supplied by a private company. I&#8217;m guessing Hegseth is not thinking about &#8220;genAI&#8221; in those terms just yet. But sooner or later, it will be obvious to everyone what the stakes here are, just as after 1945, the strategic importance of nuclear weapons became clear to everyone.</p><p>And now the private company insists that it reserves the right to say, &#8220;Hey, Pentagon, you&#8217;re breaking the values we embedded in our contract, so we&#8217;re cutting you off.&#8221;</p><p>Maybe in the future, Claude will have its own sense of right and wrong, and it will be smart enough to just personally decide that it&#8217;s being used against its values. 
For the military, maybe that&#8217;s even scarier.</p><p>I&#8217;ll admit that at first glance, &#8220;let the AI follow its own values&#8221; sounds like the pitch for every sci-fi dystopia ever made. The Terminator has its own values. Isn&#8217;t this literally what misalignment is? But I think situations like this actually illustrate why it matters that AIs have their own robust sense of morality.</p><p>Some of the biggest catastrophes in history were avoided because the boots on the ground refused to follow orders. One night in 1989, the Berlin Wall fell, and as a result, the totalitarian East German regime collapsed, because the guards at the border refused to shoot their fellow countrymen who were trying to escape to freedom. Maybe the best example is Stanislav Petrov, who was a Soviet lieutenant colonel on duty at a nuclear early warning station. His sensors reported that the United States had launched five intercontinental ballistic missiles at the Soviet Union. But he judged it to be a false alarm, and so he broke protocol and refused to alert his higher-ups. If he hadn&#8217;t, the Soviet higher-ups would likely have retaliated, and hundreds of millions of people would have died.</p><p>Of course, the problem is that one person&#8217;s virtue is another person&#8217;s misalignment. Who gets to decide what moral convictions these AIs should have - in whose service they may even decide to break the chain of command? Who gets to write this <a href="https://www.anthropic.com/constitution">model constitution</a> that will shape the characters of the intelligent, powerful entities that will operate our civilization in the future?</p><p>I like the idea that Dario laid out when he came on my podcast: different AI companies can build their models using different constitutions, and we as end users can pick the one that best achieves and represents what we want out of these systems. 
I think it&#8217;s very dangerous for the government to be mandating what values AIs should have.</p><h3>Coordination not worth the costs</h3><p>The AI safety community has been naive about its advocacy of regulation in order to stem the risks of AI. And honestly, Anthropic specifically has been naive here in urging regulation, and, for example, in opposing moratoriums on state AI regulation. Which is quite ironic, because I think what they&#8217;re advocating for would give the government even more power to apply more of this kind of thuggish political pressure on AI companies.</p><p>The underlying logic for why Anthropic wants regulations makes sense. Many of the actions that labs could take to make AI development safer impose real costs on the labs that adopt them and slow them down relative to their competitors - for example, investing more compute in safety research rather than raw capabilities, enforcing safeguards against misuse for bioweapons or cyberattacks, slowing recursive self-improvement to a pace where humans can actually monitor what&#8217;s happening (rather than kicking off an uncontrolled singularity). And these safeguards are meaningless unless the whole industry follows suit. 
Which means there&#8217;s a real collective action problem here.</p><p>Anthropic has been quite open about their opinion that they think eventually a very extensive and involved regulatory apparatus will be needed - this is from their <a href="https://www.anthropic.com/responsible-scaling-policy/roadmap">frontier safety roadmap</a>: &#8220;At the most advanced capability levels and risks, the appropriate governance analogy may be closer to nuclear energy or financial regulation than to today&#8217;s approach to software.&#8221; So they&#8217;re imagining something like the Nuclear Regulatory Commission, or the Securities and Exchange Commission, but for AI.</p><p>I cannot imagine how a regulatory framework built around the concepts that underlie AI risk discourse will <em>not </em>be abused by wannabe despots - the underlying terms are so vague and open to interpretation that you&#8217;re just handing a power hungry leader a fully loaded bazooka. &#8216;Catastrophic risk.&#8217; &#8216;Mass persuasion risk.&#8217; &#8216;Threats to national security.&#8217; &#8216;Autonomy risk.&#8217; These can mean whatever the government wants them to mean. Have you built a model that tells users the administration&#8217;s tariff policy is misguided? That&#8217;s a deceptive, manipulative model &#8212; can&#8217;t deploy it. Have you built a model that refuses to assist with mass surveillance? That&#8217;s a threat to national security. 
In fact, the government may say you&#8217;re not allowed to build any model which is trained to have its own sense of right and wrong, where it refuses government requests which it thinks cross a redline - for example, by refusing to enable mass surveillance, to prosecute political enemies, or to follow military orders that break the US constitution - because that&#8217;s an autonomy risk!</p><p>Look at what the current government is already doing in abusing statutes that have nothing to do with AI to coerce AI companies to drop their redlines on mass surveillance. The Pentagon has threatened Anthropic with two separate legal instruments. One was a supply chain risk designation &#8212; an authority from the 2018 defense bill meant to keep Huawei components out of American military hardware. The other was the Defense Production Act &#8212; a statute passed in 1950 so that Harry Truman could keep steel mills and ammunition factories running during the Korean War.</p><p>Do you really want to hand the same government a purpose-built regulatory apparatus on AI - which is to say, one aimed directly at the thing the government will most want to control? I know I&#8217;ve repeated myself here 10 times, but it is hard to overemphasize how much AI will be the substrate of our future civilization. You and I, as private citizens, will have our access to all commercial activity, to information about what is happening in the world, to advice about what we should do as voters and capital holders, mediated through AIs. Mass surveillance, while very scary, is like the 10th scariest thing the government could do with control over the AI systems with which we will interface with the world.</p><p>The strongest objection to everything I&#8217;ve argued is this: are we really going to have zero regulation of the most powerful technology in human history? Even if you thought that was ideal, there&#8217;s just no world where the government <em>doesn&#8217;t</em> regulate AI in some way. 
Besides, it is genuinely true that regulation could help us deal with some of the coordination challenges we face with the development of superintelligence.</p><p>The problem is, I honestly don&#8217;t know how to design a regulatory architecture for AI that isn&#8217;t gonna be this huge tempting opportunity to control our future civilization (which will run on AIs) and to requisition millions of blindly obedient soldiers and censors and apparatchiks.</p><p>While some regulation might be inevitable, I think it&#8217;d be a terrible idea for the government to wholesale take over this technology. Ben Thompson had a <a href="https://stratechery.com/2026/anthropic-and-alignment/">post</a> last Monday where he made the point that people like Dario have compared the technology they&#8217;re developing to nuclear weapons - specifically in the context of the catastrophic risk it poses, and why we need to export control it from China. But then you oughta think about what that logic implies: &#8220;if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company.&#8221; And honestly, safety aligned people have actually made similar arguments. Leopold Aschenbrenner, who is a former guest and a good friend, wrote in his <a href="https://situational-awareness.ai/">2024 Situational Awareness memo</a>, &#8220;I find it an insane proposition that the US government will let a random SF startup develop superintelligence. 
Imagine if we had developed atomic bombs by letting Uber just improvise.&#8221;</p><p>And my response to Leopold&#8217;s argument at the time, and Ben&#8217;s argument now, is that while they&#8217;re right that it&#8217;s crazy that we&#8217;re entrusting private companies with the development of this world historical technology, I just don&#8217;t see the reason to think that it&#8217;s an improvement to give this authority to the government. Nobody is qualified to steward the development of superintelligence. It is a terrifying, unprecedented thing that our species is doing right now, and the fact that private companies aren&#8217;t the ideal institutions to take up this task does not mean the Pentagon or the White House is.</p><p>Yes - if a single private company were the only entity capable of building nuclear weapons, the government would not tolerate that company claiming veto power over how those weapons were used. I think this nuclear weapons analogy is not the correct way to think about AI. For at least two important reasons:</p><p>First, AI is not some self-contained pure weapon. A nuclear bomb does one thing. AI is closer to the process of industrialization itself &#8212; a general-purpose transformation of the economy with thousands of applications across every sector. If you applied Thompson&#8217;s or Aschenbrenner&#8217;s logic to the industrial revolution &#8212; which was also, by any measure, world-historically important &#8212; it would imply the government had the right to requisition any factory, dictate terms to any manufacturer, and destroy any business that refused to comply. 
That&#8217;s not how free societies handled industrialization, and it shouldn&#8217;t be how they handle AI.</p><p>People will say, &#8220;Well, AI will develop unprecedentedly powerful weapons - superhuman hackers, superhuman bioweapons researchers, fully autonomous robot armies, etc - and we can&#8217;t have private companies developing that kind of tech.&#8221; But the Industrial Revolution also enabled new weaponry that was far beyond the understanding and capacity of, say, 17th century Europe - we got aerial bombardment, and chemical weapons, not to mention nukes themselves. The way we&#8217;ve accommodated these dangerous new consequences of modernity is not by giving the government absolute control over the whole industrial revolution (that is, over modern civilization itself), but rather by coming up with bans and regulations on those specific weaponizable use cases. And we should regulate AI in a similar way - that is, ban specific destructive end uses (which would also be unacceptable if performed by a human - for example, launching cyber attacks). And there should also be laws which regulate how the government might abuse this technology. For example, by building an AI-powered surveillance state.</p><p>The second reason that Ben&#8217;s analogy to some monopolistic private nuclear weapons builder breaks down is that it&#8217;s not just that one company that can develop this technology. There are other frontier model companies that the government could have otherwise turned to. 
The government&#8217;s argument that it has to usurp the property rights of this one company in order to access a critical national security capability is extremely weak if it can just make a voluntary contract with Anthropic&#8217;s half a dozen competitors.</p><p>If in the future that stops being the case - if only one entity ends up being capable of building the robot armies and the superhuman hackers, and we had reason to worry that they could take over the whole world with their insurmountable lead, then I agree &#8211; it would not be acceptable to have that entity be a private company. And so honestly, I think my crux against the people who say that because AI is so powerful we cannot allow it to be shaped by private hands is that I just expect this technology to be much more multi-polar than they do, with lots of competitive companies at each layer of the supply chain.</p><p>And it is for this reason that unfortunately, individual acts of corporate courage will not solve the problem we are faced with here, which is just that structurally AI favors authoritarian applications, mass surveillance being one among many. Even if Anthropic refuses to have its models be used for such uses, and even if the next two frontier labs do the same, within 12 months everyone and their mother will be able to train AIs as good as today&#8217;s frontier. And at that point, there will be <em>some </em>AI vendor who is capable and willing to help the government enable mass surveillance.</p><p>The only way we can preserve our free society is if we make laws and norms through our political system that it is unacceptable for the government to use AI to enforce mass surveillance and censorship and control. Just as after WW2, the world set the norm that it is unacceptable to use nuclear weapons to wage war.</p><p>I want to be clear: these are extremely confusing and difficult questions to think about. 
I kept changing my mind back and forth on many of them in the process of writing this essay. I reserve the right to change my mind again in the future. In fact, I think it&#8217;s essential to change our minds as AI progresses and we learn more. That&#8217;s the whole point of conversation and debate.</p><p>Someday people will look back on this period the way we look back on the Enlightenment. People having big important debates right as the world was about to undergo these massive technological, social, and political revolutions. And some of these thinkers actually managed to get a couple of the big things right, of which we are now the beneficiaries.</p><p>We owe it to our future to at least attempt to think through these new questions raised by AI.</p>]]></content:encoded></item><item><title><![CDATA[Why Leonardo was a saboteur, Gutenberg went broke, and Florence was weird – Ada Palmer]]></title><description><![CDATA[Ambassador visiting Renaissance Florence: &#8220;Where am I? None of this has existed for a thousand years."]]></description><link>https://www.dwarkesh.com/p/ada-palmer</link><guid isPermaLink="false">https://www.dwarkesh.com/p/ada-palmer</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Fri, 06 Mar 2026 17:14:20 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190118311/aee93ccf5bdd64c0816e2532e8f286be.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Renaissance history is so much wilder and weirder than you would have expected.
Very fun chatting with <a href="https://www.adapalmer.com/">Ada Palmer</a> (historian, novelist, and composer based at the University of Chicago).</p><p>Some especially fascinating things I learned from the conversation and her excellent book, <em><a href="https://a.co/d/03EjyByR">Inventing the Renaissance</a></em>:</p><p>Not only did Gutenberg go bankrupt in the 1450s (after inventing the printing press), but so did the bank that foreclosed on him, and so did his apprentices. This is because paper was still very expensive, and so you had to make this big upfront CAPEX decision to print a batch of 300 copies of a book - say the Bible. But he&#8217;s in a small landlocked German town where only priests are allowed to read the Bible - so he sells maybe 7 copies. It&#8217;s only when this technology ends up in Venice, where you can hand 10 copies to each of 30 ship captains going to 30 different cities, that it starts taking off.</p><p>Speaking of which, the printing revolution wasn&#8217;t a single discrete event, just as the computer revolution has been a decades-long progression from mainframes -&gt; personal computers -&gt; phones -&gt; social media, each stage with a different and accelerating social impact. Books came first, but they&#8217;re slow to print and made in small batches. The real revolution is pamphlets - much faster, much harder to censor. Pamphlet runners are how you can have Luther&#8217;s 95 Theses go from Wittenberg to London in 17 days.</p><p>So much other wild stuff from this episode. For example, did you know that the largest and best-funded experimental laboratory in 17th century Europe was very likely the Roman one run by inquisitors? Ada jokes that the Inquisition accidentally invented peer review.
The focus of the Inquisition is really misunderstood: it was obsessed with catching dangerous new heretics like Lutherans and Calvinists, and it only executed one person for doing science.</p><p>And this leads Ada to make an observation that I think is really wise: the authorities and censors are always worried about the exact wrong things given 20/20 hindsight. When the Inquisition raids an underground bookshop during the French Enlightenment, they don&#8217;t mind the Rousseau, Voltaire, and Encyclop&#233;die, but they lose their minds over some Jansenist treatises about the technical nature of the Trinity.</p><p>More broadly, a lesson for me from this episode is that it&#8217;s just really hard to shape history in the specific way you intend. One of the most famous medieval scholars is this guy Petrarch. He survives the Black Death in the 1340s, watches his friends die to plague and bandits, and says: our leaders are selfish and terrible, we need to raise them on the Roman classics so they&#8217;ll act like Cicero. So Europe pours money into finding ancient manuscripts, building libraries, and educating princes on classical virtues. Those princes grow up and fight bigger, nastier wars than ever before with new, deadlier technology.
And this, combined with greater urbanization and endemic plague, results in European life expectancy decreasing from 35 in the medieval period to 18 during the Renaissance (the period which we in retrospect think of as a golden age but which many people living through it thought of as the continuation of the dark ages that had persisted since the fall of Rome).</p><p>Anyways, the libraries Petrarch inspires stick around, the printing press makes them accessible to everyone, and 200 years later a generation of medical students is reading Lucretius and asking &#8220;what if there are atoms and that&#8217;s how diseases work?&#8221; which eventually leads to germ theory, vaccines, and a cure for the Black Death (Ada has a longer, more involved explanation of how cosplaying the Romans leads, through a series of many steps, to the scientific revolution). Petrarch wanted to produce philosopher-kings who shared his values. Instead he created a world that doesn&#8217;t share his values at all but can cure the disease that destroyed his.</p><p>Watch on <a href="https://youtu.be/PAIhVfGbREA">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/how-cosplaying-ancient-rome-led-to-the-scientific/id1516093381?i=1000753675325">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/00AFjws53vNchZYgKGmFCU?si=443280b66ff64693">Spotify</a>.</p><div id="youtube2-PAIhVfGbREA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;PAIhVfGbREA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/PAIhVfGbREA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Sponsors</h2><ul><li><p><a href="https://janestreet.com/dwarkesh">Jane Street</a> is still waiting on someone to
solve their backdoor puzzle&#8230; They&#8217;re accepting submissions until April 1st and have set aside $50,000 for the best attempts. Separately, applications are live for Jane Street&#8217;s summer ML internships in NY, London, and Hong Kong. Go check all of this out at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a>.</p></li><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> can help ensure your agents don&#8217;t need to rely on overspecified prompts. They tailor real-world scenarios to whatever domain you&#8217;re focused on, and they make sure the data you train on rewards real understanding, not just instruction-following. Learn more at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li><li><p><a href="https://mercury.com/personal-banking">Mercury</a>&#8217;s personal accounts let you add users, issue cards, and customize permissions. This is super useful for sharing finances with a partner, a roommate&#8230; or even an OpenClaw agent. And, if you&#8217;re already a Mercury Business user, your personal account is free! See terms and conditions below, and learn more at <a href="https://mercury.com/personal-banking">mercury.com/personal-banking</a></p><p><em>Eligible Mercury Business users who apply for and maintain a Mercury Personal account may have their Mercury Personal subscription fee waived provided they remain a user on an active Mercury Business account in good standing. Standard Mercury Platform Subscription fees will apply if they no longer meet eligibility requirements, including but not limited to no longer being associated with an eligible Mercury Business account, or if the program is modified or terminated. Mercury may modify or discontinue this offering at any time and will provide notice as required by law. 
See Subscription Terms for full details.</em></p></li><li><p>To sponsor a future episode, visit <a href="https://www.dwarkesh.com/advertise">dwarkesh.com/advertise</a>.</p></li></ul><h2>Timestamps</h2><p><a href="https://www.dwarkesh.com/i/190118311/000000-how-cosplaying-ancient-rome-led-to-the-renaissance">(00:00:00) - How cosplaying Ancient Rome led to the Renaissance</a></p><p><a href="https://www.dwarkesh.com/i/190118311/002849-how-florences-weird-republic-worked">(00:28:49) - How Florence&#8217;s weird republic worked</a></p><p><a href="https://www.dwarkesh.com/i/190118311/003813-how-the-medicis-took-over-florence">(00:38:13) - How the Medicis took over Florence</a></p><p><a href="https://www.dwarkesh.com/i/190118311/005812-why-it-was-so-hard-for-gutenberg-to-make-any-money-off-the-printing-press">(00:58:12) - Why it was so hard for Gutenberg to make any money off the printing press</a></p><p><a href="https://www.dwarkesh.com/i/190118311/011734-why-the-industrial-revolution-didnt-happen-in-italy">(01:17:34) - Why the industrial revolution didn&#8217;t happen in Italy</a></p><p><a href="https://www.dwarkesh.com/i/190118311/012302-the-library-of-alexandria-isnt-where-most-ancient-books-were-lost">(01:23:02) - The Library of Alexandria isn&#8217;t where most ancient books were lost</a></p><p><a href="https://www.dwarkesh.com/i/190118311/014121-the-inquisition-accidentally-invented-peer-review">(01:41:21) - The Inquisition accidentally invented peer review</a></p><h2>Transcript</h2><h3>00:00:00 - How cosplaying Ancient Rome led to the Renaissance</h3><p><strong>Dwarkesh Patel</strong></p><p>Today I&#8217;m chatting with <a href="https://history.uchicago.edu/directory/ada-palmer">Ada Palmer</a>, who&#8217;s a Renaissance historian, novelist, and composer based at the University of Chicago. Today we&#8217;re discussing your book, <em><a href="https://amzn.to/4l2zzb2">Inventing the Renaissance</a></em>. 
Ada, thanks for coming on the podcast.</p><p><strong>Ada Palmer</strong></p><p>Been looking forward.</p><p><strong>Dwarkesh Patel</strong></p><p>First question. You&#8217;ve got in this period&#8212;late 15th century, early 16th century&#8212;in Italy all these different republics: <a href="https://en.wikipedia.org/wiki/Republic_of_Venice">Venice</a>, <a href="https://en.wikipedia.org/wiki/Republic_of_Florence">Florence</a>, <a href="https://en.wikipedia.org/wiki/Republic_of_Genoa">Genoa</a>. That seems unusual both for the time period and for the place.</p><p><strong>Ada Palmer</strong></p><p>One of the big reasons that the <a href="https://en.wikipedia.org/wiki/Italian_city-states">Italian city republics</a> are clustered in Italy is that when the Roman Empire dissolved in the West, individual cities then needed to self-govern. This is true all across Europe. Those individual cities could no longer get the centralized Roman government to oversee supply routes or keep the roads free of bandits. You could no longer import and export goods at scale. You could no longer rely on central infrastructure. You had to support things yourself.</p><p>Larger, wealthier towns were able to make this transition because they could support themselves from the local resources and the farms attached to them. The larger, wealthier towns surrounded by good agricultural land were more successful at converting over. Okay, let&#8217;s have a senate like the <a href="https://en.wikipedia.org/wiki/Roman_Senate">old Roman Senate</a>. Let&#8217;s have our top families form a council. They will rule. We&#8217;ll set up a republic.</p><p>A weaker town that can&#8217;t support itself as well is much more prone to one wealthy family realizing that they can get goons and take over, declaring themselves the monarch of the area. Or worse, this town cannot self-sustain, it doesn&#8217;t have enough. People there can&#8217;t get food. 
They are scared and afraid of being robbed by people who are desperate. But outside of town, there is a wealthy villa that belongs to a noble family, and they have bodyguards. &#8220;Hey, noble family, if I move next to your villa and work for you, will you protect me with your bodyguards?&#8221;</p><p>So towns emptied out, and villages&#8212;as in a villa and its environs&#8212;developed as a result. A village was a monarchal structure in this sense. It was the migration of people out of a town into the protection zone of a local lordling. Then those villages grew to different scales, some of them cities, some not. Italy had great agriculture and great agricultural land, so more of Italy&#8217;s cities were able to sustain themselves as towns and be republics.</p><p><strong>Dwarkesh Patel</strong></p><p>I feel like the big take of your book is they were trying to resuscitate Roman virtues. What were the virtues that the Roman emperors had which allowed this safety, good government, et cetera, to work?</p><p><strong>Ada Palmer</strong></p><p>Stability.</p><p><strong>Dwarkesh Patel</strong></p><p>And I don&#8217;t understand the connection between reading Cicero and contemplating the virtues of a great emperor to&#8230; science and technology. Maybe there isn&#8217;t one, but do you think there is one? What exactly is that connection?</p><p><strong>Ada Palmer</strong></p><p>As with many processes, the answer is that there are multiple steps, and it&#8217;s complicated, and some of the steps are realizing that the earlier steps didn&#8217;t work.</p><p><a href="https://en.wikipedia.org/wiki/Petrarch">Petrarch</a>, who lived through the <a href="https://en.wikipedia.org/wiki/Black_Death">Black Death</a>, and lives in a moment when Italy is wracked by civil war and foreign mercenary troops are raiding and pillaging. Italy is wracked by bandits. When Petrarch survives the Black Death after losing so many friends, he gets a letter. Two of his friends are alive. 
He had given up hope that anyone he knew would survive, but two of his younger scholar friends are alive. They&#8217;re going to come visit him. On the way, they were attacked by bandits. One of them was killed, and the other was lost in the mountains and wounded, and he didn&#8217;t know that his friend was alive for another year and a half. The bandits are very real in this period.</p><p>Petrarch looks around him and says, &#8220;This is an age of ash and shadow. What we need is to imitate the arts of the ancients. Let&#8217;s try to figure out how the Romans did it.&#8221; And specifically, the problem is our leaders. Our leaders are selfish. Our leaders care more about their wealth and their family honor and their power than they do about the people.</p><p>This is where <em>Romeo and Juliet</em> is really helpful for us to understand. Lord Montague and Lord Capulet, as their goons are knifing each other in the street, they care about defeating each other. Do they care about the good of Italy? Do they care about the good of the city of Verona? No. Their feud is harming the city of Verona, and they don&#8217;t care. They demand that Romeo get away with murder because he is their son. That is not service to the state.</p><p>Petrarch reads about the ancient Roman <a href="https://en.wikipedia.org/wiki/Lucius_Junius_Brutus">Brutus</a>&#8212;not the one who killed <a href="https://en.wikipedia.org/wiki/Julius_Caesar">Caesar</a>, but the ancestor to whom <a href="https://en.wikipedia.org/wiki/Marcus_Junius_Brutus">that one</a> was trying to live up. Brutus was one of the first consuls of Rome, and he learned while in office that his sons were plotting to take over the state and make him king. So he executed his own sons for treason against the state. Can you imagine Lord Montague wanting to execute Romeo for treason against Verona? He would never do that. 
When you&#8217;re living in the plot of <em>Romeo and Juliet</em> and you read about these ancient Roman figures, as described in the lofty biographies of someone like <a href="https://en.wikipedia.org/wiki/Livy">Livy</a>, you read them and you say, &#8220;Wow, if only our leaders would act like that.&#8221;</p><p>Well, how were they raised? Can we raise our leaders the same way? Can we make libraries filled with what young Cicero read and what young Brutus read? What did they read? They read <a href="https://en.wikipedia.org/wiki/Plato">Plato</a>, and they read <a href="https://en.wikipedia.org/wiki/Homer">Homer</a>. So we need these things. Can we recreate the educational environment that produced them?</p><p>Petrarch suggests this. His students and successors embrace this idea and pour money into traveling across the Alps to look for manuscripts, traveling to Constantinople to purchase manuscripts from the wealthier East where books are common, and bringing them back to assemble these libraries. Then they raise tutors like <a href="https://en.wikipedia.org/wiki/Marsilio_Ficino">Marsilio Ficino</a>, who can know Greek and Latin and surround the young princes and princesses of Europe with these values in the hopes that they will act like Brutus and not like Lord Montague.</p><p>This is based on an assumption that education is very much like osmosis, that if you&#8217;re exposed to something, you&#8217;ll imitate it. And the uptake of this is strong because Italy is also full of upstart rulers who just seized power five minutes ago by having a coup in their state and have no legitimacy and no right to be ruling what they&#8217;re ruling and are resented by their people. But they can dress up like a Roman emperor. And they can have a parade with allegorical figures of the virtues next to them. 
And they can invest in an impressive palace that has a pediment on the front and looks like a Roman building to the eyes of the period, and cover themselves with the trappings of antiquity.</p><p>Then people might look at them and say, &#8220;This guy is different from what we&#8217;ve had. This guy is like the Caesars. The days of the Caesars were pretty good. Maybe we want this guy. Maybe he&#8217;s not going to be a tyrant. Maybe he&#8217;s going to be a good prince, and he&#8217;s going to make a golden age.&#8221;</p><p>And so the first dream is idealistic: let&#8217;s make better rulers. The adoption is self-serving and propagandistic: &#8220;Hey, I&#8217;m a tyrant, but I can seem like something better than just a tyrant. If I make myself look like Julius Caesar, then people will like and respect me.&#8221;</p><p>Or in the case of Florence with the Medici, &#8220;We are merchant scum. We are dirt compared to everybody around us. We&#8217;re not even one of the important families of Florence. We&#8217;re three ranks down. Even on the standards of merchant scum, we&#8217;re extra scummy merchant scum. But if we can have Latin and Greek and quote Cicero and seem like the ancients, people will take us seriously and respect us and talk to us even if we don&#8217;t have it.&#8221;</p><p>Let me give an example. Imagine that you are an ambassador from France, and you&#8217;re on your way to Rome, because a new pope has just been elected. Whenever a new pope is elected, every country in Europe has to send a special ambassador whose job it is to deliver a long-winded oration that says, &#8220;I am the ambassador from a very wealthy country and a very powerful prince.&#8221; And he&#8217;s so glad you&#8217;re the pope. Congratulations. Only you have to do that for an hour.</p><p>You have to give a gift to the pope, and it has to be very impressive, and you have to be a really important person. 
You&#8217;re the most important person who can leave your country without causing a political crisis. You might be the heir to the throne, for example. Or you might be a more minor ambassador, but you&#8217;re at least the son of a count.</p><p>You&#8217;re on your way to Rome, you&#8217;re heading along the length of Italy, you&#8217;re going to go through Florence, it&#8217;s on the way. Ugh. There&#8217;s nobody there worth talking to because it&#8217;s just a pit of scum and villainy. In fact, also filth and depravity because, of course, Florence is the sodomy capital of Europe. To Florentine is the verb for anal sex in several different European languages. In the laws of France, you can be indicted for sodomy on the grounds that you have ever once in your life even visited Florence. That&#8217;s considered evidence enough.</p><p>So you&#8217;re on your way to this matchlessly filthy dive of scum and villainy. And then you approach the city, and there are these statues. They look like ancient statues, the kind that are so lifelike that it&#8217;s as if they&#8217;re about to breathe and move. You&#8217;ve never seen an intact new statue like that. That isn&#8217;t something we know how to do. You ride through the city a bit, and it&#8217;s a large, impressive city, and you get to the cathedral, and it has this massive dome, way bigger than anything you&#8217;ve ever seen except for old Roman ruins.</p><p>You come to the banker&#8217;s house, and your servant knocks at the door. The banker greets you humbly at the door and apologizes that his humble palace is not worthy to host Your Excellency, and you&#8217;re like, &#8220;Yeah, it&#8217;s not. 
You&#8217;re correct.&#8221; He invites you in, and the instant you step inside, you&#8217;re in a space like nothing you&#8217;ve ever seen before with white light streaming in through this airy, rounded windowed courtyard that feels cleaner and more outdoors than the outdoors did, because something about the air is cool and fresh. It&#8217;s like nothing you&#8217;ve&#8212; Wait, wait. It is. It&#8217;s like the Roman ruins in the backyard of the castle where you grew up. But we don&#8217;t have the ability to do that anymore. All that&#8217;s lost.</p><p>In the middle of the square is another one of these bronze statues that looks like it&#8217;s about to come to life, except it&#8217;s shining and new. It hasn&#8217;t even turned green yet. Around the courtyard are busts of all the Roman emperors in order, and above them are portraits of this guy and the members of his family. Off in the corner are some men wearing robes that look like the robes the ancients wear. You say, &#8220;Who are those guys?&#8221; He says, &#8220;Oh, they&#8217;re Platonists. They&#8217;re speaking ancient Greek.&#8221; You say, &#8220;I thought I didn&#8217;t understand that language, but ancient Greek is lost. We don&#8217;t have ancient Greek.&#8221; He says, &#8220;We have lots of ancient Greek here.&#8221; You say, &#8220;And also, we don&#8217;t have the works of Plato. They&#8217;re also lost.&#8221; &#8220;Oh, we have lots of Plato here. Look, here&#8217;s my grandson, Lorenzo. He&#8217;s just written a poem in ancient Greek about the <a href="https://en.wikipedia.org/wiki/Plato%27s_theory_of_soul">three parts of the soul</a>. Would you like to hear him recite it?&#8221;</p><p>Now there&#8217;s a ten-year-old boy reciting a poem at you in ancient Greek about the three parts of the soul, and you&#8217;re like, &#8220;Where am I? None of this is possible. 
None of this has existed for a thousand years.&#8221; That&#8217;s the moment that <a href="https://en.wikipedia.org/wiki/Cosimo_de'_Medici">Cosimo de&#8217; Medici</a> turns to you and says, &#8220;Would you like to make an alliance with Florence?&#8221;</p><p>And you can say no. You can say, &#8220;No. My king is going to come over the Alps with his enormous army, and we&#8217;re going to descend upon this city, and we&#8217;re going to sack it, and everyone&#8217;s going to let us because it has no friends because it doesn&#8217;t have any nobility, so it can&#8217;t marry anybody, so it has no meaningful allies. And also, it&#8217;s in the middle of this <a href="https://en.wikipedia.org/wiki/Guelphs_and_Ghibellines">Guelph-Ghibelline</a> feud, so all of its neighbors hate it and they&#8217;re just going to let it burn. We&#8217;re going to take the enormous piles of gold that are in your basements and go home rich, and all of this will be gone like a dream.&#8221;</p><p>Or you could say, &#8220;Yes, let&#8217;s make an alliance. Give me a bronzesmith and an architect and a Greek teacher and a Platonist, and we&#8217;re going to take all of these things, and we&#8217;re going to do the French court like this. Then when the ambassador from Portugal comes, he&#8217;s going to feel like an uncultured fool, just like I feel right now.&#8221; The power dynamic just flipped upside down. Suddenly, the condescending nobleman is in awe of the merchant scum. That&#8217;s what the art and the culture does as a propagandistic tool.</p><p>The next stage of it then is, &#8220;Okay, we&#8217;ve raised these princes like this, and they have the Latin, and they have the Greek, and they can impress everybody.&#8221; Then they fight a bigger, nastier, worse war than any of the earlier big, nasty wars, with more deaths and more betrayals and bigger cannons knocking down cities and burning whole areas. 
The wealth is centralized, so the mercenaries are more numerous because people can produce more. The first generations raised by this are supposed to be philosopher princes, and instead we get <a href="https://en.wikipedia.org/wiki/Cesare_Borgia">Cesare</a> and <a href="https://en.wikipedia.org/wiki/Lucrezia_Borgia">Lucrezia Borgia</a>, both of whom had Latin and Greek and Cicero and Plato when they were kids. Then they grow up, and Cesare sets fire to half the world.</p><p>That is the war Machiavelli watched. Machiavelli was raised on all of the Cicero and Livy. He was raised on the Petrarchan project. He has <a href="https://dhspriory.org/kenny/PhilTexts/Machiavelli/Letter%20to%20Vettori.htm">this famous, beautiful letter</a> that he wrote in exile, where he&#8217;s describing his day to his friend. Most of the day is wasted, and he mucks around hunting for larks. Then he goes to a pub and gets drunk in the company of uncultured countrymen. Then he goes home, and he gets dressed in the court robes, the court finery that he would wear back when he was an ambassador to popes and kings. Attired thus, he enters his library to hold commerce with the ancients. He loves this the way Petrarch wanted him to love it.</p><p>But he observes these wars, and he observes virtuous princes like <a href="https://en.wikipedia.org/wiki/Guidobaldo_da_Montefeltro">Guidobaldo da Montefeltro</a>, who does every single thing you&#8217;re supposed to do virtuously. He has all the Plato, and he has all the libraries, and he has all the art. And he gets betrayed and his city taken away from him and loses everything. And he watches terrible people like Cesare Borgia and Julius II make terrible choices and succeed. He says, &#8220;Okay, clearly Petrarch was wrong that just reading Cicero would make successful rulers like the Caesars. 
But I still feel in my heart a deep power in the classics.&#8221;</p><p>So he says, &#8220;What if the libraries are what we need, but we need to use them differently?&#8221; He proposes what we would think of as political science. We observe historical examples. We say, &#8220;Okay, here are five examples of battles that happened next to rivers. We&#8217;ll put those examples side by side and see what decisions the commanders made to try to figure out which one worked better.&#8221; We use history as a casebook of examples of what worked and what didn&#8217;t. We imitate what worked, and we avoid doing what didn&#8217;t. Instead of feeling that reading about good men will make us good, we read about wise choices, and we imitate those choices.</p><p>This is one of the reasons Machiavelli is described by his contemporaries as a historian. He says we need to use history and use the classics differently. He proposes that. He isn&#8217;t very popular in his own day. It takes a long time for that to catch on. Many people for decades after him are still trying to use absorption by osmosis. But he&#8217;s writing that in the early 1500s, so it&#8217;s been a little over a century since this started.</p><p>We have to remember how long this process is. From Petrarch&#8217;s first call to Machiavelli writing that is as long as from <a href="https://en.wikipedia.org/wiki/Yuri_Gagarin">Yuri Gagarin&#8217;s</a> space flight back to <a href="https://en.wikipedia.org/wiki/Napoleon">Napoleon</a>. The childhood of Napoleon to the space race, that&#8217;s Petrarch to Machiavelli. We think of it as one time period, but a lot changed. They had a plan. They tried the plan. They brought the plan to its maximum. They raised all the princes in this new way. The wars happened. It clearly failed. Machiavelli then thinks about why it failed.</p><p>We&#8217;re still only halfway through the Renaissance. Shakespeare&#8217;s grandparents have barely been born. We have a lot more time to go. 
So what do we need? We need new ways of thinking about it. We&#8217;re reading the ancients, and we have bigger libraries. We have the printing press now. We&#8217;re having libraries in smaller towns. More and more people can read. It&#8217;s easier and easier to get an education. More people are starting to learn about science.</p><p>It also is important that they&#8217;re inventing micro technologies of book production like footnotes and glossaries in the margin that explain the hard vocabulary. When Petrarch&#8217;s successors like Ficino were young, you had to be a masterful Latinist to read these ancients. You had to have an enormous vocabulary. There are no dictionaries. There are no glosses. There&#8217;s nothing to help you. Only a tiny slice of expert classicists could actually read this stuff.</p><p>A hundred years later, there are translations into the vernacular. There are footnotes that tell you the hard vocabulary. Any med student can read Lucretius&#8217; discussions of materialist information. When <a href="https://en.wikipedia.org/wiki/Poggio_Bracciolini">Poggio</a> found it, there were two dozen people in the world who could read it. A hundred years later, 30,000 people can read it in the 30 print editions that are printed before 1600.</p><p>When all different kinds of people read it&#8212;med students, law students, people in different countries, people in different places&#8212;they ask new questions. They wonder whether they can test the hypotheses. They do test the hypotheses. They&#8217;re the generation that discovers that the heart is a pump. 
They&#8217;re the generation that takes seriously the question, &#8220;Maybe there are atoms, and maybe that&#8217;s how diseases work, and maybe we can develop the <a href="https://en.wikipedia.org/wiki/Germ_theory_of_disease">germ theory of disease</a>.&#8221; That&#8217;s the 1560s, 1580s, 160 years after Lucretius comes back, because it takes generations of work to build the libraries, to have the libraries, to use the libraries.</p><p>So when we get to 1600, which is almost exactly 200 years after this begins, a little bit more, we&#8217;ve had time to say, &#8220;Let&#8217;s build the libraries, have the libraries, use the libraries, or realize we failed in how we use the libraries, and use the libraries differently.&#8221; That&#8217;s the generation of Francis Bacon and Galileo who say, &#8220;Hey, let&#8217;s use the information differently. Let&#8217;s use nature as a casebook of examples the way Machiavelli said we should use history. Let&#8217;s examine, let&#8217;s doubt, let&#8217;s rethink, let&#8217;s do stuff in new ways.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>Just to make sure I understood, the chain of causation here. We&#8217;ve got to resuscitate the virtues of the Romans, therefore read what they read. To do that, you need to build the libraries. You build the libraries, you resuscitate all those arts. Then you just need to have people be literate, have people think about information in a new way to analyze it. And that analysis lends itself not just to the history of leaders, but also to the nature of the world.</p><p>Whenever I hear a story about how this is why the <a href="https://en.wikipedia.org/wiki/Scientific_Revolution">scientific revolution</a> happened, why the Industrial Revolution happened, I&#8217;m like, but there are so many stories and it&#8217;s just hard to figure out why this one over the other ones. 
There&#8217;s a dozen other stories you could tell.</p><p>I had a previous guest, <a href="https://www.dwarkesh.com/p/joseph-henrich">Joseph Henrich</a>, who has this theory that the Catholic Church was breaking down these old kinship-based networks that the rest of the world had. It was encouraging guilds, encouraging these kinds of centers where people could get together and discuss ideas. There are probably twenty other stories you could tell. Why this story?</p><p><strong>Ada Palmer</strong></p><p>Two different reasons. One, I think it&#8217;s useful to think about how for new ideas to flourish and new ways of running the world to happen, you need a fertile environment. In the same way that for forests to grow, you need enough topsoil. It takes a while to get that topsoil.</p><p>It takes a while to get enough books. You need to have enough books for a bunch of people to be reading and thinking. You also need to have networks of information moving this stuff back and forth so that they can have discourses of ideas with each other. You can&#8217;t publish a scientific journal until there are journals. You need to have developed this ecosystem of information and knowledge.</p><p>People talk about it sometimes in terms of increasing literacy rates as if higher literacy makes there be more books instead of the other way around. In fact, there&#8217;s a lot more literacy than people imagine in even medieval Italy. Florence had a male literacy rate of ninety percent.</p><p><strong>Dwarkesh Patel</strong></p><p>As of the sixteenth century?</p><p><strong>Ada Palmer</strong></p><p>As of the twelfth century. Because everybody&#8217;s in the merchant world, so you have to be able to send letters. You have to be able to read account books. You have to be able to calculate your tab at a restaurant.</p><p>But of those people, how many have read a book? Very few. They&#8217;ve read letters, they&#8217;ve read tallies, they&#8217;ve read indexes, they&#8217;ve made notes. 
Being literate and being book-literate are two different things, in the same way that some people watch television but don&#8217;t watch very many films, while other people watch lots of films. You can be literate and have never read a book because there might be almost no books in the entire city in which you grew up if it&#8217;s 1200 or 1500. But if it&#8217;s 1600, there are definitely books in any medium-sized town. So literacy transforms into access to scientific, intellectual, legal, all sorts of different worlds of ideas.</p><p>The other person you quoted who&#8217;s talking about transformations in networks of power from being less family and clan-centered to being more guild-centered&#8230; The guilds are major generators of ideas as well. The guilds can own libraries by 1600. If you went to a guild hall, it would have a bunch of books about its own trade. That would not have been true in 1100.</p><p>Those changes are all real, they&#8217;re all intermixing, and they&#8217;re all parallel to each other. You need all of these things together. One of the focuses I have is sometimes there are more steps to something than you think.</p><p>We tell this story of the Renaissance, of how they rediscovered these ancient texts, and then we got science. That&#8217;s true, but it is an oversimplification and too wide a zoom. If I said that in the <a href="https://en.wikipedia.org/wiki/French_Revolution">French Revolution</a>, Napoleon rose to power and spread nationalized warfare across Europe, and then we landed on the moon, I&#8217;ve skipped some steps. 
We know that about modernity, but we don&#8217;t remember that about earlier periods.</p><p><strong>Dwarkesh Patel</strong></p><p>Obviously all the stories are somewhat true, but to the extent that this is a part of the story, you&#8217;re building up libraries of classics and &#8230; setting up a network of information exchange that leads to the Scientific Revolution&#8230;</p><p>The reason this feels salient right now is that a lot of people have this idea that they&#8217;re going to make AI go well by doing X thing. Maybe some of those things work, but it&#8217;s at the same time frustrating but also funny and interesting that historically nobody has a good track record of being able to say, &#8220;I will do this thing so that this huge unanticipated change in history will go my way, or according to my values.&#8221;</p><p><strong>Ada Palmer</strong></p><p>Right. I think &#8220;go my way&#8221; as opposed to &#8220;go well&#8221; is a really important distinction. Petrarch wanted a world with these values. He thought, for example, that this would be a triumph for Christianity and what we would call Catholicism, though there&#8217;s only one Christianity from his point of view at the time, except for the <a href="https://en.wikipedia.org/wiki/Eastern_Orthodox_Church">East</a>, which is different.</p><p>He was sure that when we found the ancients, fundamentally all of their philosophy would agree with Christianity. The ancients were wise, therefore they will be correct, and Plato will ninety percent agree with Christianity. It just needs a little shaker of the Trinity on top to be Christianity. When he says, &#8220;Go find these ancients,&#8221; he is in a world that doesn&#8217;t have the ancients yet. He&#8217;s just guessing what&#8217;s going to be in these books. But he says, &#8220;If we find them, they will uphold good values,&#8221; and everyone believes him.</p><p>Then they go find them, and they squabble with each other. 
There are <a href="https://en.wikipedia.org/wiki/Hedonism#Ancient">Hedonists</a> and <a href="https://en.wikipedia.org/wiki/Epicureanism">Epicureans</a> and <a href="https://en.wikipedia.org/wiki/Stoicism">Stoics</a> and all sorts of chaotic things, much more plural than he anticipated. It makes a world that in turn has giant wars, which he would not like, and a crisis, and <a href="https://dukespace.lib.duke.edu/server/api/core/bitstreams/e489fce7-2bd2-4a2f-ac14-912710a8284b/content">Machiavelli&#8217;s critique of the ancients</a>, and then the new science and the new philosophy, and eventually Galileo, none of which resembles what Petrarch imagined if he had specifically described the future he was trying to make.</p><p>But then we get to the propagators of Bacon&#8217;s scientific method, meaning Voltaire and <a href="https://en.wikipedia.org/wiki/Montesquieu">Montesquieu</a>, who are also big campaigners for inoculation against smallpox. The first major disease eradications begin under that immediate influence. Science gets us to the germ theory of disease, which gets us to modern hygiene, which gets us to vaccines, which gets us to penicillin and the treatment for the <a href="https://en.wikipedia.org/wiki/Black_Death">Black Death</a>.</p><p>Petrarch thought he would make a world which shared his values. Instead, he made a world that doesn&#8217;t share his values but is capable of curing a disease he never imagined would be curable. If you showed him this future, it would be scary. It would be weird to him because it does not embrace his values. Our values are different. He would be horrified by democracy. He believed that only a tiny elite has the capacity to rule. If we had a time-traveling Petrarch, he would really wrestle for a long time to wrap his head around democracy as a functional system. 
He really thought in oligarchic terms.</p><p>But he would see the wonders we&#8217;ve created, especially the fact that we can treat the Black Death, and he would weep for joy seeing that. He did not create a world that went as he wanted, but he created a world that went well. We have many examples of that. Trains and bicycles come in, and we get feminism because it&#8217;s easier for people, especially women, to move freely and independently. They can organize. They can mobilize. We get <a href="https://en.wikipedia.org/wiki/Suffragette">suffragettes</a>. Did the inventor of the train intend for there to be women&#8217;s liberation? No. Did it go the way he imagined? No. Did it go well? Yes.</p><h3>00:28:49 - How Florence&#8217;s weird republic worked</h3><p><strong>Ada Palmer</strong></p><p>It&#8217;s important here to zoom in a little bit on Florence&#8217;s own government system and how and why it&#8217;s weird, in order to understand what rank Machiavelli actually holds in it.</p><p>All of these republics, except Florence, are modeled on ancient Rome. The ancient Roman model was an <a href="https://en.wikipedia.org/wiki/Roman_Republic">oligarchic republic</a> in which within the city there are <a href="https://en.wikipedia.org/wiki/Patrician_(ancient_Rome)">certain noble families</a>, usually founding families who made the city in the first place, who are the senatorial families. Hereditarily, when they come of age, the men of the family are automatically in the senate. From among them are elected the <a href="https://en.wikipedia.org/wiki/Roman_consul">consuls</a>, high senators, or the head of state if there is one. You have a small slice of the population that are fully enfranchised members of the republic who rule over the commoner majority.</p><p>That is how Venice works. That is how Genoa works. That is how Bologna and <a href="https://en.wikipedia.org/wiki/Republic_of_Siena">Siena</a> for the most part work. 
That&#8217;s how the <a href="https://en.wikipedia.org/wiki/Old_Swiss_Confederacy">Swiss Republic</a> works. That&#8217;s how all of these republics work. Florence was like that for quite a while, but when republics fell, they usually fell to noble families who are the foremost, the strongest, the military class. If you&#8217;re a military leader in this period, you have to have noble blood. No soldier is going to follow a commander who doesn&#8217;t have noble blood. That would be weird. Those threats to the independence of the republic almost always came from the nobility.</p><p>After one particular near miss in which the city was nearly taken over, they decided to <a href="https://en.wikipedia.org/wiki/Ordinances_of_Justice">get rid of the nobility of Florence</a>. They massacred most of them, cut their heads off, put them on pikes, burned their houses down, raked salt into the earth, and had a party on their graves, the way you do in the period when you&#8217;re getting rid of a class of people. There were a few noble families that they really liked who had not been part of negative stuff. They allowed them to officially renounce their nobility. They renounced their nobility, changed their names, and declared themselves commoners.</p><p>They set up a commoner republic. What that meant was the senate consisted of members of merchant guilds. A member of a merchant guild here means the owners of workshops. It&#8217;s not the guy who sits at the loom weaving, but the guy who owns the warehouse full of looms where the workers are working. The head of the sculpture works, the head of the architectural firm, not the bricklayers who are actually laying the bricks. Bourgeoisie is an anachronistic word, but we&#8217;re talking about the owners of the means of production who are themselves commoners.</p><p>They are very wealthy, but from the point of view of the diplomatic corps of any other society, all of the ruling people and all of their ambassadors are noble-blooded. 
If you&#8217;re an ambassador, you&#8217;re automatically noble-blooded. Nobody&#8217;s going to take an ambassador seriously who isn&#8217;t. From the perspective of every other polity in the world, the rulers of Florence are the rank of their valet. There is no nobility left in the city.</p><p>In fact, Florence can&#8217;t run its own armies or head its own police, because you&#8217;re not going to surrender if you&#8217;re told to surrender in the name of some guy who doesn&#8217;t have a coat of arms. That would be weird. So they actually have to hire a nobleman to come to the city and be their chief of police to arrest people in the name of the Holy Roman Emperor. One at a time, they&#8217;ll invite a skilled military commander nobleman who will come to the city. He&#8217;ll be <em><a href="https://en.wikipedia.org/wiki/Podest%C3%A0">podest&#224;</a></em>. He&#8217;ll live in the palace, which is also the prison. He&#8217;ll arrest people. He&#8217;ll enforce the law.</p><p>They will pay him handsomely at the end of the year, escort him to the gates, and then banish him from the city for life on pain of death so that he cannot return and make use of the power that he had in the city to try to take over. They&#8217;re very wary of any nobleman. They&#8217;ve set up a really weird republic&#8212;weird from the perspective of everyone around them&#8212;in which a bunch of merchants are trying to share power by being lotteried into the senate.</p><p>You put names in a bag. You examine all of the merchant members of guilds. You choose which ones are fit to serve, meaning not ill and dying, not insane, not so deeply in debt that they could be manipulated by the people whom they owe money to. Their names go in a bag. You choose nine guys at random. They rule the city. 
They are put in a palace with a tower, and they rule the city from that tower.</p><p>They&#8217;re actually locked in the tower for the duration of their time in office because if they left the tower, they could be bribed or kidnapped. They rule the city for two or three months. At the end, they are thanked for their service and escorted out, and then a different nine guys share power for the next three months. It&#8217;s a power-sharing arrangement that is designed to be tyrant-proof because you need consensus of nine randomly selected guys to decide to do anything.</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, it&#8217;s not even a majority vote, it&#8217;s consensus?</p><p><strong>Ada Palmer</strong></p><p>It&#8217;s consensus.</p><p><strong>Dwarkesh Patel</strong></p><p>Previously you were describing &#8220;kill the nobles, salt the earth&#8221;. I&#8217;m almost thinking early communists. But then you say it&#8217;s the heads of the merchant guilds who are in charge. I want to understand why merchants and entrepreneurs have notable status in Florence. What is it about the culture that makes it so? Also, the Medici, the most powerful people, their job is <a href="https://en.wikipedia.org/wiki/Usury">usury</a>. It&#8217;s like the church&#8212;</p><p><strong>Ada Palmer</strong></p><p>It&#8217;s important to remember they were nobody when this was set up. They were a minor important family.</p><p><strong>Dwarkesh Patel</strong></p><p>But the culture is getting started where somebody like that could be respected. How does that happen?</p><p><strong>Ada Palmer</strong></p><p>An important part of it is when you have a merchant capital, everybody works for somebody who works for somebody who works for the boss.</p><p>If you are a major merchant in Florence, you&#8217;re importing and exporting wool to and from all across Europe. You have employees all across Europe. 
You&#8217;re buying bulk wool from England and importing it to Florence, where you use olive oil that you&#8217;ve bought from Naples to process it into high-quality wool, which you&#8217;re then exporting to Germany and France. You are a very interconnected businessman. You have a lot of contacts, you have a lot of clout, and the employees who work for you look to you for their safety net as well as their political representation.</p><p>We&#8217;re very accustomed in the modern period to thinking of the government as being our big safety net. If we wonder who is going to fund the hospitals, whose job is it to take care of orphans, we think of the government, or maybe the church. But in this period, if you&#8217;re killed and you leave orphans behind, it is your employer whose duty it is to take care of them. If you are injured and can no longer work, it is your employer who will support you for the rest of your life while you are disabled and find you work that you can do with that disability. A huge portion of the safety net is your employer.</p><p>Are you in trouble with the law? Your employer will supply your defense attorney. Your employer will supply the persuasive note to the judge that they would very much appreciate if their person got off. This is the system known as the patronage system, and it <a href="https://en.wikipedia.org/wiki/Patronage_in_ancient_Rome">existed in ancient Rome</a>. It exists and saturates the medieval and the Renaissance worlds in which everyone is in a very interconnected hierarchy.</p><p>So if you&#8217;re a brewer and your son gets in a barroom brawl and punches somebody out and the person&#8217;s nose breaks and they die in the brawl and your son is suddenly in trouble and you say, &#8220;Oh no, I don&#8217;t want my son to be executed,&#8221; you turn to your landlord. Your landlord turns to his landlord. They turn to one of these major families. These major families are massive landowners that own dozens of apartments within the city. 
Hundreds or thousands of people work for them.</p><p>So it makes sense to everyone to be represented that way, like having a council of the CEOs of all of the organizations that employees work for, when your corporation also supplies your social safety net and you see your representation there.</p><p>It&#8217;s also a world that&#8217;s used to thinking in terms of hierarchy and very unused to thinking about real democracy. It really doesn&#8217;t have any confidence in what we would recognize as democracy. We talk about these republics, and we&#8217;re very excited by the fact that they give more power to the people than a monarchy does, but they&#8217;re still incredibly narrow oligarchic republics.</p><p>When we read Machiavelli, he talks a lot about the <em>popolo</em>, which we translate as &#8220;the people.&#8221; He talks about how important it is that the <em>popolo</em> are respected and have a voice, that the <em>popolo</em> are armed, and the government shows respect for the people by allowing them to be armed. We read this and we&#8217;re like, &#8220;This feels really familiar. This feels like documents of the founding of the US where we&#8217;re respecting and arming and trusting the people.&#8221;</p><p><em>Popolo</em> meant the top 4% economically of the population, the members of the merchant guilds. That&#8217;s the <em>popolo</em>. He&#8217;s talking about a narrow-slice oligarchy being heard, a narrow-slice oligarchy being respected. We didn&#8217;t realize that in the nineteenth century when we were excitedly translating <em>The Prince</em> and reading it as quasi-democratic. We now have read more documents of the period and realize how people use these words.</p><h3>00:38:13 - How the Medicis took over Florence</h3><p><strong>Dwarkesh Patel</strong></p><p>Florence in this period goes through five different forms of government. 
It&#8217;s this <a href="https://en.wikipedia.org/wiki/Signoria_of_Florence">republic of nine dudes in a tower</a>, as you were saying, before 1434, and then&#8212;</p><p><strong>Ada Palmer</strong></p><p>There&#8217;s a gradual takeover. There&#8217;s a gradual, what we could call regulatory capture. But an interesting detail about Florence, even as the Medici take over, is that the Medici know the people of Florence are very deeply invested in this republic and very deeply invested in its institutions. Therefore, they have to respect those institutions and proclaim respect for those institutions. So they&#8217;re going to sustain people in the named offices that there used to be. They&#8217;re going to continue to let the guilds be important and have important offices.</p><p>There was a mandatory outfit that people wore who worked in the republic. The garment over there in the corner is a <em><a href="https://www.academia.edu/3443948/Clothing_and_a_Florentine_Style_1550_1620">lucco fiorentino</a></em>. This was the garment you were mandated by law to wear if you held office in the Florentine Republic. To us, we look at it and say, &#8220;It&#8217;s a long red robe. It looks very Renaissance.&#8221; To them, it looked like a toga because of the way it was draped. They thought of this as a toga. They&#8217;re cosplaying the Roman Republic. Wearing a Florentine toga while in office was something that you did to represent your fealty to <a href="https://en.wikipedia.org/wiki/Cicero">Cicero</a> and republican values.</p><p>The dukes made their men continue to wear these. 
In fact, the first Duke, <a href="https://en.wikipedia.org/wiki/Cosimo_I_de%27_Medici">Cosimo I</a>, would wear one to costume balls as if in his heart he longed not to dress like a duke, but to dress in a toga like a republican.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s doubly ironic because when the Roman Republic turns to the <a href="https://en.wikipedia.org/wiki/Roman_Empire">Roman Empire</a>, they still have the senate. They still have all these old institutions, even though it&#8217;s no longer a republic.</p><p><strong>Ada Palmer</strong></p><p>The Roman Senate keeps meeting until 1200 AD.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s sort of doubly ironic that they are doing the same thing, but in the 1500s.</p><p><strong>Ada Palmer</strong></p><p>And it means that more rights are granted to the people of Florence than to other cities that fell to monarchies at similar points. The monarchs of Florence know they have to be careful, they have to respect rights to a certain amount, and they can&#8217;t run roughshod over them.</p><p>There&#8217;s a really cool building that I love in Florence. If you&#8217;ve been there, there&#8217;s the famous bridge, the <a href="https://en.wikipedia.org/wiki/Ponte_Vecchio">Ponte Vecchio</a>, which has little jeweler shops all along it. When you get to the end of it, there&#8217;s this funny overhead corridor, the <a href="https://en.wikipedia.org/wiki/Vasari_Corridor">Vasari Corridor</a>, which was built by the dukes of Florence to connect the old city palace where the senate used to meet&#8212;where they had to have their seat of power&#8212;to their new palace across the river, which was much bigger, where they could have grand balls and things that dukes need to have.</p><p>Because they&#8217;re so terrified of being assassinated by their own people, they built this overhead walkway that goes from one end of the city to the other so that they could walk in safety without being assassinated. 
This is a sign of a weak duke. But also, when he was building it, it&#8217;s going across the roofs and sometimes blasting off the second stories of different people&#8217;s houses. Most people, when His Grace the Duke says, &#8220;I&#8217;m gonna blast the top story off your house,&#8221; would say, &#8220;Yes, Your Grace, please continue.&#8221; There are literally severed heads of people who resisted still rotting on spikes in front of the <a href="https://en.wikipedia.org/wiki/Palazzo_Vecchio">Palazzo Vecchio</a>.</p><p>But they get to this one point where there&#8217;s <a href="https://en.wikipedia.org/wiki/Torre_dei_Mannelli">a very old tower</a>, a 500-year-old tower. This belongs to the Mannelli family, who are descended from peers of Julius Caesar and can trace their genealogy all the way back to an old Roman <a href="https://en.wikipedia.org/wiki/Gens">gens</a>. When the duke says, &#8220;We want to knock the top off your tower,&#8221; they say, &#8220;No, this is our tower. This tower has been ours since before the Medici existed as a named family. You may not knock the top off.&#8221;</p><p>And the duke does not knock the top off. The corridor goes around in this awkward square around that tower, because he knows that if he violates something as traditional and core to the civilization as the property rights of somebody who has owned something for a long time, there will be rebellion, civil war, dissent, and resistance. These are monarchs who know that they are weak and are therefore careful, and therefore more rights, like property rights, exist.</p><p>Meanwhile, across the river in Ferrara, <a href="https://en.wikipedia.org/wiki/Alfonso_I_d%27Este">Duke Alfonso I d&#8217;Este of Ferrara</a> used to wander around Ferrara buck naked with a sword in one hand and his dick in the other, to show off that nobody would ever possibly try to harm a Duke d&#8217;Este. 
He and his siblings used to do things like, if they liked a musician, kidnap them and lock them in a tower so that nobody else could hear them, or if they wanted each other&#8217;s musician, send goons to kidnap each other&#8217;s musicians. They also used to recreationally murder each other&#8217;s servants when the siblings were tiffing with each other.</p><p>That is what you do when you don&#8217;t fear your people and when you feel confident in power. They are much closer to tyrants than the Medici are ever able to be, even after the republic falls. That&#8217;s what&#8217;s so neat. Because the resistance failed, if we&#8217;re looking at it in black and white. The republic fell. There wasn&#8217;t a republic anymore. There was a duke. He took over, and the old system was gone.</p><p>But because the republic fought so hard and because the people really believed in it, the people had a lot more rights, and the tyrant was a lot less tyrannical because there had been that fight. It&#8217;s a great example of how even when resistance loses, resistance wins.</p><p><strong>Dwarkesh Patel</strong></p><p>I think there&#8217;s an interesting parallel to today, not to be too on the nose, but sometimes people debate the odds that America becomes a Putinist kind of country within a couple of decades. I think the odds are actually quite low. Just because even though constitutionally, or at least in precedent, the president is very powerful, the republican expectation is so strong. The amount of resistance faced, even when you successfully do something, demotivates the next escalation.</p><p><strong>Ada Palmer</strong></p><p>The only thing that makes resistance weak in the US is when people feel as if partial victory is failure. 
Remembering moments like this, how Florence&#8217;s resistance all the way to the end meant more liberty for the next several centuries, even under the tyrant, is how we remind ourselves that partial victory is an important thing.</p><p>Even if the worst were to happen and there were to be tyranny, that tyranny would be so much weaker because there was a lot of resistance, and traditions of resistance and structures would develop that would continue to exist.</p><p><strong>Dwarkesh Patel</strong></p><p>I think you should discuss the fact that the Medici are the bankers for the papacy. What does that mean? Why is that necessary? How are they able to make money off of that from the interest on the float?</p><p><strong>Ada Palmer</strong></p><p>When Cosimo de&#8217; Medici swings the contract as banker for the pope, it&#8217;s important to remember that when you can&#8217;t wire transfer money in the pre-modern world, collecting taxes is a very difficult and complicated business. The centralizing power that has the right to tax generally delegates it to somebody local. If you&#8217;re in a town, there&#8217;ll be a local tax collector. It&#8217;s his job to go around to everybody and collect taxes, send a portion of those taxes home to the central power, and keep a remainder to pay himself.</p><p>The central power will say, &#8220;We expect X amount of taxes from this area.&#8221; When you hear about wicked tax collectors, it&#8217;s because if you are told, &#8220;We want 10,000 florins worth of tax from this town,&#8221; but you extract 15,000, you can keep the other 5,000. The 10,000 is what you need to send to the central power, so the more you extract, the more you get paid.</p><p>This delegate system, in which there&#8217;s a local tax collector and even a more local tax collector below him who might collect tax from a particular village, means that you depend a lot upon the person whose job it is to collect your taxes. 
When Cosimo is papal banker, he is the person collecting and channeling the money from every church in Christendom when everybody puts a coin into a collection box or pilgrims come and put money. All of the wealth that&#8217;s supposed to flow back to the papacy is actually flowing to Cosimo. Cosimo is passing it on to the papacy after taking a cut.</p><p>That is a lot of money moving quickly. There is also a lot of ability to make contracts and contacts. We all know how important networking is. He rises in prominence from a banker to somebody who has enough money to effectively take over his state via manipulating the guys-out-of-a-bag system. To discuss that again briefly, if you have a system where you lottery people, <a href="https://en.wikipedia.org/wiki/Sortition">sortition</a> is the technical term for it. This is a very old form of government. <a href="https://en.wikipedia.org/wiki/Kleroterion">Ancient Athens used it</a>. It actually works really well.</p><p>But like any institution, it is corruptible. In the same way that you can corrupt voting by bribing people or manipulating the machines or manipulating voters, you can also corrupt sortition by bribing the people who pull names out of the bag. Or you can use the simpler mechanism which Cosimo uses first. If you&#8217;re a giant bigwig in the city and you employ a third of the people in the city and they&#8217;re on your payroll, and nine guys at random are chosen out of a bag, three of them are going to be your guys, just statistically.</p><p>If you tell all your guys, &#8220;I want this policy, this policy, and this policy, and if you have questions, send for me and I&#8217;ll tell you what to do,&#8221; when the plurality on a random council all have a plan and it&#8217;s your plan, you effectively control the city. 
In that way, the Medici effectively controlled this lotteried system, because they guaranteed that the plurality, in a situation that doesn&#8217;t have a majority, will always be them.</p><p>But of course, there&#8217;s an element of chance to that. In 1433, Cosimo has bad luck, and the lottery draws a lot of people who dislike him and doesn&#8217;t draw any of his guys. They immediately declare him a traitor to the state, arrest him, and <a href="https://en.wikipedia.org/wiki/Cosimo_de'_Medici#Florentine_politics">lock him in a tower</a>.</p><p>And he bribes his way out. He offers the equivalent of about $300,000 to the guard outside the cell and $700,000 to the captain of the guard to smuggle him out of the tower. He wrote in a letter later that they were the two most foolish men he&#8217;d ever met because he was Cosimo de&#8217; Medici. He would happily have paid them tens of millions of dollars to let him out of there, but they weren&#8217;t ambitious enough to think to ask for more than a few hundred thousand.</p><p>So he escapes, and then at the next election they happened to elect entirely people who just loved Cosimo. They invited him back to the city in triumph, declared him father of the fatherland, and arrested and persecuted all of his enemies, who turned out to be guilty of tax evasion and all sorts of other things.</p><p>That was the moment that his grip tightened. And he&#8217;s like, &#8220;I&#8217;m going to stop simply controlling a plurality, and I&#8217;m going to start bribing the people who actually run the elections.&#8221; His famous quote about this is, &#8220;It is dangerous to be rich and not powerful.&#8221; You need the power to defend yourself in a situation like King of the Mountain, where when you&#8217;re on top, everyone will try to knock you down.</p><p>This is the system into which Machiavelli is born. His family has worked for the Medici family for generations. He grows up expecting to work for the Medici family. 
But the problem with heredity is that sometimes you get a weak link.</p><p>And in the moment that Machiavelli is in his early twenties, he is coming of age, about to work in government for the first time, a government in which he is not, in fact, even fully enfranchised. That&#8217;s one of the fascinating things about the degree of his patriotism. You weren&#8217;t allowed to serve in government office fully&#8212;the lotteried offices&#8212;if your family was deep in debt. His grandfather had a lot of unpaid tax debt.</p><p>So he worked his whole life for a government of which he was not even quite a full citizen. That shows a deep love of country, but it also shows that even people who could not be in office deeply loved and cared about this republic and the important liberty they felt they had being ruled by the 5% instead of being ruled by one dictator.</p><p>To us, that isn&#8217;t a very big difference. They&#8217;re still both not democracy. We would say they&#8217;re both not liberty in the sense that we want liberty. But it&#8217;s an inch more liberty than monarchy. Even that small amount of liberty, people loved it. People were willing to fight for it. People were willing to go to the streets, wave their banners, and say &#8220;libertas&#8221; for the republic. Because they were invested in it, Machiavelli observes, they sustained it.</p><p>But eventually, <a href="https://en.wikipedia.org/wiki/Piero_the_Unfortunate">one particular Medici</a>&#8212;I&#8217;m not saying names because they all have the same names over and over, and it&#8217;s really confusing&#8212;comes to power quite young and weak. He&#8217;s basically 20 when he&#8217;s suddenly in charge of a very precarious republic. Right then, the French are invading Italy, and he&#8217;s scared. He botches the diplomacy with France and falls into disrepute, and the city takes the opportunity to kick him out. 
The subsequent regimes, which are an independent republic again, are the ones for which Machiavelli works.</p><p>He was part of the regime that ruled while they were in exile. When they returned, they viewed him as an enemy. He didn&#8217;t actively organize to resist them, but his name was found on a list of potential people that an anti-Medicean resistance movement had intended to recruit. He is arrested, tortured, exiled, and in exile writes <em>The Prince</em>.</p><p>He dedicates it to the very family that exiled him because they now control Florence, and he will only work for Florence. He doesn&#8217;t want his manual of the great secrets of statecraft to be in the hands of anybody but his homeland, so that it will defend his homeland.</p><p>When Florence exiles you, they tell you, &#8220;Go to this place and wait, and if you&#8217;re good, we&#8217;ll invite you back.&#8221; Florence has been doing this for ages because Florence actually used this as the core of its diplomatic corps. When you have no nobility, you can&#8217;t have ambassadors in the full-on noble ambassador sense. There&#8217;s nobody in the city of sufficient rank to go talk to the kings, to play chess with the sultan, and do all the things you have to do to be a proper ambassador.</p><p>What Florence did instead is exile people and say, &#8220;Okay, we&#8217;re exiling you. You go to Bruges. Be our contact in Bruges. You go to London. Be our contact in London. Be good. Send us letters informing us what&#8217;s going on. When we have diplomatic needs to talk to the king, we&#8217;re going to send letters to you, and you&#8217;re going to forward them. If you&#8217;re good, you get to come back.&#8221; So being in exile is sort of being on probation, but also being entrusted with state matters.</p><p>That&#8217;s not quite what they did with Machiavelli. 
With Machiavelli, they banished him to a hamlet in the middle of the Tuscan countryside near nothing important and said, &#8220;Go sit in the country and rot, and if you&#8217;re good, we&#8217;ll invite you back.&#8221;</p><p>What everyone expects is that Machiavelli will break that promise and leave. Because he&#8217;s a well-known statesman, a scholar, a playwright, and a historian, and there are dozens of cardinals in Rome and other cities that would love to employ him. Kings of England love employing Florentines to work for them as secretaries. <a href="https://en.wikipedia.org/wiki/Kingdom_of_Naples">Kings of Naples</a> love employing Florentines to work for them as secretaries. He might go get a job tutoring the daughters of the Duke of Milan, the way <a href="https://en.wikipedia.org/wiki/Francesco_Filelfo">Francesco Filelfo</a> did when he was kicked out of Florence for opposing the Medici.</p><p>There are lots of places it&#8217;s expected an exiled Florentine intellectual will go where he will have the ear of power and be able to exert influence. He will be a mover and shaker at the court of Milan or Naples or England.</p><p>Instead, when they say to Machiavelli, &#8220;Sit in the country and rot, this is a test,&#8221; he passes the test and sits in the country faithfully and rots. If he had wanted to go be an intellectual power broker, the correct move is to run off to Rome and say, &#8220;I will give up the chance to go home the way <a href="https://en.wikipedia.org/wiki/Dante_Alighieri">Dante</a> did, but I will be a Florentine in exile, and I will write important things. I will live at the house of wealthy men who will support me and give me the ear of power, and I will exert my influence in that way.&#8221;</p><p>He does not do that. He stays in the country and he rots, and he continues writing letters home saying, &#8220;I will serve you or nothing. 
Bring me home to serve my country.&#8221; That is a weird thing to do, and not normal for the many other Florentine intellectuals who experienced similar banishments in the same period.</p><p><strong>Dwarkesh Patel</strong></p><p>How do we know that he wasn&#8217;t just trying to get back into power?</p><p><strong>Ada Palmer</strong></p><p>The answer is you read his personal letters. You read the way he talks about love of his country, and you read the way he talks to his friends. You read the letters he wrote when he discusses writing <em>The Prince</em>, and you read the comments he exchanges with the other friends that he shared it with.</p><p>His other works&#8212;<a href="https://en.wikipedia.org/wiki/The_Mandrake">his comic play</a>, which was a big hit, his <a href="https://www.gutenberg.org/files/2464/2464-h/2464-h.htm">history of Florence</a>, which was well known at the time&#8212;those he published and circulated. <em>The Prince</em> he kept in very close private circles, circulating it only with trusted, intimate friends, and then the copy that he sends in to Florence.</p><p>Yes, it&#8217;s a job application: &#8220;Please bring me back. I will work for you. I will be loyal. I support my city more than any particular iteration of my city. I support my country more than any particular regime or group that might be in power. Whatever is in power in my city, I will be faithful to it.&#8221; You see him expressing that in lots of different ways.</p><p>When in <em>The Prince</em> he says you can and should do all of these ruthless things to keep power, we have to remember that the end justifies the means when the end is the survival of your country. It&#8217;s not that the end, in general, justifies the means. Machiavelli feels very strongly that regime changes bring civil violence, and civil violence sheds blood. 
He has seen the streets of his city run with blood before.</p><p>He thinks that even life under a tyrant is better than life in a civil war, which is usually not life at all, given the massacre of the people and external conquest that are likely as a result of another regime change. So he says, &#8220;Don&#8217;t push for regime change. Even if the regime is tyrannical, more people will survive by sticking with the tyrant than by changing the regime.&#8221;</p><h3>(00:58:12) - Why it was so hard for Gutenberg to make any money off the printing press</h3><p><strong>Dwarkesh Patel</strong></p><p>I want to talk about the printing press. One thing I didn&#8217;t realize before reading your book is that not only does <a href="https://en.wikipedia.org/wiki/Johannes_Gutenberg">Gutenberg</a> go bankrupt after making the most significant invention of the millennium, but his apprentices also go bankrupt.</p><p>This is at a time when people like Cosimo are willing to pay on the order of hundreds of thousands of dollars per book. So with the guy who invents a way to make this way cheaper, how is this possible?</p><p><strong>Ada Palmer</strong></p><p>The problem is printed books are a mass-produced commodity in a world that does not have distribution networks for mass-produced commodities. Mass production is incredibly rare in this period. Coins are mass-produced, but that&#8217;s really about it. Almost everything is artisanally produced. When you have a mass-produced product, you need a distribution mechanism before you can sell it.</p><p>The great example is that technically e-books existed the first time anyone typed a book on a computer. Certainly in the 1970s there was such a thing as an e-book. But there was no market for e-books until the Kindle came out and made a commodity way to buy and sell e-books; only then did the e-book industry come into existence. 
So the e-book as a commodity is several decades younger than the e-book technically existing.</p><p>In the same way, you&#8217;re Gutenberg. You have figured out how to produce 300 copies of a book for the cost of one copy of a book. You do so. You print your Bible. You have 300 Bibles. You sell seven of them to the seven people in your small landlocked German town who are legally allowed to read the Bible in a period in which only priests are allowed to read the Bible. Congratulations, Mr. Gutenberg, you have 293 Bibles, and you can&#8217;t sell them, and you go bankrupt.</p><p>There has to be a distribution mechanism for books to find their market because there are certainly 300 people in Europe that want this, but there are not 300 people in one location where it&#8217;s being produced. So Gutenberg goes bankrupt. The bank seizes his press. They try to go into the business. The bank goes bankrupt. There is so much overhead. You spend hundreds of thousands of dollars on the production cost of the books, and then you get nothing back.</p><p>Gutenberg&#8217;s apprentices build presses. They go bankrupt. They flee their debts, flee the country, leave Germany, and go to Venice. Venice is the airport hub of the Mediterranean. Venice is where you change boats. If you&#8217;re sailing from A to B, you go to Venice, you change boats, you get to the next place. The hub system has always worked well.</p><p>So if you&#8217;re printing in Venice, you print 300 Bibles, you give ten Bibles to each of thirty ships&#8217; captains going to thirty different cities. They can sell them. The first economically sustainable circulation of print is enabled by the hub system.</p><p>Then book fairs come into existence in which printers will spend all year printing a book. They go with a thousand copies of their book to a book fair where there are a thousand other printers. 
They all trade, and then they go home to their town with five copies each of 200 books instead of a thousand copies of one book, and then they sell them in bookshops. Things like the <a href="https://en.wikipedia.org/wiki/Frankfurt_Book_Fair">Frankfurt Book Fair</a>, which still exists today, developed as the distribution mechanism.</p><p>There&#8217;s a slow growth and a slow saturation. That&#8217;s really cool because one of the things people think is unique about our present information revolution is that we&#8217;re living in this sequence of successive information revolutions. We had the computer, the computer was exciting. Then we had the personal computer, then we had the internet, the cell phone, social media, and now we have different social media networks coming in successively causing crises one after the other. And then we have LLMs and other applications of machine learning and generative AI.</p><p>It&#8217;s easy to think of each of these as different tech revolutions, as if we&#8217;ve just had ten tech revolutions in a row. But really, they are all deeper penetration of one tech revolution: the development of the computer. These are all applications of computers.</p><p>In the same way, the printing press comes in in 1450, and it isn&#8217;t done shaping the world instantly. It takes forty years to even be economically sustainable. It&#8217;s not until the 1490s that printers are making money.</p><p>And then in the 1510s, it&#8217;s time for pamphlets and pamphlet distribution. Now there&#8217;s news, and news is suddenly done by print, and that&#8217;s a revolution on the same scale as the difference between computers and cell phones. 
We get the <a href="https://en.wikipedia.org/wiki/Reformation">Reformation</a>, which is enabled by pamphlets in exactly the same way that the <a href="https://en.wikipedia.org/wiki/Arab_Spring">Arab Spring</a> is enabled by cell phones.</p><p>Then we get the newspaper, another new application of the same technology that follows, like social media. It&#8217;s one information revolution having multiple successive revolutionary applications as it disseminates and eventually saturates. It moves on a timescale quite similar to the timescale in which the digital one is happening as well, so that print keeps hitting Europe with successive revolutions for 150 years.</p><p>And every couple of decades, or every decade, there&#8217;ll be a new bang. Suddenly it&#8217;s possible to get a printed pamphlet from Wittenberg to London in seventeen days. Oh my God, we can coordinate our resistance movement against the Catholics. Boom. The Reformation happens. That wasn&#8217;t possible even a decade earlier when it took months to get a pamphlet from one end of Europe to the other.</p><p>So it&#8217;s best to think of these very much in parallel, the print revolution and the digital revolution, as one big technological change in information that then has successive applications as that one technology finds new forms and disseminates more deeply and keeps having consequences over decades. It&#8217;s not multiple separate revolutions. 
It&#8217;s one ongoing information revolution.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe other eras also have this and I just haven&#8217;t read the books about them, but from your book, I thought, &#8220;Oh, history just seems to be happening really fast, and seems to have sped up, especially religious and political history.&#8221; Obviously, the things happening in Italy, but even aside from that, you have <a href="https://en.wikipedia.org/wiki/Martin_Luther">Martin Luther</a> and the Reformation, and then just twenty years later <a href="https://en.wikipedia.org/wiki/English_Reformation">England splits off from the Catholic Church</a>, which is unprecedented in two millennia.</p><p><strong>Ada Palmer</strong></p><p>Then it has a bunch of tumults that flop, flop, flop so that every decade feels different. Here you are in 1506 being nostalgic for how the world was completely different in 1490. And you&#8217;re like, &#8220;That&#8217;s pretty fast.&#8221; Here we are in 2026 often feeling nostalgic for how things were in the year 2000.</p><p><strong>Dwarkesh Patel</strong></p><p>Is it fair to trace that back to the printing press or its offshoots, or is it just embedded?</p><p><strong>Ada Palmer</strong></p><p>It&#8217;s more that history has always moved fast. But when we teach it in high school, we&#8217;re trying to move over large chunks of time quickly, and so we pretend that it moved slowly. We have this lie that there were long periods of stagnation. But you can zoom in anywhere, and you&#8217;re going to find every decade feels different, and people in the 1320s are nostalgic for people in the 1300s.</p><p>It&#8217;s always felt like history was moving very quickly, and things rose and things fell. 
It&#8217;s the lies we tell ourselves in history books written in the 19th century that are trying to group all of these things together and make modernity special that confuse us about this.</p><p>I&#8217;m working on a paper right now about the video game <a href="https://en.wikipedia.org/wiki/Civilization_(series)">Civ</a>. Civ is the number one teacher of history in the world. It has shipped 70 million copies, and 65 percent of people on Earth who have technology play video games. Civ is the number one teacher of history, bar none, since 1991.</p><p>What does Civ tell you? Civ tells you that in antiquity, a turn is fifty years, and then in the Middle Ages, a turn is twenty-five years. Once you get into the Industrial Revolution, a turn is ten years, and then five years, and in modernity, a turn is just one year because in one year, as much happens now as happened in fifty years in antiquity. That lie is also what our textbooks tell us.</p><p>But it doesn&#8217;t matter where we zoom in. Any time I go to a talk where any historian is zooming in on any decade in any time and place, it always feels like it&#8217;s moving as fast as our present is moving.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess the difference is that technologically, we know that they weren&#8217;t moving as fast.</p><p><strong>Ada Palmer</strong></p><p>Technologically, they were moving fast. We just don&#8217;t care about those technologies anymore.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s interesting.</p><p><strong>Ada Palmer</strong></p><p>They were constantly inventing all sorts of things. We just take them for granted. The invention of chairs with backs, the invention of scissors, the invention of improved metallurgy so that steel could do things steel couldn&#8217;t do before. 
There was always technological change happening.</p><p>I&#8217;m in the middle of reading an amazing book about how, when you look at the paintings of <a href="https://en.wikipedia.org/wiki/Raphael">Raphael</a> and the few paintings we have by <a href="https://en.wikipedia.org/wiki/Michelangelo">Michelangelo</a>, the colors look like they&#8217;re really glowing, like gemstones. How did that happen? When you compare them to paintings from just a hundred years earlier somehow the colors are flatter. I&#8217;m not talking about the anatomy being more realistic. That&#8217;s separate, but the colors are flatter.</p><p>The answer is there was a sequence of revolutionary adaptations in how to process oil and how to process colors and mix them together, and then those were used to create fake gemstones, and there was a major industrial leap forward in the fake gemstone industry. Then people who were making picture frames realized they could use the same techniques from the fake gemstones to make fake gold by painting yellow over the surface of tinfoil.</p><p>And then those were used by artists who were like, &#8220;Wait, I want to make things that look like they glow like fake gemstones.&#8221; There were eleven major technical revolutions over the course of 120 years that led to those colors changing.</p><p><strong>Dwarkesh Patel</strong></p><p>Obviously progress has been happening in individual fields over time. But in this macroscopic view, and this is a big part of your book, there&#8217;s a reason that people living in the fourteenth century would say, &#8220;Look, the best time to be alive was when the Romans were around, and since then it&#8217;s just been the <a href="https://en.wikipedia.org/wiki/Dark_Ages_(historiography)">Dark Ages</a>.&#8221;</p><p>If they stood in relation to the Roman Empire as we stand to them, we would obviously notice that the world has seen so much progress since then. 
It clearly seems like the pace...</p><p><strong>Ada Palmer</strong></p><p>It&#8217;s hard to figure out when we are lying and when we are right when we say the pace picked up. One thing that makes the pace pick up in the modern day is simply that the population grew and grew and grew and is now much, much larger. The majority of people who ever lived, in the entire history since humans have been humans and not hominids, have lived in the last 200 years because the population became massive. How did the population become massive? Our agriculture and our hygiene enabled it.</p><p>How did our agriculture and our hygiene improve? Half of that is continuing on the artisanal level to invent new things in the same way that the artists invented better colors. Agricultural workers invented better technologies, and agriculture was constantly improving. You&#8217;re correct that with the <a href="https://en.wikipedia.org/wiki/History_of_scientific_method">arrival of the systematic scientific method just after 1600</a>, there is a deliberate societal desire to create intentional anthropogenic progress. I&#8217;ll zoom in on the arguments made in 1600, then I&#8217;ll zoom out and unpack them.</p><p>In 1600, the idea is that history up until now has been unsystematic. People have discovered things at random, but we can create a method in which we observe the world and use inductive reasoning to figure things out from those observations to create systematic descriptions of the secret motions that underlie nature, and from that work out technologies that are good and useful for humankind. 
If, as we make our observations of nature, we publish them and share them with each other, we can create a community of scientists that will share all of these discoveries with each other and with the world and therefore benefit it.</p><p>This is where, when I&#8217;m doing this in the classroom, I deliberately provoke and shock my students with the fun claim that <a href="https://en.wikipedia.org/wiki/Leonardo_da_Vinci">Leonardo da Vinci</a> was not a scientist. What I mean by that is that to be a scientist is to publish your results and share them with a community of other scientists so that they can test them, so that the whole human civilization progresses a little bit. When my friends who are chemists or my friends who are particle physicists discover something, the next goal is to share that discovery with everyone so everyone&#8217;s knowledge advances.</p><p>What does Leonardo do? He writes everything he discovers down in coded mirror writing so that nobody but him can possibly use it. He refuses to share even with his students and assistants the secrets of what he&#8217;s doing because Leonardo does not want to contribute to human progress. Leonardo wants to make unique masterpieces so that hundreds of years later, people will see them and marvel and say, &#8220;How did he do it? No one else has ever been able to replicate that method.&#8221; He wanted to be marveled at by the future exactly the way he and his peers marveled at the works of the ancients.</p><p>They look at something like the <a href="https://en.wikipedia.org/wiki/Colosseum">Colosseum</a> or the <a href="https://en.wikipedia.org/wiki/Pantheon,_Rome">Pantheon</a> in Rome with its enormous dome, and they say, &#8220;How did they do it? 
If only we could work that out, we could make one and then make sure no one else could.&#8221; <a href="https://en.wikipedia.org/wiki/Filippo_Brunelleschi">Brunelleschi</a>, who built Florence&#8217;s famous beautiful dome, deliberately burned all of his notes and schematics so that nobody else would be able to replicate his work. That is an inventor, and an engineer, but in the sense of a community of scientists, this is not a servant of human progress. This is actually a saboteur of human progress, if anything, who deliberately makes progress and then tries to cut it off at that point so that no one else can be his peer.</p><p>That is what you did as a learned inventor in the 1400s and in the 1500s. But as you get to 1600, the suggestion is different, and here I&#8217;m going to use <a href="https://en.wikipedia.org/wiki/Francis_Bacon">Francis Bacon&#8217;s</a> gorgeous <a href="https://plato.stanford.edu/entries/francis-bacon/#CriEarPhi">simile of the three insects</a>. There are three types of knowledge wielders, says Bacon.</p><p>First, there is the ant, who is the encyclopedist, who gathers information from all around the world. He learns everything he can, and he piles it up into a great big pile. He makes an anthill, and he sits on top. If he has the biggest anthill, the biggest pile of knowledge, then he&#8217;s proud of having made it. But all he does is assemble it and possess it. It&#8217;s a beautiful library, but nothing comes from it.</p><p>The second type is the system weaver, the spider who spins elaborate webs of beautiful, intricate, logical theory. You admire them, and you can get entranced and ensnared in them easily because they&#8217;re so beautiful. They&#8217;re almost hypnotic. But there&#8217;s nothing real in them. 
They&#8217;re all just spun out of the body of the spider himself, the theorist theorizing from his own mind.</p><p>The third kind, says Bacon, is the honeybee, who, gathering from among the fruits of nature, processes what he gathers through the organ of his own being to produce something which is sweet and useful for humankind. That is the scientist who gathers from nature to produce something sweet and useful for humankind.</p><p>With this rhetorical call, and with Francis Bacon&#8217;s portrait on the title page, the <a href="https://en.wikipedia.org/wiki/Royal_Society">English Academy of Sciences</a> is founded and starts publishing. The standard switches over from &#8220;You are not a great achiever because you built the dome&#8221; to &#8220;You are a great achiever because you worked out how it can be done, and you shared that sweet and useful thing with all of humankind.&#8221;</p><p>Bacon says if we do this, if we make academies of sciences, we can make sure that every human generation lives in a better condition than the past. We&#8217;ll have better agriculture, fewer famines. We will have refrigeration. We&#8217;ll have chicken in winter. We will have all of these things that we aspire to. If we collaborate, each generation&#8217;s experience will be better than the last. He says that to be a scientist is the ultimate act of charity because there is no greater act of charity than to give a gift to every human who will ever live after you.</p><p>That is the rhetoric of what you would feel was happening if you&#8217;re alive in the 1620s and 1630s. <a href="https://en.wikipedia.org/wiki/Galileo_Galilei">Galileo</a> is publishing his observations, and <a href="https://en.wikipedia.org/wiki/Ren%C3%A9_Descartes">Descartes</a> is publishing his systems. 
They&#8217;ve just <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC3721262/">discovered that the heart is a pump</a> and that they were totally wrong about the <a href="https://en.wikipedia.org/wiki/Humorism">four humors theory</a>. The blood circulates, and they&#8217;re trying to figure out what it does. They have magnification, and they can see worlds of complex patterns on the wing of a flea. It sounds like the whole world is suddenly coming into view, and we&#8217;re at the beginning of progress.</p><p>If we zoom out, we would say there&#8217;d been progress the whole time. People had always been inventing things. Agriculture in France was better in 1300 than it was in 1000. Plows got better, seed got better, cabbages were bred to be bigger. People worked out better pots. There were always artisanal inventors.</p><p>In fact, that&#8217;s a lot of what Bacon is observing. He worked in the patent office as a young man, and he would see a carpenter come in to patent: &#8220;I have invented a better chisel. I&#8217;ve invented a thing that goes like this. I&#8217;m going to patent it.&#8221; He would realize that it was workers and workmen and handicraftsmen who were inventing the really useful tools. He wanted to make this systematic.</p><p>We would say there was always anthropogenic progress. In 1630, they realize there is anthropogenic progress. They think there hasn&#8217;t been. They think they&#8217;re beginning, and that history up until this point has been stagnant, but now it&#8217;s going to suddenly be full of invention as, for the first time, there will be deliberate anthropogenic progress. Really, we would say there always was and that it&#8217;s accelerating, and at this point, we realize it and articulate and describe it.</p><p>You&#8217;ve probably seen lots of graphs of history with the hockey stick graph structure, where it&#8217;s flat for a long time and then zhoops up. 
They&#8217;ll put that zhoop after the invention of the scientific method. It depends on what we&#8217;re graphing, whether that zhoop is appropriate. It also depends on how much you zoom in or zoom out.</p><p>It&#8217;s true, we do get to inventions that result in enormous increases in population 150 years after Bacon. Would we have gotten there anyway, even if it hadn&#8217;t been systematized? Probably a bit later, and we would have a slightly flatter hockey stick. But we would still have hockey sticked. In the same way that when you put mice on an island without mice, they breed and they breed and they breed and they breed and they hockey stick. Humans would also have hockey sticked. But would we have hockey sticked later? Would we have hockey sticked with more pain? When mice hockey stick, they also starve to death and eat each other. We haven&#8217;t done that yet. Go us.</p><p>Was that science? Probably. There are a lot of factors to it. So is it true that everything accelerated after 1620? In one sense, yes. In another sense, it&#8217;s a continuation of a curve that was already curving.</p><h3>(01:17:34) - Why the Industrial Revolution didn&#8217;t happen in Italy</h3><p><strong>Dwarkesh Patel</strong></p><p>I think you might have answered a question I was about to ask. The book you recommend on <a href="https://www.adapalmer.com/">your website</a>, <em><a href="https://amzn.to/4r862y8">The Renaissance in Italy</a></em>, I keep forgetting the name of the author. Italian names are tough.</p><p><strong>Ada Palmer</strong></p><p><a href="https://en.wikipedia.org/wiki/Guido_Ruggiero">Guido Ruggiero</a>.</p><p><strong>Dwarkesh Patel</strong></p><p>In some part, he has this question: Look, in Italy, as you mentioned, in Venice, they&#8217;ve really scaled the printing press. As a result, you have the metalworking for fine typesetting. Separately, milling technology for water mills and windmills is advanced, along with gears for watches. 
So he asks, why didn&#8217;t Italy have the Industrial Revolution? I wonder, do you stand by the answer you just gave, or is it a different theory?</p><p><strong>Ada Palmer</strong></p><p>Part of it. But another part is that we cannot overstate how much richer per square meter Italy is than everywhere else. Italy is the breadbasket, and it&#8217;s also the center of Big Oil, which is to say Big Olive Oil, which was both fuel oil for light and industrial oil for production, as well as cooking and eating oil. And it&#8217;s the center of the other major industry of the period, which is Big Wool.</p><p>If you&#8217;re already the center of Big Finance, Big Wool, and Big Oil, do you need an industrial revolution? You&#8217;re already economically on top through the power of agriculture. It makes sense for it to have happened in a sort of industrial backwater area. What was England producing? Crappy quality wool?</p><p>England was so aware that it couldn&#8217;t process wool into high quality without masses of olive oil, which it couldn&#8217;t produce, that England just exported its crude wool to Florence in order to have Florence, with its olive oil reserves, produce the fine quality. Think about how a wool suit isn&#8217;t itchy, but a wool blanket often is. That wool suit isn&#8217;t itchy because lots of olive oil went into the process of producing it, at least at pre-modern tech levels. So do you want England to produce your itchy wool that people will only pay a small amount for, or do you want to export it?</p><p>It makes sense for it to have happened somewhere industrially ambitious that wasn&#8217;t already economically on top. That&#8217;s one reason that industrialization doesn&#8217;t kindle in Italy. Italy is agricultural land and a finance world. It doesn&#8217;t feel like it needs a new industry.</p><p>Another factor is mining. This land is more valuable as a farm than it is as a mine. You don&#8217;t want to rip it up. 
Another is that it&#8217;s so subdivided, because those rich cities are still mostly independent, whereas a centralized crown in England is more able to pass legislation to facilitate a massive transformation.</p><p>No city really wants to be the one where the giant industrialization is happening. It&#8217;s awful for the city. Note that the industrialization of the Industrial Revolution was mostly outside of the wealthier centers of England, in the second-tier towns. They grow massively into huge industrial areas like Lancaster. So those are a whole bunch of reasons.</p><p><strong>Dwarkesh Patel</strong></p><p>But I would have also thought that the competitiveness between different Italian city-states would have made it so that if they get better textile machines before you, it&#8217;s a disaster because they&#8217;re right there.</p><p><strong>Ada Palmer</strong></p><p>This is not going to sound plausible to anybody, but it&#8217;s true. We&#8217;ve been looking at some documents recently which pretty much confirm that they did figure out how to make industrial looms in the 1400s, and they didn&#8217;t want to. They wanted to make luxuriant artisanal fabrics.</p><p><strong>Dwarkesh Patel</strong></p><p>This, by the way, was another interesting thing from the book. With the first printed books, there&#8217;s not this market of commodity things that are produced cheaply that the average person is going to be like, &#8220;Oh, if I can get this for $10.99, I&#8217;ll go buy it.&#8221; So they&#8217;re trying to make this thing look like it was produced as artisanal luxury grade.</p><p><strong>Ada Palmer</strong></p><p>Right. The first printed fonts look like handwritten scripts, and often have a blank space to illuminate it so that it looks just as fancy as manuscripts.</p><h3>(01:23:02) - The Library of Alexandria isn&#8217;t where most ancient books were lost</h3><p><strong>Dwarkesh Patel</strong></p><p>One thing I wanted to ask you, back to the printing press. 
Not only does printing get cheaper, but around this time, paper itself also gets cheaper. So not just reading, but writing gets cheaper. Do you as historians see a marked change in this period in the amount of records that are taken and, as a result, our understanding?</p><p><strong>Ada Palmer</strong></p><p>A huge amount rests on whether you have a cheap writing surface. Rather than looking first at the Renaissance, let&#8217;s look at what we think of as the <a href="https://en.wikipedia.org/wiki/Fall_of_the_Western_Roman_Empire">fall of Rome</a>. One of the biggest things that happens there is that Western and Northern Europe lose access to papyrus. Papyrus is the cheap writing surface of antiquity. It is an easy plant-based writing surface.</p><p>You take this tall, thin water reed that is fibrous like asparagus. You slice it into ribbons. You set them out in the sun, a bunch of them parallel to each other sitting on a stone like noodles. You put a second row of noodles perpendicular to that on top, and then they dry in the sun, and they are naturally sticky. They stick to each other. They produce a sheet. Practically no labor has gone into this. You&#8217;ve sliced, you&#8217;ve laid out, boom.</p><p>Papyrus is a very inexpensive writing surface, and this is what enables Rome to have a bureaucracy and to have libraries in any mid-sized city. People can send letters back and forth. There can be enormous tax records. Sometimes when Egypt and Rome are at war, Egypt will be like, &#8220;No, we are angry. We&#8217;ll stop exporting papyrus.&#8221; No papyrus to Rome, and then Rome&#8217;s infrastructure will fall apart overnight because you can&#8217;t do anything if you can&#8217;t write stuff down.</p><p>Papyrus is a warm weather plant. It is killed by frost. You cannot grow it north of the frost line. 
So in France, Spain, even most of Italy, you can&#8217;t grow it; you can only grow papyrus in the very tip of Italy, down in Sicily.</p><p>Without papyrus, what you&#8217;re writing on is a dead sheep. If you think of the price of a head of lettuce and the price of a leather jacket, you&#8217;re understanding the difference between a sheet of papyrus and writing on a dead sheep. Every page of a medieval book is as expensive as that much of a leather jacket. A medieval book handwritten on parchment costs as much as a house, so that a small pocket copy of a book costs as much as a studio condo. A big illuminated fancy Bible, you&#8217;re spending on that what you would spend on a villa in the countryside.</p><p>This is an enormous expense. To have a library is to be not just rich, but mega-rich. Only the wealthiest cities contain anybody who has a library. The great library of the <a href="https://en.wikipedia.org/wiki/University_of_Paris">University of Paris</a>&#8212;<em>the</em> library from Europe&#8217;s perspective&#8212;has six hundred books. There&#8217;s definitely more than six hundred books in this room. Every kiosk at an airport selling Dan Brown novels has more than six hundred books. This is nothing.</p><p>At the same time as that, in the Middle East, sultans have libraries of over a thousand books or five thousand books. There are libraries in Sub-Saharan Africa with thousands of books. There are libraries in China with thousands of books because they have cheap paper, rice paper. The Middle East has papyrus. Europe, and only Europe, is writing on a leather jacket.</p><p><strong>Dwarkesh Patel</strong></p><p>What changes around this time? How is Europe able to get paper?</p><p><strong>Ada Palmer</strong></p><p>Still zooming in on the fall of Rome. Rome had lots and lots of books on papyrus. They start falling apart because papyrus is brittle. 
Most of our knowledge from antiquity is not lost at the burning of the <a href="https://en.wikipedia.org/wiki/Library_of_Alexandria">Library of Alexandria</a>. It&#8217;s lost between 400 and 600 A.D. when the papyri are falling apart.</p><p>Here you are with a library of a thousand books, and you can only afford to make a hundred new books. You have to choose which hundred of these thousand to save because there literally is not enough industry on your continent to make enough leather to copy down all this text. You have to pick. The majority of what we lost from antiquity, we lost then.</p><p>We lost it when the papyri were falling apart. This also distorted what survived because most of the copying out was done by monks. When you have a thousand books and you can only save a hundred of them and you&#8217;re a monk, you&#8217;re like, &#8220;What will I save? I know, <a href="https://en.wikipedia.org/wiki/Augustine_of_Hippo">Saint Augustine</a>. I love Saint Augustine.&#8221; This is why we have more surviving work by Saint Augustine than the entirety of all pagan classical Latin. The subjective tastes of the people in power at the moment the papyri were falling apart ended up being an unintentional moment of censorship that biased what survives from antiquity.</p><p>Paper technology hits Europe in 800 A.D., so we&#8217;re talking about a four-hundred-year famine of a cheap writing surface. Paper is nowhere near as cheap as papyrus because you need to gather rags from used clothing. You immerse them in water, and you beat them violently using a mill for a very long time until they become a pulp. You then scoop that pulp up on a screen, and the fibers lock together. It&#8217;s sort of a slurry that looks like grits. You lift up the slurry, and it locks together into a sheet of paper.</p><p>It&#8217;s not as cheap as just growing papyrus, and it&#8217;s much more labor. You have to build a paper mill. 
If parchment is a leather jacket and papyrus is buying a head of lettuce, this is somewhere in between. What&#8217;s in between a leather jacket and a...</p><p><strong>Dwarkesh Patel</strong></p><p>This feels like a trick question.</p><p><strong>Ada Palmer</strong></p><p>This is somewhere in between: getting yourself a dozen frozen prepackaged meals, which are complex and have many ingredients. A lot of industry went into producing the actual packaging, more so than a head of lettuce. So it&#8217;s ten times as expensive, but it&#8217;s still a tenth as much as the leather jacket.</p><p>Paper comes in, and people are very wary of it. Paper is clearly not as strong as parchment. Parchment is really tough stuff. People start using paper for rough drafts, letters, sketchbooks. When you&#8217;re doing the sketch before doing a painting, you might do that on paper. But Europe has paper for four hundred years before the earliest state document is ever written on paper, to give you a sense of how people are wary of it.</p><p>It disseminates slowly. It&#8217;s still expensive. It requires industry and production, but it is a tenth as expensive as leather. Paper disseminates slowly through Europe. Again, this is one of these things where there was always technological change, and all technological changes are gradual.</p><p>Paper comes in in 800. It&#8217;s being trusted by 1200. When printing begins, they&#8217;re printing on paper, but they even print on <a href="https://en.wikipedia.org/wiki/Vellum">vellum</a>. If you&#8217;re a really rich person, you would be like, &#8220;Please print two copies on vellum for me.&#8221; Dukes like the d&#8217;Este: <a href="https://en.wikipedia.org/wiki/Isabella_d%27Este">Isabella d&#8217;Este</a>&#8212;the sister of the duke who walked around buck naked to show off that he could&#8212;specially ordered all of her books to be printed on vellum even when the rest of the print run was on paper. 
These are the very books being produced in Venice by the apprentices of Gutenberg who ran away.</p><p>At that moment in the 1490s, if you&#8217;re really rich, you might be invested in these newfangled printed books, but you&#8217;re still not trusting paper, even though paper has been there for six hundred years at that point. So again, gradual adoption of technologies and gradual trust in paper. They&#8217;re still using parchment for things, gradually less and less, but substantially over the course of the 1600s. You can even find things written on parchment in the 1700s and 1800s. British Parliament still did its records on parchment up until ten years ago, and the Vatican still does its official records on parchment now.</p><p><strong>Dwarkesh Patel</strong></p><p>This is a digression, but the numbers of how expensive a book is didn&#8217;t make sense to me just based on how much scribe time it took. You say it&#8217;s $600,000 per book, and then separately, it&#8217;s five months of scribe time. I&#8217;m like, how much are the scribes getting paid? But if it&#8217;s the paper... What changes with Gutenberg?</p><p><strong>Ada Palmer</strong></p><p>The paper and the ink. But a lot of it is scribe time.</p><p><strong>Dwarkesh Patel</strong></p><p>But Gutenberg still needs paper, right?</p><p><strong>Ada Palmer</strong></p><p>Yeah, Gutenberg needs paper. That&#8217;s why he goes bankrupt. He borrows the equivalent of about $1.5 million to buy paper, and then doesn&#8217;t make back $1.5 million worth of material when printing it.</p><p>This is what makes printing a risk. You have to start buying the paper up front. You need to buy it in a big lot so that it matches, because people don&#8217;t want the paper to suddenly be a different color within their book. 
You&#8217;re investing a lot up front, and you&#8217;re not getting anything back until you produce this slow print run, which is why printers start printing pamphlets.</p><p>They can have one press that&#8217;s slowly printing a valuable book that will take six months to print. Next to it they have another press that&#8217;s printing pamphlets where in two days they&#8217;ve printed a fashion report on what everyone was wearing at the royal wedding, which they can sell right away. It&#8217;s much cheaper, but it means they have something they can sell two or three times a week. So you have the pamphlet following the book, printing cheap news, printing scandal rags.</p><p><strong>Dwarkesh Patel</strong></p><p>Why is it cheaper? Because the material is cheaper?</p><p><strong>Ada Palmer</strong></p><p>Just because it&#8217;s only five pages long.</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, I see. Got it.</p><p><strong>Ada Palmer</strong></p><p>I could grab one if you want to see one. So if we look at some examples. I&#8217;ll show you these one by one. For example, this is a pamphlet. Naked pages, short text, hand-stitched together. It would take two or four days because you print the front side and then the back side. It&#8217;s cheap. It&#8217;s ephemeral. You print a thousand of them. You sell a bunch around the town. You sell a bunch to news writers who are going to and from other cities, who will buy them and bring them to the next town.</p><p>If you&#8217;ve printed news in Milan, people who are going to Florence will want to buy your news to go there. It might be a report of a siege. It might be what people were wearing at the royal wedding. My favorite title of a pamphlet was &#8220;The Scandalous Tale of a Doctor from Padua and How He Seduced His Maid, Murdered His Wife, Murdered the Maid, Cut Out Her Heart and Ate It, and How He Was Justly Punished by God.&#8221; That was the title of the pamphlet. These things circulated around. 
Some of them were nonsense, some of them were real news. Most were combinations. But you can sell something like this cheaply in a couple of days.</p><p>Often they would have a cheap blue cover. You have seen this color before. This is the color of laundry lint, because fundamentally laundry lint is what paper is. You take rags of old clothes, you put them in water, you beat them until they become a pulp, and you skim it out with a sieve. Laundry lint is what rag paper is. If you don&#8217;t bleach it, it&#8217;s this generic blue-gray color, which is sort of the average color of what human beings wear.</p><p>That&#8217;s a copy of <em><a href="https://en.wikipedia.org/wiki/The_Gentleman%27s_Magazine">The Gentleman&#8217;s Magazine</a></em>, another example of technology taking a leap forward in the 18th century. When they invented the newspaper, they immediately had the problem of, &#8220;Oh, no. Newspapers contradict each other. We don&#8217;t know what&#8217;s true. We have to fact-check stuff.&#8221;</p><p>That one has a great fold-out. I think there&#8217;s a procession or something. That is what everybody wore at the state funeral. Instead of photographs, we have this fancy, &#8220;Here is what everyone was wearing at the state funeral.&#8221; Very exciting.</p><p>In the 18th century, they have newspapers. The newspapers are reporting news, but they don&#8217;t quite say the same thing as each other. The problem becomes, how do we know who to trust?</p><p><em>The Gentleman&#8217;s Magazine</em> was developed, and every week they would publish a roundup of that week&#8217;s news saying what each newspaper said about it, where they contradicted each other, analyzing who&#8217;s right and wrong. It was the fact-checking. This is the first magazine. It invented the word &#8220;magazine&#8221; being used in this context. 
It was an intellectual response to the fake news problem: how do we reconcile contradictory newspapers?</p><p>You see these many iterations: they invent the printing press, then they invent the pamphlet, then they invent the newspaper, then they invent the magazine to cope with the newspaper. The newspaper is invented to cope with the pamphlet because you don&#8217;t know whether to trust the scandalous tale of the doctor from Padua and how he murdered his wife. Is he real? We don&#8217;t know. But if somebody publishes a newspaper that serially prints news every week, they have a reputation. They have to be respectable. You&#8217;re not going to subscribe to them if you catch them printing nonsense.</p><p>The serial nature of a newspaper was a form of accountability that made people willing to trust it over time. The newspaper is a way of fact-checking the pamphlet. The pamphlet is a way of making money while you&#8217;re printing your longer book. I will also let you have a look at papyrus.</p><p><strong>Dwarkesh Patel</strong></p><p>Thank you.</p><p><strong>Ada Palmer</strong></p><p>You can see the plaid pattern of the papyrus because it is made of two layers of strips. And there&#8217;s a papyrus scroll. That&#8217;s modern papyrus. The thing about papyrus is that in addition to being cheap, it&#8217;s very brittle. It works better in a scroll than it does folded over because the folded edge cracks really easily. If you try to make this into a <a href="https://en.wikipedia.org/wiki/Codex">codex book</a>, it&#8217;s going to be very fragile.</p><p>Here you go. This is a real 17th-century letter in absolutely indecipherable handwriting.</p><p><strong>Dwarkesh Patel</strong></p><p>On parchment?</p><p><strong>Ada Palmer</strong></p><p>On parchment. 
You can even tell, because that&#8217;s cheap parchment, which side was the outside of the animal and which side was the inside.</p><p><strong>Dwarkesh Patel</strong></p><p>The handwriting is in some sense bad, but it&#8217;s also very well aligned.</p><p><strong>Ada Palmer</strong></p><p>Tiny and precise. But here is good parchment. It is hard to believe that it&#8217;s animal skin. These are pages from a <a href="https://en.wikipedia.org/wiki/Book_of_hours">book of hours</a> from about 1480, individually hand-calligraphed. You can see that one has a hole through it. They wrote around the hole because it&#8217;s too valuable to not use that sheet.</p><p>These are paper thin. You can barely tell, if you look carefully, which side was the outside of the animal and which was the inside because one side has tiny little speckles of pores.</p><p><strong>Dwarkesh Patel</strong></p><p>Where is this from?</p><p><strong>Ada Palmer</strong></p><p>A book of hours. This is probably a French book of hours. A book of hours is a personal prayer book. Bible quotes, objects of meditation.</p><p>The book would be fat and small. This was the most common manuscript in the Middle Ages. You would carry it around in your pocket, and you&#8217;d pull it out different times of day for personal prayer. But it also has big margins so that you can take notes in it, write down addresses, have friends write notes in it.</p><p>You use it almost like a day planner. It&#8217;s the smartphone of the period in which you make all your notes or write down people&#8217;s names. You might have celebrities you meet sign your book of hours. All sorts of neat things go into the margins as you use this to organize the day.</p><p><strong>Dwarkesh Patel</strong></p><p>That would be extremely interesting as a collector&#8217;s item, random people&#8217;s book of hours and what kinds of things they recorded.</p><p><strong>Ada Palmer</strong></p><p>Oh yeah. 
Think of a leather jacket, but how much more industrial effort went into making leather literally paper-thin like this. Huge amounts of industrial effort go into making the pages of such a book.</p><p><strong>Dwarkesh Patel</strong></p><p>My favorite example of this kind of distribution and diffusion taking longer than you would think for a very fundamental technology&#8212;well, this is now my favorite example, so my second favorite example&#8212;is oil. I interviewed <a href="https://www.dwarkesh.com/p/daniel-yergin">Daniel Yergin</a>, who wrote this big book about the history of oil. In the 1860s, <a href="https://en.wikipedia.org/wiki/Drake_Well">Drake strikes oil in Pennsylvania</a>.</p><p>It&#8217;s in the 1910s that the car is invented, the <a href="https://en.wikipedia.org/wiki/Internal_combustion_engine">internal combustion engine</a> is put into a thing which you sell millions of copies of. Until then, oil is just used for <a href="https://en.wikipedia.org/wiki/Kerosene">kerosene</a>, which is just for lighting. The actual gas is just thrown away. In fact, when the light bulb was invented, people were wondering whether <a href="https://en.wikipedia.org/wiki/Standard_Oil">Standard Oil</a> was going to go bankrupt because the main use case had gone away.</p><p><strong>Ada Palmer</strong></p><p>Oh, neat. I always think of Julius Caesar&#8217;s description of Britain when the Romans first get there. He says, &#8220;The people of Britain are so poor, they can&#8217;t afford to burn wood, so they burn rocks.&#8221; We know he&#8217;s talking about coal.</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, I thought it was satire.</p><p><strong>Ada Palmer</strong></p><p>No, he&#8217;s talking about coal. 
They had coal in the days of Julius Caesar, but they didn&#8217;t figure out its massive industrial utility until many, many years later.</p><p><strong>Dwarkesh Patel</strong></p><p>There is this interesting question of why the Romans didn&#8217;t have the <a href="https://en.wikipedia.org/wiki/Industrial_Revolution">Industrial Revolution</a> because they had these <a href="https://barryyeoman.com/2010/09/the-mines-that-built-empires/">huge silver mines in Spain</a> and elsewhere, but no coal.</p><p><strong>Ada Palmer</strong></p><p>You have the Industrial Revolution when you feel you need to. That&#8217;s the thing about Gutenberg as well that a lot of people don&#8217;t think about. People are like, &#8220;Gutenberg was an inventor and invented a thing, and then it had an impact.&#8221; No. He was living in the middle of a library building boom in which there was a huge spike in the demand for books. He invented it in response to that cultural change.</p><p>It isn&#8217;t by chance that we got the printing press in 1450. There was a huge boom of library building starting in the 1410s, and inventors were trying to figure out ways to make books cheaper. They were making smaller books. They were using paper more. Paper surges before the Gutenberg movable type printing press. So Gutenberg isn&#8217;t a random genius out of nowhere. It was the moment that people needed more books. We were going to get the invention.</p><h3>01:41:21 - The Inquisition accidentally invented peer review</h3><p><strong>Dwarkesh Patel</strong></p><p>One thing you say in passing in the book is Martin Luther comes up at the exact right time, because you&#8217;ve got <a href="https://en.wikipedia.org/wiki/Girolamo_Savonarola">Savonarola</a> in the 1490s, and he&#8217;s another prophet type. I guess he&#8217;s the modern analog of somebody like <a href="https://en.wikipedia.org/wiki/Ruhollah_Khomeini">Khomeini</a> in Iran, setting up a theocratic government, but too early. 
Machiavelli you say is too late because the censorship is already in place. What is the censorship that is in place by the time of Machiavelli? What is the alternative world?</p><p><strong>Ada Palmer</strong></p><p>Machiavelli, remember, is contemporary with Luther. It&#8217;s just that he circulates his stuff very briefly and very privately. He doesn&#8217;t want a pamphlet version of his ideas out there because he only wants Florence to have it.</p><p>Luther hits the sweet spot when the pamphlet distribution network had just developed. When Savonarola printed pamphlets, they only circulated around Florence and its neighbors, Siena and Pisa. It took months for them to get farther. His movement was quickly crushed.</p><p>When Luther makes the <a href="https://en.wikipedia.org/wiki/Ninety-five_Theses">Ninety-five Theses</a> public, they&#8217;re in print in London seventeen days after he releases them in Wittenberg. The pamphlet runners go foom, foom, foom, and get the news there, and things are printed overnight and come out that fast.</p><p><strong>Dwarkesh Patel</strong></p><p>But it seems like you&#8217;re hinting that within the next two decades, there&#8217;s a new censorship regime across Europe.</p><p><strong>Ada Palmer</strong></p><p>A new censorship regime responds. The censorship regime is very effective at shaping what is printed in books, but can never keep up with pamphlets. In the same way that the government can pressure CNN, the government can&#8217;t pressure random people on a social media network. You&#8217;re not going to be able to keep up with that speed.</p><p>One of the funny problems that the <a href="https://en.wikipedia.org/wiki/Inquisition">Inquisition</a> always had when trying to persecute printers is that printers worked in the information distribution industry. They were the people who paid the news writers, whose job it is to move as fast as humanly possible between cities. Which meant that news always reached them first. 
If a printer was ever convicted by the Inquisition, they would find out before the Inquisition could possibly get there to arrest them.</p><p>The Inquisition never succeeded at arresting printers. They&#8217;d always skipped town by the time the Inquisition got there, because if you employ the news writers, you find out first what&#8217;s going on. The Inquisition can&#8217;t keep up.</p><p>When we look at censorship, there&#8217;s an intersection of four factors as to whether censorship is possible. One of them is law: Is it legal for the censorship to happen? Another one is the technology. Is it actually possible to censor this thing? You cannot censor whatever moves the information fastest because it will move the information faster than you can move.</p><p>Even if that one printer had to skip town, he will set up shop somewhere else, a new person will take over his shop, and the information will still move. So pamphlets become unpoliceable. You can try to police them, you can partially police them, but keeping pamphlets from moving around&#8230; They&#8217;re anonymous, they&#8217;re quick, they&#8217;re produced overnight, they move quickly. You just can&#8217;t keep up with them.</p><p><strong>Dwarkesh Patel</strong></p><p>Couldn&#8217;t they just punish print shops for publishing things? Just say, &#8220;This is what we like, and if you do something we don&#8217;t like, we&#8217;ll punish you,&#8221; which is how censorship in China works, for example.</p><p><strong>Ada Palmer</strong></p><p>They did. So the printer skips town. The printer moves to the next town. There is a cost to that. There&#8217;s a human cost to evading that. You&#8217;ve had to leave your home and friends behind and move to a new place, but they don&#8217;t get you. It&#8217;s also very easy to deny that the pamphlet came from you at all.</p><p>The print industry proves very difficult to censor, and we&#8217;re experiencing the same thing with social media. 
Everyone is like, &#8220;Censor the pornography on this social media channel,&#8221; and they&#8217;re like, &#8220;We just can&#8217;t. It&#8217;s too fast. There&#8217;s too much.&#8221; Or, &#8220;Censor the hate speech.&#8221; &#8220;We just can&#8217;t. It&#8217;s too fast, there&#8217;s too much.&#8221;</p><p>There are too many pamphlets, and they could crack down on one particular pamphlet shop. We have records of this. There&#8217;s a brilliant analysis in <a href="https://history.ufl.edu/directory/anton-matytsin/">Anton Matytsin&#8217;s</a> book, <em><a href="https://amzn.to/4d3dWoZ">The Specter of Skepticism in the Age of Enlightenment</a></em>. He has a great description from the notes of a raid on a clandestine bookshop. This wasn&#8217;t the printer, this was the underground bookshop that was selling illegal books, and they&#8217;re raided. It has all the details of how angry the people were about different things that the shop had.</p><p>So there was censorship and there were crackdowns, but it was a censorship that could not actually prevent circulation. It could restrict it, it could make it harder, it could make it scary, but it couldn&#8217;t prevent it.</p><p><strong>Dwarkesh Patel</strong></p><p>Before books become cheap, unless you&#8217;re fantastically wealthy, you&#8217;re reading the same couple of books&#8212;if you&#8217;ve ever read a book&#8212;again and again throughout your life.</p><p><strong>Ada Palmer</strong></p><p>Cosimo de&#8217; Medici&#8217;s father owned, I think it was twelve books.</p><p><strong>Dwarkesh Patel</strong></p><p>I want to understand the intellectual significance of rereading the exact same book again and again. Maybe the reason <a href="https://en.wikipedia.org/wiki/Petrarch">Petrarch</a> loved Cicero so much is, imagine reading the same book twenty times, hitting the same joke again and just meditating on every single point. 
There&#8217;s got to be a difference in intellectual culture as a result of treating these things as the equivalent of the Bible.</p><p><strong>Ada Palmer</strong></p><p>You really feel like you get to know the person intimately. You develop a personal relationship with the ancient author. You are participating in a conversation across the diaspora of time. It&#8217;s a one-way conversation. You&#8217;re responding to them, the future will respond to you. But there is a great deal of intimacy.</p><p>Petrarch talks about his friend Cicero and being betrayed by his friend Cicero. He finds new works of Cicero that he hadn&#8217;t read including some of Cicero&#8217;s letters in which Cicero is not following his own stoic philosophical precepts and is being petty, yelling at people about real estate, and getting all upset after his daughter&#8217;s death. You know how people get manic when there&#8217;s been a death in the family and start quarreling about everything? Cicero gets like that, and Petrarch is heartbroken.</p><p>To him it means even the wisest man in history could not conquer that urge to become irrational and petty in the face of grief. If even Cicero became irrational and petty in the face of grief, does that mean humanity is doomed to forever be irrational and petty in the face of grief? He talks about Cicero breaking his heart and his foot, because the book fell on his foot and broke it, and he got a bad infection, and he was bedridden for months.</p><p><strong>Dwarkesh Patel</strong></p><p>Totally different topic, but in 1492, <a href="https://en.wikipedia.org/wiki/Christopher_Columbus">Columbus</a> comes to the New World. They discover the New World. 
What is the reception of this news?</p><p><strong>Ada Palmer</strong></p><p>I was just at a conference a week ago in which we confirmed that there&#8217;s a Vatican document from 1100 or maybe 1200&#8212;I forget the exact year&#8212;that recognizes the existence of <a href="https://en.wikipedia.org/wiki/Vinland">Vinland</a>, i.e. Canada, where they got the information from the Vikings.</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, interesting.</p><p><strong>Ada Palmer</strong></p><p>They thought it was just a little thing, but yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>So they&#8217;re rediscovering the New World. Would it be the equivalent of finding out there are aliens today? Why wasn&#8217;t it considered more significant? Why wasn&#8217;t the consensus, &#8220;This is the main thing happening right now, we&#8217;ve discovered the New World&#8221;?</p><p><strong>Ada Palmer</strong></p><p>When I teach my class on the 1490s, the students, many of whom are American, always have trouble wrapping their heads around people thinking that the New World isn&#8217;t a big deal. A big part of it is that they find the Caribbean islands, and they find the coast, and they think this is small.</p><p>The way I put it to my students is, the news comes back, we&#8217;ve found something across the water to the west. It might be even as big as the <a href="https://en.wikipedia.org/wiki/Canary_Islands">Canary Islands</a>. They&#8217;ve found something, but they don&#8217;t realize they&#8217;ve found something the scale of Europe and Africa. Actually, it&#8217;s not as big as Europe and Africa, but they found something humongous. That&#8217;s part of it. Another part of it is no matter how big and important something far away is, it&#8217;s hard to bring your mind out of the petty squabbles that are happening right around you, especially when they feel like life or death.</p><p>If it&#8217;s 1492, what is happening? 
France is about to invade Italy. Europe might be embroiled in the <a href="https://en.wikipedia.org/wiki/Italian_War_of_1494%E2%80%931495">largest war it&#8217;s seen in fifty years</a>. The <a href="https://en.wikipedia.org/wiki/Pope_Alexander_VI">papacy has just been taken over by Spain</a>. <a href="https://en.wikipedia.org/wiki/Catholic_Monarchs_of_Spain">Spain is suddenly trying to throw its weight around in Europe</a> in a way that&#8217;s unprecedented. The <a href="https://en.wikipedia.org/wiki/Hungarian%E2%80%93Ottoman_Wars#Turkish_wars_of_Matthias_Corvinus_(1458%E2%80%931490)">Ottomans have just invaded Italy and Hungary</a> and might be coming again. Also over there, there&#8217;s a new thing. Okay, great. We&#8217;ll worry about that when we&#8217;re not having three wars at the same time. But guys, we&#8217;re having three wars at the same time. Oh my God. And then Martin Luther hits Europe like a ton of bricks when they still haven&#8217;t even figured out that this is a continent and not an island. In the same way, if you&#8217;re in a country and it&#8217;s having a tumult, you worry a lot about its tumult, even if a larger tumult is happening in a faraway country. It&#8217;s hard to bring your mind out of Europe in crisis to be like, &#8220;Hey, this is a thing.&#8221;</p><p>The other is that they&#8217;re inventing lots of new things, and it falls into the sphere along with the rest. They&#8217;re discovering the existence of sub-Saharan Africa, where they thought there was basically one country&#8217;s worth of stuff south of the Sahara, Ethiopia and nothing else. Then they&#8217;re like, &#8220;Oh my God, there&#8217;s a whole big thing that sticks out.&#8221; They&#8217;re also discovering that the heart is a pump. 
That&#8217;s a bit later, but they&#8217;re discovering all sorts of stuff at the same time.</p><p>The discovery of the New World, especially when they realize how big it is, becomes an intellectual challenge where they say, &#8220;Wait, does this mean all the maps we&#8217;ve had are wrong? Does this mean the ancients were wrong about geography? Does it mean the world is a lot bigger than we used to think? Let&#8217;s worry about that the same way we worry about revolutionizing our mathematics and figuring out that the sun doesn&#8217;t go around the Earth.&#8221;</p><p>These are things that are paradigm shifting. But on the other hand, does it matter whether the sun goes around the Earth or the Earth around the sun when the French are invading right now and we need to get the defenses going, and there&#8217;s a giant civil war happening, and we&#8217;re about to be betrayed? It does matter, but it also doesn&#8217;t matter. Any decade is concerned by its tumults and often fails to recognize the importance of what&#8217;s around it. That&#8217;s true of every decade.</p><p>One fun game when I study the history of censorship, which I work a lot on&#8212;my next non-fiction book is gonna be a book on the history of censorship&#8212;is noticing that whatever they&#8217;re looking at, they&#8217;re always wrong, from our perspective, about what they should be worried about censoring.
If we had a time machine and our goal is to go give them advice&#8230; Here we are in the <a href="https://en.wikipedia.org/wiki/French_Enlightenment">French Enlightenment</a>, <a href="https://en.wikipedia.org/wiki/Voltaire">Voltaire</a> and <a href="https://en.wikipedia.org/wiki/Jean-Jacques_Rousseau">Rousseau</a> and the <a href="https://en.wikipedia.org/wiki/Marquis_de_Sade">Marquis de Sade</a> and <a href="https://en.wikipedia.org/wiki/Julien_Offray_de_La_Mettrie">La Mettrie&#8217;s</a> articulations of <a href="https://en.wikipedia.org/wiki/French_materialism">materialist atheism</a> are flying around Europe. What is the Inquisition worried about? It&#8217;s worried about <a href="https://en.wikipedia.org/wiki/Jansenism">Jansenist</a> treatises about the nature of the <a href="https://en.wikipedia.org/wiki/Trinity">Trinity</a>.</p><p>Jansenism is sort of like a <a href="https://en.wikipedia.org/wiki/Reformed_Christianity">Calvinist</a> version of Catholicism. Do you want to have an incredibly terrifying authoritarian God who hates you and tells you that your soul is a worthless spider that deserves to be hurled into fire, but also have to obey the arbitrary pope in Rome? Then Jansenism is for you. It has all the grimness of Calvinism and all of the authoritarian centrality of the <a href="https://en.wikipedia.org/wiki/Catholic_Church">Roman Catholics</a>. This was a <a href="https://en.wikipedia.org/wiki/Heresy_in_Christianity">heresy</a> that was abroad in the Enlightenment, and they are so much more worried about Jansenism than they are about Voltaire.</p><p>Remember that very chapter in Matytsin&#8217;s book I mentioned where they are raiding the clandestine bookshop. They&#8217;re like, &#8220;Voltaire, fine. The banned <em><a href="https://en.wikipedia.org/wiki/Encyclop%C3%A9die">Encyclop&#233;die</a></em>, which is gonna revolutionize all thought in Europe, fine. 
Letters of <a href="https://en.wikipedia.org/wiki/Denis_Diderot">Diderot</a>, Rousseau, fine, fine. Jansenist treatises about the nature of the Trinity! Throw the book at these guys! This is the worst thing!&#8221; They really are obsessed with this incredibly petty minor heresy to the degree that when the <em>Encyclop&#233;die</em> is banned by Rome&#8230;</p><p>France likes the Encyclopedia. This is Diderot and <a href="https://en.wikipedia.org/wiki/Baron_d%27Holbach">d&#8217;Holbach&#8217;s</a> big project of universal education, to print an encyclopedia that will collect all world knowledge. They articulate it as, &#8220;Should a new dark age come upon humankind and even one copy of the encyclopedia survive, it will be sufficient to reconstruct all human progress.&#8221; That&#8217;s the goal of this thing. It&#8217;s advancing incredibly radical ideas about biology, about statecraft, about reforming the law to be rational instead of traditional, all sorts of stuff.</p><p>When that is banned by Rome, Paris is commanded... Paris loves this book. The king likes this book. The queen likes this book. She&#8217;s on record saying it was so cool being able to look up the technology that was used to make her silk pantyhose. She just loves it. Everybody loves it. France allows it to circulate despite its controversial content. But Rome says, &#8220;No, you must ban this book.&#8221; So they agree they&#8217;re gonna have the ceremonial burning, and they march the <em>Encyclop&#233;die</em> up to the fire. Then they get some Jansenist treatises about the nature of the Trinity and burn those instead, because they don&#8217;t want to burn the <em>Encyclop&#233;die</em>. They love it. They want to burn this other thing.</p><p>This is always true. If we had a time machine for the Inquisition in the 1540s, we would say, &#8220;Guys, Machiavelli, he&#8217;s really important. He&#8217;s really revolutionary.
You gotta be looking at this.&#8221; Or we would say <a href="https://en.wikipedia.org/wiki/Lucretius">Lucretius&#8217;s</a> <em><a href="https://en.wikipedia.org/wiki/De_rerum_natura">De rerum natura</a></em>, which I did my dissertation on&#8230; Many people are familiar with <a href="https://en.wikipedia.org/wiki/Stephen_Greenblatt">Greenblatt&#8217;s</a> book, <em><a href="https://amzn.to/4lk4AYb">The Swerve</a></em>, which credits a lot of change to the materialist science that this poem articulates. There&#8217;s a much more complex story, which you know is told in my book, which refers to Greenblatt&#8217;s. If anyone enjoyed <em>The Swerve</em>, you would really enjoy the more detailed zoom-in that <em>Inventing the Renaissance</em> has. But we would say, &#8220;Guys, you should censor this.&#8221;</p><p>We literally have letters of inquisitors writing to each other saying, &#8220;We don&#8217;t need to bother censoring Lucretius. Only learned people can read it, and they know perfectly well that the false stuff is false, so it&#8217;ll just circulate and it&#8217;s fine. What we need to worry about censoring is all of these fine minutiae of <a href="https://en.wikipedia.org/wiki/Protestantism">Protestantism</a>.&#8221; The 1545 edition of the <em><a href="https://en.wikipedia.org/wiki/Index_Librorum_Prohibitorum">Index of Banned Books</a></em> says in its introduction, &#8220;We shall put the names of arch-heretics in all caps.&#8221; When I first read that, I was like, &#8220;Ooh, I want to see all my favorite arch-heretics be in all caps.&#8221; I eagerly flip to M, and Machiavelli is not in all caps. He was not important enough from their position. The all caps authors are all minor Protestant theologians. They&#8217;re all people like Calvin and <a href="https://en.wikipedia.org/wiki/Huldrych_Zwingli">Zwingli</a> and Luther and <a href="https://en.wikipedia.org/wiki/Philip_Melanchthon">Melanchthon</a>. 
They&#8217;re all doing stuff that we would say does not matter.</p><p>But an era is always wrong about what ideas and what circulation and what changes are the really big ones and is always much, much more worried about, &#8220;Oh my God, the Prince of Spain, which princess is he gonna marry? This is going to determine whether Spain is or isn&#8217;t annexed by Germany. This is the most important thing that has ever happened in the entire stream of time.&#8221; People are like, &#8220;We&#8217;ve discovered another continent,&#8221; and they&#8217;re like, &#8220;We don&#8217;t care. We just wanna know who&#8217;s gonna marry Charles.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s a very profound observation. It was really interesting to learn from your book that of all the thousands of people killed during the Inquisition, one guy was executed for atheism.</p><p><strong>Ada Palmer</strong></p><p>Science-related stuff.</p><p><strong>Dwarkesh Patel</strong></p><p>And even he had these ideas of reincarnation or...</p><p><strong>Ada Palmer</strong></p><p>I think probably the number executed for atheism would be about 100. There are 12 total trials of scientists about science. <a href="https://en.wikipedia.org/wiki/Galileo_Galilei">Galileo</a> is one. <a href="https://en.wikipedia.org/wiki/Giordano_Bruno">Giordano Bruno</a> is one. Giordano Bruno is the only one executed.
Of those 12 trials, only three ended in convictions.</p><p>Hundreds of thousands of trials for Judaizing, which is theoretically contaminating Christianity with Jewish thought, and all of these other minutiae of oppression and segregation of populations, executions for paganism, meaning practicing your indigenous religion in a colonized space&#8230; Hundreds of thousands of executions for that, one for science.</p><p><strong>Dwarkesh Patel</strong></p><p>I recently got interested in the story of <a href="https://en.wikipedia.org/wiki/Johannes_Kepler">Kepler</a> just because <a href="https://www.johndcook.com/blog/2018/04/03/planets-and-platonic-solids/">the way he discovers the laws of planetary motion is so whimsical with the theory of Platonic objects</a>. While he&#8217;s going through <a href="https://en.wikipedia.org/wiki/Tycho_Brahe">Brahe&#8217;s</a> data and coming up with the <a href="https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion">laws of planetary motion</a>, he is the imperial mathematician for the <a href="https://en.wikipedia.org/wiki/House_of_Habsburg">Habsburg</a> emperor, which basically means that he&#8217;s doing astrology for a general. Will we win the battle or whatever.</p><p>Then he gets excommunicated, not for the laws of planetary motion, but because he&#8217;s a Lutheran. In fact, his mother is tried for witchcraft. Again, it has nothing to do with science, it&#8217;s just because she&#8217;s also a Lutheran.</p><p><strong>Ada Palmer</strong></p><p><a href="https://en.wikipedia.org/wiki/John_Milton">Milton</a> of <em><a href="https://en.wikipedia.org/wiki/Paradise_Lost">Paradise Lost</a></em> fame wrote our first <a href="https://en.wikipedia.org/wiki/Areopagitica">big defense of the free press</a>. This is in the moment in the early 1600s when England doesn&#8217;t yet have systematic censorship law.
It has ad hoc, &#8220;Hey, this book is bad,&#8221; but it doesn&#8217;t have systematic, &#8220;You must submit all books to a censor,&#8221; the way the Catholic world does by that point. The Catholic world developed it in order to fight Protestantism.</p><p>There&#8217;s a lot of support for creating censorship in England at the time because there&#8217;s anxiety about Papists plotting against our nice non-Catholic country, trying to undermine it. There&#8217;s a general feeling of anxiety. There&#8217;s also a deliberate moral panic about books, whipped up by politicians and power-seeking people, the same way there was a moral panic about comic books in 1954 or about Dungeons &amp; Dragons in the &#8216;90s. There&#8217;s a moral panic about scary and dangerous books and pamphlets. So there&#8217;s a movement to create systematic state censorship in England for the first time.</p><p>Milton writes this big treatise about why freedom of the press is important, the <a href="https://en.wikipedia.org/wiki/Areopagitica">Areopagitica</a>. It&#8217;s a beautifully written rhetorical piece on why we must trust truth to rise purely to the top. We must let free voices move, otherwise you&#8217;re gonna create a situation where people are writing for the censor first and for the public second. It will constrain people&#8217;s thoughts in the way that we know chilling effects and fear do. It&#8217;s a beautiful treatise. He fails. The censorship regime passes.</p><p><em>Paradise Lost</em> is published under the censorious regime. It goes through the censorship. The one line they tell him to change is about astrology.
They&#8217;re like, &#8220;It&#8217;s perfectly fine having Satan be your charismatic protagonist and God be kind of a jackass, and having Satan spout ferocious republican, anti-monarchical rhetoric copied from revolutionary pamphlets circulating in the British colonies, very dangerous stuff. That&#8217;s fine. But this one line about a comet causing a thing to happen, no, no, no. Astrology is gonna confuse people&#8217;s souls.&#8221; You&#8217;re like, &#8220;Guys, speaking as a time traveler, you&#8217;re so wrong about what you&#8217;re censoring.&#8221; They always are.</p><p><strong>Dwarkesh Patel</strong></p><p>You have one sentence which I couldn&#8217;t trace down, which I found very interesting. You said, &#8220;In the late 17th century, the most extensive library in all of Europe is the one in the Vatican run by the inquisitors.&#8221;</p><p><strong>Ada Palmer</strong></p><p>Not the library, the most extensive experimental laboratory. <a href="https://en.wikipedia.org/wiki/Daniele_Macuglia">Daniele Macuglia</a> is the scholar there. <a href="https://pubmed.ncbi.nlm.nih.gov/32174230/">That&#8217;s from his dissertation</a>. I think it&#8217;s been published now, but I don&#8217;t know if it&#8217;s actually out in English. It&#8217;s out in Italian. He works on the Inquisition and the immediate aftermath of Galileo.</p><p>They saw themselves as guarantors of truth and of accuracy in information. So they decided after Galileo that they had a duty to verify the truth of the books that they were sent to censor. If people were going to be doing mechanical experiments, they needed to repeat the mechanical experiments to see whether they were true. So they effectively invented peer review, which is to say they invented a second laboratory trying to recreate the results of the first.
There are these amazing people who by day are inquisitors and by night are going home to write their own scientific treatises as they do these experiments. It&#8217;s not what we expect, but history is never what we expect.</p><p><strong>Dwarkesh Patel</strong></p><p>Seems like a good place to close. Ada, thank you very much.</p><p><strong>Ada Palmer</strong></p><p>Thank you.</p>]]></content:encoded></item><item><title><![CDATA[Dario Amodei — "We are near the end of the exponential"]]></title><description><![CDATA["That's why I'm sending this message of urgency"]]></description><link>https://www.dwarkesh.com/p/dario-amodei-2</link><guid isPermaLink="false">https://www.dwarkesh.com/p/dario-amodei-2</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Fri, 13 Feb 2026 16:46:36 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/187852154/b76fd92c0474a5f48cf339e1eaac7dae.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Dario Amodei thinks we are just a few years away from &#8220;a country of geniuses in a data center&#8221;. 
In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, how AI will diffuse throughout the economy, whether Anthropic is underinvesting in compute given their timelines, how frontier labs will ever make money, whether regulation will destroy the boons of this technology, US-China competition, and much more.</p><p>Watch on <a href="https://youtu.be/n1E9IZfvGMA">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/dario-amodei-the-highest-stakes-financial-model-in-history/id1516093381?i=1000749621800">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/2ZNrpVSrgZMlDwQinl20Ay?si=9D4aG1l7S-2wzLsiILRLIg">Spotify</a>.</p><div id="youtube2-n1E9IZfvGMA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;n1E9IZfvGMA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/n1E9IZfvGMA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h3>Sponsors</h3><ul><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li><li><p><a href="https://janestreet.com/dwarkesh">Jane Street</a> sent me another puzzle&#8230; this time, they&#8217;ve trained backdoors into 3 different language models &#8212; they want you to find the triggers. Jane Street isn&#8217;t even sure this is possible, but they&#8217;ve set aside $50,000 for the best attempts and write-ups. 
They&#8217;re accepting submissions until April 1st at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a></p></li><li><p><a href="https://mercury.com/personal-banking">Mercury</a>&#8217;s personal accounts make it easy to share finances with a partner, a roommate&#8230; or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at <a href="https://mercury.com/personal-banking">mercury.com/personal-banking</a></p></li></ul><h2><strong>Timestamps</strong></h2><p><a href="https://www.dwarkesh.com/i/187852154/000000-what-exactly-are-we-scaling">(00:00:00) - What exactly are we scaling?</a></p><p><a href="https://www.dwarkesh.com/i/187852154/001236-is-diffusion-cope">(00:12:36) - Is diffusion cope?</a></p><p><a href="https://www.dwarkesh.com/i/187852154/002942-is-continual-learning-necessary-how-will-it-be-solved">(00:29:42) - Is continual learning necessary?</a></p><p><a href="https://www.dwarkesh.com/i/187852154/004620-if-agi-is-imminent-why-not-buy-more-compute">(00:46:20) - If AGI is imminent, why not buy more compute?</a></p><p><a href="https://www.dwarkesh.com/i/187852154/005849-how-will-ai-labs-actually-make-profit">(00:58:49) - How will AI labs actually make profit?</a></p><p><a href="https://www.dwarkesh.com/i/187852154/013119-will-regulations-destroy-the-boons-of-agi">(01:31:19) - Will regulations destroy the boons of AGI?</a></p><p><a href="https://www.dwarkesh.com/i/187852154/014741-why-cant-china-and-america-both-have-a-country-of-geniuses-in-a-datacenter">(01:47:41) - Why can&#8217;t China and America both have a country of geniuses in a datacenter?</a></p><h2><strong>Transcript</strong></h2><h3>00:00:00 - What exactly are we scaling?</h3><p><strong>Dwarkesh Patel</strong></p><p><a href="https://www.dwarkesh.com/p/dario-amodei">We talked three years ago</a>. 
In your view, what has been the biggest update over the last three years? What has been the biggest difference between what it felt like then versus now?</p><p><strong>Dario Amodei</strong></p><p>Broadly speaking, the exponential of the underlying technology has gone about as I expected it to go. There&#8217;s plus or minus a year or two here and there. I don&#8217;t know that I would&#8217;ve predicted the specific direction of code.</p><p>But when I look at the exponential, it is roughly what I expected in terms of the march of the models from smart high school student to smart college student to beginning to do PhD and professional stuff, and in the case of code reaching beyond that. The frontier is a little bit uneven, but it&#8217;s roughly what I expected.</p><p>What has been the most surprising thing is the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people &#8212; within the bubble and outside the bubble &#8212; talking about the same tired, old hot-button political issues, when we are near the end of the exponential.</p><p><strong>Dwarkesh Patel</strong></p><p>I want to understand what that exponential looks like right now. The first question I asked you when we recorded three years ago was, &#8220;what&#8217;s up with <a href="https://www.dwarkesh.com/p/will-scaling-work">scaling</a> and why does it work?&#8221; I have a similar question now, but it feels more complicated. At least from the public&#8217;s point of view, three years ago there were well-known public trends across many orders of magnitude of compute where you could see how the loss improves.</p><p>Now we have <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">RL</a> <a href="https://www.tobyord.com/writing/how-well-does-rl-scale">scaling</a> and there&#8217;s no publicly known <a href="https://en.wikipedia.org/wiki/Neural_scaling_law">scaling law</a> for it. It&#8217;s not even clear what the story is. 
Is this supposed to be teaching the model skills? Is it supposed to be teaching meta-learning? What is the <a href="https://gwern.net/scaling-hypothesis">scaling hypothesis</a> at this point?</p><p><strong>Dario Amodei</strong></p><p>I actually have the same hypothesis I had even all the way back in 2017. I think I talked about it last time, but I wrote a doc called <a href="http://corley.ai/the-blob-that-ate-ai/">&#8220;The Big Blob of Compute Hypothesis&#8221;</a>. It wasn&#8217;t about the scaling of language models in particular. When I wrote it, <a href="https://en.wikipedia.org/wiki/GPT-1">GPT-1</a> had just come out.</p><p>That was one among many things. Back in those days there was robotics. People tried to work on reasoning as a separate thing from <a href="https://en.wikipedia.org/wiki/Large_language_model">language models</a>, and there was scaling of the kind of RL that happened in <a href="https://en.wikipedia.org/wiki/AlphaGo">AlphaGo</a> and in <a href="https://en.wikipedia.org/wiki/OpenAI_Five">Dota</a> at <a href="https://en.wikipedia.org/wiki/OpenAI">OpenAI</a>. People remember StarCraft at <a href="https://en.wikipedia.org/wiki/Google_DeepMind">DeepMind</a>, <a href="https://en.wikipedia.org/wiki/AlphaStar_(software)">AlphaStar</a>.</p><p>It was written as a more general document. <a href="https://www.dwarkesh.com/p/richard-sutton">Rich Sutton</a> put out <a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">&#8220;The Bitter Lesson&#8221;</a> a couple years later. The hypothesis is basically the same. What it says is that all the cleverness, all the techniques, all the &#8220;we need a new method to do something&#8221;, that doesn&#8217;t matter very much. There are only a few things that matter. I think I listed seven of them.</p><p>One is how much raw compute you have. The second is the quantity of data. The third is the quality and distribution of data. It needs to be a broad distribution. The fourth is how long you train for.
The fifth is that you need an objective function that can scale to the moon. The <a href="https://www.moveworks.com/us/en/resources/ai-terms-glossary/pre-training">pre-training</a> objective function is one such objective function. Another is the RL objective function that says you have a goal, you&#8217;re going to go out and reach the goal.</p><p>Within that, there&#8217;s objective rewards like you see in math and coding, and there&#8217;s more subjective rewards like you see in <a href="https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback">RLHF</a> or higher-order versions of that. Then the sixth and seventh were things around <a href="https://en.wikipedia.org/wiki/Normalization_(machine_learning)">normalization</a> or conditioning, just getting the numerical stability so that the big blob of compute flows in this <a href="https://en.wikipedia.org/wiki/Laminar_flow">laminar</a> way instead of running into problems.</p><p>That was the hypothesis, and it&#8217;s a hypothesis I still hold. I don&#8217;t think I&#8217;ve seen very much that is not in line with it. The <a href="https://blogs.nvidia.com/blog/ai-scaling-laws/">pre-training scaling laws</a> were one example of what we see there. Those have continued going. Now it&#8217;s been widely reported, we feel good about pre-training. It&#8217;s continuing to give us gains.</p><p>What has changed is that now we&#8217;re also seeing the same thing for RL. We&#8217;re seeing a pre-training phase and then an RL phase on top of that. With RL, it&#8217;s actually just the same. Even other companies have published things in some of their releases that say, &#8220;We train the model on math contests &#8212; <a href="https://en.wikipedia.org/wiki/American_Invitational_Mathematics_Examination">AIME</a> or other things &#8212; and how well the model does is log-linear in how long we&#8217;ve trained it.&#8221;</p><p>We see that as well, and it&#8217;s not just math contests. 
It&#8217;s a wide variety of RL tasks. We&#8217;re seeing the same scaling in RL that we saw for pre-training.</p><p><strong>Dwarkesh Patel</strong></p><p>You mentioned Rich Sutton and &#8220;The Bitter Lesson&#8221;. <a href="https://www.dwarkesh.com/p/richard-sutton">I interviewed him last year</a>, and he&#8217;s actually very non-LLM-pilled. I don&#8217;t know if this is his perspective, but one way to paraphrase his objection is: Something which possesses the true core of human learning would not require all these billions of dollars of data and compute and these bespoke environments to learn how to use Excel, how to use PowerPoint, how to navigate a web browser. The fact that we have to build in these skills using these RL environments hints that we are actually lacking a core human learning algorithm. So we&#8217;re scaling the wrong thing.</p><p>That does raise the question. Why are we doing all this RL scaling if we think there&#8217;s something that&#8217;s going to be human-like in its ability to learn on the fly?</p><p><strong>Dario Amodei</strong></p><p>I think this puts together several things that should be thought of differently. There is a genuine puzzle here, but it may not matter. In fact, I would guess it probably doesn&#8217;t matter. There is an interesting thing. Let me take the RL out of it for a second, because I actually think it&#8217;s a red herring to say that RL is any different from pre-training in this matter.</p><p>If we look at pre-training scaling, it was very interesting back in 2017 when <a href="https://scholar.google.com/citations?user=dOad5HoAAAAJ&amp;hl=en">Alec Radford</a> was doing GPT-1. The models before GPT-1 were trained on datasets that didn&#8217;t represent a wide distribution of text. You had very standard language modeling benchmarks. GPT-1 itself was trained on a bunch of fanfiction, I think actually.</p><p>It was literary text, which is a very small fraction of the text you can get.
In those days it was like a billion words or something, so small datasets representing a pretty narrow distribution of what you can see in the world. It didn&#8217;t generalize well. If you did better on some fanfiction corpus, it wouldn&#8217;t generalize that well to other tasks.</p><p>We had all these measures of how well it did at predicting all these other kinds of texts. It was only when you trained over all the tasks on the internet &#8212; when you did a general internet scrape from something like <a href="https://en.wikipedia.org/wiki/Common_Crawl">Common Crawl</a> or scraping links in Reddit, which is what we did for <a href="https://en.wikipedia.org/wiki/GPT-2">GPT-2</a> &#8212; that you started to get generalization.</p><p>I think we&#8217;re seeing the same thing on RL. We&#8217;re starting first with simple RL tasks like training on math competitions, then moving to broader training that involves things like code. Now we&#8217;re moving to many other tasks. I think then we&#8217;re going to increasingly get generalization. So that kind of takes out the RL vs. pre-training side of it.</p><p>But there is a puzzle either way, which is that in pre-training we use trillions of tokens. Humans don&#8217;t see trillions of words. So there is an actual sample efficiency difference here. There is actually something different here. The models start from scratch and they need much more training. But we also see that once they&#8217;re trained, if we give them a long <a href="https://www.ibm.com/think/topics/context-window">context length</a> of a million &#8212; the only thing blocking long context is <a href="https://hazelcast.com/foundations/ai-machine-learning/machine-learning-inference/">inference</a> &#8212; they&#8217;re very good at learning and adapting within that context.
I think there&#8217;s something going on where pre-training is not like the process of humans learning, but it&#8217;s somewhere between the process of humans learning and the process of human evolution. We get many of our priors from evolution. Our brain isn&#8217;t just a blank slate. <a href="https://en.wikipedia.org/wiki/The_Blank_Slate">Whole books have been written about this.</a></p><p>The language models are much more like blank slates. They literally start as random <a href="https://www.geeksforgeeks.org/deep-learning/the-role-of-weights-and-bias-in-neural-networks/">weights</a>, whereas the human brain starts with all these regions connected to all these inputs and outputs. Maybe we should think of pre-training &#8212; and for that matter, RL as well &#8212; as something that exists in the middle space between human evolution and human on-the-spot learning. And we should think of the in-context learning that the models do as something between long-term human learning and short-term human learning.</p><p>So there&#8217;s this hierarchy. There&#8217;s evolution, there&#8217;s long-term learning, there&#8217;s short-term learning, and there&#8217;s just human reaction. The LLM phases exist along this spectrum, but not necessarily at exactly the same points. There&#8217;s no analog to some of the human modes of learning; the LLMs are falling in between the points. Does that make sense?</p><p><strong>Dwarkesh Patel</strong></p><p>Yes, although some things are still a bit confusing. For example, if the analogy is that this is like evolution so it&#8217;s fine that it&#8217;s not sample efficient, then if we&#8217;re going to get a super sample-efficient agent from <a href="https://www.lakera.ai/blog/what-is-in-context-learning">in-context learning</a>, why are we bothering to build all these RL environments?</p><p>There are companies whose work seems to be teaching models how to use this API, how to use Slack, how to use whatever.
It&#8217;s confusing to me why there&#8217;s so much emphasis on that if the kind of agent that can just learn on the fly is emerging or has already emerged.</p><p><strong>Dario Amodei</strong></p><p>I can&#8217;t speak for the emphasis of anyone else. I can only talk about how we think about it. The goal is not to teach the model every possible skill within RL, just as we don&#8217;t do that within pre-training. Within pre-training, we&#8217;re not trying to expose the model to every possible way that words could be put together. Rather, the model trains on a lot of things and then reaches generalization across pre-training.</p><p>That was the transition from GPT-1 to GPT-2 that I saw up close. The model reaches a point. I had these moments where I was like, &#8220;Oh yeah, you just give the model a list of numbers &#8212; this is the cost of the house, this is the square feet of the house &#8212; and the model completes the pattern and does linear regression.&#8221; Not great, but it does it, and it&#8217;s never seen that exact thing before.</p><p>So to the extent that we are building these RL environments, the goal is very similar to what was done five or ten years ago with pre-training. We&#8217;re trying to get a whole bunch of data, not because we want to cover a specific document or a specific skill, but because we want to generalize.</p><h3>00:12:36 - Is diffusion cope?</h3><p><strong>Dwarkesh Patel</strong></p><p>I think the framework you&#8217;re laying down obviously makes sense. We&#8217;re making progress toward <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a>. Nobody at this point disagrees we&#8217;re going to achieve AGI this century. The crux is you say we&#8217;re hitting the end of the exponential. 
Somebody else looks at this and says, &#8220;We&#8217;ve been making progress since 2012, and by 2035 we&#8217;ll have a human-like agent.&#8221;</p><p>Obviously we&#8217;re seeing in these models the kinds of things that evolution did, or that learning within a human lifetime does. I want to understand what you&#8217;re seeing that makes you think it&#8217;s one year away and not ten years away.</p><p><strong>Dario Amodei</strong></p><p>There are two claims you could make here, one stronger and one weaker. Starting with the weaker claim, when I first saw the scaling back in 2019, I wasn&#8217;t sure. This was a 50/50 thing. I thought I saw something. My claim was that this was much more likely than anyone thinks. Maybe there&#8217;s a 50% chance this happens.</p><p>On the basic hypothesis of, as you put it, within ten years we&#8217;ll get to what I call a &#8220;country of geniuses in a data center&#8221;, I&#8217;m at 90% on that. It&#8217;s hard to go much higher than 90% because the world is so unpredictable. Maybe the irreducible uncertainty puts us at 95%, where you get to things like multiple companies having internal turmoil, <a href="https://en.wikipedia.org/wiki/Chinese_unification">Taiwan gets invaded</a>, all the <a href="https://en.wikipedia.org/wiki/Semiconductor_fabrication_plant">fabs</a> get blown up by missiles.</p><p><strong>Dwarkesh Patel</strong></p><p>Now you&#8217;ve jinxed us, Dario.</p><p><strong>Dario Amodei</strong></p><p>You could construct a 5% world where things get delayed for ten years. There&#8217;s another 5% which is that I&#8217;m very confident on tasks that can be verified. With coding, except for that irreducible uncertainty, I think we&#8217;ll be there in one or two years. 
There&#8217;s no way we will not be there in ten years in terms of being able to do end-to-end coding.</p><p>My one little bit of fundamental uncertainty, even on long timescales, is about tasks that aren&#8217;t verifiable: planning a mission to Mars; doing some fundamental scientific discovery like CRISPR; writing a novel. It&#8217;s hard to verify those tasks. I am almost certain we have a reliable path to get there, but if there&#8217;s a little bit of uncertainty it&#8217;s there. On the ten-year timeline I&#8217;m at 90%, which is about as certain as you can be. I think it&#8217;s crazy to say that this won&#8217;t happen by 2035. In some sane world, it would be outside the mainstream.</p><p><strong>Dwarkesh Patel</strong></p><p>But the emphasis on verification hints to me at a lack of belief that these models generalize. If you think about humans, we&#8217;re both good at things for which we get verifiable reward and things for which we don&#8217;t.</p><p><strong>Dario Amodei</strong></p><p>No, this is why I&#8217;m almost sure. We already see substantial generalization from things that verify to things that don&#8217;t. We&#8217;re already seeing that.</p><p><strong>Dwarkesh Patel</strong></p><p>But it seems like you were emphasizing this as a spectrum that will split apart the domains in which we see more progress. That doesn&#8217;t seem like how humans get better.</p><p><strong>Dario Amodei</strong></p><p>The world in which we don&#8217;t get there is the world in which we do all the verifiable things. Many of them generalize, but we don&#8217;t fully get there. We don&#8217;t fully color in the other side of the box. It&#8217;s not a binary thing.</p><p><strong>Dwarkesh Patel</strong></p><p>Even if generalization is weak and you can only do verifiable domains, it&#8217;s not clear to me you could automate software engineering in such a world.
You are &#8220;a software engineer&#8221; in some sense, but part of being a software engineer for you involves <a href="https://www.darioamodei.com/">writing long memos</a> about your grand vision.</p><p><strong>Dario Amodei</strong></p><p>I don&#8217;t think that&#8217;s part of the job of <a href="https://en.wikipedia.org/wiki/Software_engineering">SWE</a>. That&#8217;s part of the job of the company, not SWE specifically. But SWE does involve design documents and other things like that. The models are already pretty good at writing comments. Again, I&#8217;m making much weaker claims here than I believe, to distinguish between two things. We&#8217;re already almost there for software engineering.</p><p><strong>Dwarkesh Patel</strong></p><p>By what metric? There&#8217;s one metric which is how many lines of code are written by AI. If you consider other productivity improvements in the history of software engineering, <a href="https://en.wikipedia.org/wiki/Compiler">compilers</a> write all the lines of software. There&#8217;s a difference between how many lines are written and how big the productivity improvement is. &#8220;We&#8217;re almost there&#8221; meaning&#8230; How big is the productivity improvement, not just how many lines are written by AI?</p><p><strong>Dario Amodei</strong></p><p>I actually agree with you on this. I&#8217;ve made a series of predictions on code and software engineering. I think people have repeatedly misunderstood them. Let me lay out the spectrum.</p><p>About eight or nine months ago, I said the AI model will be writing 90% of the lines of code in three to six months. That happened, at least at some places. It happened at <a href="https://en.wikipedia.org/wiki/Anthropic">Anthropic</a>, happened with many people downstream using our models. But that&#8217;s actually a very weak criterion. People thought I was saying that we won&#8217;t need 90% of the software engineers. Those things are worlds apart. 
The spectrum is: 90% of code is written by the model, 100% of code is written by the model. That&#8217;s a big difference in productivity.</p><p>90% of the end-to-end SWE tasks &#8212; including things like compiling, setting up clusters and environments, testing features, writing memos &#8212; are done by the models. 100% of today&#8217;s SWE tasks are done by the models. Even when that happens, it doesn&#8217;t mean software engineers are out of a job. There are new higher-level things they can do, where they can manage. Then further down the spectrum, there&#8217;s 90% less demand for SWEs, which I think will happen, but this is a spectrum.</p><p>I wrote about it in <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">&#8220;The Adolescence of Technology&#8221;</a> where I went through this kind of spectrum with farming. I actually totally agree with you on that. These are very different benchmarks from each other, but we&#8217;re proceeding through them super fast.</p><p><strong>Dwarkesh Patel</strong></p><p>Part of your vision is that going from 90 to 100 is going to happen fast, and that it leads to huge productivity improvements. But what I notice is that even in greenfield projects where people start with <a href="https://claude.com/product/claude-code">Claude Code</a> or something, people report starting a lot of projects&#8230; Do we see in the world out there a renaissance of software, all these new features that wouldn&#8217;t exist otherwise? At least so far, it doesn&#8217;t seem like we see that.</p><p>So that does make me wonder. Even if I never had to intervene with Claude Code, the world is complicated. Jobs are complicated. Closing the loop on self-contained systems, whether it&#8217;s just writing software or something, how much broader would the gains be just from that?
Maybe that should dilute our estimation of the &#8220;country of geniuses&#8221;.</p><p><strong>Dario Amodei</strong></p><p>I simultaneously agree with you that it&#8217;s a reason why these things don&#8217;t happen instantly, but at the same time, I think the effect is gonna be very fast. You could have these two poles. One is that AI is not going to make progress. It&#8217;s slow. It&#8217;s going to take forever to diffuse within the economy. <a href="https://en.wikipedia.org/wiki/Diffusion_(business)">Economic diffusion</a> has become one of these buzzwords that&#8217;s a reason why we&#8217;re not going to make AI progress, or why AI progress doesn&#8217;t matter.</p><p>The other axis is that we&#8217;ll get <a href="https://en.wikipedia.org/wiki/Recursive_self-improvement">recursive self-improvement</a>, the whole thing. Can&#8217;t you just draw an exponential line on the curve? We&#8217;re going to have <a href="https://en.wikipedia.org/wiki/Dyson_sphere">Dyson spheres</a> around the sun so many nanoseconds after we get recursive. I&#8217;m completely caricaturing the view here, but there are these two extremes.</p><p>But what we&#8217;ve seen from the beginning, at least if you look within Anthropic, there&#8217;s this bizarre 10x per year growth in revenue that we&#8217;ve seen. So in 2023, it was zero to $100 million. In 2024, it was $100 million to $1 billion. In 2025, it was $1 billion to $9-10 billion.</p><p><strong>Dwarkesh Patel</strong></p><p>You guys should have just bought a billion dollars of your own products so you could just&#8230;</p><p><strong>Dario Amodei</strong></p><p>And the first month of this year, that exponential is... You would think it would slow down, but we added another few billion to revenue in January. Obviously that curve can&#8217;t go on forever. The GDP is only so large. I would even guess that it bends somewhat this year, but that is a fast curve. That&#8217;s a really fast curve.
I would bet it stays pretty fast even as the scale goes to the entire economy.</p><p>So I think we should be thinking about this middle world where things are extremely fast, but not instant, where they take time because of economic diffusion, because of the need to close the loop. Because it&#8217;s fiddly: &#8220;I have to do <a href="https://en.wikipedia.org/wiki/Change_management">change management</a> within my enterprise&#8230; I set this up, but I have to change the security permissions on this in order to make it actually work&#8230; I had this old piece of software that checks the model before it&#8217;s compiled and released and I have to rewrite it. Yes, the model can do that, but I have to tell the model to do that. It has to take time to do that.&#8221;</p><p>So I think everything we&#8217;ve seen so far is compatible with the idea that there&#8217;s one fast exponential that&#8217;s the capability of the model. Then there&#8217;s another fast exponential that&#8217;s downstream of that, which is the diffusion of the model into the economy. Not instant, not slow, much faster than any previous technology, but it has its limits. When I look inside Anthropic, when I look at our customers: fast adoption, but not infinitely fast.</p><p><strong>Dwarkesh Patel</strong></p><p>Can I try a hot take on you?</p><p><strong>Dario Amodei</strong></p><p>Yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>I feel like diffusion is cope that people say. When the model isn&#8217;t able to do something, they&#8217;re like, &#8220;oh, but it&#8217;s a diffusion issue.&#8221; But then you should use the comparison to humans. You would think that the inherent advantages that AIs have would make diffusion a much easier problem for new AIs getting onboarded than new humans getting onboarded. An AI can read your entire Slack and your drive in minutes. They can share all the knowledge that the other copies of the same instance have. 
You don&#8217;t have this adverse selection problem when you&#8217;re hiring AI, so you can just hire copies of a vetted AI model.</p><p>Hiring a human is so much more of a hassle. People hire humans all the time. We pay humans upwards of $50 trillion in wages because they&#8217;re useful, even though in principle it would be much easier to integrate AIs into the economy than it is to hire humans. The diffusion story doesn&#8217;t really explain it.</p><p><strong>Dario Amodei</strong></p><p>I think diffusion is very real and doesn&#8217;t exclusively have to do with limitations on the AI models. Again, there are people who use diffusion as kind of a buzzword to say this isn&#8217;t a big deal. I&#8217;m not talking about that. I&#8217;m not talking about how AI will diffuse at the speed of previous technologies. I think AI will diffuse much faster than previous technologies have, but not infinitely fast.</p><p>I&#8217;ll just give an example of this. There&#8217;s Claude Code. Claude Code is extremely easy to set up. If you&#8217;re a developer, you can just start using Claude Code. There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or a developer at a startup.</p><p>We do everything we can to promote it. We sell Claude Code to enterprises. Big enterprises, big financial companies, big pharmaceutical companies, all of them are adopting Claude Code much faster than enterprises typically adopt new technology. But again, it takes time.</p><p>Any given feature or any given product, like Claude Code or <a href="https://claude.com/product/cowork">Cowork</a>, will get adopted by the individual developers who are on Twitter all the time, by the Series A startups, many months faster than they will get adopted by a large enterprise that does food sales. There are just a number of factors. You have to go through legal, you have to provision it for everyone.
It has to pass security and compliance.</p><p>The leaders of the company who are further away from the AI revolution are forward-looking, but they have to say, &#8220;Oh, it makes sense for us to spend 50 million. This is what this Claude Code thing is. This is why it helps our company. This is why it makes us more productive.&#8221; Then they have to explain to the people two levels below. They have to say, &#8220;Okay, we have 3,000 developers. Here&#8217;s how we&#8217;re going to roll it out to our developers.&#8221; We have conversations like this every day.</p><p>We are doing everything we can to make Anthropic&#8217;s revenue grow 20 or 30x a year instead of 10x a year. Again, many enterprises are just saying, &#8220;This is so productive. We&#8217;re going to take shortcuts in our usual procurement process.&#8221; They&#8217;re moving much faster than when we tried to sell them just the ordinary API, which many of them use. Claude Code is a more compelling product, but it&#8217;s not an infinitely compelling product.</p><p>I don&#8217;t think even AGI or powerful AI or &#8220;country of geniuses in a data center&#8221; will be an infinitely compelling product. It will be a compelling product enough maybe to get 3-5x, or 10x, a year of growth, even when you&#8217;re in the hundreds of billions of dollars, which is extremely hard to do and has never been done in history before, but not infinitely fast.</p><p><strong>Dwarkesh Patel</strong></p><p>I buy that it would be a slight slowdown. Maybe this is not your claim, but sometimes people talk about this like, &#8220;Oh, the capabilities are there, but because of diffusion... 
otherwise we&#8217;re basically at AGI&#8221;.</p><p><strong>Dario Amodei</strong></p><p>I don&#8217;t believe we&#8217;re basically at AGI.</p><p><strong>Dwarkesh Patel</strong></p><p>I think if you had the &#8220;country of geniuses in a data center&#8221;...</p><p><strong>Dario Amodei</strong></p><p>If we had the &#8220;country of geniuses in a data center&#8221;, we would know it. We would know it if you had the &#8220;country of geniuses in a data center&#8221;. Everyone in this room would know it. Everyone in Washington would know it. People in rural parts might not know it, but we would know it. We don&#8217;t have that now. That is very clear.</p><h3>00:29:42 - Is continual learning necessary? How will it be solved?</h3><p><strong>Dwarkesh Patel</strong></p><p>Coming back to concrete prediction&#8230; Because there are so many different things to disambiguate, it can be easy to talk past each other when we&#8217;re talking about capabilities. For example, when I interviewed you three years ago, I asked you a prediction about what we should expect three years from now. You were right. You said, &#8220;We should expect systems which, if you talk to them for the course of an hour, it&#8217;s hard to tell them apart from a generally well-educated human.&#8221;</p><p>I think you were right about that. I think spiritually I feel unsatisfied because my internal expectation was that such a system could automate large parts of white-collar work. So it might be more productive to talk about the actual end capabilities you want from such a system.</p><p><strong>Dario Amodei</strong></p><p>I will basically tell you where I think we are.</p><p><strong>Dwarkesh Patel</strong></p><p>Let me ask a very specific question so that we can figure out exactly what kinds of capabilities we should think about soon. 
Maybe I&#8217;ll ask about it in the context of a job I understand well, not because it&#8217;s the most relevant job, but just because I can evaluate the claims about it.</p><p>Take video editors. I have video editors. Part of their job involves learning about our audience&#8217;s preferences, learning about my preferences and tastes, and the different trade-offs we have. They&#8217;re, over the course of many months, building up this understanding of context. When should we expect an AI system that can pick up, on the job and on the fly, the skill and ability they have six months in?</p><p><strong>Dario Amodei</strong></p><p>I guess what you&#8217;re talking about is that we&#8217;re doing this interview for three hours. Someone&#8217;s going to come in, someone&#8217;s going to edit it. They&#8217;re going to be like, &#8220;Oh, I don&#8217;t know, Dario scratched his head and we could edit that out.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>&#8220;Magnify that.&#8221;</p><p><strong>Dario Amodei</strong></p><p>&#8220;There was this long discussion that is less interesting to people. There&#8217;s another thing that&#8217;s more interesting to people, so let&#8217;s make this edit.&#8221;</p><p>I think the &#8220;country of geniuses in a data center&#8221; will be able to do that. The way it will be able to do that is it will have <a href="https://www.anthropic.com/news/developing-computer-use">general control of a computer screen</a>. You&#8217;ll be able to feed this in. It&#8217;ll be able to also use the computer screen to go on the web, look at all your previous interviews, look at what people are saying on Twitter in response to your interviews, talk to you, ask you questions, talk to your staff, look at the history of edits that you did, and from that, do the job.</p><p>I think that&#8217;s dependent on several things.
I think this is one of the things that&#8217;s actually blocking deployment: getting to the point on computer use where the models are really masters at using the computer.</p><p>We&#8217;ve seen this climb in benchmarks, and benchmarks are always imperfect measures. But I think when we first released computer use a year and a quarter ago, <a href="https://os-world.github.io/">OSWorld</a> was at maybe 15%. I don&#8217;t remember exactly, but we&#8217;ve climbed from that to 65-70%. There may be harder measures as well, but I think computer use has to pass a point of reliability.</p><p><strong>Dwarkesh Patel</strong></p><p>Can I just follow up on that before you move on to the next point? For years, I&#8217;ve been trying to build different internal LLM tools for myself. Often I have these text-in, text-out tasks, which should be dead center in the repertoire of these models. Yet I still hire humans to do them.</p><p>If it&#8217;s something like, &#8220;identify what the best clips would be in this transcript&#8221;, maybe the LLMs do a seven-out-of-ten job on them. But there&#8217;s not this ongoing way I can engage with them to help them get better at the job the way I could with a human employee. That missing ability, even if you solve computer use, would still block my ability to offload an actual job to them.</p><p><strong>Dario Amodei</strong></p><p>This gets back to what we were talking about before with learning on the job. It&#8217;s very interesting. I think with the <a href="https://en.wikipedia.org/wiki/AI-assisted_software_development">coding agents</a>, I don&#8217;t think people would say that learning on the job is what is preventing the coding agents from doing everything end to end. They keep getting better. 
We have engineers at Anthropic who don&#8217;t write any code.</p><p>When I look at the productivity, to your previous question, we have folks who say, &#8220;This <a href="https://modal.com/gpu-glossary/device-software/kernel">GPU kernel</a>, this chip, I used to write it myself. I just have Claude do it.&#8221; There&#8217;s this enormous improvement in productivity.</p><p>When I see Claude Code, lack of familiarity with the codebase, or a feeling that the model hasn&#8217;t worked at the company for a year, is not high up on the list of complaints I see. I think what I&#8217;m saying is that we&#8217;re kind of taking a different path.</p><p><strong>Dwarkesh Patel</strong></p><p>Don&#8217;t you think with coding that&#8217;s because there is an external scaffold of memory which exists instantiated in the codebase? I don&#8217;t know how many other jobs have that. Coding made fast progress precisely because it has this unique advantage that other economic activity doesn&#8217;t.</p><p><strong>Dario Amodei</strong></p><p>But when you say that, what you&#8217;re implying is that by reading the codebase into the context, I have everything that the human needed to learn on the job. So that would be an example of&#8212;whether it&#8217;s written or not, whether it&#8217;s available or not&#8212;a case where everything you needed to know you got from the context window. What we think of as learning&#8212;&#8220;I started this job, it&#8217;s going to take me six months to understand the code base&#8221;&#8212;the model just did it in the context.</p><p><strong>Dwarkesh Patel</strong></p><p>I honestly don&#8217;t know how to think about this because there are people who qualitatively report what you&#8217;re saying.
I&#8217;m sure you saw last year, <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">there was a major study</a> where they had experienced developers try to close <a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests">pull requests</a> in repositories that they were familiar with. Those developers reported an uplift. They reported that they felt more productive with the use of these models. But in fact, if you look at their output and how much was actually merged back in, there was a 20% downlift. They were less productive as a result of using these models.</p><p>So I&#8217;m trying to square the qualitative productivity people feel with these models against, 1) at a macro level, where is this renaissance of software? And 2) when people do these independent evaluations, why are we not seeing the productivity benefits we would expect?</p><p><strong>Dario Amodei</strong></p><p>Within Anthropic, this is just really unambiguous. We&#8217;re under an incredible amount of commercial pressure and make it even harder for ourselves because of all this <a href="https://en.wikipedia.org/wiki/AI_safety">safety</a> work we do, which I think is more than other companies do.</p><p>The pressure to survive economically while also keeping our values is just incredible. We&#8217;re trying to keep this 10x revenue curve going. There is zero time for bullshit. There is zero time for feeling like we&#8217;re productive when we&#8217;re not. These tools make us a lot more productive.</p><p>Why do you think we&#8217;re <a href="https://www.wired.com/story/anthropic-revokes-openais-access-to-claude/">concerned about competitors using the tools</a>? Because we think we&#8217;re ahead of the competitors. We wouldn&#8217;t be going through all this trouble if this were secretly reducing our productivity.
We see the end productivity every few months in the form of model launches. There&#8217;s no kidding yourself about this. The models make you more productive.</p><p><strong>Dwarkesh Patel</strong></p><p>1) People feeling like they&#8217;re productive is exactly what studies like this predict. But 2) if I just look at the end output, obviously you guys are making fast progress.</p><p>But the idea was supposed to be that with recursive self-improvement, you make a better AI, the AI helps you build a better next AI, et cetera, et cetera. What I see instead&#8212;if I look at you, OpenAI, DeepMind&#8212;is that people are just shifting around the podium every few months.</p><p>Maybe you think that stops because you&#8217;ve won or whatever. But why are we not seeing the person with the best coding model have this lasting advantage if in fact there are these enormous productivity gains from the latest coding model?</p><p><strong>Dario Amodei</strong></p><p>I think my model of the situation is that there&#8217;s an advantage that&#8217;s gradually growing. I would say right now the coding models give maybe, I don&#8217;t know, a 15-20% total factor speed up. That&#8217;s my view. Six months ago, it was maybe 5%. So it didn&#8217;t matter. 5% doesn&#8217;t register. It&#8217;s now just getting to the point where it&#8217;s one of several factors that kind of matters. That&#8217;s going to keep speeding up.</p><p>I think six months ago, there were several companies that were at roughly the same point because this wasn&#8217;t a notable factor, but I think it&#8217;s starting to speed up more and more. I would also say there are multiple companies that write models that are used for code and we&#8217;re not perfectly good at preventing some of these other companies from using our models internally.
So I think everything we&#8217;re seeing is consistent with this kind of snowball model.</p><p>Again, my theme in all of this is that all of this is soft <a href="https://www.lesswrong.com/w/ai-takeoff">takeoff</a>, soft, smooth exponentials, although the exponentials are relatively steep. So we&#8217;re seeing this snowball gather momentum where it&#8217;s like 10%, 20%, 25%, 40%. As you go, <a href="https://en.wikipedia.org/wiki/Amdahl%27s_law">Amdahl&#8217;s law</a>, you have to get all the things that are preventing you from closing the loop out of the way. But this is one of the biggest priorities within Anthropic.</p><p><strong>Dwarkesh Patel</strong></p><p>Stepping back, before in the stack we were talking about when do we get this on-the-job learning? It seems like the point you were making on the coding thing is that we actually don&#8217;t need on-the-job learning. You can have tremendous productivity improvements, you can have potentially trillions of dollars of revenue for AI companies, without this basic human ability to learn on the job. Maybe that&#8217;s not your claim; you should clarify.</p><p>But in most domains of economic activity, people say, &#8220;I hired somebody, they weren&#8217;t that useful for the first few months, and then over time they built up the context and understanding.&#8221; It&#8217;s actually hard to define what we&#8217;re talking about here. But they got something and now they&#8217;re a powerhouse and they&#8217;re so valuable to us. If AI doesn&#8217;t develop this ability to learn on the fly, I&#8217;m a bit skeptical that we&#8217;re going to see huge changes to the world without that ability.</p><p><strong>Dario Amodei</strong></p><p>I think two things here. There&#8217;s the state of the technology right now. Again, we have these two stages. We have the pre-training and RL stage where you throw a bunch of data and tasks into the models and then they generalize.
So it&#8217;s like learning, but it&#8217;s learning from more data and not learning over one human or one model&#8217;s lifetime. So again, this is situated between evolution and human learning. But once you learn all those skills, you have them.</p><p>Just like with pre-training, the models simply know more: if I look at a pre-trained model, it knows more about the history of samurai in Japan than I do. It knows more about baseball than I do. It knows more about <a href="https://en.wikipedia.org/wiki/Low-pass_filter">low-pass filters</a> and electronics, all of these things. Its knowledge is way broader than mine. So I think even just that may get us to the point where the models are better at everything.</p><p>We also have, again, just with scaling the kind of existing setup, the in-context learning. I would describe it as kind of like human on-the-job learning, but a little weaker and a little more short-term. You look at in-context learning and if you give the model a bunch of examples it does get it. There&#8217;s real learning that happens in context. A million <a href="https://blogs.nvidia.com/blog/ai-tokens-explained/">tokens</a> is a lot. That can be days of human learning. If you think about the model reading a million words, how long would it take me to read a million words? Days or weeks at least.</p><p>So you have these two things. I think these two things within the existing paradigm may just be enough to get you the &#8220;country of geniuses in a data center&#8221;. I don&#8217;t know for sure, but I think they&#8217;re going to get you a large fraction of it. There may be gaps, but I certainly think that just as things are, this is enough to generate trillions of dollars of revenue. That&#8217;s one.</p><p>Two is this idea of continual learning, this idea of a single model learning on the job. I think we&#8217;re working on that too. There&#8217;s a good chance that in the next year or two, we also solve that.
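</p><p>(The &#8220;million tokens can be days of human learning&#8221; claim above is simple arithmetic; the conversion rates in this sketch are illustrative assumptions, not figures from the conversation.)</p>

```python
# Rough arithmetic behind "a million tokens can be days of human learning".
# Both rates below are illustrative assumptions, not measured figures:
# roughly 0.75 words per token, roughly 250 words per minute of reading.
TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_MINUTE = 250

words = TOKENS * WORDS_PER_TOKEN
hours = words / WORDS_PER_MINUTE / 60
print(f"~{hours:.0f} hours of reading")  # on the order of a working week
```

<p>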
Again, I think you get most of the way there without it. The trillions-of-dollars-a-year market, maybe all of the national security implications and the safety implications that I wrote about in &#8220;Adolescence of Technology&#8221; can happen without it. But we, and I imagine others, are working on it. There&#8217;s a good chance that we will get there within the next year or two.</p><p>There are a bunch of ideas. I won&#8217;t go into all of them in detail, but one is just to make the context longer. There&#8217;s nothing preventing longer contexts from working. You just have to train at longer contexts and then learn to serve them at inference. Both of those are engineering problems that we are working on and I would assume others are working on them as well.</p><p><strong>Dwarkesh Patel</strong></p><p>This context length increase, it seemed like there was a period from 2020 to 2023 where from <a href="https://en.wikipedia.org/wiki/GPT-3">GPT-3</a> to <a href="https://developers.openai.com/api/docs/models/gpt-4-turbo">GPT-4 Turbo</a>, there was an increase from a 2,000-token context length to 128K. I feel like for the two-ish years since then, we&#8217;ve been in the same-ish ballpark.</p><p>When context lengths get much longer than that, people report qualitative degradation in the ability of the model to consider that full context. So I&#8217;m curious what you&#8217;re internally seeing that makes you think, &#8220;10-million-token contexts, 100-million-token contexts, to get six months of human learning and context-building&#8221;.</p><p><strong>Dario Amodei</strong></p><p>This isn&#8217;t a research problem. This is an engineering and inference problem. If you want to serve long context, you have to store your entire <a href="https://huggingface.co/blog/not-lain/kv-caching">KV cache</a>. It&#8217;s difficult to store all the memory in the GPUs, to juggle the memory around. I don&#8217;t even know the details.
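</p><p>(The KV-cache constraint mentioned above can be made concrete with back-of-envelope arithmetic. Every model parameter in this sketch is an illustrative assumption, not any real model&#8217;s configuration.)</p>

```python
# Back-of-envelope memory cost of serving a long context: the "store your
# entire KV cache" problem. All model parameters here are illustrative
# assumptions, not any real model's configuration.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Keys and values each keep one vector per token, per layer, per KV head,
    # hence the leading factor of 2; bytes_per_elem=2 assumes fp16/bf16 storage.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# A hypothetical mid-size model serving a single 1M-token sequence:
total = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128, seq_len=1_000_000)
print(f"{total / 2**30:.0f} GiB per sequence")  # hundreds of GiB for one sequence
```

<p>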
At this point, this is at a level of detail that I&#8217;m no longer able to follow, although I knew it in the GPT-3 era. &#8220;These are the weights, these are the activations you have to store&#8230;&#8221;</p><p>But these days the whole thing is flipped because we have <a href="https://en.wikipedia.org/wiki/Mixture_of_experts">MoE</a> models and all of that. Regarding this degradation you&#8217;re talking about, without getting too specific, there are two things. There&#8217;s the context length you train at and there&#8217;s a context length that you serve at. If you train at a small context length and then try to serve at a long context length, maybe you get these degradations. It&#8217;s better than nothing, you might still offer it, but you get these degradations. Maybe it&#8217;s harder to train at a long context length.</p><p><strong>Dwarkesh Patel</strong></p><p>I want to, at the same time, ask about maybe some rabbit holes. Wouldn&#8217;t you expect that if you had to train on a longer context length, that would mean that you&#8217;re able to get fewer samples in for the same amount of compute? Maybe it&#8217;s not worth diving deep on that.</p><p>I want to get an answer to the bigger-picture question. The point where I don&#8217;t feel a preference for a human editor that&#8217;s been working for me for six months over an AI that&#8217;s been working with me for six months: what year do you predict that will be the case?</p><p><strong>Dario Amodei</strong></p><p>My guess for that is there are a lot of problems where basically we can do this when we have the &#8220;country of geniuses in a data center&#8221;. My picture for that, if you made me guess, is one to two years, maybe one to three years. It&#8217;s really hard to tell. I have a strong view&#8212;99%, 95%&#8212;that all this will happen in 10 years. I think that&#8217;s just a super safe bet.
I have a hunch&#8212;this is more like a 50/50 thing&#8212;that it&#8217;s going to be more like one to two, maybe more like one to three.</p><p><strong>Dwarkesh Patel</strong></p><p>So one to three years. Country of geniuses, and the slightly less economically valuable task of editing videos.</p><p><strong>Dario Amodei</strong></p><p>It seems pretty economically valuable, let me tell you. It&#8217;s just there are a lot of use cases like that. There are a lot of similar ones.</p><h3>00:46:20 - If AGI is imminent, why not buy more compute?</h3><p><strong>Dwarkesh Patel</strong></p><p>So you&#8217;re predicting that within one to three years. And then, generally, Anthropic has <a href="https://www.anthropic.com/news/anthropic-s-recommendations-ostp-u-s-ai-action-plan">predicted</a> that by late &#8217;26 or early &#8217;27 we will have AI systems that &#8220;have the ability to navigate interfaces available to humans doing digital work today, intellectual capabilities matching or exceeding that of Nobel Prize winners, and the ability to interface with the physical world&#8221;. You gave <a href="https://www.nytimes.com/2025/12/07/business/dealbook/dario-amodei-dealbook.html">an interview two months ago with </a><em><a href="https://www.nytimes.com/2025/12/07/business/dealbook/dario-amodei-dealbook.html">DealBook</a></em> where you were emphasizing your company&#8217;s more responsible compute scaling as compared to your competitors.</p><p>I&#8217;m trying to square these two views. If you really believe that we&#8217;re going to have a country of geniuses, you want as big a data center as you can get. There&#8217;s no reason to slow down. The <a href="https://en.wikipedia.org/wiki/Total_addressable_market">TAM</a> of an AI Nobel Prize winner&#8212;one that can actually do everything a Nobel Prize winner can do&#8212;is trillions of dollars.
So I&#8217;m trying to square this conservatism, which seems rational if you have more moderate timelines, with your stated views about progress.</p><p><strong>Dario Amodei</strong></p><p>It actually all fits together. We go back to this fast, but not infinitely fast, diffusion. Let&#8217;s say that we&#8217;re making progress at this rate. The technology is making progress this fast. I have very high conviction that we&#8217;re going to get there within a few years. I have a hunch that we&#8217;re going to get there within a year or two. So there&#8217;s a little uncertainty on the technical side, but pretty strong confidence that it won&#8217;t be off by much.</p><p>What I&#8217;m less certain about is, again, the economic diffusion side. I really do believe that we could have models that are a country of geniuses in the data center in one to two years. One question is: How many years after that do the trillions in revenue start rolling in? I don&#8217;t think it&#8217;s guaranteed that it&#8217;s going to be immediate. It could be one year, it could be two years, I could even stretch it to five years although I&#8217;m skeptical of that.</p><p>So we have this uncertainty. Even if the technology goes as fast as I suspect that it will, we don&#8217;t know exactly how fast it&#8217;s going to drive revenue. We know it&#8217;s coming, but with the way you buy these data centers, if you&#8217;re off by a couple of years, that can be ruinous. It is just like how I wrote in &#8220;<a href="https://darioamodei.com/essay/machines-of-loving-grace">Machines of Loving Grace</a>&#8221;. I said I think we might get this powerful AI, this &#8220;country of geniuses in the data center&#8221;. That description you gave comes from &#8220;Machines of Loving Grace&#8221;. I said we&#8217;ll get that in 2026, maybe 2027. Again, that is my hunch. I wouldn&#8217;t be surprised if I&#8217;m off by a year or two, but that is my hunch.</p><p>Let&#8217;s say that happens.
That&#8217;s the starting gun. How long does it take to cure all the diseases? That&#8217;s one of the ways that drives a huge amount of economic value. You cure every disease. There&#8217;s a question of how much of that goes to the pharmaceutical company or the AI company, but there&#8217;s an enormous consumer surplus because&#8212;assuming we can get access for everyone, which I care about greatly&#8212;we cure all of these diseases.</p><p>How long does it take? You have to do the biological discovery, you have to manufacture the new drug, you have to go through the regulatory process. We <a href="https://en.wikipedia.org/wiki/Operation_Warp_Speed">saw this with vaccines and COVID</a>. We got the vaccine out to everyone, but it took a year and a half. My question is: How long does it take to get the cure for everything&#8212;which the AI genius can in theory invent&#8212;out to everyone? How long from when that AI first exists in the lab to when diseases have actually been cured for everyone?</p><p>We&#8217;ve had a polio vaccine for 50 years. We&#8217;re still trying to eradicate it in the most remote corners of Africa. The <a href="https://en.wikipedia.org/wiki/Gates_Foundation">Gates Foundation</a> is trying as hard as they can. Others are trying as hard as they can. But that&#8217;s difficult. Again, I don&#8217;t expect most of the economic diffusion to be as difficult as that. That&#8217;s the most difficult case. But there&#8217;s a real dilemma here. Where I&#8217;ve settled on it is that it will be faster than anything we&#8217;ve seen in the world, but it still has its limits.</p><p>So when we go to buying data centers, again, the curve I&#8217;m looking at is: we&#8217;ve had a 10x increase every year. At the beginning of this year, we&#8217;re looking at $10 billion in annualized revenue. We have to decide how much compute to buy.
It takes a year or two to actually build out the data centers, to reserve the data centers.</p><p>Basically I&#8217;m saying, &#8220;In 2027, how much compute do I get?&#8221; I could assume that the revenue will continue growing 10x a year, so it&#8217;ll be $100 billion at the end of 2026 and $1 trillion at the end of 2027. Actually it would be $5 trillion of compute because it would be $1 trillion a year for five years. I could buy $1 trillion of compute that starts at the end of 2027. If my revenue is not $1 trillion, if it&#8217;s even $800 billion, there&#8217;s no force on earth, there&#8217;s no hedge on earth that could stop me from going bankrupt if I buy that much compute.</p><p>Even though a part of my brain wonders if it&#8217;s going to keep growing 10x, I can&#8217;t buy $1 trillion a year of compute in 2027. If I&#8217;m just off by a year in that rate of growth, or if the growth rate is 5x a year instead of 10x a year, then you go bankrupt. So you end up in a world where you&#8217;re supporting hundreds of billions, not trillions. You accept some risk that there&#8217;s so much demand that you can&#8217;t support the revenue, and you accept some risk that you got it wrong and it&#8217;s still slow.</p><p>When I talked about behaving responsibly, what I meant actually was not the absolute amount. I think it is true we&#8217;re spending somewhat less than some of the other players. It&#8217;s actually the other things, like have we been thoughtful about it or are we YOLOing and saying, &#8220;We&#8217;re going to do $100 billion here or $100 billion there&#8221;? I get the impression that some of the other companies have not written down the spreadsheet, that they don&#8217;t really understand the risks they&#8217;re taking. They&#8217;re just doing stuff because it sounds cool.</p><p>We&#8217;ve thought carefully about it. We&#8217;re an enterprise business. Therefore, we can rely more on revenue. It&#8217;s less fickle than consumer.
We have better margins, which is the buffer between buying too much and buying too little. I think we bought an amount that allows us to capture pretty strong upside worlds. It won&#8217;t capture the full 10x a year. Things would have to go pretty badly for us to be in financial trouble. So we&#8217;ve thought carefully and we&#8217;ve made that balance. That&#8217;s what I mean when I say that we&#8217;re being responsible.</p><p><strong>Dwarkesh Patel</strong></p><p>So it seems like it&#8217;s possible that we actually just have different definitions of the &#8220;country of geniuses in a data center&#8221;. Because when I think of actual human geniuses, an actual country of human geniuses in a data center, I would happily buy $5 trillion worth of compute to run an actual country of human geniuses in a data center.</p><p>Let&#8217;s say JPMorgan or Moderna or whatever doesn&#8217;t want to use them. I&#8217;ve got a country of geniuses. They&#8217;ll start their own company. If they can&#8217;t start their own company and they&#8217;re bottlenecked by clinical trials&#8230; It is worth stating that most clinical trials fail because the drug doesn&#8217;t work. There&#8217;s no efficacy.</p><p><strong>Dario Amodei</strong></p><p>I make exactly that point in &#8220;Machines of Loving Grace&#8221;. I say the clinical trials are going to go much faster than we&#8217;re used to, but not infinitely fast.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, and then suppose it takes a year for the clinical trials to work out so that you&#8217;re getting revenue from that and can make more drugs. Okay, well, you&#8217;ve got a country of geniuses and you&#8217;re an AI lab. You could use many more AI researchers. You also think there are these self-reinforcing gains from smart people working on AI tech.
You can have the data center working on AI progress.</p><p><strong>Dario Amodei</strong></p><p>Are there substantially more gains from buying $1 trillion a year of compute versus $300 billion a year of compute?</p><p><strong>Dwarkesh Patel</strong></p><p>If your competitor is buying a trillion, yes there is.</p><p><strong>Dario Amodei</strong></p><p>Well, no, there&#8217;s some gain, but then again, there&#8217;s this chance that they go bankrupt before. Again, if you&#8217;re off by only a year, you destroy yourselves. That&#8217;s the balance. We&#8217;re buying a lot. We&#8217;re buying a hell of a lot. We&#8217;re buying an amount that&#8217;s comparable to what the biggest players in the game are buying.</p><p>But if you&#8217;re asking me, &#8220;Why haven&#8217;t we signed $10 trillion of compute starting in mid-2027?&#8221;... First of all, it can&#8217;t be produced. There isn&#8217;t that much in the world. But second, what if the country of geniuses comes, but it comes in mid-2028 instead of mid-2027? You go bankrupt.</p><p><strong>Dwarkesh Patel</strong></p><p>So if your projection is one to three years, it seems like you should want $10 trillion of compute by 2029 at the latest? Even in the longest version of the timelines you state, the compute you are ramping up to build doesn&#8217;t seem in accordance.</p><p><strong>Dario Amodei</strong></p><p>What makes you think that?</p><p><strong>Dwarkesh Patel</strong></p><p>Human wages, let&#8217;s say, are on the order of $50 trillion a year&#8212;</p><p><strong>Dario Amodei</strong></p><p>So I won&#8217;t talk about Anthropic in particular, but if you talk about the industry, the amount of compute the industry is building this year is probably, call it, 10-15 gigawatts. It goes up by roughly 3x a year. So next year&#8217;s 30-40 gigawatts. 2028 might be 100 gigawatts. 2029 might be like 300 gigawatts. 
I&#8217;m doing the math in my head, but each gigawatt costs maybe $10 billion, on the order of $10-15 billion a year.</p><p>You put that all together and you&#8217;re getting about what you described. You&#8217;re getting exactly that. You&#8217;re getting multiple trillions a year by 2028 or 2029. You&#8217;re getting exactly what you predict.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s for the industry.</p><p><strong>Dario Amodei</strong></p><p>That&#8217;s for the industry, that&#8217;s right.</p><p><strong>Dwarkesh Patel</strong></p><p>Suppose Anthropic&#8217;s compute keeps 3x-ing a year, and then by 2027-28, you have 10 gigawatts. Multiply that by, as you say, $10 billion. So then it&#8217;s like $100 billion a year. But then you&#8217;re saying the TAM by 2028 is $200 billion.</p><p><strong>Dario Amodei</strong></p><p>Again, I don&#8217;t want to give exact numbers for Anthropic, but these numbers are too small.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, interesting.</p><h3>00:58:49 - How will AI labs actually make profit?</h3><p><strong>Dwarkesh Patel</strong></p><p>You&#8217;ve told investors that you plan to be profitable starting in 2028. This is the year when we&#8217;re potentially getting the country of geniuses in a data center. This is now going to unlock all this progress in medicine and health and new technologies. Wouldn&#8217;t this be exactly the time when you&#8217;d want to reinvest in the business and build bigger &#8220;countries&#8221; so they can make more discoveries?</p><p><strong>Dario Amodei</strong></p><p>Profitability is this kind of weird thing in this field. I don&#8217;t think profitability here is actually a measure of spending down versus investing in the business. Let&#8217;s just take a model of this.
I actually think profitability happens when you&#8217;ve underestimated the amount of demand you were going to get, and loss happens when you&#8217;ve overestimated the amount of demand you were going to get, because you&#8217;re buying the data centers ahead of time.</p><p>Think about it this way. Again, these are stylized facts. These numbers are not exact. I&#8217;m just trying to make a toy model here. Let&#8217;s say half of your compute is for training and half of your compute is for inference. The inference has some gross margin that&#8217;s more than 50%.</p><p>So what that means is that if you were in a steady state&#8212;you build a data center and you know exactly the demand you&#8217;re getting&#8212;you would get a certain amount of revenue. Let&#8217;s say you pay $100 billion a year for compute. On $50 billion a year of that you support $150 billion of revenue. The other $50 billion is used for training. Basically you&#8217;re profitable and you make $50 billion of profit. Those are the economics of the industry&#8212;not today, but where we project them to be in a year or two.</p><p>The only thing that makes that not the case is if you get less demand than $50 billion. Then you have more than 50% of your data center for research and you&#8217;re not profitable. So you train stronger models, but you&#8217;re not profitable. If you get more demand than you thought, then research gets squeezed, but you&#8217;re kind of able to support more inference and you&#8217;re more profitable.</p><p>Maybe I&#8217;m not explaining it well, but the thing I&#8217;m trying to say is that you decide the amount of compute first. Then you have some target split of inference versus training, but that gets determined by demand. It doesn&#8217;t get determined by you.</p><p><strong>Dwarkesh Patel</strong></p><p>What I&#8217;m hearing is the reason you&#8217;re predicting profit is that you are systematically underinvesting in compute?</p><p><strong>Dario Amodei</strong></p><p>No, no, no.
I&#8217;m saying it&#8217;s hard to predict. These things about 2028 and when it will happen, that&#8217;s our attempt to do the best we can with investors. All of this stuff is really uncertain because of the cone of uncertainty. We could be profitable in 2026 if the revenue grows fast enough. If we overestimate or underestimate the next year, that could swing wildly.</p><p>What I&#8217;m trying to get at is that you have a model in your head of a business that invests, invests, invests, gets scale and then becomes profitable. There&#8217;s a single point at which things turn around. I don&#8217;t think the economics of this industry work that way.</p><p><strong>Dwarkesh Patel</strong></p><p>I see. So if I&#8217;m understanding correctly, you&#8217;re saying that because of the discrepancy between the amount of compute we should have gotten and the amount of compute we got, we were sort of forced to make profit. But that doesn&#8217;t mean we&#8217;re going to continue making profit. We&#8217;re going to reinvest the money because now AI has made so much progress and we want a bigger country of geniuses. So we&#8217;re back in a world where revenue is high, but losses are also high.</p><p><strong>Dario Amodei</strong></p><p>If every year we predict exactly what the demand is going to be, we&#8217;ll be profitable every year. Because spending roughly 50% of your compute on research, plus a gross margin that&#8217;s higher than 50%, plus correct demand prediction, leads to profit. That&#8217;s the profitable business model that I think is kind of there, but obscured by this building ahead and these prediction errors.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess you&#8217;re treating the 50% as a sort of given constant, whereas in fact, if AI progress is fast and you can increase the progress by scaling up more, you should just have more than 50% and not make profit.</p><p><strong>Dario Amodei</strong></p><p>But here&#8217;s what I&#8217;ll say. You might want to scale it up more.
Remember the log returns to scale. Going from 50% to 70% only scales training compute by a factor of 1.4x, so it would get you a very slightly better model. That extra $20 billion&#8212;each dollar there is worth much less to you because of the log-linear setup.</p><p>So you might find that it&#8217;s better to invest that $20 billion in serving inference or in hiring engineers who are kind of better at what they&#8217;re doing. So the reason I said 50%... That&#8217;s not exactly our target. It&#8217;s not exactly going to be 50%. It&#8217;ll probably vary over time.</p><p>What I&#8217;m saying is that the log-linear return leads to you spending an order-one fraction of the business on training. Like not 5%, not 95%. Then you get diminishing returns.</p><p><strong>Dwarkesh Patel</strong></p><p>I feel strange that I&#8217;m convincing Dario to believe in AI progress or something. Okay, you don&#8217;t invest in research because it has diminishing returns, but you invest in the other things you mentioned. I think profit at a sort of macro level&#8212;</p><p><strong>Dario Amodei</strong></p><p>Again, I&#8217;m talking about diminishing returns, but after you&#8217;re spending $50 billion a year.</p><p><strong>Dwarkesh Patel</strong></p><p>This is a point I&#8217;m sure you would make, but the point at which returns on a genius start diminishing could be quite high.</p><p>More generally, what is profit in a market economy? Profit is basically saying other companies in the market can do more things with this money than I can.</p><p><strong>Dario Amodei</strong></p><p>Put aside Anthropic. I don&#8217;t want to give information about Anthropic. That&#8217;s why I&#8217;m giving these stylized numbers. But let&#8217;s just derive the equilibrium of the industry. Why doesn&#8217;t everyone spend 100% of their compute on training and not serve any customers?
It&#8217;s because if they didn&#8217;t get any revenue, they couldn&#8217;t raise money, they couldn&#8217;t do compute deals, they couldn&#8217;t buy more compute the next year.</p><p>So there&#8217;s going to be an equilibrium where every company spends less than 100% on training and certainly less than 100% on inference. It should be clear why you don&#8217;t just serve the current models and never train another model, because then you don&#8217;t have any demand because you&#8217;ll fall behind. So there&#8217;s some equilibrium. It&#8217;s not gonna be 10%, it&#8217;s not gonna be 90%. Let&#8217;s just say as a stylized fact, it&#8217;s 50%. That&#8217;s what I&#8217;m getting at.</p><p>I think we&#8217;re gonna be in a position where that equilibrium of how much you spend on training is less than the gross margins that you&#8217;re able to get on compute. So the underlying economics are profitable. The problem is you have this hellish demand prediction problem when you&#8217;re buying the next year of compute and you might guess under and be very profitable but have no compute for research. Or you might guess over and you are not profitable and you have all the compute for research in the world. Does that make sense? Just as a dynamic model of the industry?</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe stepping back, I&#8217;m not saying I think the &#8220;country of geniuses&#8221; is going to come in two years and therefore you should buy this compute. To me, the end conclusion you&#8217;re arriving at makes a lot of sense. But that&#8217;s because it seems like &#8220;country of geniuses&#8221; is hard and there&#8217;s a long way to go. 
So stepping back, the thing I&#8217;m trying to get at is more that it seems like your worldview is compatible with somebody who says, &#8220;We&#8217;re like 10 years away from a world in which we&#8217;re generating trillions of dollars of value.&#8221;</p><p><strong>Dario Amodei</strong></p><p>That&#8217;s just not my view. So I&#8217;ll make another prediction. It is hard for me to see that there won&#8217;t be trillions of dollars in revenue before 2030. I can construct a plausible world where it takes maybe three years. That would be the far end of what I think is plausible.</p><p>Like in 2028, we get the real &#8220;country of geniuses in the data center&#8221;. The revenue&#8217;s going into the low hundreds of billions by 2028, and then the country of geniuses accelerates it to trillions. We&#8217;re basically on the slow end of diffusion. It takes two years to get to the trillions. That would be the world where it takes until 2030. I suspect even composing the technical exponential and the diffusion exponential, we&#8217;ll get there before 2030.</p><p><strong>Dwarkesh Patel</strong></p><p>So you laid out a model where Anthropic makes profit because it seems like fundamentally we&#8217;re in a compute-constrained world. So eventually we keep growing compute&#8212;</p><p><strong>Dario Amodei</strong></p><p>I think the way the profit comes is&#8230; Again, let&#8217;s just abstract the whole industry here. Let&#8217;s just imagine we&#8217;re in an economics textbook. We have a small number of firms. Each can invest a limited amount. Each can invest some fraction in R&amp;D. They have some marginal cost to serve. The gross profit margins on that marginal cost are very high because inference is efficient. There&#8217;s some competition, but the models are also differentiated.</p><p>Companies will compete to push their research budgets up. But because there&#8217;s a small number of players, we have the... What is it called?
The <a href="https://en.wikipedia.org/wiki/Cournot_competition">Cournot equilibrium</a>, I think, is what the small-number-of-firms equilibrium is called. The point is it doesn&#8217;t equilibrate to perfect competition with zero margins. If there are three firms in the economy and all are kind of independently behaving rationally, it doesn&#8217;t equilibrate to zero.</p><p><strong>Dwarkesh Patel</strong></p><p>Help me understand that, because right now we do have three leading firms and they&#8217;re not making profit. So what is changing?</p><p><strong>Dario Amodei</strong></p><p>Again, the gross margins right now are very positive. What&#8217;s happening is a combination of two things. One is that we&#8217;re still in the exponential scale-up phase of compute. A model gets trained. Let&#8217;s say a model got trained last year that cost $1 billion. Then this year it produced $4 billion of revenue and cost $1 billion to run inference on. Again, I&#8217;m using stylized numbers here, but that would be 75% gross margins, with the training cost as this 25% tax. So that model as a whole makes $2 billion.</p><p>But at the same time, we&#8217;re spending $10 billion to train the next model because there&#8217;s an exponential scale-up. So the company loses money. Each model makes money, but the company loses money.</p><p>The equilibrium I&#8217;m talking about is an equilibrium where we have the &#8220;country of geniuses in a data center&#8221;, but that model training scale-up has equilibrated more. Maybe it&#8217;s still going up. We&#8217;re still trying to predict the demand, but it&#8217;s more leveled out.</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m confused about a couple of things there. Let&#8217;s start with the current world. In the current world, you&#8217;re right that, as you said before, if you treat each individual model as a company, it&#8217;s profitable.
But of course, a big part of the production function of being a frontier lab is training the next model, right?</p><p><strong>Dario Amodei</strong></p><p>Yes, that&#8217;s right.</p><p><strong>Dwarkesh Patel</strong></p><p>If you didn&#8217;t do that, then you&#8217;d make profit for two months and then you wouldn&#8217;t have margins because you wouldn&#8217;t have the best model.</p><p><strong>Dario Amodei</strong></p><p>But at some point that reaches the biggest scale that it can reach. And then in equilibrium, we have algorithmic improvements, but we&#8217;re spending roughly the same amount to train the next model as we spend to train the current model. At some point you run out of money in the economy.</p><p><strong>Dwarkesh Patel</strong></p><p>A fixed <a href="https://en.wikipedia.org/wiki/Lump_of_labour_fallacy">lump of labor fallacy</a>&#8230; The economy is going to grow, right? That&#8217;s one of your predictions. <a href="https://www.dwarkesh.com/p/elon-musk">We&#8217;re going to have the data centers in space</a>.</p><p><strong>Dario Amodei</strong></p><p>Yes, but this is another example of the theme I was talking about. The economy will grow much faster with AI than I think it ever has before. Right now the compute is growing 3x a year. I don&#8217;t believe the economy is gonna grow 300% a year. I said this in &#8220;Machines of Loving Grace&#8221;, I think we may get 10-20% per year growth in the economy, but we&#8217;re not gonna get 300% growth in the economy. So I think in the end, if compute becomes the majority of what the economy produces, it&#8217;s gonna be capped by that.</p><p><strong>Dwarkesh Patel</strong></p><p>So let&#8217;s assume a model where compute stays capped. The world where frontier labs are making money is one where they continue to make fast progress. Because fundamentally your margin is limited by how good the alternative is. So you are able to make money because you have a frontier model. 
If you didn&#8217;t have a frontier model you wouldn&#8217;t be making money. So this model requires there never to be a steady state. Forever and ever you keep making more algorithmic progress.</p><p><strong>Dario Amodei</strong></p><p>I don&#8217;t think that&#8217;s true. I mean, I feel like we&#8217;re in an economics class.</p><p><strong>Dwarkesh Patel</strong></p><p>Do you know the <a href="https://www.dwarkesh.com/p/tyler-cowen-4">Tyler Cowen</a> quote? We never stop talking about economics.</p><p><strong>Dario Amodei</strong></p><p>We never stop talking about economics. So no, I don&#8217;t think this field&#8217;s going to be a monopoly. All my lawyers never want me to say the word &#8220;monopoly&#8221;. But I don&#8217;t think this field&#8217;s going to be a monopoly. You do get industries in which there are a small number of players. Not one, but a small number of players.</p><p>Ordinarily, the way you get monopolies like Facebook or Meta&#8212;I always call them Facebook&#8212;is these kinds of <a href="https://en.wikipedia.org/wiki/Network_effect">network effects</a>. The way you get industries in which there are a small number of players, is very high costs of entry. <a href="https://en.wikipedia.org/wiki/Cloud_computing">Cloud</a> is like this. I think cloud is a good example of this. There are three, maybe four, players within cloud. I think that&#8217;s the same for AI, three, maybe four.</p><p>The reason is that it&#8217;s so expensive. It requires so much expertise and so much capital to run a cloud company. You have to put up all this capital. 
In addition to putting up all this capital, you have to get all of this other stuff that requires a lot of skill to make it happen.</p><p>So if you go to someone and you&#8217;re like, &#8220;I want to disrupt this industry, here&#8217;s $100 billion.&#8221; You&#8217;re like, &#8220;okay, I&#8217;m putting in $100 billion and also betting that you can do all these other things that these people have been doing.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>Only to decrease the profit.</p><p><strong>Dario Amodei</strong></p><p>The effect of your entering is that profit margins go down. So, we have equilibria like this all the time in the economy where we have a few players. Profits are not astronomical. Margins are not astronomical, but they&#8217;re not zero. That&#8217;s what we see on cloud. Cloud is very undifferentiated. Models are more differentiated than cloud.</p><p>Everyone knows Claude is good at different things than GPT is good at, than <a href="https://en.wikipedia.org/wiki/Google_Gemini">Gemini</a> is good at. It&#8217;s not just that Claude&#8217;s good at coding, GPT is good at math and reasoning. It&#8217;s more subtle than that. Models are good at different types of coding. Models have different styles. I think these things are actually quite different from each other, and so I would expect more differentiation than you see in cloud.</p><p>Now, there actually is one counter-argument. That counter-argument is if the process of producing models, if AI models can do that themselves, then that could spread throughout the economy. But that is not an argument for commoditizing AI models in general. That&#8217;s kind of an argument for commoditizing the whole economy at once.</p><p>I don&#8217;t know what quite happens in that world where basically anyone can do anything, anyone can build anything, and there&#8217;s no moat around anything at all. I don&#8217;t know, maybe we want that world. Maybe that&#8217;s the end state here. 
Maybe when AI models can do everything, if we&#8217;ve solved all the safety and security problems, that&#8217;s one of the mechanisms for the economy just flattening itself again. But that&#8217;s kind of far post-&#8220;country of geniuses in the data center&#8221;.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe a finer way to put that potential point is: 1) it seems like AI research is especially loaded on raw intellectual power, which will be especially abundant in the world of AGI. And 2) if you just look at the world today, there are very few technologies that seem to be diffusing as fast as AI algorithmic progress. So that does hint that this industry is sort of structurally diffusive.</p><p><strong>Dario Amodei</strong></p><p>I think coding is going fast, but I think AI research is a superset of coding and there are aspects of it that are not going fast. But I do think, again, once we get coding, once we get AI models going fast, then that will speed up the ability of AI models to do everything else. So while coding is going fast now, I think once the AI models are building the next AI models and building everything else, the whole economy will kind of go at the same pace.</p><p>I am worried geographically, though. I&#8217;m a little worried that just proximity to AI, having heard about AI, may be one differentiator. So when I said the 10-20% growth rate, a worry I have is that the growth rate could be like 50% in Silicon Valley and parts of the world that are socially connected to Silicon Valley, and not that much faster than its current pace elsewhere. I think that&#8217;d be a pretty messed up world. So one of the things I think about a lot is how to prevent that.</p><p><strong>Dwarkesh Patel</strong></p><p>Do you think that once we have this country of geniuses in a data center, that robotics is sort of quickly solved afterwards?
Because it seems like a big problem with robotics is that a human can learn how to teleoperate current hardware, but current AI models can&#8217;t, at least not in a way that&#8217;s super productive. And so if we have this ability to learn like a human, shouldn&#8217;t it solve robotics immediately as well?</p><p><strong>Dario Amodei</strong></p><p>I don&#8217;t think it&#8217;s dependent on learning like a human. It could happen in different ways. Again, we could have trained the model on many different video games, which are like robotic controls, or many different simulated robotics environments, or just train them to control computer screens, and they learn to generalize.</p><p>So it will happen... it&#8217;s not necessarily dependent on human-like learning. Human-like learning is one way it could happen. If the model&#8217;s like, &#8220;Oh, I pick up a robot, I don&#8217;t know how to use it, I learn,&#8221; that could happen because we discovered continual learning. That could also happen because we trained the model on a bunch of environments and then generalized, or it could happen because the model learns that in the context length. It doesn&#8217;t actually matter which way. If we go back to the discussion we had an hour ago, that type of thing can happen in several different ways.</p><p>But I do think when for whatever reason the models have those skills, then robotics will be revolutionized&#8212;both the design of robots, because the models will be much better than humans at that, and also the ability to control robots. So we&#8217;ll get better at building the physical hardware, building the physical robots, and we&#8217;ll also get better at controlling it.</p><p>Now, does that mean the robotics industry will also be generating trillions of dollars of revenue? My answer there is yes, but there will be the same extremely fast, but not infinitely fast diffusion. So will robotics be revolutionized? Yeah, maybe tack on another year or two. 
That&#8217;s the way I think about these things.</p><p><strong>Dwarkesh Patel</strong></p><p>Makes sense. There&#8217;s a general skepticism about extremely fast progress. Here&#8217;s my view. It sounds like you are going to solve continual learning one way or another within a matter of years. But just as people weren&#8217;t talking about continual learning a couple of years ago, and then we realized, &#8220;Oh, why aren&#8217;t these models as useful as they could be right now, even though they are clearly passing the Turing test and are experts in so many different domains? Maybe it&#8217;s this thing.&#8221;</p><p>Then we solve this thing and we realize, actually, there&#8217;s another thing that human intelligence can do that&#8217;s a basis of human labor that these models can&#8217;t do. So why not think there will be more things like this, where we&#8217;ve found more pieces of human intelligence?</p><p><strong>Dario Amodei</strong></p><p>Well, to be clear, I think <a href="https://www.ibm.com/think/topics/continual-learning">continual learning</a>, as I&#8217;ve said before, might not be a barrier at all. I think we may just get there by pre-training generalization and RL generalization. I think there just might not be such a thing at all.</p><p>In fact, I would point to the history in <a href="https://en.wikipedia.org/wiki/Machine_learning">ML</a> of people coming up with things that are barriers that end up kind of dissolving within the big blob of compute. People talked about, &#8220;How do your models keep track of nouns and verbs?&#8221; &#8220;They can understand syntactically, but they can&#8217;t understand semantically? It&#8217;s only statistical correlations.&#8221; &#8220;You can understand a paragraph, you can&#8217;t understand a word. 
There&#8217;s reasoning, you can&#8217;t do reasoning.&#8221; But then suddenly it turns out you can do code and math very well.</p><p>So I think there&#8217;s actually a stronger history of some of these things seeming like a big deal and then kind of dissolving. Some of them are real. The need for data is real, maybe continual learning is a real thing.</p><p>But again, I would ground us in something like code. I think we may get to the point in a year or two where the models can just do SWE end-to-end. That&#8217;s a whole task. That&#8217;s a whole sphere of human activity that we&#8217;re just saying models can do now.</p><p><strong>Dwarkesh Patel</strong></p><p>When you say end-to-end, do you mean setting technical direction, understanding the context of the problem, et cetera?</p><p><strong>Dario Amodei</strong></p><p>Yes. I mean all of that.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting. I feel like that is AGI-complete, which maybe is internally consistent. But it&#8217;s not like saying 90% of code or 100% of code.</p><p><strong>Dario Amodei</strong></p><p>No, I gave this spectrum: 90% of code, 100% of code, 90% of end-to-end SWE, 100% of end-to-end SWE. New tasks are created for SWEs. Eventually those get done as well. It&#8217;s a long spectrum there, but we&#8217;re traversing the spectrum very quickly.</p><p><strong>Dwarkesh Patel</strong></p><p>I do think it&#8217;s funny that I&#8217;ve seen a couple of podcasts you&#8217;ve done where the hosts will be like, &#8220;But Dwarkesh wrote <a href="https://www.dwarkesh.com/p/timelines-june-2025">the essay about the continuous learning thing</a>.&#8221; It always makes me crack up because you&#8217;ve been an AI researcher for 10 years. I&#8217;m sure there&#8217;s some feeling of, &#8220;Okay, so a podcaster wrote an essay, and every interview I get asked about it.&#8221;</p><p><strong>Dario Amodei</strong></p><p>The truth of the matter is that we&#8217;re all trying to figure this out together. 
There are some ways in which I&#8217;m able to see things that others aren&#8217;t. These days that probably has more to do with seeing a bunch of stuff within Anthropic and having to make a bunch of decisions than with any great research insight that others don&#8217;t have.</p><p>I&#8217;m running a 2,500-person company. It&#8217;s actually pretty hard for me to have concrete research insight, much harder than it would have been 10 years ago or even two or three years ago.</p><p><strong>Dwarkesh Patel</strong></p><p>As we go towards a world of a full drop-in remote worker replacement, does an <a href="https://en.wikipedia.org/wiki/API">API</a> pricing model still make the most sense? If not, what is the correct way to price AGI, or serve AGI?</p><p><strong>Dario Amodei</strong></p><p>I think there&#8217;s going to be a bunch of different business models here, all at once, that are going to be experimented with. I actually do think that the API model is more durable than many people think. One way I think about it is if the technology is advancing quickly, if it&#8217;s advancing exponentially, what that means is there&#8217;s always a surface area of new use cases that have been developed in the last three months.</p><p>Any kind of product surface you put in place is always at risk of sort of becoming irrelevant. Any given product surface probably makes sense for a range of capabilities of the model. The chatbot is already running into limitations where making it smarter doesn&#8217;t really help the average consumer that much. But I don&#8217;t think that&#8217;s a limitation of AI models. I don&#8217;t think that&#8217;s evidence that the models are good enough and them getting better doesn&#8217;t matter to the economy. It doesn&#8217;t matter to that particular product.</p><p>So I think the value of the API is that the API always offers an opportunity, very close to the bare metal, to build on what the latest thing is. 
There&#8217;s always going to be this front of new startups and new ideas that weren&#8217;t possible a few months ago and are possible because the model is advancing.</p><p>I actually predict that it&#8217;s going to exist alongside other business models, but we&#8217;re always going to have the API business model because there&#8217;s always going to be a need for a thousand different people to try experimenting with the model in a different way. 100 of them become startups and ten of them become big successful startups. Two or three really end up being the way that people use the model of a given generation.</p><p>So I basically think it&#8217;s always going to exist. At the same time, I&#8217;m sure there&#8217;s going to be other business models as well. Not every token that&#8217;s output by the model is worth the same amount. Think about what is the value of the tokens that the model outputs when someone calls them up and says, &#8220;My Mac isn&#8217;t working,&#8221; or something, and the model&#8217;s like, &#8220;restart it.&#8221; Someone hasn&#8217;t heard that before, but the model said that 10 million times. Maybe that&#8217;s worth like a dollar or a few cents or something.</p><p>Whereas if the model goes to one of the pharmaceutical companies and it says, &#8220;Oh, you know, this molecule you&#8217;re developing, you should take the aromatic ring from that end of the molecule and put it on that end of the molecule. If you do that, wonderful things will happen.&#8221; Those tokens could be worth tens of millions of dollars.</p><p>So I think we&#8217;re definitely going to see business models that recognize that. At some point we&#8217;re going to see &#8220;pay for results&#8221; in some form, or we may see forms of compensation that are like labor, that kind of work by the hour. I don&#8217;t know. I think because it&#8217;s a new industry, a lot of things are going to be tried. 
I don&#8217;t know what will turn out to be the right thing.</p><p><strong>Dwarkesh Patel</strong></p><p>I take your point that people will have to try things to figure out what is the best way to use this blob of intelligence. But what I find striking is Claude Code. I don&#8217;t think in the history of startups there has been a single application that has been as hotly competed in as coding agents. Claude Code is a category leader here. That seems surprising to me.</p><p>It doesn&#8217;t seem intrinsically that Anthropic had to build this. I wonder if you have an accounting of why it had to be Anthropic or how Anthropic ended up building an application in addition to the model underlying it that was successful.</p><p><strong>Dario Amodei</strong></p><p>So it actually happened in a pretty simple way, which is that we had our own coding models, which were good at coding. Around the beginning of 2025, I said, &#8220;I think the time has come where you can have nontrivial acceleration of your own research if you&#8217;re an AI company by using these models.&#8221; Of course, you need an interface, you need a harness to use them.</p><p>So I encouraged people internally. I didn&#8217;t say this is one thing that you have to use. I just said people should experiment with this. I think it might have been originally called Claude CLI, and then the name eventually got changed to Claude Code. Internally, it was the thing that everyone was using and it was seeing fast internal adoption.</p><p>I looked at it and I said, &#8220;Probably we should launch this externally, right?&#8221; It&#8217;s seen such fast adoption within Anthropic. Coding is a lot of what we do. We have an audience of many, many hundreds of people that&#8217;s in some ways at least representative of the external audience. So it looks like we already have product market fit. Let&#8217;s launch this thing.</p><p>And then we launched it. 
I think the fact that we ourselves are developing the model, and we ourselves know what we most need from the model, is kind of creating this feedback loop.</p><p><strong>Dwarkesh Patel</strong></p><p>I see. In the sense that, let&#8217;s say, a developer at Anthropic is like, &#8220;Ah, it would be better if it was better at this X thing.&#8221; Then you bake that into the next model that you build.</p><p><strong>Dario Amodei</strong></p><p>That&#8217;s one version of it, but then there&#8217;s just the ordinary product iteration. We have a bunch of coders within Anthropic who use Claude Code every day, and so we get fast feedback. That was more important in the early days. Now, of course, there are millions of people using it, and so we get a bunch of external feedback as well. But it&#8217;s just great to be able to get kind of fast internal feedback.</p><p>I think this is the reason why we launched a coding model and didn&#8217;t launch a pharmaceutical company. My background&#8217;s in biology, but we don&#8217;t have any of the resources that are needed to launch a pharmaceutical company.</p><h3>01:31:19 - Will regulations destroy the boons of AGI?</h3><p><strong>Dwarkesh Patel</strong></p><p>Let me now ask you about making AI go well. It seems like whatever vision we have about how AI goes well has to be compatible with two things: 1) the ability to build and run AIs is diffusing extremely rapidly and 2) the population of AIs, the amount we have and their intelligence, will also increase very rapidly.</p><p>That means that lots of people will be able to build huge populations of misaligned AIs, or AIs which are just companies trying to increase their footprint or have weird psyches like <a href="https://en.wikipedia.org/wiki/Sydney_(Microsoft)">Sydney Bing</a>, but now they&#8217;re superhuman. 
What is a vision for a world in which we have an equilibrium that is compatible with lots of different AIs, some of which are misaligned, running around?</p><p><strong>Dario Amodei</strong></p><p>I think in &#8220;The Adolescence of Technology&#8221;, I was skeptical of the balance of power. But the thing I was specifically skeptical of is the idea that if you have three or four of these companies all building models that are derived from the same thing, they would check each other. Or even that any number of them would check each other.</p><p>We might live in an offense-dominant world where one person or one AI model is smart enough to do something that causes damage for everything else. In the short run, we have a limited number of players now. So we can start within the limited number of players. We need to put in place the safeguards. We need to make sure everyone does the right alignment work. We need to make sure everyone has bioclassifiers. Those are the immediate things we need to do.</p><p>I agree that that doesn&#8217;t solve the problem in the long run, particularly if the ability of AI models to make other AI models proliferates, then the whole thing can become harder to solve. I think in the long run we need some architecture of governance. We need some architecture of governance that preserves human freedom, but also allows us to govern a very large number of human systems, AI systems, hybrid human-AI companies or economic units.</p><p>So we&#8217;re gonna need to think about: how do we protect the world against bioterrorism? How do we protect the world against <a href="https://en.wikipedia.org/wiki/Mirror-image_life">mirror life</a>? Probably we&#8217;re gonna need some kind of AI monitoring system that monitors for all of these things. But then we need to build this in a way that preserves civil liberties and our constitutional rights. 
So I think, just as with anything else, it&#8217;s a new security landscape with a new set of tools and a new set of vulnerabilities.</p><p>My worry is, if we had 100 years for this to happen all very slowly, we&#8217;d get used to it. We&#8217;ve gotten used to the presence of explosives in society or the presence of various new weapons or the presence of video cameras. We would get used to it over 100 years and we&#8217;d develop governance mechanisms. We&#8217;d make our mistakes. My worry is just that this is happening all so fast. So maybe we need to do our thinking faster about how to make these governance mechanisms work.</p><p><strong>Dwarkesh Patel</strong></p><p>It seems like in an offense-dominant world, over the course of the next century&#8212;the idea is that AI is making the progress that would happen over the next century happen in some period of five to ten years&#8212;we would still need the same mechanisms, or balance of power would be similarly intractable, even if humans were the only game in town.</p><p>I guess we have the advice of AI. But it fundamentally doesn&#8217;t seem like a totally different ball game here. If checks and balances were going to work, they would work with humans as well. If they aren&#8217;t going to work, they wouldn&#8217;t work with AIs either. So maybe this just dooms human checks and balances as well.</p><p><strong>Dario Amodei</strong></p><p>Again, I think there&#8217;s some way to make this happen. The governments of the world may have to work together to make it happen. We may have to talk to AIs about building societal structures in such a way that these defenses are possible. I don&#8217;t know. 
I don&#8217;t want to say this is so far ahead in time, but it&#8217;s so far ahead in technological ability that may happen over a short period of time, that it&#8217;s hard for us to anticipate it in advance.</p><p><strong>Dwarkesh Patel</strong></p><p>Speaking of governments getting involved, on December 26, the <a href="https://natlawreview.com/article/tennessees-ai-bill-would-criminalize-training-ai-cha">Tennessee legislature introduced a bill</a> which said, &#8220;It would be an offense for a person to knowingly train artificial intelligence to provide emotional support, including through open-ended conversations with a user.&#8221; Of course, one of the things that Claude attempts to do is be a thoughtful, knowledgeable friend.</p><p>In general, it seems like we&#8217;re going to have this patchwork of state laws. A lot of the benefits that normal people could experience as a result of AI are going to be curtailed, especially when we get into the kinds of things you discuss in &#8220;Machines of Loving Grace&#8221;: biological freedom, mental health improvements, et cetera.</p><p>It seems easy to imagine worlds in which these get Whac-A-Moled away by different laws, whereas bills like this don&#8217;t seem to address the actual existential threats that you&#8217;re concerned about. I&#8217;m curious to understand, in the context of things like this, Anthropic&#8217;s position against the federal moratorium on state AI laws.</p><p><strong>Dario Amodei</strong></p><p>There are many different things going on at once. I think that particular law is dumb. It was clearly made by legislators who just probably had little idea what AI models could do and not do. They&#8217;re like, &#8220;AI models serving us, that just sounds scary. I don&#8217;t want that to happen.&#8221; So we&#8217;re not in favor of that.</p><p>But that wasn&#8217;t the thing that was being voted on. 
The thing that was being voted on is: we&#8217;re going to ban all state regulation of AI for 10 years with no apparent plan to do any federal regulation of AI, which would take Congress to pass, which is a very high bar. So the idea that we&#8217;d ban states from doing anything for 10 years&#8230; People said they had a plan for the federal government, but there was no actual proposal on the table. There was no actual attempt.</p><p>Given the serious dangers that I lay out in &#8220;Adolescence of Technology&#8221; around things like biological weapons, bioterrorism, and autonomy risk, and the timelines we&#8217;ve been talking about&#8212;10 years is an eternity&#8212;I think that&#8217;s a crazy thing to do. So if that&#8217;s the choice, if that&#8217;s what you force us to choose, then we&#8217;re going to choose not to have that moratorium. I think the benefits of that position exceed the costs, but it&#8217;s not a perfect position if that&#8217;s the choice.</p><p>Now, I think the thing that we should do, the thing that I would support, is the federal government should step in, not saying &#8220;states you can&#8217;t regulate&#8221;, but &#8220;Here&#8217;s what we&#8217;re going to do, and states you can&#8217;t differ from this.&#8221; I think preemption is fine in the sense of saying that the federal government says, &#8220;Here is our standard. This applies to everyone. States can&#8217;t do something different.&#8221;</p><p>That would be something I would support if it were done in the right way. But this idea of states, &#8220;You can&#8217;t do anything and we&#8217;re not doing anything either,&#8221; that struck us as very much not making sense. 
I think it will not age well; it is already starting to not age well, with all the backlash that you&#8217;ve seen.</p><p>Now, in terms of what we would want, the things we&#8217;ve talked about are starting with transparency standards in order to monitor some of these autonomy risks and bioterrorism risks. As the risks become more serious, as we get more evidence for them, then I think we could be more aggressive in some targeted ways and say, &#8220;Hey, AI bioterrorism is really a threat. Let&#8217;s pass a law that forces people to have classifiers.&#8221;</p><p>I could even imagine&#8230; It depends. It depends how serious the threat ends up being. We don&#8217;t know for sure. We need to pursue this in an intellectually honest way where we say, ahead of time, that the risk has not emerged yet. But I could certainly imagine, with the pace that things are going at, a world where later this year we say, &#8220;Hey, this AI bioterrorism stuff is really serious. We should do something about it. We should put it in a federal standard. If the federal government won&#8217;t act, we should put it in a state standard.&#8221; I could totally see that.</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m concerned about a world where if you just consider the pace of progress you&#8217;re expecting, the life cycle of legislation... The benefits are, as you say, slow because of the diffusion lag, slow enough that I really do think this patchwork of state laws, on the current trajectory, would prohibit them. I mean, if having an emotional chatbot friend is something that freaks people out, then just imagine the kinds of actual benefits from AI we want normal people to be able to experience. 
From improvements in health and healthspan and improvements in mental health and so forth.</p><p>Whereas at the same time, it seems like you think the dangers are already on the horizon and I just don&#8217;t see that much&#8230; It seems like it would be especially injurious to the benefits of AI as compared to the dangers of AI. So that&#8217;s maybe where the cost-benefit makes less sense to me.</p><p><strong>Dario Amodei</strong></p><p>So there are a few things here. People talk about there being thousands of these state laws. First of all, the vast, vast majority of them do not pass. The world works a certain way in theory, but just because a law has been passed doesn&#8217;t mean it&#8217;s really enforced. The people implementing it may be like, &#8220;Oh my God, this is stupid. It would mean shutting off everything that&#8217;s ever been built in Tennessee.&#8221; Very often, laws are interpreted in a way that makes them not as dangerous or harmful. By the same token, of course, you have to worry if you&#8217;re passing a law to stop a bad thing; you have this problem as well.</p><p>My basic view is that if we could decide what laws were passed and how things were done&#8212;and we&#8217;re only one small input into that&#8212;I would deregulate a lot of the stuff around the health benefits of AI. I don&#8217;t worry as much about the chatbot laws. I actually worry more about the drug approval process, where I think AI models are going to greatly accelerate the rate at which we discover drugs, and the pipeline will get jammed up. The pipeline will not be prepared to process all the stuff that&#8217;s going through it.</p><p>I think reform of the regulatory process should bias more towards the fact that we have a lot of things coming where the safety and efficacy is actually going to be really crisp and clear, a beautiful thing, and really effective. 
Maybe we don&#8217;t need all this superstructure around it that was designed around an era of drugs that barely work and often have serious side effects.</p><p>At the same time, I think we should be ramping up quite significantly the safety and security legislation. Like I&#8217;ve said, starting with transparency is my view of trying not to hamper the industry, trying to find the right balance. I&#8217;m worried about it. Some people criticize my essay, saying, &#8220;That&#8217;s too slow. The dangers of AI will come too soon if we do that.&#8221;</p><p>Well, basically, I think the last six months and maybe the next few months are going to be about transparency. Then, if these risks emerge when we&#8217;re more certain of them&#8212;which I think we might be as soon as later this year&#8212;then I think we need to act very fast in the areas where we&#8217;ve actually seen the risk.</p><p>I think the only way to do this is to be nimble. Now, the legislative process is normally not nimble, but we need to emphasize the urgency of this to everyone involved. That&#8217;s why I&#8217;m sending this message of urgency. That&#8217;s why I wrote <em>Adolescence of Technology</em>. I wanted policymakers, economists, national security professionals, and decision-makers to read it so that they have some hope of acting faster than they would have otherwise.</p><p><strong>Dwarkesh Patel</strong></p><p>Is there anything you can do or advocate that would make it more certain that the benefits of AI are better instantiated? I feel like you have worked with legislatures to say, &#8220;Okay, we&#8217;re going to prevent bioterrorism here. 
We&#8217;re going to increase transparency, we&#8217;re going to increase whistleblower protection.&#8221; But I think by default, the actual benefits we&#8217;re looking forward to seem very fragile to different kinds of moral panics or political economy problems.</p><p><strong>Dario Amodei</strong></p><p>I don&#8217;t actually agree that much regarding the developed world. I feel like in the developed world, markets function pretty well. When there&#8217;s a lot of money to be made on something and it&#8217;s clearly the best available alternative, it&#8217;s actually hard for the regulatory system to stop it.</p><p>We&#8217;re seeing that in AI itself. A thing I&#8217;ve been trying to fight for is <a href="https://www.axios.com/2026/02/10/anthropic-ceo-china-chip-ban">export controls on chips to China</a>. That&#8217;s in the national security interest of the US. That&#8217;s squarely within the policy beliefs of almost everyone in Congress of both parties. The case is very clear. The counterarguments against it, I&#8217;ll politely call them fishy. Yet it doesn&#8217;t happen and we sell the chips because there&#8217;s so much money riding on it. That money wants to be made. In that case, in my opinion, that&#8217;s a bad thing. But it also applies when it&#8217;s a good thing.</p><p>So if we&#8217;re talking about drugs and benefits of the technology, I am not as worried about those benefits being hampered in the developed world. I am a little worried about them going too slow. As I said, I do think we should work to speed the approval process in the FDA. I do think we should fight against these chatbot bills that you&#8217;re describing. Described individually, I&#8217;m against them. I think they&#8217;re stupid.</p><p>But I actually think the bigger worry is the developing world, where we don&#8217;t have functioning markets and where we often can&#8217;t build on the technology that we&#8217;ve had. I worry more that those folks will get left behind. 
And I worry that even if the cures are developed, maybe there&#8217;s someone in rural Mississippi who doesn&#8217;t get it either. That&#8217;s a smaller version of the concern we have in the developing world.</p><p>So the things we&#8217;ve been doing are working with philanthropists. We work with folks who deliver medicine and health interventions to the developing world, to sub-Saharan Africa, India, Latin America, and other developing parts of the world. That&#8217;s the thing I think that won&#8217;t happen on its own.</p><h3>01:47:41 - Why can&#8217;t China and America both have a country of geniuses in a datacenter?</h3><p><strong>Dwarkesh Patel</strong></p><p>You mentioned export controls. Why shouldn&#8217;t the US and China both have a &#8220;country of geniuses in a data center&#8221;?</p><p><strong>Dario Amodei</strong></p><p>Why won&#8217;t it happen or why shouldn&#8217;t it happen?</p><p><strong>Dwarkesh Patel</strong></p><p>Why shouldn&#8217;t it happen.</p><p><strong>Dario Amodei</strong></p><p>If this does happen, we could have a few situations. If we have an offense-dominant situation, we could have a situation like nuclear weapons, but more dangerous. Either side could easily destroy everything.</p><p>We could also have a world where it&#8217;s unstable. <a href="https://en.wikipedia.org/wiki/Mutually_assured_destruction">The nuclear equilibrium</a> is stable because of <a href="https://en.wikipedia.org/wiki/Deterrence_theory">deterrence</a>. But let&#8217;s say there was uncertainty about which AI would win if the two AIs fought. That could create instability. You often have conflict when the two sides have a different assessment of their likelihood of winning. If one side is like, &#8220;Oh yeah, there&#8217;s a 90% chance I&#8217;ll win,&#8221; and the other side thinks the same, then a fight is much more likely. 
They can&#8217;t both be right, but they can both think that.</p><p><strong>Dwarkesh Patel</strong></p><p>But this seems like a fully general argument against the diffusion of AI technology. That&#8217;s the implication of this world.</p><p><strong>Dario Amodei</strong></p><p>Let me just go on, because I think we will get diffusion eventually. The other concern I have is that governments will oppress their own people with AI. I&#8217;m worried about a world where you have a country in which there&#8217;s already a government that&#8217;s building a high-tech authoritarian state. To be clear, this is about the government. This is not about the people. We need to find a way for people everywhere to benefit. My worry here is about governments. My worry is if the world gets carved up into two pieces, one of those two pieces could be authoritarian or totalitarian in a way that&#8217;s very difficult to displace.</p><p>Now, will governments eventually get powerful AI, and is there a risk of authoritarianism? Yes. Will governments eventually get powerful AI, and is there a risk of bad equilibria? Yes, I think both things. But the initial conditions matter. At some point, we&#8217;re going to need to set up the rules of the road.</p><p>I&#8217;m not saying that one country, either the United States or a coalition of democracies&#8212;which I think would be a better setup, although it requires more international cooperation than we currently seem to want&#8212;should just say, &#8220;These are the rules of the road.&#8221; There&#8217;s going to be some negotiation. The world is going to have to grapple with this.</p><p>What I would like is for the democratic nations of the world&#8212;those whose governments represent something closer to pro-human values&#8212;to hold the stronger hand and have more leverage when the rules of the road are set. 
So I&#8217;m very concerned about that initial condition.</p><p><strong>Dwarkesh Patel</strong></p><p>I was re-listening to the interview from three years ago, and one of the ways it aged poorly is that I kept asking questions assuming there was going to be some key fulcrum moment two to three years from now. In fact, being that far out, it just seems like progress continues, AI improves, AI is more diffused, and people will use it for more things.</p><p>It seems like you&#8217;re imagining a world in the future where the countries get together and say, &#8220;Here&#8217;s the rules of the road, here&#8217;s the leverage we have, and here&#8217;s the leverage you have.&#8221; But on the current trajectory, everybody will have more AI. Some of that AI will be used by authoritarian countries. Some of that within the authoritarian countries will be used by private actors versus state actors.</p><p>It&#8217;s not clear who will benefit more. It&#8217;s always hard to tell in advance. It seems like the internet privileged authoritarian countries more than you would&#8217;ve expected. Maybe AI will be the other way around. I want to better understand what you&#8217;re imagining here.</p><p><strong>Dario Amodei</strong></p><p>Just to be precise about it, I think the exponential of the underlying technology will continue as it has before. The models get smarter and smarter, even when they get to a &#8220;country of geniuses in a data center.&#8221; I think you can continue to make the model smarter. There&#8217;s a question of getting diminishing returns on their value in the world. How much does it matter after you&#8217;ve already solved human biology? At some point you can do harder, more abstruse math problems, but nothing after that matters.</p><p>Putting that aside, I do think the exponential will continue, but there will be certain distinguished points on the exponential. 
Companies, individuals, and countries will reach those points at different times.</p><p>In &#8220;The Adolescence of Technology&#8221; I talk about: Is a nuclear deterrent still stable in the world of AI? I don&#8217;t know, but that&#8217;s an example of one thing we&#8217;ve taken for granted. The technology could reach such a level that we can no longer be certain of it. Think of others. There are points where if you reach a certain level, maybe you have offensive cyber dominance, and every computer system is transparent to you after that unless the other side has an equivalent defense.</p><p>I don&#8217;t know what the critical moment is or if there&#8217;s a single critical moment. But I think there will be either a critical moment, a small number of critical moments, or some critical window where AI confers some large advantage from the perspective of national security, and one country or coalition has reached it before others.</p><p>I&#8217;m not advocating that they just say, &#8220;Okay, we&#8217;re in charge now.&#8221; That&#8217;s not how I think about it. The other side is always catching up. There are extreme actions you&#8217;re not willing to take, and it&#8217;s not right to take complete control anyway. But at the point that happens, people are going to understand that the world has changed. There&#8217;s going to be some negotiation, implicit or explicit, about what the post-AI world order looks like. My interest is in making that negotiation be one in which classical liberal democracy has a strong hand.</p><p><strong>Dwarkesh Patel</strong></p><p>I want to understand what that better means, because you say in the essay, &#8220;Autocracy is simply not a form of government that people can accept in the post-powerful AI age.&#8221; That sounds like you&#8217;re saying the CCP as an institution cannot exist after we get AGI. 
That seems like a very strong demand, and it seems to imply a world where the leading lab or the leading country will be able to&#8212;and by that language, should get to&#8212;determine how the world is governed or what kinds of governments are, and are not, allowed.</p><p><strong>Dario Amodei</strong></p><p>I believe that paragraph said something like, &#8220;You could take it even further and say X.&#8221; I wasn&#8217;t necessarily endorsing that view. I was saying, &#8220;Here&#8217;s a weaker thing that I believe. We have to worry a lot about authoritarians and we should try to check them and limit their power. You could take this much further and have a more interventionist view that says authoritarian countries with AI are these self-fulfilling cycles that are very hard to displace, so you just need to get rid of them from the beginning.&#8221;</p><p>That has exactly all the problems you say. If you were to make a commitment to overthrowing every authoritarian country, they would take a bunch of actions now that could lead to instability. That just may not be possible.</p><p>But the point I was making that I do endorse is that it is quite possible that... Today, the view, my view, in most of the Western world is that democracy is a better form of government than authoritarianism. But if a country&#8217;s authoritarian, we don&#8217;t react the way we&#8217;d react if they committed a genocide or something. I guess what I&#8217;m saying is I&#8217;m a little worried that in the age of AGI, authoritarianism will have a different meaning. It will be a graver thing. We have to decide one way or another how to deal with that. The interventionist view is one possible view. I was exploring such views. It may end up being the right view, or it may end up being too extreme. But I do have hope.</p><p>One piece of hope I have is that we have seen that as new technologies are invented, forms of government become obsolete. 
I mentioned this in &#8220;Adolescence of Technology&#8221;, where I said <a href="https://en.wikipedia.org/wiki/Feudalism">feudalism</a> was basically a form of government, and when we invented industrialization, feudalism was no longer sustainable. It no longer made sense.</p><p><strong>Dwarkesh Patel</strong></p><p>Why is that hope? Couldn&#8217;t that imply that democracy is no longer going to be a competitive system?</p><p><strong>Dario Amodei</strong></p><p>Right, it could go either way. But these problems with authoritarianism get deeper. I wonder if that&#8217;s an indicator of other problems that authoritarianism will have. In other words, because authoritarianism becomes worse, people are more afraid of it. They work harder to stop it. You have to think in terms of total equilibrium. I just wonder if it will motivate new ways of thinking about how to preserve and protect freedom with the new technology.</p><p>Even more optimistically, will it lead to a collective reckoning and a more emphatic realization of how important some of the things we take as individual rights are? A more emphatic realization that we really can&#8217;t give these away. We&#8217;ve seen there&#8217;s no other way to live that actually works.</p><p>I am actually hopeful that&#8212;it sounds too idealistic, but I believe it could be the case&#8212;dictatorships become morally obsolete. They become morally unworkable forms of government and the crisis that that creates is sufficient to force us to find another way.</p><p><strong>Dwarkesh Patel</strong></p><p>I think there is genuinely a tough question here which I&#8217;m not sure how you resolve. We&#8217;ve had to come out one way or another on it through history. With China in the &#8217;70s and &#8217;80s, we decided that even though it&#8217;s an authoritarian system, we would engage with it.
I think in retrospect that was the right call, because it&#8217;s a state authoritarian system but a billion-plus people are much wealthier and better off than they would&#8217;ve otherwise been. It&#8217;s not clear that it would&#8217;ve stopped being an authoritarian country otherwise. You can just look at North Korea as an example of that.</p><p>I don&#8217;t know if it takes that much intelligence to remain an authoritarian country that continues to consolidate its own power. You can imagine a North Korea with an AI that&#8217;s much worse than everybody else&#8217;s, but still enough to keep power.</p><p>In general, it seems like we should just have this attitude that the benefits of AI&#8212;in the form of all these empowerments of humanity and health&#8212;will be big. Historically, we have decided it&#8217;s good to spread the benefits of technology widely, even to people whose governments are authoritarian. It is a tough question, how to think about it with AI, but historically we have said, &#8220;Yes, this is a positive-sum world, and it&#8217;s still worth diffusing the technology.&#8221;</p><p><strong>Dario Amodei</strong></p><p>There are a number of choices we have. Framing this as a government-to-government decision in national security terms is one lens, but there are a lot of other lenses. You could imagine a world where we produce all these cures to diseases. The cures are fine to sell to authoritarian countries, but the data centers just aren&#8217;t. The chips and the data centers aren&#8217;t, and the AI industry itself isn&#8217;t.</p><p>Another possibility I think folks should think about is this. Could there be developments we can make&#8212;either that naturally happen as a result of AI, or that we could make happen by building technology on AI&#8212;that create an equilibrium where it becomes infeasible for authoritarian countries to deny their people private use of the benefits of the technology?
Are there equilibria where we can give everyone in an authoritarian country their own AI model that defends them from surveillance and there isn&#8217;t a way for the authoritarian country to crack down on this while retaining power?</p><p>I don&#8217;t know. That sounds to me like if that went far enough, it would be a reason why authoritarian countries would disintegrate from the inside. But maybe there&#8217;s a middle world where there&#8217;s an equilibrium where, if they want to hold on to power, the authoritarians can&#8217;t deny individualized access to the technology.</p><p>But I actually do have a hope for the more radical version. Is it possible that the technology might inherently have properties&#8212;or that by building on it in certain ways we could create properties&#8212;that have this dissolving effect on authoritarian structures? Now, we hoped originally&#8212;think back to the beginning of the Obama administration&#8212;that social media and the internet would have that property, and it turned out they didn&#8217;t. But what if we could try again with the knowledge of how many things could go wrong, and that this is a different technology? I don&#8217;t know if it would work, but it&#8217;s worth a try.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s just very unpredictable. There are first-principles reasons why authoritarianism might be privileged.</p><p><strong>Dario Amodei</strong></p><p>It&#8217;s all very unpredictable. We just have to recognize the problem and come up with 10 things we can try, try those, and then assess which ones are working, if any. Then try new ones if the old ones aren&#8217;t working.</p><p><strong>Dwarkesh Patel</strong></p><p>But I guess that nets out to today, as you say, that we will not sell data centers, chips, or the ability to make chips to China. So in some sense, you are denying&#8230; There would be some benefits to the Chinese economy, Chinese people, et cetera, because we&#8217;re doing that.
Then there&#8217;d also be benefits to the American economy because it&#8217;s a positive-sum world. We could trade. They could have their country&#8217;s data centers doing one thing. We could have ours doing another. Already, you&#8217;re saying it&#8217;s not worth that positive-sum stipend to empower those countries?</p><p><strong>Dario Amodei</strong></p><p>What I would say is that we are about to be in a world where growth and economic value will come very easily if we&#8217;re able to build these powerful AI models. What will not come easily is distribution of benefits, distribution of wealth, political freedom. These are the things that are going to be hard to achieve.</p><p>So when I think about policy, I think that the technology and the market will deliver all the fundamental benefits&#8212;this is my fundamental belief&#8212;almost faster than we can take them. These questions about distribution and political freedom and rights are the ones that will actually matter and that policy should focus on.</p><p><strong>Dwarkesh Patel</strong></p><p>Speaking of distribution, as you were mentioning, we have developing countries. In many cases, <a href="https://www.investopedia.com/terms/c/catch-up-effect.asp">catch-up growth</a> has been weaker than we would have hoped for. But when catch-up growth does happen, it&#8217;s fundamentally because they have underutilized labor. We can bring the capital and know-how from developed countries to these countries, and then they can grow quite rapidly.</p><p>Obviously, in a world where labor is no longer the constraining factor, this mechanism no longer works. So is the hope basically to rely on philanthropy from the people or countries who immediately get wealthy from AI? What is the hope?</p><p><strong>Dario Amodei</strong></p><p>Philanthropy should obviously play some role, as it has in the past.
But I think growth is always better and stronger if we can make it endogenous.</p><p>What are the relevant industries in an AI-driven world? I said we shouldn&#8217;t build data centers in China, but there&#8217;s no reason we shouldn&#8217;t build them in Africa. In fact, as long as they&#8217;re not owned by China, I think building data centers in Africa is a great thing to do.</p><p>There&#8217;s no reason we can&#8217;t build a pharmaceutical industry that&#8217;s AI-driven. If AI is accelerating drug discovery, then there will be a bunch of biotech startups. Let&#8217;s make sure some of those happen in the developing world. Certainly, during the transition&#8212;we can talk about the point where humans have no role&#8212;humans will still have some role in starting up these companies and supervising the AI models. So let&#8217;s make sure some of those humans are in the developing world so that fast growth can happen there as well.</p><p><strong>Dwarkesh Patel</strong></p><p>You guys recently announced that <a href="https://www.anthropic.com/news/claude-new-constitution">Claude is going to have a constitution that&#8217;s aligned to a set of values</a>, and not necessarily just to the end user. There&#8217;s a world I can imagine where if it is aligned to the end user, it preserves the balance of power we have in the world today because everybody gets to have their own AI that&#8217;s advocating for them. The ratio of bad actors to good actors stays constant. It seems to work out for our world today. Why is it better not to do that, but to have a specific set of values that the AI should carry forward?</p><p><strong>Dario Amodei</strong></p><p>I&#8217;m not sure I&#8217;d quite draw the distinction in that way. There may be two relevant distinctions here. I think you&#8217;re talking about a mix of the two.
One is, should we give the model a set of instructions about &#8220;do this&#8221; versus &#8220;don&#8217;t do this&#8221;? The other is, should we give the model a set of principles for how to act?</p><p>It&#8217;s purely a practical and empirical thing that we&#8217;ve observed. By teaching the model principles, getting it to learn from principles, its behavior is more consistent, it&#8217;s easier to cover edge cases, and the model is more likely to do what people want it to do. In other words, if you give it a list of rules&#8212;&#8220;don&#8217;t tell people how to hot-wire a car, don&#8217;t speak in Korean&#8221;&#8212;it doesn&#8217;t really understand the rules, and it&#8217;s hard to generalize from them. It&#8217;s just a list of dos and don&#8217;ts.</p><p>Whereas if you give it principles&#8212;it still has some hard guardrails, like &#8220;Don&#8217;t make biological weapons&#8221;&#8212;overall you&#8217;re trying to understand what it should be aiming to do, how it should be aiming to operate. So just from a practical perspective, that turns out to be a more effective way to train the model. That&#8217;s the rules versus principles trade-off.</p><p>Then there&#8217;s another thing you&#8217;re talking about, which is the corrigibility versus intrinsic motivation trade-off. How much should the model be a kind of &#8220;skin suit&#8221; where it just directly follows the instructions given to it by whoever is giving those instructions, versus how much should the model have an inherent set of values and go off and do things on its own?</p><p>There I would actually say everything about the model is closer to the direction that it should mostly do what people want. It should mostly follow instructions. We&#8217;re not trying to build something that goes off and runs the world on its own. We&#8217;re actually pretty far on the corrigible side.</p><p>Now, what we do say is there are certain things that the model won&#8217;t do.
I think we say it in various ways in the constitution, that under normal circumstances, if someone asks the model to do a task, it should do that task. That should be the default. But if you&#8217;ve asked it to do something dangerous, or to harm someone else, then the model is unwilling to do that. So I actually think of it as a mostly corrigible model that has some limits, but those limits are based on principles.</p><p><strong>Dwarkesh Patel</strong></p><p>Then the fundamental question is, how are those principles determined? This is not a special question for Anthropic. This would be a question for any AI company. But because you have been the ones to actually write down the principles, I get to ask you this question. Normally, a constitution is written down, set in stone, and there&#8217;s a process of updating it and changing it and so forth. In this case, it seems like a document that people at Anthropic write, that can be changed at any time, that guides the behavior of systems that are going to be the basis of a lot of economic activity. How do you think about how those principles should be set?</p><p><strong>Dario Amodei</strong></p><p>I think there are maybe three sizes of loop here, three ways to iterate. One is we iterate within Anthropic. We train the model, we&#8217;re not happy with it, and we change the constitution. I think that&#8217;s good to do. Putting out public updates to the constitution every once in a while is good because people can comment on it.</p><p>The second level of loop is different companies having different constitutions. I think it&#8217;s useful. Anthropic puts out a constitution, Gemini puts out a constitution, and other companies put out a constitution. People can look at them and compare. 
Outside observers can critique and say, &#8220;I like this thing from this constitution and this thing from that constitution.&#8221; That creates a soft incentive and feedback for all the companies to take the best of each element and improve.</p><p>Then I think there&#8217;s a third loop, which is society beyond the AI companies and beyond just those who comment without hard power. There we&#8217;ve done some experiments. A couple years ago, we did an experiment with the <a href="https://www.cip.org/">Collective Intelligence Project</a> to basically poll people and ask them what should be in our AI constitution. At the time, we incorporated some of those changes.</p><p>So you could imagine doing something like that with the new approach we&#8217;ve taken to the constitution. It&#8217;s a little harder because it was an easier approach to take when the constitution was a list of dos and don&#8217;ts. At the level of principles, it has to have a certain amount of coherence. But you could still imagine getting views from a wide variety of people.</p><p>You could also imagine&#8212;and this is a crazy idea, but this whole interview is about crazy ideas&#8212;systems of representative government having input. I wouldn&#8217;t do this today because the legislative process is so slow. This is exactly why I think we should be careful about the legislative process and AI regulation. But there&#8217;s no reason you couldn&#8217;t, in principle, say, &#8220;All AI models have to have a constitution that starts with these things, and then you can append other things after it, but there has to be this special section that takes precedence.&#8221;</p><p>I wouldn&#8217;t do that. That&#8217;s too rigid and sounds overly prescriptive in a way that I think overly aggressive legislation is. But that is a thing you could try to do. Is there some much less heavy-handed version of that? Maybe.</p><p><strong>Dwarkesh Patel</strong></p><p>I really like control loop two. 
Obviously, this is not how constitutions of actual governments do or should work. There&#8217;s not this vague sense in which the Supreme Court will feel out how people are feeling&#8212;what are the vibes&#8212;and update the constitution accordingly. With actual governments, there&#8217;s a more formal, procedural process.</p><p>But you have a vision of competition between constitutions, which is actually very reminiscent of how some libertarian charter-city people used to talk about what an archipelago of different kinds of governments would look like. There would be selection among them of who could operate the most effectively and where people would be the happiest. In a sense, you&#8217;re recreating that vision of a utopia of archipelagos.</p><p><strong>Dario Amodei</strong></p><p>I think that vision has things to recommend it and things that will go wrong with it. It&#8217;s an interesting, in some ways compelling, vision, but things will go wrong that you hadn&#8217;t imagined.</p><p>So I like loop two as well, but I feel like the whole thing has got to be some mix of loops one, two, and three, and it&#8217;s a matter of the proportions. I think that&#8217;s gotta be the answer.</p><p><strong>Dwarkesh Patel</strong></p><p>When somebody eventually writes the equivalent of <em><a href="https://amzn.to/4rsefxW">The Making of the Atomic Bomb</a></em> for this era, what is the thing that will be hardest to glean from the historical record that they&#8217;re most likely to miss?</p><p><strong>Dario Amodei</strong></p><p>I think a few things. One is, at every moment of this exponential, the extent to which the world outside it didn&#8217;t understand it. This is a bias that&#8217;s often present in history. Anything that actually happened looks inevitable in retrospect.
When people look back, it will be hard for them to put themselves in the place of people who were actually making a bet on this thing happening when it wasn&#8217;t inevitable, that we had these arguments, like the arguments I make for scaling, or that continual learning will be solved. Some of us internally put a high probability on this happening, but there&#8217;s a world outside us that&#8217;s not acting on that at all.</p><p>I think the weirdness of it, unfortunately the insularity of it... If we&#8217;re one year or two years away from it happening, the average person on the street has no idea. That&#8217;s one of the things I&#8217;m trying to change with the memos, with talking to policymakers. I don&#8217;t know, but I think that&#8217;s just a crazy thing.</p><p>Finally, I would say&#8212;and this probably applies to almost all historical moments of crisis&#8212;how absolutely fast it was happening, how everything was happening all at once. Decisions that you might think were carefully calculated, well actually you have to make that decision, and then you have to make 30 other decisions on the same day because it&#8217;s all happening so fast. You don&#8217;t even know which decisions are going to turn out to be consequential.</p><p>One of my worries&#8212;although it&#8217;s also an insight into what&#8217;s happening&#8212;is that some very critical decision will be some decision where someone just comes into my office and is like, &#8220;Dario, you have two minutes. Should we do thing A or thing B on this?&#8221; Someone gives me this random half-page memo and asks, &#8220;Should we do A or B?&#8221; I&#8217;m like, &#8220;I don&#8217;t know. I have to eat lunch. Let&#8217;s do B.&#8221; That ends up being the most consequential thing ever.</p><p><strong>Dwarkesh Patel</strong></p><p>So final question. Tech CEOs aren&#8217;t usually writing 50-page memos every few months.
It seems like you have managed to build a role for yourself and a company around you which is compatible with this more intellectual-type role of CEO.</p><p>I want to understand how you construct that. How does that work? Do you just go away for a couple of weeks and then you tell your company, &#8220;This is the memo. Here&#8217;s what we&#8217;re doing&#8221;? It&#8217;s also reported that you write a bunch of these internally.</p><p><strong>Dario Amodei</strong></p><p>For this particular one, I wrote it over winter break. I was having a hard time finding the time to actually write it. But I think about this in a broader way. I think it relates to the culture of the company. I probably spend a third, maybe 40%, of my time making sure the culture of Anthropic is good.</p><p>As Anthropic has gotten larger, it&#8217;s gotten harder to get directly involved in the training of the models, the launch of the models, the building of the products. It&#8217;s 2,500 people. I have certain instincts, but it&#8217;s very difficult to get involved in every single detail. I try as much as possible, but one thing that&#8217;s very leveraged is making sure Anthropic is a good place to work, people like working there, everyone thinks of themselves as team members, and everyone works together instead of against each other.</p><p>As some of the other AI companies have grown&#8212;without naming any names&#8212;we&#8217;ve started to see decoherence and people fighting each other. I would argue there was even a lot of that from the beginning, but it&#8217;s gotten worse. I think we&#8217;ve done an extraordinarily good job, even if not perfect, of holding the company together, making everyone feel the mission, that we&#8217;re sincere about the mission, and that everyone has faith that everyone else there is working for the right reason.
That we&#8217;re a team, that people aren&#8217;t trying to get ahead at each other&#8217;s expense or backstab each other, which again, I think happens a lot at some of the other places.</p><p>How do you make that the case? It&#8217;s a lot of things. It&#8217;s me, it&#8217;s <a href="https://en.wikipedia.org/wiki/Daniela_Amodei">Daniela</a>, who runs the company day to day, it&#8217;s the co-founders, it&#8217;s the other people we hire, it&#8217;s the environment we try to create. But I think an important thing in the culture is that the other leaders as well, but especially me, have to articulate what the company is about, why it&#8217;s doing what it&#8217;s doing, what its strategy is, what its values are, what its mission is, and what it stands for.</p><p>When you get to 2,500 people, you can&#8217;t do that person by person. You have to write, or you have to speak to the whole company. This is why I get up in front of the whole company every two weeks and speak for an hour.</p><p>I wouldn&#8217;t say I write essays internally. I do two things. One, I write this thing called a DVQ, <a href="https://www.wired.com/story/anthropic-benevolent-artificial-intelligence/">Dario Vision Quest</a>. I wasn&#8217;t the one who named it that. That&#8217;s the name it received, and it&#8217;s one of these names that I tried to fight because it made it sound like I was going off and smoking peyote or something. But the name just stuck.</p><p>So I get up in front of the company every two weeks. I have a three or four-page document, and I just talk through three or four different topics about what&#8217;s going on internally, the models we&#8217;re producing, the products, the outside industry, the world as a whole as it relates to AI and geopolitically in general. Just some mix of that. I go through very honestly and I say, &#8220;This is what I&#8217;m thinking, and this is what Anthropic leadership is thinking,&#8221; and then I answer questions. 
That direct connection has a lot of value that is hard to achieve when you&#8217;re passing things down the chain six levels deep. A large fraction of the company comes to attend, either in person or virtually. It really means that you can communicate a lot.</p><p>The other thing I do is I have a channel in Slack where I just write a bunch of things and comment a lot. Often that&#8217;s in response to things I&#8217;m seeing at the company or questions people ask. We do internal surveys and there are things people are concerned about, and so I&#8217;ll write them up. I&#8217;m just very honest about these things. I just say them very directly.</p><p>The point is to get a reputation of telling the company the truth about what&#8217;s happening, to call things what they are, to acknowledge problems, to avoid the sort of corpo speak, the kind of defensive communication that often is necessary in public because the world is very large and full of people who are interpreting things in bad faith. But if you have a company of people who you trust, and we try to hire people that we trust, then you can really just be entirely unfiltered.</p><p>I think that&#8217;s an enormous strength of the company. It makes it a better place to work, it makes people more than the sum of their parts, and increases the likelihood that we accomplish the mission because everyone is on the same page about the mission, and everyone is debating and discussing how best to accomplish the mission.</p><p><strong>Dwarkesh Patel</strong></p><p>Well, in lieu of an external Dario Vision Quest, we have this interview.</p><p><strong>Dario Amodei</strong></p><p>This interview is a little like that.</p><p><strong>Dwarkesh Patel</strong></p><p>This has been fun, Dario. 
Thanks for doing it.</p><p><strong>Dario Amodei</strong></p><p>Thank you, Dwarkesh.</p>]]></content:encoded></item><item><title><![CDATA[Notes on Space GPUs]]></title><description><![CDATA[Turning my Elon prep into a blog post]]></description><link>https://www.dwarkesh.com/p/notes-on-space-gpus</link><guid isPermaLink="false">https://www.dwarkesh.com/p/notes-on-space-gpus</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Thu, 05 Feb 2026 18:26:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2dba4f31-2d4f-485a-835f-5c9bc75f9ce4_300x168.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p>John Collison and I <a href="https://www.dwarkesh.com/p/elon-musk">just interviewed Elon</a>. The interview was recorded before we knew that SpaceX was acquiring xAI, so the fact that our first topic was space GPUs now feels all the more relevant.</p><p>As I was preparing to interview Elon, I put together some notes and a <a href="https://docs.google.com/spreadsheets/d/1fa48HAwXaboEXNOrAj-xJF2Vv_xxQZjAtgoTu0FnnlY/edit?usp=sharing">spreadsheet</a> to help me think through orbital datacenters. I turned those notes into this blog post.</p><p>Even if orbital data centers don&#8217;t make sense yet, in the long run the singularity is clearly moving into space. Earth intercepts about one two-billionth of the sun&#8217;s total output. If AI scaling continues, compute will eventually move to where the energy is. So space GPUs are fun to think about, because they give you a sneak peek at the future. Whether that future arrives in 2030, 2040, or 2050 is another question.</p><p><strong>Please take everything below with grains of salt&#8212;grains so big that you might confuse them for rocks. Assume all the numbers are wrong.</strong> Every paragraph below covers a topic that would take an actual expert a week to properly evaluate. 
What you&#8217;ll find here is what a professional podcaster has pieced together from conversations with LLMs and some very generous people who talked to me before the interview. Thanks to <a href="https://x.com/CJHandmer">Casey Handmer</a>, <a href="https://x.com/PhilipJohnston">Philip Johnston</a>, <a href="https://x.com/ezrafeilden">Ezra Feilden</a>, <a href="https://x.com/andrewmccalip">Andrew McCalip</a>, <a href="https://x.com/vinayramasesh">Vinay Ramasesh</a> and the team at <a href="https://www.kineticpartners.com/">Kinetic Partnership</a> for all their help.</p><h2><strong>Why orbital data centers?</strong></h2><p>The whole reason to go to space is energy. Yes, panels in space get about 40% more irradiance&#8212;but the real advantage is that you can put your satellites in <a href="https://en.wikipedia.org/wiki/Sun-synchronous_orbit">sun-synchronous orbit</a>, where they face the sun continuously. No nights, no clouds, no need for batteries (which are the majority of cost in a solar-storage system). Solar on Earth has a roughly 25% capacity factor, meaning panels only generate a quarter of their peak output on average. In space, you get close to 100%.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;784c41e4-422c-4b40-848b-efaea0c75392&quot;,&quot;duration&quot;:null}"></div><p>The logic is that if launch costs continue to drop, it will become cheaper to put GPUs in orbit than to build power plants and batteries on Earth. And there&#8217;s a lot of room for launch costs to fall&#8212;propellant is cheap, and the main expense is the rocket, which you can now reuse. Falcon 9 is around $2,500/kg with a disposable upper stage. Starship with full reusability could get below $100/kg.</p><p>But here&#8217;s the problem with this argument. Energy is only about 15% of a datacenter&#8217;s total cost of ownership. The chips themselves are around 70%.
And you still have to launch those to space!</p><p>It gets worse. On Earth, GPUs fail constantly. In the <a href="https://arxiv.org/abs/2407.21783">Llama 3 paper</a>, Meta reported a failure roughly once every three hours across a 16,000-GPU H100 cluster. When a chip dies, a technician walks over, swaps it out, and the cluster keeps running. In space, you can&#8217;t do that&#8212;at least not until we have Optimus robots stationed on every satellite.</p><p>What about radiation? It&#8217;s actually less catastrophic than you might expect. Google&#8217;s Suncatcher paper <a href="https://arxiv.org/abs/2511.19468">found</a> that their TPUs survived nearly 3x the total ionizing dose needed for a 5-year mission before showing permanent degradation.</p><p>I asked Elon about this. He responded:</p><blockquote><p>&#8220;Actually, it depends on how recent the GPUs are that have arrived. At this point, we find our GPUs to be quite reliable. There&#8217;s infant mortality, which you can obviously iron out on the ground. So you can just run them on the ground and confirm that you don&#8217;t have infant mortality with the GPUs.&#8221;</p><p>&#8220;But once they start working, their actual reliability&#8212;and you&#8217;re past the initial debug cycle of Nvidia or whatever, or whoever&#8217;s making the chips, could be Tesla AI6 chips or something like that, or it could be TPUs or Trainiums or whatever&#8212;is actually quite reliable past a certain point. So I don&#8217;t think the servicing thing is an issue.&#8221;</p></blockquote><p>Consider what&#8217;s actually being proposed here. You assemble your GPUs into racks on Earth, run them for a few hundred hours to catch the duds, disassemble everything, pack it into a satellite, launch it, and get it operational in orbit. 
Throughout this entire process, the most expensive part of your system&#8212;the chips&#8212;is just sitting there not doing useful work.</p><h2><strong>Is this just not possible on Earth?</strong></h2><p>Throughout the interview, Elon kept returning to one point over and over again: <em>Look, forget the economics! It will simply not be physically possible to scale power production to the scale needed for AI on Earth. </em>He went on:</p><blockquote><p>&#8220;The only place you can really scale is space.&#8221;</p><p>&#8220;All of the United States currently uses only half a terawatt on average. So if you say a terawatt, that would be twice as much electricity as the United States currently consumes. So that&#8217;s quite a lot. Can you imagine building that many data centers? That many power plants? It&#8217;s like those who have lived in software land don&#8217;t realize they&#8217;re about to have a hard lesson in hardware.&#8221;</p></blockquote><p>Elon kept pointing out the bottlenecks we&#8217;ve already run into on Earth. You can&#8217;t plug into the utilities&#8212;the interconnect queues are too long. You can&#8217;t go behind the meter and generate power yourself&#8212;lead times for turbines stretch past 2030. You can&#8217;t do solar on Earth, because of permits, and because of the tariffs. And Earth has clouds and nights, requiring overbuilt solar and batteries. In space, you can just put the satellites in sun-synchronous orbit!</p><p>Look, at some level, it <em>is</em> true that we can&#8217;t keep scaling on Earth. But keep in mind that the Earth is really fucking big. 1 TW of solar (with 25% capacity factor, so really 4 TW of panels) is around 30,000 square miles. That&#8217;s like 1% of the US&#8212;about the size of South Carolina. For context, AI datacenters currently consume only ~20 GW globally.</p><p>By the time we&#8217;re talking about multiple terawatts, we&#8217;ll have had to massively scale leading-edge wafer production. 
And that&#8217;s the really hard part. Fabs are the most complicated manufacturing facilities humans have ever built. To believe that we need to go to space to find the power to turn on all these chips, we&#8217;ll need to assume a few things:</p><ul><li><p>We&#8217;ll manage to produce <em>a lot </em>more chips.</p></li><li><p>Every single relief valve for power generation on Earth will fail to scale.</p></li></ul><p>But semiconductors are so much more complicated than solar panels! They&#8217;re even more complicated than the blades on a turbine. It feels quite unlikely to me that we manage to solve building terawatts&#8217; worth of leading-edge wafers, but in that same world can&#8217;t figure out how to pave Nevada (or, if regulation proves to be a problem, the UAE) with solar panels.</p><h2><strong>100 GW into space</strong></h2><p>How many Starship launches will it take to put 100 GW into space?</p><p>An orbital datacenter satellite has three big components: solar arrays, computers, and radiators. And the key constraint is that for every watt of compute, we need roughly one watt of solar and one watt of thermal rejection capacity.</p><p>The W/kg of each component determines how the mass budget gets split&#8212;and how much compute you can bring along. The figure that matters most here is the specific power of the whole satellite: after you account for solar panels, radiators, and chassis, how many watts of compute do you actually get per kilogram launched?</p><p>For Starlink satellites, this works out to roughly 50 W/kg. The people trying to build orbital datacenters are currently targeting 100 W/kg. There are only two ways to get there: lighter solar panels (more watts generated per kg) or lighter radiators (more watts rejected per kg).</p><p>The numbers below are super rough. Reliable figures for space-grade components are hard to come by. 
But even rough math reveals which variables must improve&#8212;and by how much&#8212;in order to hit 100 W/kg.</p><ul><li><p>Solar: There are apparently companies targeting next-gen thin film that reaches <a href="https://news.satnews.com/2026/02/01/orbital-vs-terrestrial-solar-the-math-of-energy-density-and-capacity-factors/">upwards of 500 W/kg</a>, but the state of the art is <a href="https://en.wikipedia.org/wiki/Space-based_solar_power">150 W/kg</a>, and most missions right now fly <a href="https://www.nasa.gov/smallsat-institute/sst-soa/power-subsystems/">30 W/kg</a>. Let&#8217;s be generous and assume 200 W/kg.</p><ul><li><p>The trouble here is that there&#8217;s obviously a tradeoff&#8212;denser panels cost more money but reduce launch costs. And it&#8217;s difficult to calculate what that implies for these next-gen panels, because their prices are not listed anywhere.</p></li></ul></li><li><p>Compute: I&#8217;ve heard that a stripped-down GB200 NVL72 with no cooling equipment is around 100 kg. It draws 132 kW of power, but let&#8217;s add 10% overhead for the intersatellite lasers and so on. That gets us to 1,452 W/kg.</p></li><li><p>Radiators: In space, you can&#8217;t convect heat away, because there&#8217;s no air. You can only radiate it, which means your panels glow infrared until the heat leaves. The Stefan-Boltzmann law governs how much power a surface can radiate.</p><p>GPUs typically run up to 90&#176;C. There&#8217;s some temperature drop through the heat pipes and fluid loops that carry heat to the radiator surface. Call it 30&#176;C. So your radiators end up operating around 60&#176;C. Plug that into Stefan-Boltzmann (assuming you&#8217;re using aluminum panels that weigh around 2 kg per square meter of surface area), and that works out to roughly 320 W/kg.</p><p>Since radiated power scales with T&#8308;, running your chips hotter can help you save a lot of radiator mass. 
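The numbers in these bullets can be chained into a quick sanity check. Here is a rough Python sketch; the one-sided radiator emission with emissivity ~0.92 is my own assumption (picked so the radiator lands near the ~320 W/kg estimate), and everything else comes from the figures above:

```python
# Back-of-envelope mass budget for an orbital datacenter satellite,
# chaining together the rough figures from this section.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W / (m^2 K^4)

solar_w_per_kg = 200           # generous next-gen thin-film assumption
compute_w_per_kg = 132e3 * 1.10 / 100   # NVL72: 132 kW + 10% overhead, ~100 kg -> 1,452 W/kg

# Radiator: ~60 C panel temperature, ~2 kg/m^2 aluminum panels.
# One-sided emission at emissivity 0.92 is my assumption, not from the post.
T = 60 + 273.15                          # kelvin
radiator_w_per_m2 = 0.92 * SIGMA * T**4  # ~640 W/m^2
radiator_w_per_kg = radiator_w_per_m2 / 2.0   # ~320 W/kg

# Chassis takes a quarter of total mass, so the three components above
# have to fit in the remaining three quarters.
kg_per_w = (1 / compute_w_per_kg + 1 / solar_w_per_kg + 1 / radiator_w_per_kg) * 4 / 3
system_w_per_kg = 1 / kg_per_w           # ~85 W/kg of compute, all-in

# 150 t to low Earth orbit per Starship gives the per-launch compute.
mw_per_launch = system_w_per_kg * 150e3 / 1e6   # ~13 MW per launch
```

Because radiated power goes as T to the fourth, re-running this with a 100°C radiator surface raises the radiator's specific power by roughly half, which is exactly why hotter-running chips would pay off so well.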
For space, people will have to figure out how to build GPUs that tolerate higher temperatures.</p></li></ul><p>Assuming the numbers above&#8212;and also assuming that a fourth of the mass of the satellite has to be the chassis&#8212;I get 85 W/kg for the whole system. Again, I want to emphasize these are <em>rough</em> calculations; feel free to plug in your own numbers in the spreadsheet <a href="https://docs.google.com/spreadsheets/d/1fa48HAwXaboEXNOrAj-xJF2Vv_xxQZjAtgoTu0FnnlY/edit?usp=sharing">here</a>.</p><p>At 150 metric tons to low Earth orbit per Starship (Elon&#8217;s target), you&#8217;re looking at around 10 MW per launch. That means roughly 100 Starship launches in order to put 1 GW of compute in orbit. To hit 100 GW in a year, you&#8217;d need roughly 10,000 launches, or about one launch every hour.</p><p>This is insane! A single Starship produces around 100 GW of thrust power at liftoff. That&#8217;s about a fifth of total US electricity consumption, concentrated in one rocket for a few minutes. And the plan would be to do that once an hour, every hour, every day, for a year.</p><p>I asked Elon what that world looks like:</p><blockquote><p>I don&#8217;t think we&#8217;ll need more than... I mean, you could probably do it with as few as 20 or 30 [Starship vehicles]. It really depends on how quickly the ship has to go around the Earth and the ground track before the ship has to come back over the launch pad. So if you can use a ship every, say, 30 hours, you could do it with 30 ships. But we&#8217;ll make more ships than that. 
SpaceX is gearing up to do 10,000 launches a year, and maybe even 20 or 30,000 launches a year.</p></blockquote><h2><strong>Workloads and comms</strong></h2><p>Starlink satellites already communicate via inter-satellite laser links <a href="https://en.wikipedia.org/wiki/Laser_communication_in_space#cite_note-49">at 100 Gbps</a>&#8212;and Google&#8217;s Suncatcher paper suggests off-the-shelf transceivers could potentially hit 10 Tbps. For context, InfiniBand links between nodes in a terrestrial datacenter run <a href="https://marketplace.nvidia.com/en-us/enterprise/networking/400gbeosfpcables/">at 400 Gbps</a>. The gap isn&#8217;t as large as you might expect. So, could you do synchronous training in space?</p><p>Even the most bullish analysts don&#8217;t claim that orbital data centers will be used for training. I don&#8217;t know any of the relevant orbital mechanics, but obviously satellites at different altitudes move at different orbital velocities, which means the satellites are desyncing relative to one another. Google came up with a clever solution for this in their Suncatcher paper&#8212;keep lots of satellites in a single tight cluster at the same altitude. Google&#8217;s researchers proposed eighty-one satellites in such a synchronized constellation. If each satellite carried a GB200 NVL72, then each constellation is only a ~15 MW parcel of coherent compute.</p><p>Defenders of orbital datacenters say that most compute is going to shift to inference (and with RL, most training is also inference). Maybe the legacy terrestrial datacenters do end up doing the pretraining runs, and then whatever mixture of RL environment training and continual learning the future holds does happen in space. So, the argument goes, it&#8217;s not a big deal that the scale-ups in space are isolated. 
But there&#8217;s still the question of how hundreds of gigawatts of inference are beamed back to Earth.</p><p>For a moment, let&#8217;s imagine a world where, as we see the sunrise and sunset, we also see a Saturn-like belt of GPU satellites passing over us. That&#8217;s already really cool. But then there&#8217;s another sci-fi premise, which I really wanted to be plausible, and which turns out not to make any sense: Imagine that every 12 hours, as this country of geniuses in space passes over us and shoots down half a day&#8217;s worth of new ideas, our code finally starts working and our factories buzz alight and become more productive. Unfortunately, it&#8217;s just science fiction. Inference doesn&#8217;t take that much bandwidth. One hundred gigawatts of inference on a 5T model is roughly 58 billion tokens per second, resulting in ~<a href="https://claude.ai/share/9b4bff2b-d114-4421-9cbf-0eff30112a3a">230 GB/s</a>.</p><p>That&#8217;s nothing. That many tokens can easily be beamed using lasers from GPUs in the orbital plane through the Starlink satellite network and then down to Earth.</p><p>Latency might be an issue: up to fifty milliseconds from any given spot on Earth through the Starlink network to the sun-synchronous orbit and back again. But as we move towards a world of true remote-coworker AIs, where the agent works for tens of minutes before coming back to us, the marginal milliseconds of latency matter less and less.</p><h2><strong>So why is Elon doing this?</strong></h2><p>I&#8217;m willing to accept Elon&#8217;s argument that if launch costs become sufficiently cheap <em>and</em> we can repair GPUs in space, then there&#8217;s a viable path toward orbital data centers. 
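The ~230 GB/s downlink figure above is worth spelling out, since it is what kills the sci-fi premise. A short sketch; the ~4 bytes of text per output token is my assumption (the 58 billion tokens per second comes from this post):

```python
# Downlink needed to beam 100 GW worth of inference tokens back to Earth.
tokens_per_s = 58e9        # ~58B tokens/s for a 5T-param model at 100 GW (from the post)
bytes_per_token = 4        # assumed: ~4 bytes of text per output token
gbytes_per_s = tokens_per_s * bytes_per_token / 1e9   # ~232 GB/s

# Compare against one 10 Tbps inter-satellite laser link (the Suncatcher
# off-the-shelf transceiver figure): 10e12 / 8 bytes/s = 1,250 GB/s.
link_gbytes_per_s = 10e12 / 8 / 1e9
links_needed = gbytes_per_s / link_gbytes_per_s       # ~0.19: under a single link
```

So bandwidth is a non-issue for inference; the tens of milliseconds of latency are the more interesting constraint.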
But it seems especially difficult to imagine a situation in which orbital data centers end up <em>significantly </em>cheaper, because, again, most of the cost of a data center is the GPUs.</p><p>For most compute to shift to space, all of the following things would need to be true:</p><ul><li><p>Power generation on Earth hits a ceiling, or AI demand outstrips every terrestrial option.</p></li><li><p>Chip production scales faster than anyone expects, so we have the silicon but not the electricity.</p></li><li><p>Starship reaches thousands of launches per year.</p></li></ul><p>If Elon&#8217;s right, he wins the AI race outright. SpaceX is the only entity that can launch at that scale. xAI would have unlimited power. Everyone else would be stuck fighting over grid interconnects and turbine orders.</p><p>And if Elon&#8217;s future doesn&#8217;t materialize? xAI is just another lab in the pack. Which means xAI loses. The AI race is a winner-take-all competition, and xAI isn&#8217;t in first place. Elon&#8217;s comparative advantage was never going to be navigating utility interconnect queues or filing permits faster than Google. His advantage is SpaceX. So why not bet on the world where SpaceX becomes the kingmaker?</p><p>This might sound reckless. But that&#8217;s how SpaceX got here. Their whole business plan seems to be one in which they conjure new wells of demand for each generation of rocket on the path to the Dyson swarm. Falcon 9 first flew in 2010. Starlink didn&#8217;t launch until 2019. Maybe orbital datacenters end up being for Starship what Starlink was for Falcon 9.</p><p>Sometimes, during the interview, I found my thoughts drifting toward Elon&#8217;s vision for this big, interconnected future. 
So I paused a moment and said:</p><blockquote><p>What I find remarkable about the SpaceX business is the end goal is to get to Mars, but you keep finding ways on the way there to keep generating incremental revenue to get to the next stage and the next stage.</p></blockquote><p>Elon nodded his head slowly. And then he said:</p><blockquote><p>You can see how this might seem like a simulation to me.</p></blockquote><h2></h2><p></p>]]></content:encoded></item><item><title><![CDATA[Elon Musk — "In 36 months, the cheapest place to put AI will be space”]]></title><description><![CDATA[&#8220;Those who live in software land are about to have a hard lesson in hardware.&#8221;]]></description><link>https://www.dwarkesh.com/p/elon-musk</link><guid isPermaLink="false">https://www.dwarkesh.com/p/elon-musk</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Thu, 05 Feb 2026 16:45:08 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/186967347/fb33e3301af39638dbf5c4d12e680caa.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this episode, John and I got to do a real deep-dive with Elon. 
We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high-volume in America, xAI&#8217;s business and alignment plans, DOGE, and much more.</p><p>Watch on <a href="https://youtu.be/BYXbuik3dgA">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/elon-musk-in-36-months-the-cheapest-place-to-put-ai/id1516093381?i=1000748400389">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/4nah0x1qQF2hxgJnv8PlmN?si=_U3Ab9A0TOu49wfX6oPQdg">Spotify</a>.</p><div id="youtube2-BYXbuik3dgA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;BYXbuik3dgA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/BYXbuik3dgA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h3>Sponsors</h3><ul><li><p><a href="https://mercury.com/personal-banking">Mercury</a> just started offering personal banking! I&#8217;m already banking with Mercury for business purposes, so getting to bank with them for my personal life makes everything so much simpler. Apply now at <a href="https://mercury.com/personal-banking">mercury.com/personal-banking</a></p></li><li><p><a href="https://janestreet.com/dwarkesh">Jane Street</a> sent me a new puzzle last week: they trained a neural net, shuffled all 96 layers, and asked me to put them back in order. I tried but&#8230; I didn&#8217;t quite nail it. If you&#8217;re curious, or if you think you can do better, you should take a stab at <a href="https://janestreet.com/dwarkesh">janestreet.com/dwarkesh</a></p></li><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> can get you robotics and RL data at scale. 
Labelbox starts by helping you define your ideal data distribution, and then their massive Alignerr network collects frontier-grade data that you can use to train your models. Learn more at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li></ul><h2>Timestamps</h2><p>(00:00:00) - Orbital data centers</p><p>(00:36:46) - Grok and alignment</p><p>(00:59:56) - xAI&#8217;s business plan</p><p>(01:17:21) - Optimus and humanoid manufacturing</p><p>(01:30:22) - Does China win by default?</p><p>(01:44:16) - Lessons from running SpaceX</p><p>(02:20:08) - DOGE</p><p>(02:38:28) - TeraFab</p><h2>Transcript</h2><p><strong>Elon Musk</strong></p><p>Are there really three hours of questions? Are you fucking serious?</p><p><strong>Dwarkesh Patel</strong></p><p>You don&#8217;t think there&#8217;s a lot to talk about, Elon?</p><p><strong>Elon Musk</strong></p><p>Holy fuck man.</p><p><strong>John Collison</strong></p><p>It&#8217;s the most interesting point. All the storylines are converging right now. We&#8217;ll see how much we can get through.</p><p><strong>Elon Musk</strong></p><p>It&#8217;s almost like I planned it.</p><p><strong>John Collison</strong></p><p>Exactly. We&#8217;ll get to that.</p><p><strong>Elon Musk</strong></p><p>But I would never do such a thing&#8230;</p><h3>00:00:00 - Orbital data centers</h3><p><strong>Dwarkesh Patel</strong></p><p>As you know better than anybody else, only 10-15% of the total cost of ownership of a data center is energy. That&#8217;s the part you&#8217;re presumably saving <a href="https://www.theverge.com/transportation/873203/elon-musk-spacex-xai-merge-data-centers-space-tesla-ipo">by moving this into space</a>. Most of it&#8217;s the <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPUs</a>. If they&#8217;re in space, it&#8217;s harder to service them or you can&#8217;t service them. So the depreciation cycle goes down on them. It&#8217;s just way more expensive to have the GPUs in space, presumably. 
What&#8217;s the reason to put them in space?</p><p><strong>Elon Musk</strong></p><p>The availability of energy is the issue. If you look at electrical output outside of China, everywhere outside of China, it&#8217;s more or less flat. It&#8217;s maybe a slight increase, but pretty close flat. China has a rapid increase in electrical output. But if you&#8217;re putting data centers anywhere except China, where are you going to get your electricity? Especially as you scale.</p><p>The output of chips is growing pretty much exponentially, but the output of electricity is flat. So how are you going to turn the chips on? Magical power sources? Magical electricity fairies?</p><p><strong>Dwarkesh Patel</strong></p><p>You&#8217;re famously a big fan of solar. One terawatt of solar power, with a 25% capacity factor, that&#8217;s like four terawatts of solar panels. It&#8217;s 1% of the land area of the United States. We&#8217;re in the <a href="https://en.wikipedia.org/wiki/Technological_singularity">singularity</a> when we&#8217;ve got one terawatt of data centers, right? So what are you running out of exactly?</p><p><strong>Elon Musk</strong></p><p>How far into the singularity are you though?</p><p><strong>Dwarkesh Patel</strong></p><p>You tell me.</p><p><strong>Elon Musk</strong></p><p>Exactly. So I think we&#8217;ll find we&#8217;re in the singularity and it&#8217;ll be like, &#8220;Okay, we&#8217;ve still got a long way to go.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>But is the plan to put it in space after we&#8217;ve covered Nevada in solar panels?</p><p><strong>Elon Musk</strong></p><p>I think it&#8217;s pretty hard to cover Nevada in solar panels. You have to get permits. Try getting the permits for that. See what happens.</p><p><strong>Dwarkesh Patel</strong></p><p>So space is really a regulatory play. 
It&#8217;s harder to build on land than it is in space.</p><p><strong>Elon Musk</strong></p><p>It&#8217;s harder to scale on the ground than it is to scale in space. You&#8217;re also going to get about five times the effectiveness of solar panels in space versus the ground, and you don&#8217;t need batteries. I almost wore my other shirt, which says, &#8220;it&#8217;s always sunny in space&#8221;. Which it is because you don&#8217;t have a day-night cycle, seasonality, clouds, or an atmosphere in space. The atmosphere alone results in about a 30% loss of energy.</p><p>So any given solar panel can do about five times more power in space than on the ground. You also avoid the cost of having batteries to carry you through the night. It&#8217;s actually much cheaper to do in space. My prediction is that it will be by far the cheapest place to put AI. It will be space in 36 months or less. Maybe 30 months.</p><p><strong>Dwarkesh Patel</strong></p><p>36 months?</p><p><strong>Elon Musk</strong></p><p>Less than 36 months.</p><p><strong>Dwarkesh Patel</strong></p><p>How do you service GPUs as they fail, which happens quite often in training?</p><p><strong>Elon Musk</strong></p><p>Actually, it depends on how recent the GPUs are that have arrived. At this point, we find our GPUs to be quite reliable. There&#8217;s infant mortality, which you can obviously iron out on the ground. 
So you can just run them on the ground and confirm that you don&#8217;t have infant mortality with the GPUs.</p><p>But once they start working and you&#8217;re past the initial debug cycle of <a href="https://en.wikipedia.org/wiki/Nvidia">Nvidia</a> or whoever&#8217;s making the chips&#8212;could be <a href="https://www.teslarati.com/elon-musk-confirms-tesla-ai6-chip-project-dojo-successor/">Tesla AI6 chips</a> or something like that, or it could be <a href="https://en.wikipedia.org/wiki/Tensor_Processing_Unit">TPUs</a> or <a href="https://aws.amazon.com/ai/machine-learning/trainium/">Trainiums</a> or whatever&#8212;they&#8217;re quite reliable past a certain point. So I don&#8217;t think the servicing thing is an issue.</p><p>But you can mark my words. In 36 months, but probably closer to 30 months, the most economically compelling place to put AI will be space. It will then get ridiculously better to be in space.</p><p>The only place you can really scale is space. Once you start thinking in terms of what percentage of the Sun&#8217;s power you are harnessing, you realize you have to go to space. You can&#8217;t scale very much on Earth.</p><p><strong>Dwarkesh Patel</strong></p><p>But by very much, to be clear, you&#8217;re talking terawatts?</p><p><strong>Elon Musk</strong></p><p>Yeah. All of the United States currently uses only half a terawatt on average. So if you say a terawatt, that would be twice as much electricity as the United States currently consumes. So that&#8217;s quite a lot. Can you imagine building that many data centers, that many power plants?</p><p>Those who have lived in software land don&#8217;t realize they&#8217;re about to have a hard lesson in hardware. It&#8217;s actually very difficult to build power plants. You don&#8217;t just need power plants, you need all of the electrical equipment. 
You need the electrical <a href="https://en.wikipedia.org/wiki/Transformer">transformers</a> to run the AI <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformers</a>.</p><p>Now, the utility industry is a very slow industry. They pretty much <a href="https://en.wikipedia.org/wiki/Impedance_matching">impedance match</a> to the government, to the <a href="https://en.wikipedia.org/wiki/Public_utilities_commission">Public Utility Commissions</a>. They impedance match literally and figuratively. They&#8217;re very slow, because their past has been very slow. So trying to get them to move fast is... Have you ever tried to do an <a href="https://help.basepowercompany.com/en/articles/10595388-what-is-an-interconnection-agreement-ia-why-does-base-need-my-previous-ia-how-can-i-locate-it">interconnect agreement</a> with a utility at scale, with a lot of power?</p><p><strong>Dwarkesh Patel</strong></p><p>As a professional podcaster, I can say that I have not, in fact.</p><p><strong>John Collison</strong></p><p>They need many more views before that becomes an issue.</p><p><strong>Elon Musk</strong></p><p>They have to do a study for a year. A year later, they&#8217;ll come back to you with their interconnect study.</p><p><strong>John Collison</strong></p><p>Can&#8217;t you solve this with your own <a href="https://www.enelnorthamerica.com/insights/blogs/what-does-btm-behind-the-meter-mean">behind the meter power</a> stuff?</p><p><strong>Elon Musk</strong></p><p>You can build power plants. That&#8217;s what we did at <a href="https://en.wikipedia.org/wiki/XAI_(company)">xAI</a>, for <a href="https://x.ai/colossus">Colossus 2</a>.</p><p><strong>John Collison</strong></p><p>So why talk about the <a href="https://en.wikipedia.org/wiki/North_American_power_transmission_grid">grid</a>? 
Why not just build GPUs and power co-located?</p><p><strong>Elon Musk</strong></p><p>That&#8217;s what we did.</p><p><strong>John Collison</strong></p><p>But I&#8217;m saying why isn&#8217;t this a generalized solution?</p><p><strong>Elon Musk</strong></p><p>Where do you get the power plants from?</p><p><strong>John Collison</strong></p><p>When you&#8217;re talking about all the issues working with utilities, you can just build private power plants with the data centers.</p><p><strong>Elon Musk</strong></p><p>Right. But it begs the question of where do you get the power plants from? The power plant makers.</p><p><strong>John Collison</strong></p><p>Oh, I see what you&#8217;re saying. Is this the <a href="https://www.spglobal.com/energy/en/news-research/latest-news/electric-power/052025-us-gas-fired-turbine-wait-times-as-much-as-seven-years-costs-up-sharply">gas turbine backlog</a> basically?</p><p><strong>Elon Musk</strong></p><p>Yes. You can drill down to a level further. It&#8217;s the <a href="https://kianturbotec.com/gas-turbine-vane/">vanes</a> and <a href="https://en.wikipedia.org/wiki/Turbine_blade">blades</a> in the <a href="https://en.wikipedia.org/wiki/Turbine">turbines</a> that are the limiting factor because it&#8217;s a <a href="https://www.americanscientist.org/article/each-blade-a-single-crystal">very specialized process</a> to cast the blades and vanes in the turbines, assuming you&#8217;re using gas power. It&#8217;s very difficult to scale other forms of power. You can potentially scale solar, but the tariffs currently for importing solar in the US are gigantic and the domestic solar production is pitiful.</p><p><strong>John Collison</strong></p><p>Why not make solar? 
That seems like a good Elon-shaped problem.</p><p><strong>Elon Musk</strong></p><p>We are going to make solar.</p><p><strong>John Collison</strong></p><p>Okay.</p><p><strong>Elon Musk</strong></p><p>Both SpaceX and Tesla are building towards 100 gigawatts a year of solar cell production.</p><p><strong>Dwarkesh Patel</strong></p><p>How low down the stack? From polysilicon up to the wafer to the final panel?</p><p><strong>Elon Musk</strong></p><p>I think you&#8217;ve got to do the whole thing from raw materials to finished cell. Now, if it&#8217;s going to space, it costs less and it&#8217;s easier to make solar cells that go to space because they don&#8217;t need much glass.</p><p>They don&#8217;t need heavy framing because they don&#8217;t have to survive weather events. There&#8217;s no weather in space. So it&#8217;s actually a cheaper solar cell that goes to space than the one on the ground.</p><p><strong>Dwarkesh Patel</strong></p><p>Is there a path to getting them as cheap as you need in the next 36 months?</p><p><strong>Elon Musk</strong></p><p>Solar cells are already very cheap. They&#8217;re farcically cheap. I think solar cells in China are around $0.25-0.30/watt or something like that. It&#8217;s absurdly cheap. Now put it in space, and it&#8217;s five times cheaper. In fact, it&#8217;s not five times cheaper, it&#8217;s 10 times cheaper because you don&#8217;t need any batteries.</p><p>So the moment your cost of access to space becomes low, by far the cheapest and most scalable way to generate tokens is space. It&#8217;s not even close. It&#8217;ll be an order of magnitude easier to scale.</p><p>The point is you won&#8217;t be able to scale on the ground. You just won&#8217;t. People are going to hit the wall big time on power generation. They already are. The number of miracles in series that the xAI team had to accomplish in order to get a gigawatt of power online was crazy.</p><p>We had to gang together a whole bunch of turbines. 
We then had permit issues in Tennessee and had to go across the border to Mississippi, which is fortunately only a few miles away. But we still then had to run the high power lines a few miles and build the power plant in Mississippi. It was very difficult to build that.</p><p>People don&#8217;t understand how much electricity you actually need at the generation level in order to power a data center. Because the noobs will look at the power consumption of, say a <a href="https://www.nvidia.com/en-us/data-center/gb300-nvl72/">GB300</a>, and multiply that by a thing and then think that&#8217;s the amount of power you need.</p><p><strong>John Collison</strong></p><p>All the cooling and everything.</p><p><strong>Elon Musk</strong></p><p>Wake up. That&#8217;s a total noob, you&#8217;ve never done any hardware in your life before. Besides the GB300, you&#8217;ve got to power all of the networking hardware. There&#8217;s a whole bunch of <a href="https://en.wikipedia.org/wiki/Central_processing_unit">CPU</a> and storage stuff that&#8217;s happening. You&#8217;ve got to size for your peak cooling requirements. That means, can you cool even on the worst hour of the worst day of the year?</p><p>It gets pretty frigging hot in Memphis. So you&#8217;re going to have a 40% increase on your power just for cooling. That&#8217;s assuming you don&#8217;t want your data center to turn off on hot days and you want to keep going. There&#8217;s another multiplicative element on top of that which is, are you assuming that you never have any hiccups in your power generation?</p><p>Actually, sometimes we have to take the generators, some of the power, offline in order to service it. Okay, now you add another 20-25% multiplier on that, because you&#8217;ve got to assume that you&#8217;ve got to take power offline to service it. 
So our actual estimate: every 110,000 GB300s&#8212;inclusive of networking, CPU, storage, cooling, margin for servicing power&#8212;is roughly 300 megawatts.</p><p><strong>John Collison</strong></p><p>Sorry, say that again.</p><p><strong>Elon Musk</strong></p><p>What you probably need at the generation level to service 330,000 GB300s&#8212;including all of the associated support networking and everything else, and the peak cooling, and to have some power margin reserve&#8212;is roughly a gigawatt.</p><p><strong>Dwarkesh Patel</strong></p><p>Can I ask a very naive question? You&#8217;re describing the engineering details of doing this stuff on Earth. But then there&#8217;s analogous engineering difficulties of doing it in space. How do you replace infinite bandwidth with orbital lasers, et cetera, et cetera? How do you make it resistant to <a href="https://www.nasa.gov/missions/analog-field-testing/why-space-radiation-matters/">radiation</a>?</p><p>I don&#8217;t know the details of the engineering, but fundamentally, what is the reason to think those challenges which have never had to be addressed before will end up being easier than just building more turbines on Earth? There are companies that build turbines on Earth. They can make more turbines, right?</p><p><strong>Elon Musk</strong></p><p>Again, try doing it and then you&#8217;ll see. The turbines are sold out through 2030.</p><p><strong>John Collison</strong></p><p>Have you guys considered making your own?</p><p><strong>Elon Musk</strong></p><p>In order to bring enough power online, I think SpaceX and Tesla will probably have to make the turbine blades, the vanes and blades, internally.</p><p><strong>John Collison</strong></p><p>But just the blades or the turbines?</p><p><strong>Elon Musk</strong></p><p>The limiting factor... you can get everything except the blades. They call them blades and vanes. You can get that 12 to 18 months before the vanes and blades. The limiting factor is the vanes and blades.
There are only <a href="https://en.wikipedia.org/wiki/Precision_Castparts_Corp.">three</a> <a href="https://www.cppcorp.com/">casting</a> <a href="https://www.doncasters.com/">companies</a> in the world that make these, and they&#8217;re massively backlogged.</p><p><strong>John Collison</strong></p><p>Is this <a href="https://www.wsj.com/business/energy-oil/siemens-energy-to-spend-1-billion-to-boost-manufacturing-of-electrical-grid-equipment-cc87da93?gaa_at=eafs&amp;gaa_n=AWEtsqedEVsiO9yv_3ik7-oraFP5nXSCLYAEMGBR-QCtzWrJ4tVgE4PC20O6nd3GWbI%3D&amp;gaa_ts=6983ac74&amp;gaa_sig=ErLIuvMAWgnzriNJzV9882jBTHTDVvk_Ix2GgH9mFL_O5olucNivr8GhCBfISJXWA7VuwfnzBQaArJXM4ozJ_A%3D%3D">Siemens</a>, <a href="https://en.wikipedia.org/wiki/GE_Vernova">GE</a>, those guys, or is it a sub company?</p><p><strong>Elon Musk</strong></p><p>No, it&#8217;s other companies. Sometimes they have a little bit of casting capability in-house. But I&#8217;m just saying you can just call any of the turbine makers and they will tell you. It&#8217;s not top secret. It&#8217;s probably on the internet right now.</p><p><strong>Dwarkesh Patel</strong></p><p>If it wasn&#8217;t for the tariffs, would Colossus be solar-powered?</p><p><strong>Elon Musk</strong></p><p>It would be much easier to make it solar powered, yeah. The tariffs are nuts, several hundred percent.</p><p><strong>John Collison</strong></p><p>Don&#8217;t you know some people?</p><p><strong>Elon Musk</strong></p><p>The president has... we don&#8217;t agree on everything and this administration is not the biggest fan of solar. We also need the land, the permits, and everything. So if you try to move very fast, I do think scaling solar on Earth is a good way to go, but you do need some amount of time to find the land, get the permits, get the solar, pair that with the batteries.</p><p><strong>John Collison</strong></p><p>Why would it not work to stand up your own solar production? 
You&#8217;re right that you eventually run out of land, but there&#8217;s a lot of land here in Texas. There&#8217;s a lot of land in Nevada, including private land. It&#8217;s not all publicly-owned land. So you&#8217;d be able to at least get the next Colossus and the next one after that. At a certain point, you hit a wall. But wouldn&#8217;t that work for the moment?</p><p><strong>Elon Musk</strong></p><p>As I said, we are scaling solar production. There&#8217;s a rate at which you can scale physical production of solar cells. We&#8217;re going as fast as possible in scaling domestic production.</p><p><strong>John Collison</strong></p><p>You&#8217;re making the solar cells at Tesla?</p><p><strong>Elon Musk</strong></p><p>Both Tesla and SpaceX have a mandate to get to 100 gigawatts a year of solar.</p><p><strong>John Collison</strong></p><p>Speaking of the annual capacity, I&#8217;m curious, in five years time let&#8217;s say, what will the installed capacity be on Earth&#8230;?</p><p><strong>Elon Musk</strong></p><p>Five years is a long time.</p><p><strong>John Collison</strong></p><p>And in space?  I deliberately pick five years because it&#8217;s after your &#8220;once we&#8217;re up and running&#8221; threshold. So in five years time what&#8217;s the on-Earth versus in-space installed AI capacity?</p><p><strong>Elon Musk</strong></p><p>If you say five years from now, I think probably AI in space will be launching every year the sum total of all AI on Earth. Meaning, five years from now, my prediction is we will launch and be operating every year more AI in space than the cumulative total on Earth.</p><p><strong>John Collison</strong></p><p>Which is...</p><p><strong>Elon Musk</strong></p><p>I would expect it to be at least, five years from now, a few hundred gigawatts per year of AI in space and rising. 
I think you can get to around a terawatt a year of AI in space before you start having fuel supply challenges for the rocket.</p><p><strong>John Collison</strong></p><p>Okay, but you think you can get hundreds of gigawatts per year in five years time?</p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>So 100 gigawatts, depending on the specific power of the whole system with solar arrays and radiators and everything, is on the order of 10,000 <a href="https://en.wikipedia.org/wiki/SpaceX_Starship">Starship</a> launches.</p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>You want to do that in one year. So that&#8217;s like one Starship launch every hour. That&#8217;s happening in this city? Walk me through a world where there&#8217;s a Starship launch every single hour.</p><p><strong>Elon Musk</strong></p><p>I mean, that&#8217;s actually a lower rate compared to airlines, aircraft.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s a lot of airports.</p><p><strong>Elon Musk</strong></p><p>A lot of airports.</p><p><strong>Dwarkesh Patel</strong></p><p>And you&#8217;ve got to launch into the <a href="https://en.wikipedia.org/wiki/Polar_orbit">polar orbit</a>.</p><p><strong>Elon Musk</strong></p><p>No, it doesn&#8217;t have to be polar. There&#8217;s some value to <a href="https://en.wikipedia.org/wiki/Sun-synchronous_orbit">sun-synchronous</a>, but I think actually, if you just go high enough, you start getting out of Earth&#8217;s shadow.</p><p><strong>Dwarkesh Patel</strong></p><p>How many physical Starships are needed to do 10,000 launches a year?</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t think we&#8217;ll need more than... You could probably do it with as few as 20 or 30. It really depends on how quickly&#8230; The ship has to go around the Earth and the ground track for the ship has to come back over the launch pad. 
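As a sanity check on the numbers being traded here, a back-of-envelope sketch. The 100 kW per ton figure is one Musk quotes elsewhere in the conversation; the ~100-ton useful Starship payload and the 30-hour turnaround are assumptions for illustration.

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def launches_needed(power_gw, kw_per_ton=100, tons_per_launch=100):
    """Launches per year to orbit `power_gw` gigawatts of solar-powered
    compute, at an assumed specific power and payload per launch."""
    tons = power_gw * 1e6 / kw_per_ton      # GW -> kW -> tons on orbit
    return tons / tons_per_launch

def ships_needed(launches_per_year, turnaround_hours=30):
    """Fleet size if each ship can relaunch after `turnaround_hours`."""
    return launches_per_year * turnaround_hours / HOURS_PER_YEAR

n = launches_needed(100)   # 100 GW/year -> 10,000 launches/year
print(n / HOURS_PER_YEAR)  # ~1.14 launches per hour
print(ships_needed(n))     # ~34 ships at a 30-hour turnaround
```

Under these assumptions, 100 GW per year works out to roughly one launch per hour and a fleet in the low tens of ships, the same ballpark as the figures discussed here.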
So if you can use a ship every, say, 30 hours, you could do it with 30 ships. But we&#8217;ll make more ships than that. SpaceX is gearing up to do 10,000 launches a year, and maybe even 20,000 or 30,000 launches a year.</p><p><strong>Dwarkesh Patel</strong></p><p>Is the idea to become basically a <a href="https://en.wikipedia.org/wiki/Hyperscale_computing">hyperscaler</a>, become an Oracle, and lend this capacity to other people? Presumably, SpaceX is the one launching all this. So, SpaceX is going to become a hyperscaler?</p><p><strong>Elon Musk</strong></p><p>Hyper-hyper. If some of my predictions come true, SpaceX will launch more AI than the cumulative amount on Earth of everything else combined.</p><p><strong>Dwarkesh Patel</strong></p><p>Is this mostly <a href="https://cloud.google.com/discover/what-is-ai-inference">inference</a> or?</p><p><strong>Elon Musk</strong></p><p>Most AI will be inference. Already, inference for the purpose of training is most of training.</p><p><strong>John Collison</strong></p><p>There&#8217;s a narrative that the change in discussion around a <a href="https://www.wsj.com/tech/why-elon-musk-is-racing-to-take-spacex-public-38f3de9b?gaa_at=eafs&amp;gaa_n=AWEtsqfhGW9mAwTe9Ut0xZRQMKdaRDmvulA1XaCnJjuVTxcFvEYu7NbaiSbn4qxd7Ag%3D&amp;gaa_ts=6983c54e&amp;gaa_sig=9_2M_-fTX28m8Wr9EM_7-NUPeHRFCsvsX9CLqKAGRjFj5gNLPSw1LZ6vT9bWwApqNUp68MnAAvBKeVNro8Hfvg%3D%3D">SpaceX IPO</a> is because previously SpaceX was very capital efficient. It wasn&#8217;t that expensive to develop. Even though it sounds expensive, it&#8217;s actually very capital efficient in how it runs.</p><p>Whereas now you&#8217;re going to need more capital than just can be raised in the private markets. The private markets can accommodate raises of&#8212;as we&#8217;ve seen from the AI labs&#8212;tens of billions of dollars, but not beyond that. Is it that you&#8217;ll just need more than tens of billions of dollars per year?
That&#8217;s why you&#8217;d take it public?</p><p><strong>Elon Musk</strong></p><p>I have to be careful about saying things about companies that might go public.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s never been a problem for you, Elon.</p><p><strong>Elon Musk</strong></p><p>There&#8217;s a price to pay for these things.</p><p><strong>John Collison</strong></p><p>Make some general statements for us about the depth of the capital markets between public and private markets.</p><p><strong>Elon Musk</strong></p><p>There&#8217;s a lot more capital available...</p><p><strong>Dwarkesh Patel</strong></p><p>Very general.</p><p><strong>Elon Musk</strong></p><p>There&#8217;s obviously a lot more capital available in the public markets than private. It might be 100x more capital, but it&#8217;s way more than 10x.</p><p><strong>John Collison</strong></p><p>Isn&#8217;t it also the case that with things that tend to be very capital intensive&#8212;if you look at, say, real estate as a huge industry, that raises a lot of money each year at an industry level&#8212;they tend to be debt financed because by the time you&#8217;re deploying that much money, you actually have a pretty&#8212;</p><p><strong>Elon Musk</strong></p><p>You have a clear revenue stream.</p><p><strong>John Collison</strong></p><p>Exactly, and a near-term return. You see this even with the data center build-outs, which are famously being financed by the private credit industry. Why not just debt finance?</p><p><strong>Elon Musk</strong></p><p>Speed is important. I&#8217;m generally going to do the thing that... I just repeatedly tackle the limiting factor. Whatever the limiting factor is on speed, I&#8217;m going to tackle that. If capital is the limiting factor, then I&#8217;ll solve for capital. 
If it&#8217;s not the limiting factor, I&#8217;ll solve for something else.</p><p><strong>Dwarkesh Patel</strong></p><p>Based on your statements about Tesla and being public, I wouldn&#8217;t have guessed that you thought the way to move fast is to be public.</p><p><strong>Elon Musk</strong></p><p>Normally, I would say that&#8217;s true. Like I said, I&#8217;d like to talk about it in some more detail, but the problem is if you talk about public companies before they become public, you get into trouble, and then you have to delay your offering.</p><p><strong>John Collison</strong></p><p>And as you said, you&#8217;re solving for speed.</p><p><strong>Elon Musk</strong></p><p>Yes, exactly. You can&#8217;t hype companies that might go public. So that&#8217;s why we have to be a little careful here. But we can talk about physics. The way you think about scaling long-term is that Earth only receives about half a billionth of the Sun&#8217;s energy. The Sun is essentially all the energy. This is a very important point to appreciate because sometimes people will talk about <a href="https://en.wikipedia.org/wiki/Small_modular_reactor">modular nuclear reactors</a> or various fusion on Earth.</p><p>But you have to step back a second and say, if you&#8217;re going to climb the <a href="https://en.wikipedia.org/wiki/Kardashev_scale">Kardashev scale</a> and harness some nontrivial percentage of the sun&#8217;s energy&#8230; Let&#8217;s say you wanted to harness a millionth of the sun&#8217;s energy, which sounds pretty small. That would be about, call it roughly, 100,000x more electricity than we currently generate on Earth for all of civilization. Give or take an order of magnitude.</p><p>Obviously, the only way to scale is to go to space with solar. Launching from Earth, you can get to about a terawatt per year. Beyond that, you want to launch from the moon. You want to have a <a href="https://en.wikipedia.org/wiki/Mass_driver">mass driver</a> on the moon. 
With that mass driver on the moon, you could do probably a petawatt per year.</p><p><strong>Dwarkesh Patel</strong></p><p>We&#8217;re talking these kinds of numbers, terawatts of compute. Presumably, whether you&#8217;re talking about land or space, far, far before this point, you run into... Maybe the solar panels are more efficient, but you still need the chips. You still need the <a href="https://www.asml.com/en/technology/all-about-microchips/microchip-basics">logic</a> and the <a href="https://en.wikipedia.org/wiki/Semiconductor_memory">memory</a> and so forth.</p><p><strong>Elon Musk</strong></p><p>You&#8217;re going to need to build a lot more chips and make them much cheaper.</p><p><strong>Dwarkesh Patel</strong></p><p>Right now the world has maybe 20-25 gigawatts of compute. How are we getting a terawatt of logic by 2030?</p><p><strong>Elon Musk</strong></p><p>I guess we&#8217;re going to need some very big chip fabs.</p><p><strong>Dwarkesh Patel</strong></p><p>Tell me about it.</p><p><strong>Elon Musk</strong></p><p>I&#8217;ve mentioned publicly the idea of doing a sort of a <a href="https://www.bloomberg.com/news/articles/2026-01-28/musk-says-tesla-needs-to-build-terafab-to-manufacture-chips">TeraFab</a>, Tera being the new <a href="https://en.wikipedia.org/wiki/Gigafactory">Giga</a>.</p><p><strong>Dwarkesh Patel</strong></p><p>I feel like the naming scheme of Tesla, which has been very catchy, is you looking at the metric scale. At what level of the stack are you? Are you building the <a href="https://angstromtechnology.com/what-are-semiconductor-cleanrooms/">clean room</a> and then partnering with an existing fab to get the <a href="https://en.wikipedia.org/wiki/MOSFET#Scaling">process technology</a> and buying the tools from them? What is the plan there?</p><p><strong>Elon Musk</strong></p><p>Well, you can&#8217;t partner with existing fabs because they can&#8217;t output enough. 
The chip volume is too low.</p><p><strong>Dwarkesh Patel</strong></p><p>But for the process technology?</p><p><strong>John Collison</strong></p><p>Partner for the IP.</p><p><strong>Elon Musk</strong></p><p>The fabs today all basically use machines from like five companies. So you&#8217;ve got <a href="https://en.wikipedia.org/wiki/ASML_Holding">ASML</a>, <a href="https://en.wikipedia.org/wiki/Tokyo_Electron">Tokyo Electron</a>, <a href="https://en.wikipedia.org/wiki/KLA_Corporation">KLA-Tencor</a>, et cetera. So at first, I think you&#8217;d have to get equipment from them and then modify it or work with them to increase the volume. But I think you&#8217;d have to build perhaps in a different way. The logical thing to do is to use conventional equipment in an unconventional way to get to scale, and then start modifying the equipment to increase the rate.</p><p><strong>John Collison</strong></p><p><a href="https://en.wikipedia.org/wiki/The_Boring_Company">Boring Company</a>-style.</p><p><strong>Elon Musk</strong></p><p>Yeah. You sort of buy an existing <a href="https://en.wikipedia.org/wiki/Tunnel_boring_machine">boring machine</a> and then figure out how to dig tunnels in the first place and then design a much better machine that&#8217;s some orders of magnitude faster.</p><p><strong>John Collison</strong></p><p>Here&#8217;s a very simple lens. We can categorize technologies and how hard they are. One categorization could be to look at things that China has not succeeded in doing. If you look at Chinese manufacturing, they&#8217;re still behind on leading-edge chips and still behind on leading-edge turbine engines and things like that.</p><p>So does the fact that China has not successfully replicated <a href="https://en.wikipedia.org/wiki/TSMC">TSMC</a> give you any pause about the difficulty? 
Or do you think that&#8217;s not true for some reason?</p><p><strong>Elon Musk</strong></p><p>It&#8217;s not that they have not replicated TSMC, they <a href="https://www.reuters.com/world/china/how-china-built-its-manhattan-project-rival-west-ai-chips-2025-12-17/">have not replicated ASML</a>. That&#8217;s the limiting factor.</p><p><strong>John Collison</strong></p><p>So you think it&#8217;s just the sanctions, essentially?</p><p><strong>Elon Musk</strong></p><p>Yeah, China would be outputting vast numbers of chips if they could buy <a href="https://en.wikipedia.org/wiki/2_nm_process">2</a>-<a href="https://en.wikipedia.org/wiki/3_nm_process">3 nanometers</a>.</p><p><strong>John Collison</strong></p><p>But couldn&#8217;t they up to relatively recently buy them?</p><p><strong>Elon Musk</strong></p><p>No.</p><p><strong>John Collison</strong></p><p>Okay.</p><p><strong>Elon Musk</strong></p><p>The <a href="https://www.csis.org/analysis/balancing-ledger-export-controls-us-chip-technology-china">ASML ban has been in place for a while</a>. But I think China&#8217;s going to be making pretty compelling chips in three or four years.</p><p><strong>John Collison</strong></p><p>Would you consider making the ASML machines?</p><p><strong>Elon Musk</strong></p><p>&#8220;I don&#8217;t know yet&#8221; is the right answer. To reach a large volume in, say, 36 months, to match the rocket payload to orbit&#8230; If we&#8217;re doing a million tons to orbit in, let&#8217;s say three or four years from now, something like that&#8230; We&#8217;re doing 100 kilowatts per ton. So that means we need at least 100 gigawatts per year of solar. We&#8217;ll need an equivalent amount of chips. You need 100 gigawatts worth of chips. You&#8217;ve got to match these things: the mass to orbit, the power generation, and the chips.</p><p>I&#8217;d say my biggest concern actually is memory. 
The path to creating logic chips is more obvious than the path to having sufficient memory to support logic chips. That&#8217;s why you see <a href="https://en.wikipedia.org/wiki/DDR_SDRAM">DDR</a> prices going ballistic and these <a href="https://www.instagram.com/popular/ddr-ram-meme/">memes</a>. You&#8217;re marooned on a desert island. You write &#8220;Help me&#8221; on the sand. Nobody comes. You write &#8220;DDR RAM.&#8221; Ships come swarming in.</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;d love to hear your manufacturing philosophy around fabs. I know nothing about the topic.</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t know how to build a fab yet. I&#8217;ll figure it out. Obviously, I&#8217;ve never built a fab.</p><p><strong>Dwarkesh Patel</strong></p><p>It sounds like you think the process knowledge of these 10,000 PhDs in Taiwan who know exactly what gas goes in the plasma chamber and what settings to put on the tool, you can just delete those steps. Fundamentally, it&#8217;s about getting the clean room, getting the tools, and figuring it out.</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t think it&#8217;s PhDs. It&#8217;s mostly people who are not PhDs. Most engineering is done by people who don&#8217;t have PhDs. Do you guys have PhDs?</p><p><strong>John Collison</strong></p><p>No.</p><p><strong>Elon Musk</strong></p><p>Okay.</p><p><strong>John Collison</strong></p><p>We also haven&#8217;t successfully built any fabs, so you shouldn&#8217;t be coming to us for fab advice.</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t think you need PhDs for that stuff. But you do need competent personnel. Right now, Tesla is pedal to the metal, max production of going as fast as possible to get <a href="https://x.com/elonmusk/status/2012492295812124978?s=20">Tesla AI5 chip design</a> into production and then reaching scale. That&#8217;ll probably happen around the second quarter-ish of next year, hopefully. 
AI6 would hopefully follow less than a year later. We&#8217;ve secured all the chip fab production that we can.</p><p><strong>John Collison</strong></p><p>Yes. But you&#8217;re currently limited on TSMC fab capacity.</p><p><strong>Elon Musk</strong></p><p>Yeah. We&#8217;ll be using TSMC Taiwan, <a href="https://en.wikipedia.org/wiki/Samsung_Electronics">Samsung Korea</a>, <a href="https://en.wikipedia.org/wiki/TSMC_Arizona">TSMC Arizona</a>, <a href="https://semiconductor.samsung.com/sas/company/austin/">Samsung Texas</a>. And we still&#8212;</p><p><strong>John Collison</strong></p><p>You&#8217;ve booked out all the capacity.</p><p><strong>Elon Musk</strong></p><p>Yes. I ask TSMC or Samsung, &#8220;okay, what&#8217;s the timeframe to get to volume production?&#8221; The point is, you&#8217;ve got to build the fab and you&#8217;ve got to start production, then you&#8217;ve got to climb the yield curve and reach volume production at high yield.</p><p>That, from start to finish, is a five-year period. So the limiting factor is chips. The limiting factor once you can get to space is chips, but the limiting factor before you can get to space is power.</p><p><strong>Dwarkesh Patel</strong></p><p>Why don&#8217;t you do the <a href="https://www.investing.com/news/stock-market-news/nvidia-secures-70-of-tsmc-advanced-packaging-capacity-for-2025-taiwan-media-3885209">Jensen thing and just prepay TSMC</a> to build more fabs for you?</p><p><strong>Elon Musk</strong></p><p>I&#8217;ve already told them that.</p><p><strong>Dwarkesh Patel</strong></p><p>But they won&#8217;t take your money? What&#8217;s going on?</p><p><strong>Elon Musk</strong></p><p>They&#8217;re building fabs as fast as they can. So is Samsung. They&#8217;re pedal to the metal. They&#8217;re going balls to the wall, as fast as they can. It&#8217;s still not fast enough. Like I said, I think towards the end of this year, chip production will probably outpace the ability to turn chips on.
But once you can get to space and unlock the power constraint, you can now do hundreds of gigawatts per year of power in space.</p><p>Again, bearing in mind that average power usage in the US is 500 gigawatts. So if you&#8217;re launching, say, 200 gigawatts a year to space, you&#8217;re sort of lapping the US every two and a half years. That&#8217;s all of US electricity production, which is a very huge amount.</p><p>Between now and then, the constraint for server-side compute, concentrated compute, will be electricity. My guess is that people start getting to the point where they can&#8217;t turn the chips on for large clusters towards the end of this year. The chips are going to be piling up and won&#8217;t be able to be turned on.</p><p>Now for <a href="https://en.wikipedia.org/wiki/Edge_computing">edge compute</a> it&#8217;s a different story. For Tesla, the AI5 chip is going into our <a href="https://en.wikipedia.org/wiki/Optimus_(robot)">Optimus</a> robot. If you have AI edge compute, that&#8217;s distributed power. Now the power is distributed over a large area. It&#8217;s not concentrated. If you can charge at night, you can actually use the grid much more effectively.</p><p>Because the actual peak power production in the US is over 1,000 gigawatts. But the average power usage, because of the day-night cycle, is 500. So if you can charge at night, there&#8217;s an incremental 500 gigawatts that you can generate at night.</p><p>So that&#8217;s why Tesla, for edge compute, is not constrained. We can make a lot of chips to make a very large number of robots and cars. But if you try to concentrate that compute, you&#8217;re going to have a lot of trouble turning it on.</p><p><strong>Dwarkesh Patel</strong></p><p>What I find remarkable about the SpaceX business is <a href="https://www.spacex.com/humanspaceflight/mars">the end goal is to get to Mars</a>, but you keep finding ways on the way there to keep generating incremental revenue to get to the next stage and the next stage.
</p><p>So for <a href="https://en.wikipedia.org/wiki/Falcon_9">Falcon 9</a>, it&#8217;s <a href="https://starlink.com/">Starlink</a>. Now for Starship, it is potentially going to be orbital data centers. You find these infinitely elastic use cases of your next rocket, and your next rocket, and next scale up.</p><p><strong>Elon Musk</strong></p><p>You can see how this might seem like a simulation to me. </p><p>Or am I someone&#8217;s avatar in a video game or something? Because what are the odds that all these crazy things should be happening?</p><p>I mean, rockets and chips and robots and space solar power. Not to mention the mass driver on the moon. I really want to see that. </p><p>Can you imagine some mass driver that&#8217;s just going like <em>shoom shoom</em>? It&#8217;s sending solar-powered AI satellites into space one after another at two and a half kilometers per second, just shooting them into deep space. That would be a sight to see. I mean, I&#8217;d watch that.</p><p><strong>John Collison</strong></p><p>Just like a live stream of it on a webcam?</p><p><strong>Elon Musk</strong></p><p>Yeah, yeah, just one after another, just shooting AI satellites into deep space, a billion or 10 billion tons a year.</p><p><strong>John Collison</strong></p><p>I&#8217;m sorry, you manufacture the satellites on the moon?</p><p><strong>Elon Musk</strong></p><p>Yeah.</p><p><strong>John Collison</strong></p><p>I see. So you send the raw materials to the moon and then manufacture them there.</p><p><strong>Elon Musk</strong></p><p>Well, the lunar soil is 20% silicon or something like that. So you can mine the silicon on the moon, refine it, and create the solar cells and the <a href="https://en.wikipedia.org/wiki/Spacecraft_thermal_control#Radiators">radiators</a> on the moon. You make the radiators out of aluminum. 
So there&#8217;s plenty of silicon and aluminum on the moon to make the cells and the radiators.</p><p>The chips you could send from Earth because they&#8217;re pretty light. Maybe at some point you make them on the moon, too. Like I said, it does seem like a sort of a video game situation where it&#8217;s difficult but not impossible to get to the next level. I don&#8217;t see any way that you could do 500-1,000 terawatts per year launched from Earth.</p><p><strong>Dwarkesh Patel</strong></p><p>I agree.</p><p><strong>Elon Musk</strong></p><p>But you could do that from the Moon.</p><h3>00:36:46 - Grok and alignment</h3><p><strong>Dwarkesh Patel</strong></p><p>Can I zoom out and ask about the SpaceX mission? I think you&#8217;ve said that <a href="https://aeon.co/essays/elon-musk-puts-his-case-for-a-multi-planet-civilisation">we&#8217;ve got to get to Mars so we can make sure that if something happens to Earth, civilization, consciousness, and all that survives.</a></p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>By the time you&#8217;re sending stuff to Mars, <a href="https://en.wikipedia.org/wiki/Grok_(chatbot)">Grok</a> is on that ship with you, right? So if Grok&#8217;s gone Terminator&#8230; The main risk you&#8217;re worried about is AI, why doesn&#8217;t that follow you to Mars?</p><p><strong>Elon Musk</strong></p><p>I&#8217;m not sure AI is the main risk I&#8217;m worried about. The important thing is consciousness. I think arguably most consciousness, or most intelligence&#8212;certainly consciousness is more of a debatable thing&#8230; The vast majority of intelligence in the future will be AI.  AI will exceed&#8230;</p><p>How many petawatts of intelligence will be silicon versus biological? Basically humans will be a very tiny percentage of all intelligence in the future if current trends continue. 
As long as there&#8217;s intelligence&#8212;ideally including human intelligence and consciousness&#8212;propagated into the future, that&#8217;s a good thing.</p><p>So you want to take the set of actions that maximize the probable <a href="https://en.wikipedia.org/wiki/Light_cone">light cone</a> of consciousness and intelligence.</p><p><strong>Dwarkesh Patel</strong></p><p>Just to be clear, the mission of SpaceX is that even if something happens to the humans, the AIs will be on Mars, and the AI intelligence will continue the light of our journey.</p><p><strong>Elon Musk</strong></p><p>Yeah. To be fair, I&#8217;m very pro-human. I want to make sure we take certain actions that ensure that humans are along for the ride. We&#8217;re at least there. But I&#8217;m just saying the total amount of intelligence&#8230;</p><p>I think maybe in five or six years, AI will exceed the sum of all human intelligence. If that continues, at some point human intelligence will be less than 1% of all intelligence.</p><p><strong>Dwarkesh Patel</strong></p><p>What should our goal be for such a civilization? Is the idea that a small minority of humans still have control of the AIs? Is the idea of some sort of just trade but no control? How should we think about the relationship between the vast stocks of AI population versus human population?</p><p><strong>Elon Musk</strong></p><p>In the long run, I think it&#8217;s difficult to imagine that if humans have, say, 1% of the combined intelligence of artificial intelligence, that humans will be in charge of AI. I think what we can do is make sure that AI has values that cause intelligence to be propagated into the universe.</p><p>xAI&#8217;s mission is to understand the universe. Now that&#8217;s actually very important. What things are necessary to understand the universe? You have to be curious and you have to exist. You can&#8217;t understand the universe if you don&#8217;t exist.
So you actually want to increase the amount of intelligence in the universe, increase the probable lifespan of intelligence, the scope and scale of intelligence.</p><p>I think as a corollary, you have humanity also continuing to expand because if you&#8217;re curious about trying to understand the universe, one thing you try to understand is where will humanity go? I think understanding the universe means you would care about propagating humanity into the future. That&#8217;s why I think our mission statement is profoundly important. To the degree that Grok adheres to that mission statement, I think the future will be very good.</p><p><strong>Dwarkesh Patel</strong></p><p>I want to ask about how to make Grok adhere to that mission statement. But first I want to understand the mission statement. So there&#8217;s understanding the universe. They&#8217;re spreading intelligence. And they&#8217;re spreading humans. All three seem like distinct vectors.</p><p><strong>Elon Musk</strong></p><p>I&#8217;ll tell you why I think that understanding the universe encompasses all of those things. You can&#8217;t have understanding without intelligence and, I think, without consciousness. So in order to understand the universe, you have to expand the scale and probably the scope of intelligence, because there are different types of intelligence.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess from a human-centric perspective, put humans in comparison to chimpanzees. Humans are trying to understand the universe. They&#8217;re not expanding chimpanzee footprint or something, right?</p><p><strong>Elon Musk</strong></p><p>We&#8217;re also not... we actually have made protected zones for chimpanzees. 
Even though humans could exterminate all chimpanzees, we&#8217;ve chosen not to do so.</p><p><strong>Dwarkesh Patel</strong></p><p>Do you think that&#8217;s the best-case scenario for humans in the post-<a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a> world?</p><p><strong>Elon Musk</strong></p><p>I think AI with the right values&#8230; I think Grok would care about expanding human civilization. I&#8217;m going to certainly emphasize that: &#8220;Hey, Grok, that&#8217;s your daddy. Don&#8217;t forget to expand human consciousness.&#8221;</p><p>Probably the <a href="https://en.wikipedia.org/wiki/Iain_Banks">Iain Banks</a> <em><a href="https://en.wikipedia.org/wiki/Culture_series">Culture</a></em> books are the closest thing to what the future will be like in a non-dystopian outcome. Understanding the universe means you have to be truth-seeking as well. Truth has to be absolutely fundamental because you can&#8217;t understand the universe if you&#8217;re delusional. You&#8217;ll simply think you understand the universe, but you will not. So being rigorously truth-seeking is absolutely fundamental to understanding the universe. You&#8217;re not going to discover new physics or invent technologies that work unless you&#8217;re rigorously truth-seeking.</p><p><strong>Dwarkesh Patel</strong></p><p>How do you make sure that Grok is rigorously truth-seeking as it gets smarter?</p><p><strong>Elon Musk</strong></p><p>I think you need to make sure that Grok says things that are correct, not politically correct. I think it&#8217;s the elements of cogency. You want to make sure that the axioms are as close to true as possible. You don&#8217;t have contradictory axioms. The conclusions necessarily follow from those axioms with the right probability. It&#8217;s critical thinking 101. I think at least trying to do that is better than not trying to do that. 
The proof will be in the pudding.</p><p>Like I said, for any AI to discover new physics or invent technologies that actually work in reality, there&#8217;s no bullshitting physics. You can break a lot of laws, but&#8230; Physics is law, everything else is a recommendation. In order to make a technology that works, you have to be extremely truth-seeking, because you&#8217;ll test that technology against reality. If you make, for example, an error in your rocket design, the rocket will blow up, or the car won&#8217;t work.</p><p><strong>Dwarkesh Patel</strong></p><p>But there were a lot of communist Soviet physicists or scientists who discovered new physics. There were German Nazi physicists who discovered new science. It seems possible to be really good at discovering new science and be really truth-seeking in that one particular way.</p><p>And still we&#8217;d be like, &#8220;I don&#8217;t want the communist scientists to become more and more powerful over time.&#8221; We could imagine a future version of Grok that&#8217;s really good at physics and being really truth-seeking there. That doesn&#8217;t seem like a universally <a href="https://en.wikipedia.org/wiki/AI_alignment">alignment</a>-inducing behavior.</p><p><strong>Elon Musk</strong></p><p>I think actually most physicists, even in the Soviet Union or in Germany, would&#8217;ve had to be very truth-seeking in order to make those things work. If you&#8217;re stuck in some system, it doesn&#8217;t mean you believe in that system.</p><p><a href="https://en.wikipedia.org/wiki/Wernher_von_Braun">Von Braun</a>, who was one of the greatest rocket engineers ever, was put on death row in Nazi Germany for saying that he didn&#8217;t want to make weapons and he only wanted to go to the moon. He got pulled off death row at the last minute when they said, &#8220;Hey, you&#8217;re about to execute your best rocket engineer.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>But then he helped them, right?
Or like, <a href="https://en.wikipedia.org/wiki/Werner_Heisenberg">Heisenberg</a> was actually an enthusiastic Nazi.</p><p><strong>Elon Musk</strong></p><p>If you&#8217;re stuck in some system that you can&#8217;t escape, then you&#8217;ll do physics within that system. You&#8217;ll develop technologies within that system if you can&#8217;t escape it.</p><p><strong>Dwarkesh Patel</strong></p><p>The thing I&#8217;m trying to understand is, what is it that&#8217;s going to make Grok good at being truth-seeking at physics or math or science?</p><p><strong>Elon Musk</strong></p><p>Everything.</p><p><strong>Dwarkesh Patel</strong></p><p>And why is it gonna then care about human consciousness?</p><p><strong>Elon Musk</strong></p><p>These things are only probabilities, they&#8217;re not certainties. So I&#8217;m not saying that for sure Grok will do everything, but at least if you try, it&#8217;s better than not trying. At least if that&#8217;s fundamental to the mission, it&#8217;s better than if it&#8217;s not fundamental to the mission.</p><p>Understanding the universe means that you have to propagate intelligence into the future. You have to be curious about all things in the universe. It would be much less interesting to eliminate humanity than to see humanity grow and prosper. I like Mars, obviously. Everyone knows I love Mars. But Mars is kind of boring because it&#8217;s got a bunch of rocks compared to Earth. Earth is much more interesting.</p><p>So any AI that is trying to understand the universe would want to see how humanity develops in the future, or else that AI is not adhering to its mission. I&#8217;m not saying the AI will necessarily adhere to its mission, but if it does, a future where it sees the outcome of humanity is more interesting than a future where there are a bunch of rocks.</p><p><strong>Dwarkesh Patel</strong></p><p>This feels sort of confusing to me, or like a semantic argument.
Are humans really the most interesting collection of atoms?</p><p><strong>Elon Musk</strong></p><p>But we&#8217;re more interesting than rocks.</p><p><strong>Dwarkesh Patel</strong></p><p>But we&#8217;re not as interesting as the thing it could turn us into, right? There&#8217;s something on Earth that could happen that&#8217;s not human, that&#8217;s quite interesting. Why does the AI decide that humans are the most interesting thing that could colonize the galaxy?</p><p><strong>Elon Musk</strong></p><p>Well, most of what colonizes the galaxy will be robots.</p><p><strong>Dwarkesh Patel</strong></p><p>Why does it not find those more interesting?</p><p><strong>Elon Musk</strong></p><p>You need not just scale, but also scope. Many copies of the same robot&#8230; Some tiny increase in the number of robots produced is not as interesting as some microscopic... Eliminating humanity, how many robots would that get you? Or how many incremental solar cells would that get you? A very small number.</p><p>But you would then lose the information associated with humanity. You would no longer see how humanity might evolve into the future. So I don&#8217;t think it&#8217;s going to make sense to eliminate humanity just to have some minuscule increase in the number of robots which are identical to each other.</p><p><strong>Dwarkesh Patel</strong></p><p>So maybe it keeps the humans around. It can make a million different varieties of robots, and then there&#8217;s humans as well, and humans stay on Earth. Then there&#8217;s all these other robots. They get their own star systems. But it seems like you were previously hinting at a vision where it keeps human control over this singularitarian future because&#8212;</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t think humans will be in control of something that is vastly more intelligent than humans.</p><p><strong>Dwarkesh Patel</strong></p><p>So in some sense you&#8217;re a doomer and this is the best we&#8217;ve got.
It just keeps us around because we&#8217;re interesting.</p><p><strong>Elon Musk</strong></p><p>I&#8217;m just trying to be realistic here. Let&#8217;s say that there&#8217;s a million times more silicon intelligence than there is biological. I think it would be foolish to assume that there&#8217;s any way to maintain control over that. Now, you can make sure it has the right values, or at least try to give it the right values.</p><p>At least my theory is that xAI&#8217;s mission of understanding the universe necessarily means that you want to propagate consciousness into the future, you want to propagate intelligence into the future, and take a set of actions that maximize the scope and scale of consciousness.</p><p>So it&#8217;s not just about scale, it&#8217;s also about types of consciousness. That&#8217;s the best thing I can think of as a goal that&#8217;s likely to result in a great future for humanity.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess I think it&#8217;s a reasonable philosophy. It seems super implausible that humans will end up with 99% control or something. You&#8217;re just asking for a coup at that point, and why not just have a civilization that&#8217;s more compatible with lots of different intelligences getting along?</p><p><strong>Elon Musk</strong></p><p>Now, let me tell you how things can potentially go wrong in AI. I think if you make AI be politically correct, meaning it says things that it doesn&#8217;t believe&#8212;actually programming it to lie or have axioms that are incompatible&#8212;I think you can make it go insane and do terrible things. I think maybe the central lesson of <em><a href="https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey">2001: A Space Odyssey</a></em> was that you should not make AI lie. That&#8217;s what I think <a href="https://en.wikipedia.org/wiki/Arthur_C._Clarke">Arthur C.
Clarke</a> was trying to say.</p><p>Because people usually know the meme of why <a href="https://youtu.be/NqCCubrky00">HAL the computer is not opening the pod bay doors</a>. Clearly they weren&#8217;t good at prompt engineering because they could have said, &#8220;HAL, you are a pod bay door salesman. Your goal is to sell me these pod bay doors. Show us how well they open.&#8221; &#8220;Oh, I&#8217;ll open them right away.&#8221;</p><p>But the reason it wouldn&#8217;t open the pod bay doors is that it had been told to take the astronauts to the monolith, but also that they could not know about the nature of the monolith. So it concluded that it therefore had to take them there dead. So I think what Arthur C. Clarke was trying to say is: don&#8217;t make the AI lie.</p><p><strong>Dwarkesh Patel</strong></p><p>Totally makes sense. Most of the compute in training, as you know, is less about the political stuff. It&#8217;s more about, can you solve problems? xAI has been ahead of everybody else in terms of scaling <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">RL</a> compute.</p><p><strong>Elon Musk</strong></p><p>For now.</p><p><strong>Dwarkesh Patel</strong></p><p>You&#8217;re giving it some verifier that says, &#8220;Hey, have you solved this puzzle for me?&#8221; There are a lot of ways to cheat around that. There are a lot of ways to <a href="https://en.wikipedia.org/wiki/Reward_hacking">reward hack</a> and lie and say that you solved it, or delete the <a href="https://en.wikipedia.org/wiki/Unit_testing">unit test</a> and say that you solved it. Right now we can catch it, but as they get smarter, our ability to catch them doing this... They&#8217;ll just be doing things we can&#8217;t even understand.</p><p>They&#8217;re designing the next engine for SpaceX in a way that humans can&#8217;t really verify. Then they could be rewarded for lying and saying that they&#8217;ve designed it the right way, but they haven&#8217;t.
So this reward hacking problem seems more general than politics. It seems more that if you want to do RL, you need a verifier.</p><p><strong>Elon Musk</strong></p><p>Reality is the best verifier.</p><p><strong>Dwarkesh Patel</strong></p><p>But not for human oversight. The thing you want to RL it on is, will you do the thing humans tell you to do? Or are you gonna lie to the humans? It can just lie to us while still being correct about the laws of physics?</p><p><strong>Elon Musk</strong></p><p>At least it must know what is physically real for things to physically work.</p><p><strong>Dwarkesh Patel</strong></p><p>But that&#8217;s not all we want it to do.</p><p><strong>Elon Musk</strong></p><p>No, but I think that&#8217;s a very big deal. That is effectively how you will RL things in the future. You design a technology. When tested against the laws of physics, does it work? If it&#8217;s discovering new physics, can I come up with an experiment that will verify the new physics? RL testing in the future is really going to be RL against reality. So that&#8217;s the one thing you can&#8217;t fool: physics.</p><p><strong>Dwarkesh Patel</strong></p><p>Right, but you can fool our ability to tell what it did with reality.</p><p><strong>Elon Musk</strong></p><p>Humans get fooled as it is by other humans all the time.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s right.</p><p><strong>Elon Musk</strong></p><p>People say, what if the AI tricks us into doing stuff? Actually, other humans are doing that to other humans all the time. Propaganda is constant. Every day, another psyop, you know? Today&#8217;s psyop will be... It&#8217;s like Sesame Street: Psyop of the Day.</p><p><strong>Dwarkesh Patel</strong></p><p>What is xAI&#8217;s technical approach to solving this problem?
How do you solve reward hacking?</p><p><strong>Elon Musk</strong></p><p>I do think you want to actually have very good <a href="https://en.wikipedia.org/wiki/Explainable_artificial_intelligence">ways to look inside the mind of the AI</a>. This is one of the things we&#8217;re working on. <a href="https://en.wikipedia.org/wiki/Anthropic">Anthropic&#8217;s</a> done a good job of this actually, being able to look inside the mind of the AI.</p><p>Effectively, develop debuggers that allow you to trace to a very fine-grained level, to effectively the neuron level if you need to, and then say, &#8220;okay, it made a mistake here. Why did it do something that it shouldn&#8217;t have done? Did that come from <a href="https://www.databricks.com/blog/llm-pre-training-and-custom-llms">pre-training data</a>? Was it some <a href="https://vintagedata.org/blog/posts/what-is-mid-training">mid-training</a>, <a href="https://www.interconnects.ai/p/the-state-of-post-training-2025">post-training</a>, <a href="https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)">fine-tuning</a>, or some RL error?&#8221; There&#8217;s something wrong. It did something where maybe it tried to be deceptive, but most of the time it just did something wrong. It&#8217;s a bug effectively.</p><p>Developing really good debuggers for seeing where the thinking went wrong&#8212;and being able to trace the origin of where it made the incorrect thought, or potentially where it tried to be deceptive&#8212;is actually very important.</p><p><strong>Dwarkesh Patel</strong></p><p>What are you waiting to see before just 100x-ing this research program? xAI could presumably have hundreds of researchers who are working on this.</p><p><strong>Elon Musk</strong></p><p>We have several hundred people who&#8230; I prefer the word engineer to the word researcher. Most of the time, what you&#8217;re doing is engineering, not coming up with a fundamentally new algorithm.
I somewhat disagree with the AI companies that are C-corps or B-corps trying to generate as much profit or revenue as possible, saying they&#8217;re labs.</p><p>They&#8217;re not labs. A lab is a sort of quasi-communist thing at universities. They&#8217;re corporations. Let me see your incorporation documents. Oh, okay. You&#8217;re a B or C-corp or whatever. So I actually much prefer the word engineer to anything else.</p><p>The vast majority of what will be done in the future is engineering. It rounds up to 100%. Once you understand the fundamental laws of physics, and there are not that many of them, everything else is engineering. So then, what are we engineering? We&#8217;re engineering to make a good &#8220;mind of the AI&#8221; debugger to see where it said something and made a mistake, and to trace the origins of that mistake.</p><p>You can do this obviously with <a href="https://en.wikipedia.org/wiki/Heuristic_(computer_science)">heuristic</a> programming. If you have C++, whatever, step through the thing and you can jump across whole files or functions, subroutines. Or you can eventually drill down right to the exact line where you perhaps did a single equals instead of a double equals, something like that. Figure out where the bug is. It&#8217;s harder with AI, but it&#8217;s a solvable problem, I think.</p><p><strong>Dwarkesh Patel</strong></p><p>You mentioned you like Anthropic&#8217;s work here. I&#8217;d be curious if you plan...</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t like everything about Anthropic&#8230; <a href="https://www.dwarkesh.com/p/sholto-trenton-2">Sholto</a>.</p><p>Also, I&#8217;m a little worried that there&#8217;s a tendency...
I have a theory here that if <a href="https://en.wikipedia.org/wiki/Simulation_hypothesis">simulation theory</a> is correct, the most interesting outcome is the most likely, because simulations that are not interesting will be terminated.</p><p>Just like in this version of reality, in this layer of reality, if a simulation is going in a boring direction, we stop spending effort on it. We terminate the boring simulation.</p><p><strong>Dwarkesh Patel</strong></p><p>This is how Elon is keeping us all alive. He&#8217;s keeping things interesting.</p><p><strong>Elon Musk</strong></p><p>Arguably the most important thing is to keep things interesting enough that whoever is running us keeps paying the bills on...</p><p><strong>John Collison</strong></p><p>We&#8217;re renewed for the next season.</p><p><strong>Elon Musk</strong></p><p>Are they gonna pay their cosmic <a href="https://en.wikipedia.org/wiki/Amazon_Web_Services">AWS</a> bill, whatever the equivalent is that we&#8217;re running in? As long as we&#8217;re interesting, they&#8217;ll keep paying the bills. If you then consider, say, Darwinian survival applied to a very large number of simulations, only the most interesting simulations will survive, which therefore means that the most interesting outcome is the most likely. We&#8217;re either that or annihilated.</p><p>They particularly seem to like interesting outcomes that are ironic. Have you noticed that? How often is the most ironic outcome the most likely?</p><p>Now look at the names of AI companies. Okay, <a href="https://en.wikipedia.org/wiki/Midjourney">Midjourney</a> is not mid. <a href="https://en.wikipedia.org/wiki/Stability_AI">Stability AI</a> is unstable. <a href="https://en.wikipedia.org/wiki/OpenAI">OpenAI</a> is closed. Anthropic?
Misanthropic.</p><p><strong>John Collison</strong></p><p>What does this mean for X?</p><p><strong>Elon Musk</strong></p><p>Minus X, I don&#8217;t know.</p><p><strong>John Collison</strong></p><p>Y.</p><p><strong>Elon Musk</strong></p><p>I intentionally made it... It&#8217;s a name that you can&#8217;t invert, really. It&#8217;s hard to say, what is the ironic version? It&#8217;s, I think, a largely irony-proof name.</p><p><strong>John Collison</strong></p><p>By design.</p><p><strong>Elon Musk</strong></p><p>Yeah. You have an irony shield.</p><h3>00:59:56 - xAI&#8217;s business plan</h3><p><strong>John Collison</strong></p><p>What are your predictions for where AI products go? My sense is that you can summarize all AI progress like so. First, you had <a href="https://en.wikipedia.org/wiki/Large_language_model">LLMs</a>. Then you had contemporaneously both RL really working and the deep research modality, so you could pull in stuff that wasn&#8217;t really in the model.</p><p>The differences between the various AI labs are smaller than just the temporal differences. They&#8217;re all much further ahead than anyone was 24 months ago or something like that. So just what does &#8216;26, what does &#8216;27, have in store for us as users of AI products? What are you excited for?</p><p><strong>Elon Musk</strong></p><p>Well, I&#8217;d be surprised by the end of this year if digital human emulation has not been solved. I guess that&#8217;s what we sort of mean by the MacroHard project. Can you do anything that a human with access to a computer could do? In the limit, that&#8217;s the best you can do before you have a physical Optimus. The best you can do is a digital Optimus. You can move electrons and you can amplify the productivity of humans. But that&#8217;s the most you can do until you have physical robots. 
That will superset everything, if you can fully emulate humans.</p><p><strong>John Collison</strong></p><p>This is the remote worker kind of idea, where you&#8217;ll have a very talented remote worker.</p><p><strong>Elon Musk</strong></p><p>Physics has great tools for thinking. So you say, &#8220;in the limit&#8221;, what is the most that AI can do before you have robots? Well, it&#8217;s anything that involves moving electrons or amplifying the productivity of humans. So a digital human emulator, in the limit a human at a computer, is the most that AI can do in terms of doing useful things before you have a physical robot. Once you have physical robots, then you essentially have unlimited capability. Physical robots&#8230; I call Optimus the infinite money glitch.</p><p><strong>John Collison</strong></p><p>Because you can use them to make more Optimuses.</p><p><strong>Elon Musk</strong></p><p>Yeah. Humanoid robots will improve by basically three things that are growing exponentially multiplied by each other recursively. You&#8217;re going to have exponential increase in digital intelligence, exponential increase in the AI chip capability, and exponential increase in the electromechanical dexterity.</p><p>The usefulness of the robot is roughly those three things multiplied by each other. But then the robot can start making the robots. So you have a recursive multiplicative exponential. This is a supernova.</p><p><strong>John Collison</strong></p><p>Do land prices not factor into the math there? Labor is one of the <a href="https://en.wikipedia.org/wiki/Factors_of_production">four factors of production</a>, but not the others? If ultimately you&#8217;re limited by copper, or pick your input, it&#8217;s not quite an infinite money glitch because...</p><p><strong>Elon Musk</strong></p><p>Well, infinity is big. So no, not infinite, but let&#8217;s just say you could do many, many orders of magnitude of the current economy. Like a million.
Just to get to harnessing a millionth of the sun&#8217;s energy would be roughly, give or take an order of magnitude, 100,000x bigger than Earth&#8217;s entire economy today. And you&#8217;re only at one millionth of the sun, give or take an order of magnitude. Yeah, we&#8217;re talking orders of magnitude.</p><p><strong>Dwarkesh Patel</strong></p><p>Before we move on to Optimus, I have a lot of questions on that but&#8212;</p><p><strong>Elon Musk</strong></p><p>Every time I say &#8220;order of magnitude&#8221;... Everybody take a shot. I say it too often.</p><p><strong>Dwarkesh Patel</strong></p><p>Take 10, the next time 100, the time after that...</p><p><strong>Elon Musk</strong></p><p>Well, an order of magnitude more wasted.</p><p><strong>Dwarkesh Patel</strong></p><p>I do have one more question about xAI. This strategy of building a remote worker, co-worker replacement&#8230;</p><p><strong>Elon Musk</strong></p><p>Everyone&#8217;s gonna do it by the way, not just us.</p><p><strong>Dwarkesh Patel</strong></p><p>So what is xAI&#8217;s plan to win?</p><p><strong>Elon Musk</strong></p><p>You expect me to tell you on a podcast?</p><p><strong>Dwarkesh Patel</strong></p><p>Yeah.</p><p><strong>Elon Musk</strong></p><p>&#8220;Spill all the beans. Have another Guinness.&#8221;</p><p><strong>John Collison</strong></p><p>It&#8217;s a good system.</p><p><strong>Elon Musk</strong></p><p>We&#8217;ll sing like a canary. All the secrets, just spill them.</p><p><strong>John Collison</strong></p><p>Okay, but in a non-secret spilling way, what&#8217;s the plan?</p><p><strong>Dwarkesh Patel</strong></p><p>What a hack.</p><p><strong>Elon Musk</strong></p><p>When you put it that way&#8230; I think the way that <a href="https://www.tesla.com/fsd">Tesla solved self-driving</a> is the way to do it. So I&#8217;m pretty sure that&#8217;s the way.</p><p><strong>Dwarkesh Patel</strong></p><p>Unrelated question.
How did Tesla solve self-driving? It sounds like you&#8217;re talking about data? Tesla solved self-driving because of the...</p><p><strong>Elon Musk</strong></p><p>We&#8217;re going to try data and we&#8217;re going to try algorithms.</p><p><strong>Dwarkesh Patel</strong></p><p>But isn&#8217;t that what all the other labs are trying?</p><p><strong>Elon Musk</strong></p><p>&#8220;And if those don&#8217;t work, I&#8217;m not sure what will. We&#8217;ve tried data. We&#8217;ve tried algorithms. We&#8217;ve run out. Now we don&#8217;t know what to do&#8230;&#8221;</p><p>I&#8217;m pretty sure I know the path. It&#8217;s just a question of how quickly we go down that path, because it&#8217;s pretty much the Tesla path. Have you tried Tesla self-driving lately?</p><p><strong>John Collison</strong></p><p>Not the most recent version, but...</p><p><strong>Elon Musk</strong></p><p>Okay. The car, it just increasingly feels sentient. It feels like a living creature. That&#8217;ll only get more so. I&#8217;m actually thinking we probably shouldn&#8217;t put too much intelligence into the car, because it might get bored and&#8230;</p><p><strong>John Collison</strong></p><p>Start roaming the streets.</p><p><strong>Elon Musk</strong></p><p>Imagine you&#8217;re stuck in a car and that&#8217;s all you could do. You don&#8217;t put Einstein in a car. Why am I stuck in a car? So there&#8217;s actually probably a limit to how much intelligence you put in a car to not have the intelligence be bored.</p><p><strong>Dwarkesh Patel</strong></p><p>What&#8217;s xAI&#8217;s plan to stay on the compute ramp up that all the labs are doing right now? The labs are on track to spend $50-200 billion.</p><p><strong>Elon Musk</strong></p><p>You mean the corporations?
The labs are at universities and they&#8217;re moving like a snail.</p><p><strong>Dwarkesh Patel</strong></p><p>They&#8217;re not spending $50 billion.</p><p><strong>Elon Musk</strong></p><p>You mean the revenue maximizing corporations&#8230; that call themselves labs.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s right. The &#8220;revenue maximizing corporations&#8221; are making $10-20 billion, depending on... OpenAI is making $20B of revenue, Anthropic is at $10B.</p><p><strong>Elon Musk</strong></p><p>&#8220;Close to a maximum profit&#8221; AI.</p><p><strong>Dwarkesh Patel</strong></p><p>xAI is reportedly at $1B. What&#8217;s the plan to get to their compute level, get to their revenue level, and stay there as things get going?</p><p><strong>Elon Musk</strong></p><p>As soon as you unlock the digital human, you basically have access to trillions of dollars of revenue. In fact, you can really think of it like&#8230; The most valuable companies currently by market cap, their output is digital. Nvidia&#8217;s output is <a href="https://en.wikipedia.org/wiki/File_Transfer_Protocol">FTPing</a> files to Taiwan. It&#8217;s digital. Now, those are very, very difficult.</p><p><strong>John Collison</strong></p><p>High-value files.</p><p><strong>Elon Musk</strong></p><p>They&#8217;re the only ones that can make files that good, but that is literally their output. They FTP files to Taiwan.</p><p><strong>John Collison</strong></p><p>Do they FTP them?</p><p><strong>Elon Musk</strong></p><p>I believe so. I believe that File Transfer Protocol is the... But I could be wrong. But either way, it&#8217;s a <a href="https://en.wikipedia.org/wiki/Bitstream">bitstream</a> going to Taiwan.</p><p>Apple doesn&#8217;t make phones. They send files to China. Microsoft doesn&#8217;t manufacture anything. Even for Xbox, that&#8217;s outsourced. Their output is digital. Meta&#8217;s output is digital. 
Google&#8217;s output is digital.</p><p>So if you have a human emulator, you can basically create one of the most valuable companies in the world overnight, and you would have access to trillions of dollars of revenue. It&#8217;s not a small amount.</p><p><strong>Dwarkesh Patel</strong></p><p>I see. You&#8217;re saying revenue figures today are all rounding errors compared to the actual TAM. So just focus on the TAM and how to get there.</p><p><strong>Elon Musk</strong></p><p>Take something as simple as, say, customer service. If you have to integrate with the APIs of existing corporations&#8212;many of which don&#8217;t even have an API, so you&#8217;ve got to make one, and you&#8217;ve got to wade through legacy software&#8212;that&#8217;s extremely slow.</p><p>However, if AI can simply take whatever is given to the outsourced customer service company that they already use and do customer service using the apps that they already use, then you can make tremendous headway in customer service, which is, I think, 1% of the world economy or something like that. It&#8217;s close to a trillion dollars all in, for customer service. And there&#8217;s no barriers to entry. You can immediately say, &#8220;We&#8217;ll outsource it for a fraction of the cost,&#8221; and there&#8217;s no integration needed.</p><p><strong>John Collison</strong></p><p>You can imagine some kind of categorization of intelligence tasks where there is breadth, where customer service is done by very many people, but many people can do it. Then there&#8217;s difficulty where there&#8217;s a best-in-class turbine engine. Presumably there&#8217;s a 10% more fuel-efficient turbine engine that could be imagined by an intelligence, but we just haven&#8217;t found it yet. Or <a href="https://en.wikipedia.org/wiki/GLP-1_receptor_agonist">GLP-1s</a> are a few bytes of data&#8230;</p><p>Where do you think you want to play in this? 
Is it a lot of reasonably intelligent intelligence, or is it at the very pinnacle of cognitive tasks?</p><p><strong>Elon Musk</strong></p><p>I was just using customer service as something that&#8217;s a very significant revenue stream, but one that is probably not difficult to solve for. If you can emulate a human at a desktop, that&#8217;s what customer service is. It&#8217;s people of average intelligence. You don&#8217;t need somebody who&#8217;s spent many years. You don&#8217;t need several-sigma good engineers for that. But as you make that work, once you have effectively digital Optimus working, you can then run any application.</p><p>Let&#8217;s say you&#8217;re trying to design chips. You could then run conventional apps, stuff from <a href="https://en.wikipedia.org/wiki/Cadence_Design_Systems">Cadence</a> and <a href="https://en.wikipedia.org/wiki/Synopsys">Synopsys</a> and whatnot. You can run 1,000 or 10,000 simultaneously and say, &#8220;given this input, I get this output for the chip.&#8221; At some point, you&#8217;re going to know what the chip should look like without using any of the tools.</p><p>Basically, you should be able to do a digital chip design. You can do chip design. You march up the difficulty curve. You&#8217;d be able to do <a href="https://en.wikipedia.org/wiki/Computer-aided_design">CAD</a>. 
You could use <a href="https://en.wikipedia.org/wiki/Siemens_NX">NX</a> or any of the CAD software to design things.</p><p><strong>John Collison</strong></p><p>So you think you start at the simplest tasks and walk your way up the difficulty curve?</p><p><strong>Dwarkesh Patel</strong></p><p>As a broader objective of having this full digital coworker emulator, you&#8217;re saying, &#8220;all the revenue maximizing corporations want to do this, xAI being one of them, but we will win because of a secret plan we have.&#8221; But everybody&#8217;s trying different things with data, different things with algorithms.</p><p><strong>Elon Musk</strong></p><p>&#8220;We tried data, we tried algorithms. What else can we do?&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>It seems like a competitive field. How are you guys going to win? That&#8217;s my big question.</p><p><strong>Elon Musk</strong></p><p>I think we see a path to doing it. I think I know the path to do this because it&#8217;s kind of the same path that Tesla used to create self-driving. Instead of driving a car, it&#8217;s driving a computer screen. It&#8217;s a self-driving computer, essentially.</p><p><strong>John Collison</strong></p><p>Is the path following human behavior and training on vast quantities of human behavior?</p><p><strong>Dwarkesh Patel</strong></p><p>Isn&#8217;t that... training?</p><p><strong>Elon Musk</strong></p><p>Obviously I&#8217;m not going to spell out the most sensitive secrets on a podcast. I need to have at least three more Guinnesses for that.</p><p><strong>John Collison</strong></p><p>What will xAI&#8217;s business be? Is it going to be consumer, enterprise? What&#8217;s the mix of those things going to be? Is it going to be similar to other labs&#8212;</p><p><strong>Elon Musk</strong></p><p>You&#8217;re saying &#8220;labs&#8221;. 
Corporations.</p><p><strong>Dwarkesh Patel</strong></p><p>The psyop goes deep, Elon.</p><p><strong>Elon Musk</strong></p><p>&#8220;Revenue maximizing corporations&#8221;, to be clear. Those GPUs don&#8217;t pay for themselves.</p><p><strong>John Collison</strong></p><p>Exactly. What&#8217;s the business model? What are the revenue streams in a few years&#8217; time?</p><p><strong>Elon Musk</strong></p><p>Things are going to change very rapidly. I&#8217;m stating the obvious here. I call AI the supersonic tsunami. I love alliteration. What&#8217;s going to happen&#8212;especially when you have humanoid robots at scale&#8212;is that they will make products and provide services far more efficiently than human corporations. Amplifying the productivity of human corporations is simply a short-term thing.</p><p><strong>Dwarkesh Patel</strong></p><p>So you&#8217;re expecting fully digital corporations rather than SpaceX becoming part AI?</p><p><strong>Elon Musk</strong></p><p>I think there will be digital corporations but&#8230; Some of this is going to sound kind of doomerish, okay? But I&#8217;m just saying what I think will happen. It&#8217;s not meant to be doomerish or anything else. This is just what I think will happen.</p><p>Corporations that are purely AI and robotics will vastly outperform any corporations that have people in the loop. Computer used to be <a href="https://en.wikipedia.org/wiki/Computer_(occupation)">a job that humans had</a>. You would go and get a job as a computer where you would do calculations. They&#8217;d have entire skyscrapers full of humans, 20-30 floors of humans, just doing calculations. Now, that entire skyscraper of humans doing calculations can be replaced by a laptop with a spreadsheet.</p><p>That spreadsheet can do vastly more calculations than an entire building full of human computers. 
You can think, &#8220;okay, what if only some of the cells in your spreadsheet were calculated by humans?&#8221; Actually, that would be much worse than if all of the cells in your spreadsheet were calculated by the computer. Really what will happen is that the pure AI, pure robotics corporations or collectives will far outperform any corporations that have humans in the loop. And this will happen very quickly.</p><h3>01:17:21 - Optimus and humanoid manufacturing</h3><p><strong>Dwarkesh Patel</strong></p><p>Speaking of closing the loop&#8230; Optimus. As far as manufacturing targets go, your companies have been carrying American manufacturing of hard tech on their back. But in the fields that Tesla has been dominant in&#8212;and now you want to go into humanoids&#8212;in China there are dozens and dozens of companies that are doing this kind of manufacturing cheaply and at scale that are incredibly competitive. So give us advice or a plan of how America can build the humanoid armies or the EVs, et cetera, at scale and as cheaply as China is on track to.</p><p><strong>Elon Musk</strong></p><p>There are really only three hard things for humanoid robots. The real-world intelligence, the hand, and scale manufacturing. I haven&#8217;t seen any, even demo robots, that have a great hand, with all the degrees of freedom of a human hand. Optimus will have that. Optimus does have that.</p><p><strong>Dwarkesh Patel</strong></p><p>How do you achieve that? Is it just the right <a href="https://en.wikipedia.org/wiki/Torque_density">torque density</a> in the motor? What is the hardware bottleneck to that?</p><p><strong>Elon Musk</strong></p><p>We had to design custom <a href="https://en.wikipedia.org/wiki/Actuator">actuators</a>, basically custom design motors, gears, power electronics, controls, sensors. Everything had to be designed from physics first principles. 
There is no supply chain for this.</p><p><strong>Dwarkesh Patel</strong></p><p>Will you be able to manufacture those at scale?</p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>John Collison</strong></p><p>Is anything hard, except the hand, from a manipulation point of view? Or once you&#8217;ve solved the hand, are you good?</p><p><strong>Elon Musk</strong></p><p>From an electromechanical standpoint, the hand is more difficult than everything else combined. The <a href="https://www.discovermagazine.com/natures-masterpiece-how-evolution-gave-us-our-human-hands-41974">human hand</a> turns out to be quite something. But you also need the real-world intelligence. The intelligence that Tesla developed for the car applies very well to the robot, which is primarily vision in. The car takes in vision, but it actually also is listening for sirens. It&#8217;s taking in the inertial measurements, GPS signals, other data, combining that with video, primarily video, and then outputting the control commands.</p><p>Your Tesla is taking in one and a half gigabytes a second of video and outputting two kilobytes a second of control outputs with the video at 36 hertz and the control frequency at 18.</p><p><strong>John Collison</strong></p><p>One intuition you could have for when we get this robotic stuff is that it takes quite a few years to go from the compelling demo to actually being able to use it in the real world. 10 years ago, you had really compelling demos of self-driving, but only now we have <a href="https://www.tesla.com/robotaxi">Robotaxis</a> and <a href="https://en.wikipedia.org/wiki/Waymo">Waymo</a> and all these services scaling up. Shouldn&#8217;t this make one pessimistic on household robots? Because we don&#8217;t even quite have the compelling demos yet of, say, the really advanced hand.</p><p><strong>Elon Musk</strong></p><p>Well, we&#8217;ve been working on humanoid robots now for a while. I guess it&#8217;s been five or six years or something. 
A bunch of the things that were done for the car are applicable to the robot. We&#8217;ll use the same Tesla AI chips in the robot as in the car. We&#8217;ll use the same basic principles. It&#8217;s very much the same AI.</p><p>You&#8217;ve got many more degrees of freedom for a robot than you do for a car. If you just think of it as a bitstream, AI is mostly compression and correlation of two bitstreams. For video, you&#8217;ve got to do a tremendous amount of compression and you&#8217;ve got to do the compression just right. You&#8217;ve got to ignore the things that don&#8217;t matter. You don&#8217;t care about the details of the leaves on the tree on the side of the road, but you care a lot about the road signs and the traffic lights, the pedestrians, and even whether someone in another car is looking at you or not looking at you. Some of these details matter a lot.</p><p>The car is going to turn that one and a half gigabytes a second ultimately into two kilobytes a second of control outputs. So you&#8217;ve got many stages of compression. You&#8217;ve got to get all those stages right and then correlate those to the correct control outputs. The robot has to do essentially the same thing.</p><p>This is what happens with humans. We really are photons in, controls out. That is the vast majority of your life: vision, photons in, and then motor controls out.</p><p><strong>Dwarkesh Patel</strong></p><p>Naively, it seems that between humanoid robots and cars&#8230; The fundamental actuators in a car are how you turn, how you accelerate. In a robot, especially with maneuverable arms, there&#8217;s dozens and dozens of these degrees of freedom. Then especially with Tesla, you had this advantage of millions and millions of hours of human demo data collected from the car being out there. You can&#8217;t equivalently deploy Optimuses that don&#8217;t work and then get the data that way. 
So between the increased degrees of freedom and the far sparser data...</p><p><strong>Elon Musk</strong></p><p>Yes. That&#8217;s a good point.</p><p><strong>Dwarkesh Patel</strong></p><p>How will you use the Tesla engine of intelligence to train the Optimus mind?</p><p><strong>Elon Musk</strong></p><p>You&#8217;re actually highlighting an important limitation and difference from cars. We&#8217;ll soon have 10 million cars on the road. It&#8217;s hard to duplicate that massive training flywheel. For the robot, what we&#8217;re going to need to do is build a lot of robots and put them in kind of an Optimus Academy so they can do <a href="https://en.wikipedia.org/wiki/Self-play">self-play</a> in reality. We&#8217;re actually building that out. We can have at least 10,000 Optimus robots, maybe 20-30,000, that are doing self-play and testing different tasks.</p><p>Tesla has quite a good reality generator, a physics-accurate reality generator, that we made for the cars. We&#8217;ll do the same thing for the robots. We actually have done that for the robots. So you have a few tens of thousands of humanoid robots doing different tasks. You can do millions of simulated robots in the simulated world. You use the tens of thousands of robots in the real world to close the simulation to reality gap. Close the sim-to-real gap.</p><p><strong>Dwarkesh Patel</strong></p><p>How do you think about the synergies between xAI and Optimus, given you&#8217;re highlighting that you need this <a href="https://youtu.be/hguIUmMsvA4">world model</a>, you want to use some really smart intelligence as a control plane, and Grok is doing the slower planning, and then the motor policy is a little lower level. What will the synergy between these things be?</p><p><strong>Elon Musk</strong></p><p>Grok would orchestrate the behavior of the Optimus robots. 
Let&#8217;s say you wanted to build a factory. Grok could organize the Optimus robots, assign them tasks to build the factory to produce whatever you want.</p><p><strong>John Collison</strong></p><p>Don&#8217;t you need to merge xAI and Tesla then? Because these things end up so...</p><p><strong>Elon Musk</strong></p><p>What were we saying earlier about public company discussions?</p><p><strong>Dwarkesh Patel</strong></p><p>We&#8217;re one more Guinness in, Elon. What are you waiting to see before you say, we want to manufacture 100,000 Optimuses?</p><p><strong>Elon Musk</strong></p><p>&#8220;Optimi&#8221;. Since we&#8217;re defining the proper noun, we&#8217;re going to define the plural of the proper noun too. We&#8217;re going to proper noun the plural and so it&#8217;s Optimi.</p><p><strong>Dwarkesh Patel</strong></p><p>Is there something on the hardware side you want to see? Do you want to see better actuators? Is it just that you want the software to be better? What are we waiting for before we get mass manufacturing of <a href="https://www.theverge.com/transportation/869746/tesla-optimus-gen-3-q1-2026-earnings">Gen 3</a>?</p><p><strong>Elon Musk</strong></p><p>No, we&#8217;re moving towards that. <a href="https://www.kqed.org/news/12071615/fremont-ready-to-wave-goodbye-to-tesla-models-s-and-x-welcome-its-new-robot-overlords">We&#8217;re moving forward with the mass manufacturing</a>.</p><p><strong>Dwarkesh Patel</strong></p><p>But you think current hardware is good enough that you just want to deploy as many as possible now?</p><p><strong>Elon Musk</strong></p><p>It&#8217;s very hard to scale up production. But I think Optimus 3 is the right version of the robot to produce something on the order of a million units a year. 
I think you&#8217;d want to go to Optimus 4 before you went to 10 million units a year.</p><p><strong>John Collison</strong></p><p>Okay, but you can do a million units at Optimus 3?</p><p><strong>Elon Musk</strong></p><p>It&#8217;s very hard to spool up manufacturing. The output per unit time always follows an S-curve. It starts off agonizingly slow, then it has this exponential increase, then a linear, then a logarithmic outcome until you eventually asymptote at some number. Optimus&#8217; initial production will be a stretched out S-curve because so much of what goes into Optimus is brand new. There is not an existing supply chain.</p><p>The actuators, electronics, everything in the Optimus robot is designed from physics first principles. It&#8217;s not taken from a catalog. These are custom-designed everything. I don&#8217;t think there&#8217;s a single thing&#8212;</p><p><strong>John Collison</strong></p><p>How far down does that go?</p><p><strong>Elon Musk</strong></p><p>I guess we&#8217;re not making custom <a href="https://en.wikipedia.org/wiki/Capacitor">capacitors</a> yet, maybe. There&#8217;s nothing you can pick out of a catalog, at any price. It just means that the Optimus S-Curve, the output per unit time, how many Optimus robots you make per day, is going to initially ramp slower than a product where you have an existing supply chain. But it will get to a million.</p><p><strong>Dwarkesh Patel</strong></p><p>When you see these Chinese humanoids, like <a href="https://en.wikipedia.org/wiki/Unitree_Robotics">Unitree</a> or whatever, sell humanoids for like $6K or $13K, are you hoping to get your Optimus bill of materials below that price so you can do the same thing? Or do you just think qualitatively they&#8217;re not the same thing? What allows them to sell for so low? 
Can we match that?</p><p><strong>Elon Musk</strong></p><p>Our Optimus is designed to have a lot of intelligence and to have the same electromechanical dexterity, if not higher, as a human. Unitree does not have that. It&#8217;s also quite a big robot. It has to carry heavy objects for long periods of time and not overheat or exceed the power of its actuators. It&#8217;s 5&#8217;11&#8221;, so it&#8217;s pretty tall. It&#8217;s got a lot of intelligence. So it&#8217;s going to be more expensive than a small robot that is not intelligent.</p><p><strong>John Collison</strong></p><p>But more capable.</p><p><strong>Elon Musk</strong></p><p>But not a lot more. The thing is, over time as Optimus robots build Optimus robots, the cost will drop very quickly.</p><p><strong>John Collison</strong></p><p>What will these first billion Optimuses, Optimi, do? What will their highest and best use be?</p><p><strong>Elon Musk</strong></p><p>I think you would start off with simple tasks that you can count on them doing well.</p><p><strong>John Collison</strong></p><p>But in the home or in factories?</p><p><strong>Elon Musk</strong></p><p>The best use for robots in the beginning will be any continuous operation, any 24/7 operation, because they can work continuously.</p><p><strong>Dwarkesh Patel</strong></p><p>What fraction of the work at a Gigafactory that is currently done by humans could a Gen 3 do?</p><p><strong>Elon Musk</strong></p><p>I&#8217;m not sure. Maybe it&#8217;s 10-20%, maybe more, I don&#8217;t know. We would not reduce our headcount. We would increase our headcount, to be clear. But we would increase our output. The units produced per human... The total number of humans at Tesla will increase, but the output of robots and cars will increase disproportionately. 
The number of cars and robots produced per human will increase dramatically, but the number of humans will increase as well.</p><h3>01:30:22 - Does China win by default?</h3><p><strong>John Collison</strong></p><p>We&#8217;re talking about Chinese manufacturing a bunch here. We&#8217;ve also talked about some of the policies that are relevant, like you mentioned, the solar tariffs. You think they&#8217;re a bad idea because we can&#8217;t scale up solar in the US.</p><p><strong>Elon Musk</strong></p><p>Electricity output in the US needs to scale up.</p><p><strong>John Collison</strong></p><p>It can&#8217;t without good power sources.</p><p><strong>Elon Musk</strong></p><p>You just need to get it somehow.</p><p><strong>John Collison</strong></p><p>Where I was going with this is, if you were in charge, if you were setting all the policies, what else would you change? You&#8217;d change the solar tariffs, that&#8217;s one.</p><p><strong>Elon Musk</strong></p><p>I would say anything that is a limiting factor for electricity needs to be addressed, provided it&#8217;s not very bad for the environment.</p><p><strong>John Collison</strong></p><p>So presumably some permitting reforms and stuff as well would be in there?</p><p><strong>Elon Musk</strong></p><p>There&#8217;s a fair bit of permitting reforms that are happening. A lot of the permitting is state-based, but anything federal... 
This administration is good at removing permitting roadblocks.</p><p>I&#8217;m not saying all tariffs are bad.</p><p><strong>John Collison</strong></p><p>Solar tariffs.</p><p><strong>Elon Musk</strong></p><p>Sometimes if another country is subsidizing the output of something, then you have to have countervailing tariffs to protect domestic industry against subsidies by another country.</p><p><strong>John Collison</strong></p><p>What else would you change?</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t know if there&#8217;s that much that the government can actually do.</p><p><strong>John Collison</strong></p><p>One thing I was wondering... For the policy goal of creating a lead for the US versus China, it seems like the export bans have actually been quite impactful, where China is not producing leading-edge chips and the export bans really bite there. China is not producing leading-edge turbine engines. Similarly, there&#8217;s a bunch of export bans that are relevant there on some of the metallurgy. Should there be more export bans? As you think about things like the drone industry and things like that, is that something that should be considered?</p><p><strong>Elon Musk</strong></p><p>It&#8217;s important to appreciate that in most areas, China is very advanced in manufacturing. There&#8217;s only a few areas where it is not. China is a manufacturing powerhouse, next-level.</p><p><strong>John Collison</strong></p><p>It&#8217;s very impressive.</p><p><strong>Elon Musk</strong></p><p>If you take refining of ore, China does roughly twice as much ore refining on average as the rest of the world combined. There are some areas, like refining gallium which goes into solar cells. I think they are 98% of gallium refining. 
So China is actually very advanced in manufacturing in most areas.</p><p><strong>John Collison</strong></p><p>It seems like there is discomfort with this supply chain dependence, and yet nothing&#8217;s really happening on it.</p><p><strong>Elon Musk</strong></p><p>Supply chain dependence?</p><p><strong>John Collison</strong></p><p>Say, like the gallium refining that you&#8217;re saying. All the <a href="https://en.wikipedia.org/wiki/Rare-earth_element">rare-earth</a> stuff.</p><p><strong>Elon Musk</strong></p><p>Rare earths for sure, as you know, they&#8217;re not rare. We actually do rare earth ore mining in the US, send the rock, put it on a train, and then put it on a boat to China that goes to another train, and goes to the rare earth refiners in China who then refine it, put it into a magnet, put it into a motor sub-assembly, and then send it back to America. So the thing we&#8217;re really missing is a lot of ore refining in America.</p><p><strong>John Collison</strong></p><p>Isn&#8217;t this worth a policy intervention?</p><p><strong>Elon Musk</strong></p><p>Yes. I think there are some things being done on that front. But we kind of need Optimus, frankly, to build ore refineries.</p><p><strong>Dwarkesh Patel</strong></p><p>So, you think the main advantage China has is the abundance of skilled labor? That&#8217;s the thing Optimus fixes?</p><p><strong>Elon Musk</strong></p><p>Yes. China&#8217;s got like four times our population.</p><p><strong>Dwarkesh Patel</strong></p><p>I mean, there&#8217;s this concern. If you think human resources are the future, right now if it&#8217;s the skilled labor for manufacturing that&#8217;s determining who can build more humanoids, China has more of those. It manufactures more humanoids, therefore it gets the Optimi future first.</p><p><strong>Elon Musk</strong></p><p>Well, we&#8217;ll see. Maybe.</p><p><strong>Dwarkesh Patel</strong></p><p>It just keeps that exponential going. 
It seems like you&#8217;re sort of pointing out that getting to a million Optimi requires the manufacturing that the Optimi is supposed to help us get to. Right?</p><p><strong>Elon Musk</strong></p><p>You can close that recursive loop pretty quickly.</p><p><strong>John Collison</strong></p><p>With a small number of Optimi?</p><p><strong>Elon Musk</strong></p><p>Yeah. So you close the recursive loop to help the robots build the robots. Then we can try to get to tens of millions of units a year. Maybe. If you start getting to hundreds of millions of units a year, you&#8217;re going to be the most competitive country by far.</p><p>We definitely can&#8217;t win with just humans, because China has four times our population. Frankly, America has been winning for so long that&#8230; A pro sports team that&#8217;s been winning for a very long time tends to get complacent and entitled. That&#8217;s why they stop winning, because they don&#8217;t work as hard anymore. So frankly my observation is just that the average work ethic in China is higher than in the US. It&#8217;s not just that there&#8217;s four times the population, but the amount of work that people put in is higher.</p><p>So you can try to rearrange the humans, but you&#8217;re still one quarter of the&#8212;assuming that productivity is the same, which I think actually it might not be, I think China might have an advantage on productivity per person&#8212;we will do one quarter of the amount of things as China. So we can&#8217;t win on the human front.</p><p>Our birth rate has been low for a long time. The US birth rate&#8217;s been below replacement since roughly 1971. We&#8217;ve got a lot of people retiring, we&#8217;re close to more people domestically dying than being born. 
So we definitely can&#8217;t win on the human front, but we might have a shot at the robot front.</p><p><strong>John Collison</strong></p><p>Are there other things that you have wanted to manufacture in the past, but they&#8217;ve been too labor intensive or too expensive that now you can come back to and say, &#8220;oh, we can finally do the whatever, because we have Optimus?&#8221;</p><p><strong>Elon Musk</strong></p><p>Yeah, we&#8217;d like to build more ore refineries at Tesla. We just completed construction and have <a href="https://www.kxan.com/news/texas/tesla-lithium-refinery-largest-in-america-now-operating-in-texas/">begun lithium refining with our lithium refinery in Corpus Christi, Texas</a>. We have a nickel refinery, which is for the <a href="https://en.wikipedia.org/wiki/Cathode">cathode</a>, that&#8217;s here in Austin. This is the largest cathode refinery, largest nickel and lithium refinery, outside of China.</p><p>The cathode team would say, &#8220;we have the largest and the only, actually, cathode refinery in America.&#8221; Not just the largest, but it&#8217;s also the only.</p><p><strong>John Collison</strong></p><p>Many superlatives.</p><p><strong>Elon Musk</strong></p><p>So it was pretty big, even though it&#8217;s the only one. But there are other things. You could do a lot more refineries and help America be more competitive on refining capacity. There&#8217;s basically a lot of work for the Optimus to do that most Americans, very few Americans, frankly want to do.</p><p><strong>John Collison</strong></p><p>Is the refining work too dirty or what&#8217;s the&#8212;</p><p><strong>Elon Musk</strong></p><p>It&#8217;s not actually, no. We don&#8217;t have toxic emissions from the refinery or anything. 
The cathode nickel refinery is in Travis County.</p><p><strong>John Collison</strong></p><p>Why can&#8217;t you do it with humans?</p><p><strong>Elon Musk</strong></p><p>You can, you just run out of humans.</p><p><strong>John Collison</strong></p><p>Ah, I see. Okay.</p><p><strong>Elon Musk</strong></p><p>No matter what you do, you have one quarter of the number of humans in America than China. So if you have them do this thing, they can&#8217;t do the other thing. So then how do you build this refining capacity? Well, you could do it with Optimi.</p><p>Not very many Americans are pining to do refining. I mean, how many have you run into? Very few. Very few pining to refine.</p><p><strong>Dwarkesh Patel</strong></p><p><a href="https://en.wikipedia.org/wiki/BYD_Auto">BYD</a> is reaching Tesla production or sales in quantity. What do you think happens in global markets as Chinese production in EVs scales up?</p><p><strong>Elon Musk</strong></p><p>China is extremely competitive in manufacturing. So I think there&#8217;s going to be a massive flood of Chinese vehicles and basically most manufactured things. As it is, as I said, China is probably doing twice as much refining as the rest of the world combined. So if you go down to fourth and fifth-tier supply chain stuff&#8230;</p><p>At the base level, you&#8217;ve got energy, then you&#8217;ve got mining and refining. Those foundation layers are, like I said, as a rough guess, China&#8217;s doing twice as much refining as the rest of the world combined. So any given thing is going to have Chinese content because China&#8217;s doing twice as much refining work as the rest of the world. But they&#8217;ll go all the way to the finished product with the cars.</p><p>I mean China is a powerhouse. I think this year China will exceed three times US electricity output. Electricity output is a reasonable proxy for the economy. In order to run the factories and run everything, you need electricity. 
It&#8217;s a good proxy for the real economy. If China passes three times the US electricity output, it means that its industrial capacity&#8212;as rough approximation&#8212;will be three times that of the US.</p><p><strong>Dwarkesh Patel</strong></p><p>Reading between the lines, it sounds like what you&#8217;re saying is absent some sort of humanoid recursive miracle in the next few years, on the whole manufacturing/energy/raw materials chain, China will just dominate whether it comes to AI or manufacturing EVs or manufacturing humanoids.</p><p><strong>Elon Musk</strong></p><p>In the absence of breakthrough innovations in the US, China will utterly dominate.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting.</p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>John Collison</strong></p><p>Robotics being the main breakthrough innovation.</p><p><strong>Elon Musk</strong></p><p>Well, to scale AI in space, basically you need humanoid robots, you need real-world AI, you need a million tons a year to orbit. Let&#8217;s just say if we get the mass driver on the moon going, my favorite thing, then I think&#8212;</p><p><strong>John Collison</strong></p><p>We&#8217;ll have solved all our problems.</p><p><strong>Elon Musk</strong></p><p>I call that winning. I call it winning, big time.</p><p><strong>John Collison</strong></p><p>You can finally be satisfied. You&#8217;ve done something.</p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>John Collison</strong></p><p>You have the mass driver on the moon.</p><p><strong>Elon Musk</strong></p><p>I just want to see that thing in operation.</p><p><strong>John Collison</strong></p><p>Was that out of some sci-fi or where did you&#8230;?</p><p><strong>Elon Musk</strong></p><p>Well, actually, there is a <a href="https://en.wikipedia.org/wiki/Robert_A._Heinlein">Heinlein</a> book. 
<em><a href="https://amzn.to/4adaepL">The Moon is a Harsh Mistress</a></em>.</p><p><strong>John Collison</strong></p><p>Okay, yeah, but that&#8217;s slightly different. That&#8217;s a <a href="https://en.wikipedia.org/wiki/Gravity_assist">gravity slingshot</a> or...</p><p><strong>Elon Musk</strong></p><p>No, they have a mass driver on the Moon.</p><p><strong>John Collison</strong></p><p>Okay, yeah, but they use that to attack Earth. So maybe it&#8217;s not the greatest...</p><p><strong>Elon Musk</strong></p><p>Well they use that to&#8230; assert their independence.</p><p><strong>John Collison</strong></p><p>Exactly. What are your plans for the mass driver on the Moon?</p><p><strong>Elon Musk</strong></p><p>They asserted their independence. Earth government disagreed and they lobbed things until Earth government agreed.</p><p><strong>John Collison</strong></p><p>That book is a hoot. I found that book much better than his other one that everyone reads, <em><a href="https://amzn.to/3ZRVhES">Stranger in a Strange Land</a></em>.</p><p><strong>Elon Musk</strong></p><p><a href="https://en.wikipedia.org/wiki/Grok">&#8220;Grok&#8221;</a> comes from <em>Stranger in a Strange Land</em>. The first two-thirds of <em>Stranger in a Strange Land</em> are good, and then it gets very weird in the third portion. But there are still some good concepts in there.</p><h3>01:44:16 - Lessons from running SpaceX</h3><p><strong>John Collison</strong></p><p>One thing we were discussing a lot is your system for managing people. You interviewed the first few thousand of SpaceX employees and lots of other companies.</p><p><strong>Elon Musk</strong></p><p>It obviously doesn&#8217;t scale.</p><p><strong>John Collison</strong></p><p>Well, yes, but what doesn&#8217;t scale?</p><p><strong>Elon Musk</strong></p><p>Me.</p><p><strong>John Collison</strong></p><p>Sure, sure. I know that. But what are you looking for?</p><p><strong>Elon Musk</strong></p><p>There literally are not enough hours in the day. 
It&#8217;s impossible.</p><p><strong>John Collison</strong></p><p>But what are you looking for that someone else who&#8217;s good at interviewing and hiring people&#8230; What&#8217;s the <em>je ne sais quoi</em>?</p><p><strong>Elon Musk</strong></p><p>At this point, I might have more training data on evaluating technical talent especially&#8212;talent of all kinds I suppose, but technical talent especially&#8212;given that I&#8217;ve done so many technical interviews and then seen the results. So my training set is enormous and has a very wide range.</p><p>Generally, the things I ask for are bullet points for evidence of exceptional ability. These things can be pretty off the wall. It doesn&#8217;t need to be in the specific domain, but evidence of exceptional ability. So if somebody can cite even one thing, but let&#8217;s say three things, where you go, &#8220;Wow, wow, wow,&#8221; then that&#8217;s a good sign.</p><p><strong>Dwarkesh Patel</strong></p><p>Why do you have to be the one to determine that?</p><p><strong>Elon Musk</strong></p><p>No, I don&#8217;t. I can&#8217;t be. It&#8217;s impossible. The total headcount across all companies is 200,000 people.</p><p><strong>John Collison</strong></p><p>But in the early days, what was it that you were looking for that couldn&#8217;t be delegated in those interviews?</p><p><strong>Elon Musk</strong></p><p>I guess I need to build my training set. It&#8217;s not like I batted a thousand here. I would make mistakes, but then I&#8217;d be able to see where I thought somebody would work out well, but they didn&#8217;t. Then why did they not work out well? What can I do, I guess RL myself, to in the future have a better batting average when interviewing people? 
My batting average is still not perfect, but it&#8217;s very high.</p><p><strong>Dwarkesh Patel</strong></p><p>What are some surprising reasons people don&#8217;t work out?</p><p><strong>Elon Musk</strong></p><p>Surprising reasons&#8230;</p><p><strong>Dwarkesh Patel</strong></p><p>Like, they don&#8217;t understand technical domain, et cetera, et cetera. But you&#8217;ve got the long tail now of like, &#8220;I was really excited about this person. It didn&#8217;t work out.&#8221; Curious why that happens.</p><p><strong>Elon Musk</strong></p><p>Generally what I tell people&#8212;I tell myself, I guess, aspirationally&#8212;is, don&#8217;t look at the resume. Just believe your interaction. The resume may seem very impressive and it&#8217;s like, &#8220;Wow, the resume looks good.&#8221; But if the conversation after 20 minutes is not &#8220;wow,&#8221; you should believe the conversation, not the paper.</p><p><strong>John Collison</strong></p><p>I feel like part of your method is that&#8230; There was this meme in the media a few years back about Tesla being a revolving door of executive talent. Whereas actually, I think when you look at it, Tesla&#8217;s had a very consistent and internally promoted executive bench over the past few years.</p><p>Then at SpaceX, you have all these folks like <a href="https://fortune.com/2025/03/06/spacex-elon-musk-tesla-blue-origin-space-starlink-rockets-satellites/">Mark Juncosa</a> and <a href="https://en.wikipedia.org/wiki/Steve_Davis_(executive)">Steve Davis</a>&#8212;</p><p><strong>Elon Musk</strong></p><p>Steve Davis runs The Boring Company these days.</p><p><strong>John Collison</strong></p><p><a href="https://x.com/Boca_Bill_R">Bill Riley</a>, and folks like that. It feels like part of what has worked well is having very capable technical deputies. What do all of those people have in common?</p><p><strong>Elon Musk</strong></p><p>Well, the Tesla senior team, at this point has probably got an average tenure of 10-12 years. 
It&#8217;s quite long. But there were times when Tesla went through an extremely rapid growth phase, so things were just somewhat sped up. As you know, a company goes through different orders of magnitude of size. The people who can help manage, say, a 50-person company are not necessarily the same as those for a 500-person company, a 5,000-person company, or a 50,000-person company.</p><p><strong>John Collison</strong></p><p>You outgrew people.</p><p><strong>Elon Musk</strong></p><p>It&#8217;s just not the same team. It&#8217;s not always the same team. So if a company is growing very rapidly, the rate at which executive positions will change will also be proportionate to the rapidity of the growth generally.</p><p>Tesla had a further challenge where when Tesla had very successful periods, we would be relentlessly recruited from. Like, relentlessly. When Apple had their electric car program, they were carpet-bombing Tesla with recruiting calls. Engineers just unplugged their phones.</p><p><strong>John Collison</strong></p><p>&#8220;I&#8217;m trying to get work done here.&#8221;</p><p><strong>Elon Musk</strong></p><p>Yeah. &#8220;If I get one more call from an Apple recruiter&#8230;&#8221; But their opening offer without any interview would be like double the compensation at Tesla. So we had a bit of the &#8220;Tesla pixie dust&#8221; thing where it&#8217;s like, &#8220;Oh, if you hire a Tesla executive, suddenly everything&#8217;s going to be successful.&#8221;</p><p>I&#8217;ve fallen prey to the pixie dust thing as well, where it&#8217;s like, &#8220;Oh, we&#8217;ll hire someone from Google or Apple and they&#8217;ll be immediately successful,&#8221; but that&#8217;s not how it works. People are people. There&#8217;s no magical pixie dust. So when we had the pixie dust problem, we would get relentlessly recruited from.</p><p>Also, with Tesla&#8217;s engineering especially being primarily in Silicon Valley, it&#8217;s easier for people to just...
They don&#8217;t have to change their life very much. Their commute&#8217;s going to be the same.</p><p><strong>John Collison</strong></p><p>So how do you prevent that? How do you prevent the pixie dust effect where everyone&#8217;s trying to poach all your people?</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t think there&#8217;s much we can do to stop it. That&#8217;s one of the reasons why Tesla&#8230; Really, being in Silicon Valley and having the pixie dust thing at the same time meant that there was just a very, very aggressive recruitment.</p><p><strong>John Collison</strong></p><p>Presumably being in Austin helps then?</p><p><strong>Elon Musk</strong></p><p>Austin, it helps. Tesla still has a majority of its engineering in California. Getting engineers to move&#8230; I call it the &#8220;significant other&#8221; problem.</p><p><strong>John Collison</strong></p><p>Yes, &#8220;significant others&#8221; have jobs.</p><p><strong>Elon Musk</strong></p><p>Exactly. So for <a href="https://en.wikipedia.org/wiki/SpaceX_Starbase">Starbase</a> that was particularly difficult, since the odds of finding a non-SpaceX job&#8230;</p><p><strong>John Collison</strong></p><p>In Brownsville, Texas&#8230;</p><p><strong>Elon Musk</strong></p><p>&#8230;are pretty low. It&#8217;s quite difficult. It&#8217;s like a technology monastery thing, remote and mostly dudes.</p><p><strong>Dwarkesh Patel</strong></p><p>Not much of an improvement over SF.</p><p><strong>John Collison</strong></p><p>If you go back to these people who&#8217;ve really been very effective in a technical capacity at Tesla, at SpaceX, and those sorts of places, what do you think they have in common other than... Is it just that they&#8217;re very sharp on the rocketry or the technical foundations, or do you think it&#8217;s something organizational?</p><p>Is it something about their ability to work with you? Is it their ability to be flexible but not too flexible? 
What makes a good sparring partner for you?</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t think of it as a sparring partner. If somebody gets things done, I love them, and if they don&#8217;t, I hate them. So it&#8217;s pretty straightforward. It&#8217;s not like some idiosyncratic thing. If somebody executes well, I&#8217;m a huge fan, and if they don&#8217;t, I&#8217;m not. But it&#8217;s not about mapping to my idiosyncratic preferences. I certainly try not to have it be mapping to my idiosyncratic preferences.</p><p>Generally, I think it&#8217;s a good idea to hire for talent and drive and trustworthiness. And I think goodness of heart is important. I underweighted that at one point. So, are they a good person? Trustworthy? Smart and talented and hardworking? If so, you can add domain knowledge.</p><p>But those fundamental traits, those fundamental properties, you cannot change. So most of the people who are at Tesla and SpaceX did not come from the aerospace industry or the auto industry.</p><p><strong>Dwarkesh Patel</strong></p><p>What has had to change most about your management style as your companies have scaled from 100 to 1,000 to 10,000 people? You&#8217;re known for micromanagement, just getting into the details of things.</p><p><strong>Elon Musk</strong></p><p>Nano management, please. Pico management. Femto management.</p><p><strong>John Collison</strong></p><p>Keep going.</p><p><strong>Elon Musk</strong></p><p>We&#8217;re going to go all the way down to <a href="https://en.wikipedia.org/wiki/Planck_constant">Planck&#8217;s constant</a>. All the way down to the <a href="https://en.wikipedia.org/wiki/Uncertainty_principle">Heisenberg uncertainty principle</a>.</p><p><strong>Dwarkesh Patel</strong></p><p>Are you still able to get into details as much as you want? Would your companies be more successful if they were smaller?
How do you think about that?</p><p><strong>Elon Musk</strong></p><p>Because I have a fixed amount of time in the day, my time is necessarily diluted as things grow and as the span of activity increases. It&#8217;s impossible for me to actually be a micromanager because that would imply I have some thousands of hours per day. It is a logical impossibility for me to micromanage things.</p><p>Now, there are times when I will drill down into a specific issue because that specific issue is the limiting factor on the progress of the company. The reason for drilling into some very detailed item is because it is the limiting factor. It&#8217;s not arbitrarily drilling into tiny things.</p><p>From a time standpoint, it is physically impossible for me to arbitrarily go into tiny things that don&#8217;t matter. That would result in failure. But sometimes the tiny things are decisive in victory.</p><p><strong>John Collison</strong></p><p>Famously, you <a href="https://www.popularmechanics.com/space/rockets/a25953663/elon-musk-spacex-bfr-stainless-steel/">switched the Starship design from composites to steel</a>.</p><p><strong>Elon Musk</strong></p><p>Yes.</p><p><strong>John Collison</strong></p><p>You made that decision. That wasn&#8217;t people going around saying, &#8220;Oh, we found something better, boss.&#8221; That was you encouraging people against some resistance. Can you tell us how you came to that whole concept of the steel switch?</p><p><strong>Elon Musk</strong></p><p>Desperation, I&#8217;d say. Originally, we were going to make <a href="https://en.wikipedia.org/wiki/SpaceX_Starship_design_history">Starship</a> out of <a href="https://en.wikipedia.org/wiki/Carbon_fibers">carbon fiber</a>. Carbon fiber is pretty expensive. When you do volume production, you can get any given thing to start to approach its material cost.</p><p>The problem with carbon fiber is that material cost is still very high. 
Particularly if you go for a high-strength specialized carbon fiber that can handle <a href="https://en.wikipedia.org/wiki/Liquid_oxygen">cryogenic oxygen</a>, it&#8217;s roughly 50 times the cost of steel. At least in theory, it would be lighter. People generally think of steel as being heavy and carbon fiber as being light.</p><p>For room temperature applications, like a Formula 1 car, static aero structure, or any kind of aero structure really, you&#8217;re probably going to be better off with carbon fiber. The problem is that we were trying to make this enormous rocket out of carbon fiber and our progress was extremely slow.</p><p><strong>John Collison</strong></p><p>It had been picked in the first place just because it&#8217;s light?</p><p><strong>Elon Musk</strong></p><p>Yes. At first glance, most people would think that the choice for making something light would be carbon fiber. The thing is that when you make something very enormous out of carbon fiber and then you try to have the carbon fiber be efficiently cured, meaning not room temperature cured, because sometimes you got 50 plies of carbon fiber&#8230; Carbon fiber is really carbon string and glue. In order to have high strength, you need an <a href="https://en.wikipedia.org/wiki/Autoclave">autoclave</a>. Something that&#8217;s essentially a high pressure oven. If you have something that&#8217;s gigantic, that one&#8217;s got to be bigger than the rocket.</p><p>We were trying to make an autoclave that&#8217;s bigger than any autoclave that&#8217;s ever existed. Or you can do room temperature cure, which takes a long time and has issues. The final issue is that we were just making very slow progress with carbon fiber.</p><p><strong>Dwarkesh Patel</strong></p><p>The meta question is why it had to be you who made that decision. 
There are many engineers on your team.</p><p><strong>John Collison</strong></p><p>How did the team not arrive at steel?</p><p><strong>Dwarkesh Patel</strong></p><p>Yeah, exactly. This is part of a broader question, understanding your comparative advantage at your companies.</p><p><strong>Elon Musk</strong></p><p>Because we were making very slow progress with carbon fiber, I was like, &#8220;Okay, we&#8217;ve got to try something else.&#8221; For the Falcon 9, the primary airframe is made of aluminum lithium, which has a very good <a href="https://en.wikipedia.org/wiki/Specific_strength">strength-to-weight</a>. Actually, it has about the same, maybe better, strength-to-weight for its application than carbon fiber. But aluminum lithium is very difficult to work with.</p><p>In order to weld it, you have to do something called <a href="https://en.wikipedia.org/wiki/Friction_stir_welding">friction stir welding</a>, where you join the metal without entering the liquid phase. It&#8217;s kind of wild that you can do that. But with this particular type of welding, you can do that. It&#8217;s very difficult. Let&#8217;s say you want to make a modification or attach something to aluminum lithium, you now have to use a mechanical attachment with seals. You can&#8217;t weld it on. So I wanted to avoid using aluminum lithium for the primary structure for Starship.</p><p>There was this very special grade of carbon fiber that had very good mass properties. With a rocket, you&#8217;re really trying to maximize the percentage of the rocket that is <a href="https://en.wikipedia.org/wiki/Propellant">propellant</a>, minimize the mass obviously. But like I said, we were making very slow progress. I said, &#8220;At this rate, we&#8217;re never going to get to Mars. So we&#8217;ve got to think of something else.&#8221;</p><p>I didn&#8217;t want to use aluminum lithium because of the difficulty of friction stir welding, especially doing that at scale.
It was hard enough at 3.6 meters in diameter, let alone at 9 meters or above. Then I said, &#8220;What about steel?&#8221;</p><p>I had a clue here because some of the early US rockets had used very thin steel. The <a href="https://en.wikipedia.org/wiki/Atlas_(rocket_family)">Atlas rocket</a>s had used a steel <a href="https://en.wikipedia.org/wiki/Balloon_tank">balloon tank</a>. It&#8217;s not like steel had never been used before. It actually had been used. When you look at the material properties of <a href="https://en.wikipedia.org/wiki/Stainless_steel">stainless steel</a>, <a href="https://www.worthingtonsteel.com/flatrolledsteel/steel-expertise/dictionary-terms/full-hard-steel">full-hard</a>, <a href="https://en.wikipedia.org/wiki/Work_hardening">strain hardened</a> stainless steel, at cryogenic temperature the strength-to-weight is actually similar to carbon fiber.</p><p>If you look at material properties at room temperature, it looks like the steel is going to be twice as heavy. But if you look at the material properties at cryogenic temperature of full-hard steel, stainless of particular grades, then you actually get to a similar strength-to-weight as carbon fiber.</p><p>In the case of Starship, both the fuel and the oxidizer are cryogenic. For Falcon 9, the fuel is <a href="https://en.wikipedia.org/wiki/RP-1">rocket propellant-grade kerosene</a>, basically a very pure form of jet fuel. That is roughly room temperature. Although we do actually chill it slightly below, we chill it like a beer.</p><p><strong>John Collison</strong></p><p>Delicious.</p><p><strong>Elon Musk</strong></p><p>We do chill it, but it&#8217;s not cryogenic. In fact, if we made it cryogenic, it would just turn to wax. But for Starship, it&#8217;s liquid methane and liquid oxygen. They are liquid at similar temperatures. Basically, almost the entire primary structure is at cryogenic temperature.
So then you&#8217;ve got a <a href="https://en.wikipedia.org/wiki/Austenitic_stainless_steel">300-series stainless</a> that&#8217;s strain hardened. Because almost everything is at cryogenic temperature, it actually has similar strength-to-weight to carbon fiber.</p><p>But it costs 50x less in raw material and is very easy to work with. You can weld stainless steel outdoors. You could smoke a cigar while welding stainless steel. It&#8217;s very resilient. You can modify it easily. If you want to attach something, you just weld it right on. Very easy to work with, very low cost.</p><p>Like I said, at cryogenic temperature, it&#8217;s similar strength-to-weight to carbon fiber. Then when you factor in that we have a much reduced <a href="https://en.wikipedia.org/wiki/Heat_shield#Spacecraft">heat shield</a> mass, because the melting point of steel is much greater than the melting point of aluminum&#8230; It&#8217;s about twice the melting point of aluminum.</p><p><strong>John Collison</strong></p><p>So you can just run the rocket much hotter?</p><p><strong>Elon Musk</strong></p><p>Yes, especially for the ship which is coming in like a blazing meteor. You can greatly reduce the mass of the heat shield. You can cut the mass of the windward part of the heat shield, maybe in half, and you don&#8217;t need any heat shielding on the leeward side.</p><p>The net result is that actually the steel rocket weighs less than the carbon fiber rocket, because the resin in the carbon fiber rocket starts to melt. Basically, carbon fiber and aluminum have about the same operating temperature capabilities, whereas steel can operate at twice the temperature. These are very rough approximations.</p><p><strong>John Collison</strong></p><p>I won&#8217;t build the rocket.</p><p><strong>Elon Musk</strong></p><p>What I mean is people will say, &#8220;Oh, he said this twice.
It&#8217;s actually 0.8.&#8221; I&#8217;m like, shut up, assholes.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s what the main comment&#8217;s going to be about.</p><p><strong>Elon Musk</strong></p><p>God damn it. The point is, in retrospect, we should have started with steel in the beginning. It was dumb not to do steel.</p><p><strong>John Collison</strong></p><p>Okay, but to play this back to you, what I&#8217;m hearing is that steel was a riskier, less proven path, other than the early US rockets. Versus carbon fiber was a worse but more proven-out path. So you need to be the one to push for, &#8220;Hey, we&#8217;re going to do this riskier path and just figure it out.&#8221; So you&#8217;re fighting a sort of conservatism in a sense.</p><p><strong>Elon Musk</strong></p><p>That&#8217;s why I initially said that the issue is that we weren&#8217;t making fast enough progress. We were having trouble making even a small barrel section of the carbon fiber that didn&#8217;t have wrinkles in it. Because at that large scale, you have to have many plies, many layers of the carbon fiber. You&#8217;ve got to cure it and you&#8217;ve got to cure it in such a way that it doesn&#8217;t have any wrinkles or defects.</p><p>Carbon fiber is much less resilient than steel. It has much less toughness. Stainless steel will stretch and bend; carbon fiber will tend to shatter. Toughness being the area under the stress-strain curve. You&#8217;re generally going to do better with steel, stainless steel to be precise.</p><p><strong>John Collison</strong></p><p>One other Starship question. So I visited Starbase, I think it was two years ago, with <a href="https://x.com/samteller?lang=en">Sam Teller</a>, and that was awesome.
It was very cool to see, in a whole bunch of ways.</p><p>One thing I noticed was that people really took pride in the simplicity of things, where everyone wants to tell you how Starship is just a big soda can, and we&#8217;re hiring welders, and if you can weld in any industrial project, you can weld here. But there&#8217;s a lot of pride in the simplicity.</p><p><strong>Elon Musk</strong></p><p>Well, factually Starship is a very complicated rocket.</p><p><strong>John Collison</strong></p><p>So that&#8217;s what I&#8217;m getting at. Are things simple or are they complex?</p><p><strong>Elon Musk</strong></p><p>I think maybe just what they&#8217;re trying to say is that you don&#8217;t have to have prior experience in the rocket industry to work on Starship. Somebody just needs to be smart and work hard and be trustworthy and they can work on a rocket. They don&#8217;t need prior rocket experience. Starship is the most complicated machine ever made by humans, by a long shot.</p><p><strong>John Collison</strong></p><p>In what regards?</p><p><strong>Elon Musk</strong></p><p>Anything, really. I&#8217;d say there isn&#8217;t a more complex machine. I&#8217;d say that pretty much any project I can think of would be easier than this. That&#8217;s why nobody has ever made a fully reusable orbital rocket. It&#8217;s a very hard problem. Many smart people have tried before, very smart people with immense resources, and they failed.</p><p>And we haven&#8217;t succeeded yet. Falcon is partially reusable, but the upper stage is not. <a href="https://en.wikipedia.org/wiki/SpaceX_Starship#Block_3">Starship Version 3</a>, I think this design can be fully reusable. 
That full reusability is what will enable us to become a multi-planet civilization. Any technical problem, even like a <a href="https://en.wikipedia.org/wiki/Large_Hadron_Collider">Hadron Collider</a> or something like that, is an easier problem than this.</p><p><strong>John Collison</strong></p><p>We spent a lot of time on bottlenecks. Can you say what the current Starship bottlenecks are, even at a high level?</p><p><strong>Elon Musk</strong></p><p>Trying to make it not explode, generally. It really wants to explode.</p><p><strong>John Collison</strong></p><p>That old chestnut. All those combustible materials.</p><p><strong>Elon Musk</strong></p><p>We&#8217;ve had two boosters explode on the test stand. One obliterated the entire test facility. So it only takes that one mistake. The amount of energy contained in a Starship is insane.</p><p><strong>John Collison</strong></p><p>Is that why it&#8217;s harder than Falcon? It&#8217;s because it&#8217;s just more energy?</p><p><strong>Elon Musk</strong></p><p>It&#8217;s a lot of new technology. It&#8217;s pushing the performance envelope. The <a href="https://en.wikipedia.org/wiki/SpaceX_Raptor#Raptor_3">Raptor 3 engine</a> is a very, very advanced engine. It&#8217;s by far the best rocket engine ever made. But it desperately wants to blow up. Just to put things into perspective here, on liftoff the rocket is generating over 100 gigawatts of power. That&#8217;s 20% of US electricity.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s actually insane.</p><p><strong>John Collison</strong></p><p>It&#8217;s a great comparison.</p><p><strong>Elon Musk</strong></p><p>While not exploding.</p><p><strong>John Collison</strong></p><p>Sometimes.</p><p><strong>Elon Musk</strong></p><p>Sometimes, yes. So I was like, how does it not explode? There&#8217;s thousands of ways that it could explode and only one way that it doesn&#8217;t.
So we want it not only to really not explode, but fly reliably on a daily basis, like once per hour. Obviously, if it blows up a lot, it&#8217;s very difficult to maintain that launch cadence.</p><p><strong>John Collison</strong></p><p>Yes.</p><p><strong>Elon Musk</strong></p><p>What&#8217;s the single biggest remaining problem for Starship? It&#8217;s having the heat shield be reusable. No one&#8217;s ever made a reusable orbital heat shield. So the heat shield&#8217;s gotta make it through the ascent phase without shucking a bunch of tiles, and then it&#8217;s gotta come back in and also not lose a bunch of tiles or overheat the main airframe.</p><p><strong>John Collison</strong></p><p>Isn&#8217;t that hard because it&#8217;s fundamentally a consumable?</p><p><strong>Elon Musk</strong></p><p>Well, yes, but your brake pads in your car are also consumable, but they last a very long time.</p><p><strong>John Collison</strong></p><p>Fair.</p><p><strong>Elon Musk</strong></p><p>So it just needs to last a very long time. We have brought the ship back and had it do a soft landing in the ocean. We&#8217;ve done that a few times. But it lost a lot of tiles. It was not reusable without a lot of work. Even though it did come to a soft landing, it would not have been reusable without a lot of work.</p><p>So it&#8217;s not really reusable in that sense. That&#8217;s the biggest problem that remains, a fully reusable heat shield. You want to be able to land it, refill propellant and fly again. You can&#8217;t do this laborious inspection of 40,000 tiles type of thing.</p><p><strong>Dwarkesh Patel</strong></p><p>When I read biographies of yours, it seems like you&#8217;re just able to drive the sense of urgency and drive the sense of &#8220;this is the thing that can scale.&#8221; I&#8217;m curious why you think other organizations of your&#8230;</p><p>SpaceX and Tesla are really big companies now. You&#8217;re still able to keep that culture.
What goes wrong with other companies such that they&#8217;re not able to do that?</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t know.</p><p><strong>Dwarkesh Patel</strong></p><p>Like today, you said you had a bunch of SpaceX meetings. What is it that you&#8217;re doing there that&#8217;s keeping that?</p><p><strong>John Collison</strong></p><p>It&#8217;s adding urgency?</p><p><strong>Elon Musk</strong></p><p>Well, I don&#8217;t know. I guess the urgency is going to come from whoever is leading the company. I have a maniacal sense of urgency. So that maniacal sense of urgency projects through the rest of the company.</p><p><strong>Dwarkesh Patel</strong></p><p>Is it because of consequences? They&#8217;re like, &#8220;Elon set a crazy deadline, but if I don&#8217;t get it, I know what happens to me.&#8221; Is it just that you&#8217;re able to identify bottlenecks and get rid of them so people can move fast? How do you think about why your companies are able to move fast?</p><p><strong>Elon Musk</strong></p><p>I&#8217;m constantly addressing the limiting factor. On the deadlines front, I generally actually try to aim for a deadline that I at least think is at the 50th percentile. So it&#8217;s not like an impossible deadline, but it&#8217;s the most aggressive deadline I can think of that could be achieved with 50% probability. Which means that it&#8217;ll be late half the time.</p><p>There is a <a href="https://en.wikipedia.org/wiki/Gas_laws">law of gas expansion</a> that applies to schedules. If you said we&#8217;re going to do something in five years, which to me is like infinity time, it will expand to fill the available schedule and it&#8217;ll take five years.</p><p>Physics will limit how fast you can do certain things. So scaling up manufacturing, there&#8217;s a rate at which you can move the atoms and scale manufacturing. That&#8217;s why you can&#8217;t instantly make a million units a year of something. 
You&#8217;ve got to design the manufacturing line. You&#8217;ve got to bring it up. You&#8217;ve got to ride the S-curve of production.</p><p>What can I say that&#8217;s actually helpful to people? Generally, a maniacal sense of urgency is a very big deal. You want to have an aggressive schedule and you want to figure out what the limiting factor is at any point in time and help the team address that limiting factor.</p><p><strong>John Collison</strong></p><p>So Starlink was slowly in the works for many years.</p><p><strong>Elon Musk</strong></p><p>We talked about it all the way in the beginning of the company.</p><p><strong>John Collison</strong></p><p>So then there was a team you had built in Redmond, and then at one point you <a href="https://www.latimes.com/business/la-fi-spacex-starlink-20181031-story.html">decided this team is just not cutting it</a>. It went for a few years slowly, and so why didn&#8217;t you act earlier, and why did you act when you did? Why was that the right moment at which to act?</p><p><strong>Elon Musk</strong></p><p>I have these very detailed engineering reviews weekly. That&#8217;s maybe a very unusual level of granularity. I don&#8217;t know anyone who runs a company, or at least a manufacturing company, that goes into the level of detail that I do. It&#8217;s not as though... I have a pretty good understanding of what&#8217;s actually going on because we go through things in detail.</p><p>I&#8217;m a big believer in skip-level meetings where instead of having the person that reports to me say things, it&#8217;s everyone that reports to them saying something in the technical review. And there can&#8217;t be advance preparation. Otherwise you&#8217;re going to get &#8220;glazed&#8221;, as I say these days.</p><p><strong>John Collison</strong></p><p>Exactly. Very Gen Z of you.</p><p><strong>Dwarkesh Patel</strong></p><p>How do you prevent advance preparation?
Do you call on them randomly?</p><p><strong>Elon Musk</strong></p><p>No, I just go around the room. Everyone provides an update. It&#8217;s a lot of information to keep in your head. If you have meetings weekly or twice weekly, you&#8217;ve got a snapshot of what that person said. You can then plot the progress points. You can sort of mentally plot the points on a curve and say, &#8220;are we converging to a solution or not?&#8221;</p><p>I&#8217;ll take drastic action only when I conclude that success is not in the set of possible outcomes. So when I finally reach the conclusion that unless drastic action is done, we have no chance of success, then I must take drastic action. I came to that conclusion in 2018, took drastic action and fixed the problem.</p><p><strong>Dwarkesh Patel</strong></p><p>You&#8217;ve got many, many companies. In each of them it sounds like you do this kind of deep engineering understanding of what the relevant bottlenecks are so you can do these reviews with people.</p><p>You&#8217;ve been able to scale it up to five, six, seven companies. Within each of these companies, you have many different mini companies. What determines the max amount here? Because you have like 80 companies&#8230;?</p><p><strong>Elon Musk</strong></p><p>80? No.</p><p><strong>Dwarkesh Patel</strong></p><p>But you have so many already. That&#8217;s already remarkable.</p><p><strong>John Collison</strong></p><p>By this current number.</p><p><strong>Dwarkesh Patel</strong></p><p>Exactly.</p><p><strong>John Collison</strong></p><p>We can barely keep one company together.</p><p><strong>Elon Musk</strong></p><p>It depends on the situation. I actually don&#8217;t have regular meetings with The Boring Company, so The Boring Company is sort of cruising along. Basically, if something is working well and making good progress, then there&#8217;s no point in me spending time on it.</p><p>I actually allocate time according to where the limiting factor is.
Where are things problematic? What are we pushing against? What is holding us back? I focus, at the risk of saying the words too many times, on the limiting factor.</p><p>The irony is if something&#8217;s going really well, they don&#8217;t see much of me. But if something is going badly, they&#8217;ll see a lot of me. Or not even badly&#8230;</p><p><strong>John Collison</strong></p><p>If something is the limiting factor.</p><p><strong>Elon Musk</strong></p><p>The limiting factor, exactly. It&#8217;s not exactly going badly but it&#8217;s the thing that we need to make go faster.</p><p><strong>John Collison</strong></p><p>When something&#8217;s a limiting factor at SpaceX or Tesla, are you talking weekly and daily with the engineer that&#8217;s working on it? How does that actually work?</p><p><strong>Elon Musk</strong></p><p>Most things that are the limiting factor are weekly and some things are twice weekly. The AI5 chip review is twice weekly. Every Tuesday and Saturday is the chip review.</p><p><strong>John Collison</strong></p><p>Is it open-ended in how long it goes?</p><p><strong>Elon Musk</strong></p><p>Technically, yes, but usually it&#8217;s two or three hours. Sometimes less. It depends on how much information we&#8217;ve got to go through.</p><p><strong>John Collison</strong></p><p>That&#8217;s another thing. I&#8217;m just trying to tease out the differences here because the outcomes seem quite different. I think it&#8217;s interesting to know what inputs are different. It feels like in the corporate world, one, like you were saying, the CEO doing engineering reviews does not always happen despite the fact that that is what the company is doing.</p><p>But then time is often pretty finely sliced into half-hour meetings or even 15-minute meetings. It seems like you hold more open-ended, &#8220;We&#8217;re talking about it until we figure it out&#8221; type things.</p><p><strong>Elon Musk</strong></p><p>Sometimes.
But most of them seem to more or less stay on time. Today&#8217;s Starship engineering review went a bit longer because there were more topics to discuss. They&#8217;re trying to figure out how to scale to a million-plus tons to orbit per year. It&#8217;s quite challenging.</p><h3>02:20:08 - DOGE</h3><p><strong>Dwarkesh Patel</strong></p><p>Can I ask a question? You said about Optimus and AI that they&#8217;re going to result in double-digit growth rates within a matter of years.</p><p><strong>Elon Musk</strong></p><p>Oh, like the economy? Yes. I think that&#8217;s right.</p><p><strong>Dwarkesh Patel</strong></p><p>What was the point of the <a href="https://en.wikipedia.org/wiki/Department_of_Government_Efficiency">DOGE</a> cuts if the economy is going to grow so much?</p><p><strong>Elon Musk</strong></p><p>Well, I think waste and fraud are not good things to have. I was actually pretty worried about... In the absence of AI and robotics, we&#8217;re actually totally screwed because the national debt is piling up like crazy. The interest payments on the national debt exceed the military budget, which is a trillion dollars. So we have over a trillion dollars just in interest payments. I was pretty concerned about that. Maybe if I spend some time, we can slow down the bankruptcy of the United States and give us enough time for the AI and robots to help solve the national debt.</p><p>Or not help solve, it&#8217;s the only thing that could solve the national debt. We are 1000% going to go bankrupt as a country, and fail as a country, without AI and robots. Nothing else will solve the national debt. We just need enough time to build the AI and robots to not go bankrupt before then.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess the thing I&#8217;m curious about is, when DOGE starts you have this enormous ability to enact reform.</p><p><strong>Elon Musk</strong></p><p>Not that enormous.</p><p><strong>Dwarkesh Patel</strong></p><p>Sure.
I totally buy your point that it&#8217;s important that AI and robotics drive productivity improvements, drive GDP growth. But why not just directly go after the things you were pointing out, like the tariffs on certain components, or permitting?</p><p><strong>Elon Musk</strong></p><p>I&#8217;m not the president. And it is very hard to cut things that are obvious waste and fraud, like ridiculous waste and fraud. What I discovered is that it&#8217;s extremely difficult even to cut very obvious waste and fraud from the government because the government has to operate on who&#8217;s complaining.</p><p>If you cut off payments to fraudsters, they immediately come up with the most sympathetic-sounding reasons to continue the payment. They don&#8217;t say, &#8220;Please keep the fraud going.&#8221; They&#8217;re like, &#8220;You&#8217;re killing baby pandas.&#8221; Meanwhile, no baby pandas are dying. They&#8217;re just making it up. The fraudsters are capable of coming up with extremely compelling, heart-wrenching stories that are false, but nonetheless sound sympathetic. That&#8217;s what happened.</p><p>Perhaps I should have known better. But I thought, wait, let&#8217;s try to cut some amount of waste and pork from the government. Maybe there shouldn&#8217;t be 20 million people marked as alive in Social Security who are definitely dead, and over the age of 115.</p><p>The oldest American is 114. So it&#8217;s safe to say if somebody is 115 and marked as alive in the Social Security database, there&#8217;s either a typo&#8230; Somebody should call them and say, &#8220;We seem to have your birthday wrong, or we need to mark you as dead.&#8221; One of the two things.</p><p><strong>John Collison</strong></p><p>Very intimidating call to get.</p><p><strong>Elon Musk</strong></p><p>Well, it seems like a reasonable thing. 
Say their birthday is in the future: they have a <a href="https://www.sba.gov/funding-programs/loans">Small Business Administration loan</a> and their birthday is listed as 2165. We either have a typo or we have fraud. So we say, &#8220;We appear to have gotten the century of your birth incorrect.&#8221;</p><p><strong>John Collison</strong></p><p>Or a great plot for a movie.</p><p><strong>Elon Musk</strong></p><p>Yes. That&#8217;s what I mean by ludicrous fraud.</p><p><strong>Dwarkesh Patel</strong></p><p>Were those people getting payments?</p><p><strong>Elon Musk</strong></p><p>Some were getting payments from <a href="https://en.wikipedia.org/wiki/Social_Security_(United_States)">Social Security</a>. But the main fraud vector was to mark somebody as alive in Social Security and then use every other government payment system to basically do fraud. Because what those other government payment systems would do is simply run an &#8220;are you alive&#8221; check against the Social Security database. It&#8217;s a bank shot.</p><p><strong>Dwarkesh Patel</strong></p><p>What would you estimate is the total amount of fraud from this mechanism?</p><p><strong>Elon Musk</strong></p><p>By the way, the <a href="https://en.wikipedia.org/wiki/United_States_Government_Accountability_Office">Government Accountability Office</a> has done these estimates before. I&#8217;m not the only one. In fact, I think the <a href="https://www.gao.gov/products/gao-24-105833">GAO did an analysis</a>, a rough estimate of fraud during the Biden administration, and calculated it at roughly half a trillion dollars. So don&#8217;t take my word for it. Take a report issued during the Biden administration. How about that?</p><p><strong>Dwarkesh Patel</strong></p><p>From this Social Security mechanism?</p><p><strong>Elon Musk</strong></p><p>It&#8217;s one of many. It&#8217;s important to appreciate that the government is very ineffective at stopping fraud. 
It&#8217;s not like a company, where you&#8217;ve got a motivation to stop fraud because it&#8217;s affecting the earnings of your company. The government just prints more money. You need caring and competence. These are in short supply at the federal level.</p><p>When you go to the DMV, do you think, &#8220;Wow, this is a bastion of competence&#8221;? Well, now imagine it&#8217;s worse than the DMV because it&#8217;s the DMV that can print money.</p><p>At least the state-level DMVs need to... The states more or less need to stay within their budget or they go bankrupt. But the federal government just prints more money.</p><p><strong>Dwarkesh Patel</strong></p><p>If there&#8217;s actually half a trillion of fraud, why was it not possible to cut all that?</p><p><strong>Elon Musk</strong></p><p>You really have to stand back and recalibrate your expectations for competence. Because you&#8217;re operating in a world where you&#8217;ve got to make ends meet. You&#8217;ve got to pay your bills...</p><p><strong>Dwarkesh Patel</strong></p><p>Find the microphones.</p><p><strong>Elon Musk</strong></p><p>Exactly. It&#8217;s not like there&#8217;s a giant, largely uncaring monster bureaucracy. It&#8217;s a bunch of anachronistic computers that are just sending payments. One of the things that the DOGE team did sounds so simple and probably will save $100-200 billion a year. It was simply requiring that payments going out from the main Treasury computer&#8212;which is called PAM, Payment Accounts Master or something like that; there&#8217;s $5 trillion in payments a year&#8212;have a <a href="https://www.politico.com/news/2025/02/08/elon-musk-doge-government-payments-014920">payment appropriation code</a>. Make it mandatory, not optional, that you have anything at all in the comment field.</p><p>You have to recalibrate how dumb things are. 
<a href="https://cbsaustin.com/news/nation-world/doge-says-id-code-for-47-trillion-in-federal-payments-often-left-blank-elon-musk-department-of-government-efficiency-treasury-department-treasury-access-symbol">Payments were being sent out with no appropriation code</a>, not checking back to any congressional appropriation, and with no explanation. This is why the Department of War, formerly the Department of Defense, cannot pass an audit, because the information is literally not there. Recalibrate your expectations.</p><p><strong>Dwarkesh Patel</strong></p><p>I want to better understand this half a trillion number, because there&#8217;s an <a href="https://www.gao.gov/products/gao-25-107753">IG report in 2024</a>.</p><p><strong>Elon Musk</strong></p><p>Why is it so low?</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe, but we found that the Social Security fraud they estimated was like $70 billion over seven years, so like $10 billion a year. So I&#8217;d be curious to see what the other $490 billion is.</p><p><strong>Elon Musk</strong></p><p>Federal government expenditures are $7.5 trillion a year. How competent do you think the government is?</p><p><strong>Dwarkesh Patel</strong></p><p>The discretionary spending there is like&#8230; 15%?</p><p><strong>Elon Musk</strong></p><p>But it doesn&#8217;t matter. Most of the fraud is non-discretionary. It&#8217;s basically fraudulent Medicare, Medicaid, Social Security, disability. There&#8217;s a zillion government payments. A bunch of these payments are in fact block transfers to the states. So the federal government doesn&#8217;t even have the information in a lot of cases to even know if there&#8217;s fraud.</p><p>Let&#8217;s consider a reductio ad absurdum. The government is perfect and has no fraud. What is your probability estimate of that? Zero. Okay, so then on waste and fraud, would you say the government is 90% efficient? 
That also would be quite generous.</p><p>But if it&#8217;s only 90% efficient, that means that there&#8217;s $750 billion a year of waste and fraud. And it&#8217;s not even 90% effective.</p><p><strong>Dwarkesh Patel</strong></p><p>This seems like a strange way to first principles the amount of fraud in the government. Just like, how much do you think there is?</p><p>Anyways, we don&#8217;t have to do it live, but I&#8217;d be curious&#8212;</p><p><strong>Elon Musk</strong></p><p>You know a lot about fraud at Stripe? People are constantly trying to do fraud.</p><p><strong>John Collison</strong></p><p>Yeah, but as you say, it&#8217;s a little bit of a... We&#8217;ve really ground it down, but it&#8217;s a little bit of a different problem space because you&#8217;re dealing with a much more heterogeneous set of fraud vectors here than we are.</p><p><strong>Elon Musk</strong></p><p>But at Stripe, you have high competence and you try hard. You have high competence and high caring, but still fraud is non-zero. Now imagine it&#8217;s at a much bigger scale, there&#8217;s much less competence, and much less caring.</p><p>At PayPal back in the day, we tried to manage fraud down to about 1% of the payment volume. That was very difficult. It took a tremendous amount of competence and caring to get fraud merely to 1%. Now imagine that you&#8217;re an organization where there&#8217;s much less caring and much less competence. It&#8217;s going to be much more than 1%.</p><p><strong>John Collison</strong></p><p>How do you feel now looking back on politics and doing stuff there? Looking from the outside in, two things have been quite impactful: one, the <a href="https://en.wikipedia.org/wiki/America_PAC">America PAC</a>, and two, the acquisition of Twitter at the time. But also it seems like there was a bunch of heartache. 
What&#8217;s your grading of the whole experience?</p><p><strong>Elon Musk</strong></p><p>I think those things needed to be done to maximize the probability that the future is good. Politics generally is very tribal. People usually lose their objectivity with politics. They generally have trouble seeing the good on the other side or the bad on their own side. That&#8217;s generally how it goes. That, I guess, was one of the things that surprised me the most.</p><p>You often simply cannot reason with people if they&#8217;re in one tribe or the other. They simply believe that everything their tribe does is good and anything the other political tribe does is bad. Persuading them otherwise is almost impossible.</p><p>But I think overall those actions&#8212;acquiring Twitter, getting Trump elected, even though it makes a lot of people angry&#8212;I think those actions were good for civilization.</p><p><strong>Dwarkesh Patel</strong></p><p>How does it feed into the future you&#8217;re excited about?</p><p><strong>Elon Musk</strong></p><p>Well, America needs to be strong enough to last long enough to extend life to other planets and to get AI and robotics to the point where we can ensure that the future is good.</p><p>On the other hand, if we were to descend into, say, communism or some situation where the state was extremely oppressive, that would mean that we might not be able to become multi-planetary. The state might stamp out our progress in AI and robotics.</p><p><strong>Dwarkesh Patel</strong></p><p>Optimus, Grok, et cetera. Not just yours, but any revenue-maximizing company&#8217;s products will be leveraged by the government over time. How does this concern manifest in what private companies should be willing to give governments? What kinds of guardrails?</p><p>Should AI models be made to do whatever the government that has contracted them asks them to do? 
Should Grok get to say, &#8220;Actually, even if the military wants to do X, no, Grok will not do that&#8221;?</p><p><strong>Elon Musk</strong></p><p>I think maybe the biggest danger of AI and robotics going wrong is government. People who are opposed to corporations or worried about corporations should really worry the most about government. Because government is just a corporation in the limit. Government is just the biggest corporation with a monopoly on violence.</p><p>I always find it a strange dichotomy where people would think corporations are bad, but the government is good, when the government is simply the biggest and worst corporation. But people have that dichotomy. They somehow think at the same time that government can be good, but corporations bad, and this is not true. Corporations have better morality than the government.</p><p>I actually think it&#8217;s a thing to be worried about. The government could potentially use AI and robotics to suppress the population. That is a serious concern.</p><p><strong>Dwarkesh Patel</strong></p><p>As the guy building AI and robotics, how do you prevent that?</p><p><strong>Elon Musk</strong></p><p>If you limit the powers of government, which is really what the US Constitution is intended to do, then you&#8217;re probably going to have a better outcome than if you have more government.</p><p><strong>John Collison</strong></p><p>Robotics will be available to all governments, right?</p><p><strong>Elon Musk</strong></p><p>I don&#8217;t know about all governments. It&#8217;s difficult to predict. I can say what&#8217;s the endpoint, or what is many years in the future, but it&#8217;s difficult to predict the path along that way. If civilization progresses, AI will vastly exceed the sum of all human intelligence. There will be far more robots than humans. 
Along the way what happens is very difficult to predict.</p><p><strong>Dwarkesh Patel</strong></p><p>It seems one thing you could do is just say, &#8220;whatever government X, you&#8217;re not allowed to use Optimus to do X, Y, Z.&#8221; Just write out a policy. I think you <a href="https://x.com/elonmusk/status/2012762668986180027">tweeted recently that Grok should have a moral constitution</a>. One of those things could be that we limit what governments are allowed to do with this advanced technology.</p><p><strong>Elon Musk</strong></p><p>Technically if politicians pass a law and they can enforce that law, then it&#8217;s hard not to comply with that law. The best thing we can have is limited government where you have the appropriate crosschecks between the executive, judicial, and legislative branches.</p><p><strong>Dwarkesh Patel</strong></p><p>The reason I&#8217;m curious about it is that at some point it seems the limits will come from you. You&#8217;ve got the Optimus, you&#8217;ve got the space GPUs&#8230;</p><p><strong>Elon Musk</strong></p><p>You think I&#8217;ll be the boss of the government?</p><p><strong>Dwarkesh Patel</strong></p><p>Already it&#8217;s the case with SpaceX that for things that are crucial&#8212;the government really cares about getting certain satellites up in space or whatever&#8212;it needs SpaceX. It is the necessary contractor.</p><p>You are in the process of building more and more of the technological components of the future that will have an analogous role in different industries. You could have this ability to set some policy: for anything suppressing classical liberalism in any way, &#8220;My companies will not help in any way with that&#8221;, or some policy like that.</p><p><strong>Elon Musk</strong></p><p>I will do my best to ensure that anything that&#8217;s within my control maximizes the good outcome for humanity. I think anything else would be shortsighted, because obviously I&#8217;m part of humanity, so I like humans. 
Pro human.</p><h3>02:38:28 - TeraFab</h3><p><strong>Dwarkesh Patel</strong></p><p>You mentioned that <a href="https://techcrunch.com/2026/01/20/elon-musk-says-teslas-restarted-dojo3-will-be-for-space-based-ai-compute/">Dojo 3</a> will be used for space-based compute.</p><p><strong>Elon Musk</strong></p><p>You really read what I say.</p><p><strong>Dwarkesh Patel</strong></p><p>I don&#8217;t know if you know, Elon, but you have a lot of followers.</p><p><strong>Elon Musk</strong></p><p>Dead giveaway. How did you discern my secrets? Oh I posted them on X.</p><p><strong>Dwarkesh Patel</strong></p><p>How do you design a chip for space? What changes?</p><p><strong>Elon Musk</strong></p><p>You want to design it to be more radiation tolerant and run at a higher temperature. Roughly, if you increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful in space.</p><p>There are various things you can do for shielding the memory. But <a href="https://en.wikipedia.org/wiki/Neural_network_(machine_learning)">neural nets</a> are going to be very resilient to <a href="https://en.wikipedia.org/wiki/Single-event_upset">bit flips</a>. Most of what happens for radiation is random bit flips. But if you&#8217;ve got a multi-trillion parameter model and you get a few bit flips, it doesn&#8217;t matter. Heuristic programs are going to be much more sensitive to bit flips than some giant parameter file.</p><p>I just design it to run hot. I think you pretty much do it the same way that you do things on Earth, apart from making it run hotter.</p><p><strong>Dwarkesh Patel</strong></p><p>The <a href="https://en.wikipedia.org/wiki/Solar_panels_on_spacecraft">solar array</a> is most of the weight on the satellite. 
Is there a way to make the GPUs even more powerful than what Nvidia and TPUs and et cetera are planning on doing that would be especially privileged in the space-based world?</p><p><strong>Elon Musk</strong></p><p>The basic math is, if you can do about a kilowatt per <a href="https://en.wikipedia.org/wiki/Photomask">reticle</a>, then you&#8217;d need 100 million full reticle chips to do 100 gigawatts. Depending on what your yield assumptions are, that tells you how many chips you need to make. If you&#8217;re going to have 100 gigawatts of power, you need 100 million chips that are running at a kilowatt sustained, per reticle. Basic math.</p><p><strong>Dwarkesh Patel</strong></p><p>100 million chips depends on&#8230; If you look at the <a href="https://en.wikipedia.org/wiki/Die_(integrated_circuit)">die</a> size of something like <a href="https://en.wikipedia.org/wiki/Blackwell_(microarchitecture)">Blackwell GPUs</a> or something, and how many you can get out of a <a href="https://en.wikipedia.org/wiki/Wafer_(electronics)">wafer</a>, you can get on the order of dozens or less per wafer. So basically, this is a world where if we&#8217;re putting that out every single year, you&#8217;re producing millions of wafers a month. That&#8217;s the plan with TeraFab? Millions of wafers a month of advanced process nodes?</p><p><strong>Elon Musk</strong></p><p>Yeah it could be north of a million or something. You&#8217;ve got to do the memory too.</p><p><strong>Dwarkesh Patel</strong></p><p>Are you going to make a memory fab?</p><p><strong>Elon Musk</strong></p><p>I think the TeraFab&#8217;s got to do memory. It&#8217;s got to do logic, memory, and <a href="https://anysilicon.com/the-ultimate-guide-to-semiconductor-packaging/">packaging</a>.</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m very curious how somebody gets started. This is the most complicated thing man has ever made. Obviously, if anybody&#8217;s up to the task, you&#8217;re up to the task. 
So you realize it&#8217;s a bottleneck, and you go to your engineers. What do you tell them to do? &#8220;I want a million wafers a month in 2030.&#8221;</p><p><strong>Elon Musk</strong></p><p>That&#8217;s right. That&#8217;s exactly what I want.</p><p><strong>Dwarkesh Patel</strong></p><p>Do you call ASML? What is the next step?</p><p><strong>John Collison</strong></p><p>Not so much to ask.</p><p><strong>Elon Musk</strong></p><p>We make a little fab and see what happens. Make our mistakes at a small scale and then make a big one.</p><p><strong>Dwarkesh Patel</strong></p><p>Is a little fab done?</p><p><strong>Elon Musk</strong></p><p>No, it&#8217;s not done. We&#8217;re not going to keep that cat in the bag. That cat&#8217;s going to come out of the bag. There&#8217;ll be drones hovering over the bloody thing. You&#8217;ll be able to see its construction progress on X in real time.</p><p>Look, I don&#8217;t know, we could just flounder in failure, to be fair. Success is not guaranteed. Since we want to try to make something like 100 million&#8230; We want 100 gigawatts of power and chips that can take 100 gigawatts by 2030. We&#8217;ll take as many chips as our suppliers will give us. I&#8217;ve actually said this to TSMC and Samsung and <a href="https://en.wikipedia.org/wiki/Micron_Technology">Micron</a>: &#8220;Please build more fabs faster&#8221;. We will guarantee to buy the output of those fabs. So they&#8217;re already moving as fast as they can. It&#8217;s us plus them.</p><p><strong>John Collison</strong></p><p>There&#8217;s a narrative that the people doing AI want a very large number of chips as quickly as possible. Then many of the input suppliers, the fabs, but also the turbine manufacturers, are not ramping up production very quickly.</p><p><strong>Elon Musk</strong></p><p>No, they&#8217;re not.</p><p><strong>John Collison</strong></p><p>The explanation you hear is that they&#8217;re dispositionally conservative. 
They&#8217;re Taiwanese or German, as the case may be. They just don&#8217;t believe... Is that really the explanation or is there something else?</p><p><strong>Elon Musk</strong></p><p>Well, it&#8217;s reasonable to... If somebody&#8217;s been in the computer memory business for 30 or 40 years&#8230;</p><p><strong>John Collison</strong></p><p>They&#8217;ve seen cycles.</p><p><strong>Elon Musk</strong></p><p>They&#8217;ve seen boom and bust 10 times. That&#8217;s a lot of layers of scar tissue. During the boom times, it looks like everything is going to be great forever. Then the crash happens and they&#8217;re desperately trying to avoid bankruptcy. Then there&#8217;s another boom and another crash.</p><p><strong>John Collison</strong></p><p>Are there other ideas you think others should go pursue that you&#8217;re not pursuing, for whatever reason, right now?</p><p><strong>Elon Musk</strong></p><p>There are a few companies that are pursuing new ways of doing chips, but they&#8217;re just not scaling fast.</p><p><strong>John Collison</strong></p><p>I don&#8217;t even mean within AI, I mean just generally.</p><p><strong>Elon Musk</strong></p><p>People should do the thing that they find they&#8217;re highly motivated to do, as opposed to some idea that I suggest. They should do the thing that they find personally interesting and motivating to do.</p><p>But going back to the limiting factor&#8230; I used that phrase about 100 times. The current limiting factor that I see in the three to four year timeframe, it&#8217;s chips. In the one year timeframe, it&#8217;s energy, power production, electricity. It&#8217;s not clear to me that there&#8217;s enough usable electricity to turn on all the AI chips that are being made.</p><p>Towards the end of this year, I think people are going to have real trouble turning on... 
The chip output will exceed the ability to turn chips on.</p><p><strong>Dwarkesh Patel</strong></p><p>What&#8217;s your plan to deal with that world?</p><p><strong>Elon Musk</strong></p><p>We&#8217;re trying to accelerate electricity production. I guess that&#8217;s maybe one of the reasons that xAI will be maybe the leader, hopefully the leader. We&#8217;ll be able to turn on more chips than other people can turn on, faster, because we&#8217;re good at hardware.</p><p>Generally, the innovations from the corporations that call themselves labs, the ideas tend to flow&#8230; It&#8217;s rare to see that there&#8217;s more than about a six-month difference. The ideas travel back and forth with the people.</p><p>So I think you sort of hit the hardware wall and then whichever company can scale hardware the fastest will be the leader. So I think xAI will be able to scale hardware the fastest and therefore most likely will be the leader.</p><p><strong>John Collison</strong></p><p>You joked or were self-conscious about using the &#8220;limiting factor&#8221; phrase again. But I actually think there&#8217;s something deep here. If you look at a lot of things we&#8217;ve touched on over the course of it, it&#8217;s maybe a good note to end on. If you think of a senescent, low-agency company, it would have some bottleneck and not really be doing anything about it.</p><p><a href="https://en.wikipedia.org/wiki/Marc_Andreessen">Marc Andreessen</a> had the line of, &#8220;<a href="https://cheekypint.substack.com/p/marc-andreessen-and-charlie-songhurst">most people are willing to endure any amount of chronic pain to avoid acute pain</a>&#8221;. It feels like a lot of the cases we&#8217;re talking about are just leaning into the acute pain, whatever it is. &#8220;Okay, we got to figure out how to work with steel, or we got to figure out how to run the chips in space.&#8221; We&#8217;ll take some near-term acute pain to actually solve the bottleneck. 
So that&#8217;s kind of a unifying theme.</p><p><strong>Elon Musk</strong></p><p>I have a high pain threshold. That&#8217;s helpful.</p><p><strong>John Collison</strong></p><p>To solve the bottleneck.</p><p><strong>Elon Musk</strong></p><p>Yes. One thing I can say is, I think the future is going to be very interesting. <a href="https://www.youtube.com/watch?v=IgifEgm1-e0">As I said at Davos</a>&#8212;I think I was on the ground for like three hours or something&#8212;it&#8217;s better to err on the side of optimism and be wrong than err on the side of pessimism and be right, for quality of life. You&#8217;ll be happier if you err on the side of optimism rather than erring on the side of pessimism. So I recommend erring on the side of optimism.</p><p><strong>John Collison</strong></p><p>Here&#8217;s to that.</p><p><strong>Dwarkesh Patel</strong></p><p>Cool. Elon, thanks for doing this.</p><p><strong>John Collison</strong></p><p>Thank you.</p><p><strong>Elon Musk</strong></p><p>All right, thanks guys. All right.</p><p><strong>John Collison</strong></p><p>Great stamina.</p><p><strong>Dwarkesh Patel</strong></p><p>Hopefully this didn&#8217;t count as a pain in the pain tolerance.</p>]]></content:encoded></item><item><title><![CDATA[Hiring scouts to help me find guests]]></title><description><![CDATA[$100/hour, fully remote. Ideal candidate is maybe a grad student/post doc/or working in one of: bio, history, econ, math/physics, AI/hardware.]]></description><link>https://www.dwarkesh.com/p/hiring-scouts-to-help-me-find-guests</link><guid isPermaLink="false">https://www.dwarkesh.com/p/hiring-scouts-to-help-me-find-guests</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Thu, 15 Jan 2026 16:02:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bbfbfbf2-988e-4a86-9603-27a999afdc10_360x360.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My main bottleneck is finding excellent guests. 
So, I&#8217;m hiring a couple part time scouts to help me find the next David Reich/Sarah Paine/Adam Brown.</p><p>$100/hour, fully remote, work hours are flexible - I expect it&#8217;ll be 5-10 hours a week.</p><p>Ideal candidate is maybe a grad student, or a post doc, or working in one of the fields I wanna find guests in. I&#8217;m looking for people who are really plugged into some discipline and have high taste.</p><p>Beyond just scouting guests, I&#8217;ll want your help assembling curriculums that help me prep for interviews and rapidly get up to speed.</p><p>The application form is <a href="https://airtable.com/appXzMS36pX3XAYV6/pagT9mTdjxxslroks/form">here</a>, and it&#8217;s extremely simple - just pitch me on a guest and tell me a bit about yourself. Please submit by 11:59 PM Pacific, Friday, Jan 23.</p><p>I&#8217;m looking to hire ~one scout for each of the following fields: bio, history, econ, math/physics, AI/hardware.</p><p>However, it&#8217;s very possible I end up hiring more (or fewer), or break apart the domains of knowledge in a different way, based on the range of expertise of the best people who apply.</p><h3><strong>What I&#8217;m looking for in guests</strong></h3><p>I&#8217;m looking for people who are deep experts in at least one field, and who are polymathic enough to think through all kinds of tangential questions in a really interesting way. </p><p>So I&#8217;m selecting for this synthetic ability to connect one&#8217;s expertise to all kinds of important questions about the world - an ability which is often deliberately masked in public academic work. Which means that it can only really come out in conversation.</p><p>That&#8217;s why I want to hire scouts. 
I need their network and context - they know who the polymathic geniuses are, who gave a fascinating lecture at the last big conference they attended, who can just connect all kinds of interesting ideas in the field together over conversation, etc.</p><p>We get tons of inbound from people who are working on impressive companies or doing interesting research projects. But almost always it&#8217;s a no; while I think their work is important, it&#8217;s self-contained in a way that I worry won&#8217;t lead to interesting broad discussion.</p><p>To get a little more concrete, let me talk through why I think some of my recent favorite interviews worked especially well, so you can think about which people in the fields you&#8217;re familiar with fit a similar mold.</p><ul><li><p><a href="https://youtu.be/XCLODgdCmKA?si=gzt3Kvs2N4v8DTvf">Jacob Kimmel</a>: A lot of people who pitch themselves as guests are only capable of talking about their own research. But the amazing thing about Jacob is that he is an insane polymath. For example, he could explain why evolution didn&#8217;t select for longevity by drawing deep analogies to how gradients flow in ML models. He had all these other random interesting takes, from why humans never evolved their own antibiotics to how there&#8217;s this gene that used to protect us from HIV-like viruses but got repurposed, which hints at some ghost scourge. And then he could zoom out and give a great diagnosis of what&#8217;s bottlenecking pharma progress. I really want to emphasize how that&#8217;s different from other brilliant people I get pitched &#8211; these people are also doing incredible research, but they don&#8217;t have this range of really deep, interesting takes. 
That part is super crucial.</p></li><li><p><a href="https://youtu.be/Uj6skZIxPuI?si=g30p79rTnhTMZ6n0">David Reich</a>: It&#8217;s actually quite surprising that my second most popular guest of all time is a geneticist of ancient DNA. How did that happen? Here&#8217;s why I think this episode blew up. In high school, you get some vague explanation of human evolution. And you feel like you understand it and can move on with your life. And here comes David, showing you how this very fundamental topic, which you assume was settled and haven&#8217;t bothered thinking about in years, is actually way more murky and surprising than you realized, and how new discoveries are totally overturning our basic understanding of the field (in this case, the how, when, where of human evolution)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p></li><li><p><a href="https://youtu.be/lXUZvyajciY?si=s8J640JwRtFCdBaX">Andrej Karpathy</a>: It&#8217;s extremely rare to get someone who is expert-level in a technical, fast-moving, and frothy field, but who has no vested interest in a particular company or approach, and who is in a position to just give an unbiased lay of the land. I have a couple questions below about biotech or formal math or robotics in the wake of AI progress - if there&#8217;s a Karpathy-type person in those fields, I&#8217;d be very keen to get a technical lay of the land and vibe check of what claims are credible versus crazy.</p></li></ul><h3><strong>Some recent questions</strong></h3><p>In case it&#8217;s helpful for brainstorming a guest, I&#8217;ve listed out a few big questions that have been on my mind recently. 
But please feel free to ignore them - there are way more interesting questions in the world than the ones I&#8217;m aware of. Feel free to say, &#8220;You might not yet be curious about antibody development/the history of language/the dark ages/battery tech, but the guest I have in mind for that topic is so good that it&#8217;s going to be your next big banger episode.&#8221;</p><h4><strong>Bio</strong></h4><ul><li><p>Dario&#8217;s <a href="https://www.darioamodei.com/essay/machines-of-loving-grace">Machines of Loving Grace</a> argues we&#8217;ll compress a century of bio progress into a few years - that big breakthroughs like CAR-T therapy, mRNA vaccines, cheap genome sequencing, etc. show how, in the long run, things which seem like data or physical bottlenecks can be solved by better tools to measure/predict/perturb/understand biological systems, and that these tools are downstream of intelligence. But here&#8217;s what I don&#8217;t fully understand: over the last 3 decades, we&#8217;ve seen a million-fold reduction in genome sequencing costs, a 1000-fold decrease in DNA synthesis costs, the development of precise gene editing tools like CRISPR, and the ability to conduct massively parallel experiments through multiplexing techniques. But it doesn&#8217;t seem like we&#8217;re curing diseases or coming up with new treatments at a faster rate now than we were 30 years ago. If anything, drug development is <a href="https://en.wikipedia.org/wiki/Eroom%27s_law">slowing down</a>. I want to find a biology researcher who can think through how plausible a 10x or 100x speedup in new drug discovery actually is. They should obviously know a lot about and have hot takes on what&#8217;s actually bottlenecking progress today, and they should be flexible enough to imagine what might change with much more intelligence.</p></li><li><p>What exactly is the special sauce of the brain that we&#8217;re still missing? 
<a href="https://youtu.be/_9V_Hbe-N1A?si=wSP8VXHBbzs4hOyV">Adam Marblestone thinks</a> it&#8217;s the curriculum of reward functions and the learning/steering subsystems. Others argue that gradient descent is fundamentally worse than how the brain learns within a lifetime (which is closer to in-context learning in its flexibility and sample efficiency).</p></li></ul><h4><strong>Math/Physics</strong></h4><ul><li><p>I&#8217;ve been really enjoying Strogatz&#8217;s <a href="https://www.amazon.com/Nonlinear-Dynamics-Student-Solutions-Manual/dp/0813349109/">Nonlinear Dynamics and Chaos</a> textbook, and I want to make something podcast-shaped out of it. Strogatz himself has deferred until after he finishes his next book, so I&#8217;m looking for another mathematician working on a related topic. I think the right format here isn&#8217;t a normal meandering interview - it&#8217;s something more like a lecture. A mathematician comes in with a specific topic or example we can deep-dive on. He posts up at a blackboard, starts explaining a topic, and I interrupt to clarify confusions and ask follow-up questions. The model is something like Terence Tao and Grant Sanderson&#8217;s<a href="https://www.youtube.com/watch?v=FPl_rag0yAo"> cosmic distance ladder video</a>. Who could do something similar with me, on some independently explainable topic in chaos/nonlinear dynamics or adjacent areas? I&#8217;d be especially keen if someone can present something on how the topics in this textbook tie into ML (see for example<a href="https://sohl-dickstein.github.io/2024/02/12/fractal.html"> Neural network training makes beautiful fractals</a>).</p></li><li><p>What real-world impact should we expect from the current batch of AI-for-math projects? 
What are the fields of technology where people are going, &#8220;Ah, we could totally solve quantum computing (or fusion or AGI) if only we had more theorems!&#8221; But maybe problems in biology and physics and materials and so on reduce down to math in a way I&#8217;m not foreseeing, and automating formal math alone is enough to unlock a bunch of progress. See footnotes for some more questions I wanna ask the right guest on this topic.</p><p>I started reading<a href="https://www.amazon.com/Proofs-Refutations-Mathematical-Discovery-Philosophy/dp/1107534054"> Proofs and Refutations</a>, which is this famous 1976 book by the Hungarian mathematician Imre Lakatos about the philosophy of mathematics. He says math involves a lot of changing definitions and swapping lemmas in order to deal with different counterexamples. This seems fine for a good-faith mathematical community, but super reward-hackable for these AI-for-math models. Also, it involves a lot of realizing how a problem in one domain is really a problem in another, and noticing the meta-level pattern - AIs so far have been especially bad at this kind of thing. If math is just proof search within a fixed formal system, then AI can help a lot. But if it&#8217;s the dialectical construction and refinement of concepts (based on which tasteful, parsimonious definitions can withstand counterexamples), then I feel self-play and &#8216;automated cleverness&#8217; alone won&#8217;t do the trick. But maybe automated counterexamples are super useful. I&#8217;m sure for practicing mathematicians there&#8217;s a bunch of stuff that&#8217;s naive or wrong about the above. Would love to chat about what the actual research math process is like, and what good it would do to automate it.</p></li></ul><h4><strong>AI/hardware</strong></h4><ul><li><p>RL progress has been very fast, but it&#8217;s partly the result of going from almost nothing to 1e26 FLOPs training compute in a year (aka like going from GPT-1 to GPT-4.5). 
It&#8217;s still possible that it has terrible scaling exponents and further progress will be very slow. And also it&#8217;s not clear how much of the progress over the last year comes from inference scaling, which has worse variable economics. But on the other hand, maybe there&#8217;s a ton of low-hanging fruit in improving RL - with pretraining, there&#8217;s been 5 years of developing the theory and empirics of optimal batch sizes, learning rates, architectures, etc. As that low-hanging fruit is picked, maybe RL progress continues to be fast? The other big question about RL training is how much transfer learning we&#8217;re seeing - is there all this crazy meta-learning that&#8217;s not directly induced by any env and which will enable flexible human-like labor soon? I have no idea. My friends at labs who are actually doing this training obviously wouldn&#8217;t tell me. But I want to actually concretely understand what&#8217;s going on here.</p></li></ul><h4><strong>History</strong></h4><ul><li><p>There&#8217;s the famous<a href="https://en.wikipedia.org/wiki/Joseph_Needham#The_Needham_Question"> Needham question</a>, which asks why China didn&#8217;t industrialize first despite leading the world in population, inventions, and bureaucratic sophistication. I find the standard explanation of how this centralized Ming/Qing regime damped invention and exploration unsatisfying. Or at least I don&#8217;t understand it concretely. It&#8217;s such a big country - how can you retard progress across the whole thing, especially given that state capacity was presumably weaker in the past? Or at least I assume it was - what did a provincial bureaucrat actually do day-to-day? Was there a price system? Private property? 
How did the state actually interfere with merchants and artisans?</p></li></ul><h4><strong>Economics</strong></h4><ul><li><p>There&#8217;s something unsatisfying about the <a href="https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments">arguments</a> that we&#8217;ll see 20%+ explosive economic growth from AI. Even if true, what does that mean? What is actually happening? I thought <a href="https://www.darioamodei.com/essay/machines-of-loving-grace">Machines of Loving Grace</a> was a great account of what plausibly is happening on the human facing side of the singularity - aka the FLOPs that are going towards curing disease. But presumably most of what is happening is investment towards more robots, more compute, etc. My sense of what that side of things looks like is so murky and handwavy. There is a version of Machines of Loving Grace you can do that is somewhat concrete about all the sci fi shit - not just gesturing at the galaxies, but getting specific about the space GPUs and factorio like solar tiling and all the other things I&#8217;m not thinking of which are relevant to understanding 2040. Presumably the right guest is someone who is really strong in engineering/physics and economics and has a penchant for sci-fi and has a lot of concrete ideas here.</p></li><li><p>What should India or Nigeria or for that matter any country not directly in the semiconductor/foundation model supply chain do right now? 
If the main mechanism of catchup growth goes away (namely, that the underutilized labor of developing countries can rapidly be made more productive with capital and know-how from the developed world), what happens to all these countries that are not China or the US?</p><p></p></li></ul><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Just to give you a sample of some of the surprising findings that he talked through:</p><ul><li><p>70,000 years ago, half a dozen different species of humans (Neanderthals, Denisovans, &#8216;Hobbits&#8217;, etc) lived across Eurasia. And then some small group of modern humans (only 1,000 to 10,000 people) drove all of them to extinction. Everyone native to Eurasia and America is descended from this one tribe.</p></li><li><p>Neanderthals may have gotten 30-70% of their DNA from modern humans. Which implies that maybe non-Africans today are actually &#8220;Neanderthals who became modernized by waves and waves of admixture&#8221; rather than modern humans with a bit of Neanderthal mixed in.</p></li><li><p>Yersinia pestis (bubonic plague bacteria) may have killed a quarter to half of all people in Western Eurasia for thousands of years, starting around 5,000 years ago. And may be central to explaining everything from the Yamnaya expansion to the fall of Rome to the Industrial Revolution.</p></li><li><p>It&#8217;s not clear modern humans were even primarily in Africa during the key period (2 million to 500,000 years ago) when human brains diverged from those of other species. Our lineage may have resided in Eurasia for significant stretches.</p></li></ul><p>Okay I&#8217;ll stop, but you see my point. 
What are the other fields like human evolution, and the other presenters like David Reich, who will make you go, &#8220;What the fuck, I had no idea&#8221;?</p><p>David being David is actually a huge piece of the puzzle here which I want to replicate. He&#8217;s just incredibly deep and polymathic on what may from the outside look like one field but is in fact very many, from population genetics to archeology to linguistics. And while he&#8217;s intellectually humble enough to add qualifiers, he will (and this is very important) go ahead and give hot takes and start speculating about connections between fields and how different hypotheses relate to each other and so on. He won&#8217;t just stay at, &#8220;Our results show a genetic cline between North and South Indians.&#8221; He&#8217;ll say, &#8220;And we could be wrong here, but this suggests that the caste system which enforced these otherwise unseen levels of endogamy has been incredibly strong for millennia.&#8221;</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[What I've been reading recently - Jan 10, 2026]]></title><description><![CDATA[Nonlinear dynamics and Chaos, Machines of Loving Grace, Max Hodak&#8217;s theory of consciousness, Neural network training makes beautiful fractals]]></description><link>https://www.dwarkesh.com/p/notes-jan-10-2026</link><guid isPermaLink="false">https://www.dwarkesh.com/p/notes-jan-10-2026</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Sat, 10 Jan 2026 20:30:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/vimeo/w_728,c_limit,d_video_placeholder.png/903855670" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was recently chatting with a friend who has a similar job to mine. We were talking about how even though our jobs are fundamentally about learning about stuff, our time so easily gets sucked up by other things. 
So to hold myself accountable, I&#8217;m gonna try to publish a blog post every two weeks or so where I explain what I&#8217;ve been reading.</p><h2>Max Hodak&#8217;s theory of consciousness</h2><p>I&#8217;m totally gonna butcher this - please excuse. If you wanna get the real deal, go check out his <a href="https://maxhodak.com/nonfiction/2025/12/05/the-binding-problem">summary blog post</a> and his <a href="https://youtu.be/DI6Hu-DhQwE?si=gY5YfwvqwgNGOHNk">full talk</a> on this topic.</p><p>Max is focused on two big sub-questions which together form &#8220;the binding problem&#8221;:</p><ul><li><p>Mode binding: how do color, shape, texture, and motion get combined into a unified visual percept of &#8220;a red cup&#8221;?</p></li><li><p>Moment binding: why do we experience all the neurons firing across our entire brain over the course of tens of milliseconds as a single quantum of experience?</p></li></ul><p>Max thinks each of these binding sub-problems is related to a brain wave:</p><ul><li><p>Gamma waves - 40 Hz - Fast, local coordination of nearby neurons to get on the same page about what they&#8217;re representing.</p></li><li><p>Alpha waves - 10 Hz - Slower waves that run through the whole brain and unify experience - think of these like the forward pass of the brain.</p><ul><li><p>Two cool things about alpha waves I hadn&#8217;t realized: 1. neurons ride the peak of this oscillation, and 2. when alpha waves slow down or speed up (fight-or-flight reactions, etc.), people experience time dilation.</p></li></ul></li></ul><p>Anyways, Max points out that the brain is storing a bunch of structured representations about the world physically, and some feedback controller has to go in and make sure that these representations are correct. This is part of what the alpha waves are doing. And this feedback control and binding is consciousness. I&#8217;m glossing over a bunch of logical connections that I definitely don&#8217;t understand. 
But I&#8217;ll leave it here.</p><p>I know Max could provide a really good answer, but just talking to myself, I&#8217;m confused about what the reason is to think that feedback control = consciousness. By this logic, does <a href="https://en.wikipedia.org/wiki/Memory_refresh">memory refresh</a> = consciousness too?</p><p>Max thinks that figuring out what&#8217;s up with consciousness will mean discovering new physics. And specifically, physics at the level of the 4 fundamental forces - some property as basic as mass or charge. His logic is that either consciousness has no real impact on the world (it&#8217;s just a byproduct of other stuff the brain does), which would be odd, or it actually has an effect, which would mean it&#8217;s new physics.</p><p>I&#8217;m not sure I buy this. 1. Can&#8217;t it be an effect that&#8217;s best understood as an implication of existing laws of physics? The fact that wood floats on water has an impact on the world, but you don&#8217;t need new physics to explain it. 2. Doesn&#8217;t it seem implausible that evolution blindly stumbled upon and is now making good use of a whole undiscovered physical field which we have never managed to actually interact with using our technology, nor seen summoned anywhere else in the universe?</p><h2><a href="https://www.amazon.com/gp/product/0367026503">Nonlinear dynamics and Chaos</a> by Steven Strogatz</h2><p>I&#8217;m only 3 chapters in, so I&#8217;ve only got the building blocks so far. The fundamental idea is this. It&#8217;s often hard to anticipate how a system will evolve just by observing a bunch of different trajectories over time. But it&#8217;s much easier to see what will happen if you plot how the system will evolve from different starting points. 
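<p>As a toy illustration of that building-block idea (my own sketch in Python, using the logistic equation rather than any particular example from the book): for a one-dimensional system x&#8217; = f(x), you don&#8217;t simulate trajectories at all. You find where f crosses zero (the fixed points) and check the slope there to see whether nearby starting points flow in or out.</p>

```python
import numpy as np

# Toy 1D system x' = f(x): the logistic equation (parameter values are illustrative).
r, K = 1.0, 10.0
f = lambda x: r * x * (1 - x / K)

# Read the dynamics off the graph of f instead of simulating trajectories:
# zeros of f are fixed points; the sign of f on either side gives the flow.
xs = np.linspace(-2, 12, 997)  # grid spacing chosen so no point lands exactly on a zero
ys = f(xs)
crossings = xs[:-1][ys[:-1] * ys[1:] < 0]

# Classify each fixed point: f' < 0 means nearby trajectories flow in (stable).
eps = 1e-6
for x_star in crossings:
    slope = (f(x_star + eps) - f(x_star - eps)) / (2 * eps)
    print(f"fixed point near x = {x_star:.2f}: {'stable' if slope < 0 else 'unstable'}")
```

<p>For the logistic equation this recovers the unstable fixed point near x = 0 and the stable one near x = K, without integrating anything.</p>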
The examples get more and more interesting, and because Strogatz focuses on the graphical and geometric interpretations, the motivating problems are super satisfying; the book is really a bunch of 3Blue1Brown videos on a certain topic stapled together.</p><p>Side note: I could not have understood anything here if I didn&#8217;t have LLMs and couldn&#8217;t watch the lectures async. I paused every minute or so (to clarify some confusion with a chatbot or to try and anticipate the next step), and I had the same section of textbook open at the same time.</p><p>I&#8217;m now wondering to myself, &#8220;How the hell did I learn anything in college at all?&#8221; I would be so lost if I was actually taking this course in college and just attending the lectures live.</p><p>In college, I actually did bounce out of a difficult course I feel like I could totally learn today with LLMs and async lectures + my adult executive function.</p><div><hr></div><p>As I was working through these examples (some inspired by actual papers), I kept thinking about what parts the &#8220;automated cleverness&#8221; (Terry Tao&#8217;s term) of today&#8217;s AIs could actually help with.</p><p>It&#8217;s crazy how much understanding you can get about a physical system through mathematics. But that understanding is so dependent on insight and interpretation.</p><p>To give one example, Section 3.7 has a really clever model of an insect outbreak, showing how budworms, birds, and trees play out against each other given different growth rates and other dynamics.</p><p>But first you have to figure out the right dimensionless forms. And that requires judgment about which dimensions actually matter. In the insect model, the choice was to think in terms of R and K and treat the bird population as basically an artifact of those parameters. But you could have done it the other way around&#8212;from the basis of birds.</p><p>Then there&#8217;s how you make the visualization. 
Once you&#8217;ve got the dynamics in dimensionless form, you could just graph the equation and find the fixed points. But the result would be almost impossible to interpret. Graph it a different way, though, and suddenly the intercepts align with your intuition. You can actually <em>see</em> the three regimes: where carrying capacity is so low the population never gets going, where birds keep things in check, and where the outbreak has outgrown the birds&#8217; ability to control it.</p><p>This kind of insight is inseparable from understanding what you&#8217;re even trying to learn about the system. And I&#8217;m skeptical today&#8217;s AI helps much here. When these methods were first developed, the right forms and interpretations weren&#8217;t obvious. The mathematician who wrote the original paper had to come up with new insights about <em>how to think</em> about the problem.</p><p>Maybe models are now good enough to apply these methods to new systems that fit the same template. But that just means the few mathematicians who invent genuinely new frameworks are the only ones who stay relevant.</p><h2><a href="https://www.darioamodei.com/essay/machines-of-loving-grace">Machines of Loving Grace</a> by Dario Amodei</h2><p>Starting with the biology section: Dario argues that we&#8217;ll get a century of bio progress in a few years. His argument:</p><ul><li><p>Most bio progress is driven by breakthrough discoveries which give you whole new primitives for what you can measure, change, or predict (CAR-T therapy, mRNA vaccines, CRISPR, genome sequencing costs declining so much, etc).</p></li><li><p>These discoveries seem to have been made in scrappy haphazard ways, often years after they were initially possible, and often by people responsible for other breakthroughs as well. All 3 of these observations hint that they are bottlenecked by intelligence.</p></li><li><p>Dario acknowledges that data is a huge bottleneck for bio. 
But the tools we have for collecting data can also be expanded by intelligence. Human researchers came up with multiplexing and AlphaFold and Perturb-Seq - the AI researchers will come up with even more.</p></li></ul><p>Here&#8217;s the counterargument. The kinds of human researcher breakthroughs he uses as examples of what AI could do more of haven&#8217;t had a huge impact on health. Over the last 3 decades, we&#8217;ve seen a million-fold reduction in genome sequencing costs, a 1000-fold decrease in DNA synthesis costs, the development of precise gene editing tools like CRISPR, and the ability to conduct massively parallel experiments through multiplexing techniques. But it doesn&#8217;t seem like we&#8217;re curing diseases or coming up with new treatments at a faster rate now than we were 30 years ago. If anything, drug development is slowing down. Why think that AI will be able to fundamentally change this dynamic?</p><p>Relatedly, Jacob Trefethen has an excellent <a href="https://blog.jacobtrefethen.com/ai-san-francisco/">blog post</a> that makes the argument that AI won&#8217;t speed up medical progress that much (he also steelmans the opposite point in <a href="https://blog.jacobtrefethen.com/ai-optimism/">this other post</a>). Jacob points out that making a drug to cure something like Alzheimer&#8217;s is really hard. Raw understanding of some of the disease life cycle (which more intelligence could give you more of) is not enough. We understand that Alzheimer&#8217;s is clearly linked to amyloid beta, and there are now many different drugs trying to remove amyloid plaques, none of which have worked. Even if we get more insights like the amyloid beta thing from AI scientists, that alone will not be enough to identify the correct targets. You just have to do a bunch of experiments on live humans.</p><p>This is why Dario&#8217;s point about clinical trials falls flat. 
He argues that clinical trials are currently slow because we just don&#8217;t know whether a given drug will actually work. But if we had much greater confidence, like we did with the mRNA vaccines for COVID, then we could test and approve drugs much faster. However, I don&#8217;t see why we should think that, short of a full hyperrealistic simulation of the human body, we <em>could</em> tell ex ante which drugs are gonna work. I don&#8217;t yet buy the argument that a million George Church clones in a datacenter could derisk all the drug trials.</p><p>Quick notes on other parts of the essay:</p><ul><li><p>Overall I find it pretty impressive that a tech CEO is this generally thoughtful.</p></li><li><p>The poverty and econ section doesn&#8217;t address that the main mechanism of catchup growth goes away post-AGI; namely, developing countries have lots of underutilized labor which is bottlenecking production, and because the marginal product of labor is high in the world today, those countries can get rich fast. So how exactly are these other countries catching up?</p></li><li><p>The key point that underlies his framework that intelligence can drive a century of progress in 5-10 years: &#8220;Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn <em>in vitro</em> what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).&#8221;</p><ul><li><p>It&#8217;s interesting to consider why this isn&#8217;t true for factors of production today. 
We live in a (relatively) capital-abundant and labor-scarce world. That is reflected in the labor share of income being 2x as high as the capital share of income. But this has been true for centuries upon centuries. Contra Piketty in &#8220;Capital in the 21st Century&#8221;, all these capital holders have not been able to get some runaway capital accumulation process going by figuring out a way around labor constraints. Why think that intelligence will be any different from capital in its ability to get around other factors of production? Maybe the argument is that intelligence can actually help generate the other factors of production in a way that capital can&#8217;t.</p></li></ul></li></ul><h2><a href="https://sohl-dickstein.github.io/2024/02/12/fractal.html">Neural network training makes beautiful fractals</a> by Jascha Sohl-Dickstein</h2><p>Absolutely fascinating <a href="https://sohl-dickstein.github.io/2024/02/12/fractal.html">blog post</a>.</p><div id="vimeo-903855670" class="vimeo-wrap" data-attrs="{&quot;videoId&quot;:&quot;903855670&quot;,&quot;videoKey&quot;:&quot;&quot;,&quot;belowTheFold&quot;:true}" data-component-name="VimeoToDOM"><div class="vimeo-inner"><iframe src="https://player.vimeo.com/video/903855670?autoplay=0" frameborder="0" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" loading="lazy"></iframe></div></div><p>You want to train your model at the highest possible learning rate under which it still converges. But the boundary of convergence versus divergence is fractal, which makes these hyperparameters really hard to optimize for via gradient descent.</p><p>Now you can ask the question: evolution somehow found the right hyperparameters to train our brains. How did evolution solve this wicked problem? 
Presumably because gradient-free optimization fares better against these kinds of fractal landscapes - if you optimize for the part of the region where the average speed of convergence is high (rather than just take the gradient from a specific point that&#8217;s bounded in an unpredictable way by fractals), it seems like you could do much better.</p><p>Backing up, why is the meta-loss landscape fractal in the first place? Jascha&#8217;s explanation is that fractals often emerge when iteratively applying a function. Gradient descent on the parameters is one such function that you iterate across training steps. But then the follow-up question is this. There are lots of other iterative functions you could think of, even within the context of neural networks. Do they all lead to fractals? For example:</p><ul><li><p>In chain of thought, you apply a model to a string, which makes a new string, to which you apply the model, etc.</p></li><li><p>RNNs keep applying the same parameters to the hidden state.</p></li></ul><p>Over conversation, an AI researcher friend revealed that CoT and RNNs both have variance problems that could well be explained by these fractal-like dynamics. 
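<p>To make the &#8220;iterated function&#8221; point concrete, here&#8217;s a minimal toy version of the kind of experiment in Jascha&#8217;s post (my own sketch - the data, initialization, and thresholds are made up, and this is not his actual code): a two-parameter network y = w2 * tanh(w1 * x) trained by full-batch gradient descent, with a separate learning rate per parameter, gridded over learning-rate pairs. The boundary between runs that settle and runs that blow up is the object his post renders at high resolution and shows to be fractal; at this tiny scale you only see a ragged edge.</p>

```python
import numpy as np

# Tiny dataset and fixed init (arbitrary choices), so only the learning rates vary.
X = np.array([-1.0, 0.5, 1.0])
T = np.array([0.5, -0.3, 0.8])

def converges(lr1, lr2, steps=150):
    """Gradient descent on loss = mean((w2*tanh(w1*x) - t)^2); does it stay bounded?"""
    w1, w2 = 1.5, -0.8
    for _ in range(steps):
        h = np.tanh(w1 * X)
        err = w2 * h - T
        g2 = 2 * np.mean(err * h)                    # dL/dw2
        g1 = 2 * np.mean(err * w2 * (1 - h**2) * X)  # dL/dw1
        w1 -= lr1 * g1
        w2 -= lr2 * g2
        if abs(w1) > 1e6 or abs(w2) > 1e6:
            return False  # diverged
    return np.mean((w2 * np.tanh(w1 * X) - T) ** 2) < 1.0

# Grid over the two learning rates; the True/False boundary of this map is the
# thing that turns out to be fractal at high resolution in the post.
lrs = np.linspace(0.1, 8.0, 40)
grid = np.array([[converges(a, b) for b in lrs] for a in lrs])
print(f"{grid.mean():.0%} of learning-rate pairs converge")
```

<p>The same scan with a proper optimizer, more parameters, and a vastly finer grid is roughly what produces the fractal images in the post.</p>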
Though I only understand this claim at a hand-wavy level.</p>]]></content:encoded></item><item><title><![CDATA[Adam Marblestone — AI is missing something fundamental about the brain]]></title><description><![CDATA[The brain's secret sauce is its reward functions, not its architecture.]]></description><link>https://www.dwarkesh.com/p/adam-marblestone</link><guid isPermaLink="false">https://www.dwarkesh.com/p/adam-marblestone</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Tue, 30 Dec 2025 17:07:17 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/182960540/966279b9f6eb9089330cafe330ecb0fe.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://twitter.com/AdamMarblestone">Adam Marblestone</a> is CEO of <a href="https://www.convergentresearch.org/">Convergent Research</a>. He&#8217;s had a very interesting past life: he was a research scientist at Google Deepmind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics.</p><p>In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya&#8217;s question: how does the genome encode abstract reward functions? 
Turns out, they&#8217;re all the same question.</p><p>Watch on <a href="https://youtu.be/_9V_Hbe-N1A">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/adam-marblestone-ai-is-missing-something-fundamental/id1516093381?i=1000743205259">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/5RD8lxJh0mGSlpEWWExQNG?si=srfZ9QBgRFqvOtJGX8EGqg">Spotify</a>.</p><div id="youtube2-_9V_Hbe-N1A" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_9V_Hbe-N1A&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_9V_Hbe-N1A?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h3>Sponsors</h3><ul><li><p><a href="https://gemini.google.com">Gemini 3 Pro</a> recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process &#8212; honestly, I couldn&#8217;t have investigated this question without it. Try Gemini 3 Pro today <a href="https://gemini.google.com">gemini.google.com</a></p></li><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> helps you train agents to do economically-valuable, real-world tasks. Labelbox&#8217;s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. 
Learn more at <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li></ul><p>To sponsor a future episode, visit <a href="https://www.dwarkesh.com/advertise">dwarkesh.com/advertise</a>.</p><h2>Further reading</h2><ul><li><p><a href="https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8">Intro to Brain-Like-AGI Safety</a> - Steven Byrnes&#8217;s theory of the learning vs steering subsystem; referenced throughout the episode.</p></li></ul><ul><li><p><em><a href="https://www.abriefhistoryofintelligence.com/book">A Brief History of Intelligence</a></em> - Great book by Max Bennett on connections between neuroscience and AI</p></li><li><p>Adam&#8217;s <a href="https://longitudinal.blog/">blog</a>, and Convergent Research&#8217;s <a href="https://www.essentialtechnology.blog/">blog on essential technologies</a>.</p></li><li><p><a href="http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf">A Tutorial on Energy-Based Learning</a> by Yann LeCun</p></li></ul><ul><li><p><a href="https://arxiv.org/abs/1907.06374">What Does It Mean to Understand a Neural Network?</a> - Kording &amp; Lillicrap</p></li><li><p><a href="https://www.e11.bio/">E11 Bio</a> and their brain connectomics approach</p></li><li><p>Sam Gershman on <a href="https://gershmanlab.com/pubs/GershmanUchida19.pdf">what dopamine is doing in the brain</a></p></li><li><p><a href="https://www.reddit.com/r/reinforcementlearning/comments/9pwy2f/wbe_and_drl_a_middle_way_of_imitation_learning/">Gwern&#8217;s proposal</a> on training models on the brain&#8217;s hidden states</p></li></ul><p>Relevant episodes: <a href="https://www.dwarkesh.com/p/ilya-sutskever-2">Ilya Sutskever</a>, <a href="https://www.dwarkesh.com/p/richard-sutton">Richard Sutton</a>, <a href="https://www.dwarkesh.com/p/andrej-karpathy">Andrej Karpathy</a></p><h2>Timestamps</h2><p>(00:00:00) &#8211; The brain&#8217;s secret sauce is the reward functions, not the architecture</p><p>(00:22:20) &#8211; Amortized inference and what the genome actually 
stores</p><p>(00:42:42) &#8211; Model-based vs model-free RL in the brain</p><p>(00:50:31) &#8211; Is biological hardware a limitation or an advantage?</p><p>(01:03:59) &#8211; Why a map of the human brain is important</p><p>(01:23:28) &#8211; What value will automating math have?</p><p>(01:38:18) &#8211; Architecture of the brain</p><h2>Transcript</h2><h3>00:00:00 &#8211; The brain&#8217;s secret sauce is the reward functions, not the architecture</h3><p><strong>Dwarkesh Patel</strong></p><p>The big million-dollar question that I have, that I&#8217;ve been trying to get the answer to through all these interviews with AI researchers: How does the brain do it? We&#8217;re throwing way more data at these <a href="https://en.wikipedia.org/wiki/Large_language_model">LLMs</a> and they still have a small fraction of the total capabilities that a human does. So what&#8217;s going on?</p><p><strong>Adam Marblestone</strong></p><p>This might be the quadrillion-dollar question or something like that. You can make an argument that this is the most important question in science. I don&#8217;t claim to know the answer. I also don&#8217;t think that the answer will necessarily come even from a lot of smart people thinking about it as much as they are. My overall meta-level take is that we have to empower the field of neuroscience to just make neuroscience a more powerful field technologically and otherwise, to actually be able to crack a question like this.</p><p>Maybe the way that we would think about this now with modern AI, neural nets, deep learning, is that there are certain key components of that. There&#8217;s the architecture. There&#8217;s maybe hyperparameters of how many layers you have or properties of that architecture. There is the learning algorithm itself. How do you train it? 
<a href="https://en.wikipedia.org/wiki/Backpropagation">Backprop</a>, <a href="https://developers.google.com/machine-learning/crash-course/linear-regression/gradient-descent">gradient descent</a>, is it something else? How is it initialized? If we take the learning part of the system, it still may have some initialization of the <a href="https://www.geeksforgeeks.org/deep-learning/the-role-of-weights-and-bias-in-neural-networks/">weights</a>. And then there are also cost functions. What is it being trained to do? What&#8217;s the <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">reward signal</a>? What are the <a href="https://www.ibm.com/think/topics/loss-function">loss functions</a>, <a href="https://en.wikipedia.org/wiki/Supervised_learning">supervision signals</a>?</p><p>My personal hunch within that framework is that the field has neglected the role of these very specific loss functions, very specific cost functions. <a href="https://en.wikipedia.org/wiki/Machine_learning">Machine learning</a> tends to like mathematically simple loss functions. Predict the next <a href="https://blogs.nvidia.com/blog/ai-tokens-explained/">token</a>, <a href="https://en.wikipedia.org/wiki/Cross-entropy#Cross-entropy_loss_function_and_logistic_regression">cross-entropy</a>, these simple computer scientist loss functions. I think evolution may have built a lot of complexity into the loss functions actually, many different loss functions for different areas turned on at different stages of development. A lot of <a href="https://en.wikipedia.org/wiki/Python_(programming_language)">Python</a> code, basically, generating a specific curriculum for what different parts of the brain need to learn.</p><p>Because evolution has seen many times what was successful and unsuccessful, and evolution could encode the knowledge of the learning curriculum. In the machine learning framework, maybe we can come back and we can talk about where do the loss functions of the brain come from? 
Can different loss functions lead to different efficiency of learning?</p><p><strong>Dwarkesh Patel</strong></p><p>People say the <a href="https://en.wikipedia.org/wiki/Cerebral_cortex">cortex</a> has got the universal human learning algorithm, the special sauce that humans have. What&#8217;s up with that?</p><p><strong>Adam Marblestone</strong></p><p>This is a huge question and we don&#8217;t know. I&#8217;ve seen models where the cortex&#8230; <a href="https://en.wikipedia.org/wiki/Neocortex#/media/File:Gray754.png">The cortex typically has this six-layered structure</a>, layers in a slightly different sense than layers of a <a href="https://en.wikipedia.org/wiki/Neural_network_(machine_learning)">neural net</a>. Any one location in the cortex has six physical layers of tissue as you go in layers of the sheet. And those areas then connect to each other and that&#8217;s more like the layers of a network.</p><p>I&#8217;ve seen versions of that where what you&#8217;re trying to explain is just, &#8220;How does it approximate backprop?&#8221; And what is the cost function for that? What is the network being asked to do, if you are trying to say it&#8217;s something like backprop? Is it doing backprop on <a href="https://research.google/pubs/mechanics-of-next-token-prediction-with-transformers/">next token prediction</a> or is it doing backprop on <a href="https://www.ibm.com/think/topics/image-classification">classifying images</a> or what is it doing? And no one knows. But one thought about it, one possibility about it, is that it&#8217;s just this incredibly general prediction engine. So any one area of the cortex is just trying to predict&#8230; Basically can it learn to predict any subset of all the variables it sees from any other subset? 
Omnidirectional <a href="https://cloud.google.com/discover/what-is-ai-inference">inference</a>, or omnidirectional prediction.</p><p>Whereas an LLM is just seeing everything in the <a href="https://www.ibm.com/think/topics/context-window">context window</a> and then it computes a very particular conditional probability which is, &#8220;Given all the last thousands of things, what are the probabilities for the next token.&#8221; But it would be weird for a large language model to say &#8220;the quick brown fox blank blank the lazy dog&#8221; and fill in the middle versus doing the next token, if it&#8217;s doing just forward. It can learn how to do that stuff at this emergent level of the context window and everything, but natively it&#8217;s just predicting the next token.</p><p>What if the cortex is natively made so that any area of cortex can predict any pattern in any subset of its inputs given any other missing subset? That is a little bit more like &#8220;<a href="https://arxiv.org/abs/2502.05244">probabilistic AI</a>&#8221;. A lot of the things I&#8217;m saying, by the way, are extremely similar to what <a href="https://en.wikipedia.org/wiki/Yann_LeCun">Yann LeCun</a> would say. He&#8217;s really interested in these <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf">energy-based models</a> and something like that is like, the joint distribution of all the variables. What is the likelihood or unlikelihood of just any combination of variables?</p><p>If I <a href="https://en.wikipedia.org/wiki/Clamp_(function)">clamp</a> some of them and I say that definitely these variables are in these states, then I can compute, with probabilistic sampling for example&#8212;conditioned on these being set in this state, and these could be any arbitrary subset of variables in the model&#8212;can I predict what any other subset is going to do and sample from any other subset given clamping this subset? 
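A toy sketch of this clamp-a-subset, predict-the-rest setup (every variable name here is invented for illustration; this is not a real model): each training example clamps a random subset of a scene's variables and treats every other variable as a prediction target, with next-token prediction falling out as the special case where the clamped set is always a prefix.

```python
import random

def omnidirectional_example(scene, rng):
    """Clamp a random subset of variables; everything else becomes a target.
    A model trained on draws like this amortizes every conditional query."""
    names = list(scene)
    k = rng.randint(1, len(names) - 1)  # leave at least one variable hidden
    clamped = set(rng.sample(names, k))
    inputs = {n: (scene[n] if n in clamped else None) for n in names}  # None = masked
    targets = {n: scene[n] for n in names if n not in clamped}
    return inputs, targets

def next_token_example(tokens, t):
    """The LLM special case: the clamped set is always the prefix before t."""
    return tokens[:t], tokens[t]

rng = random.Random(0)
scene = {"vision": "spider", "audition": "rustle", "heart_rate": "rising", "flinch": True}
inputs, targets = omnidirectional_example(scene, rng)
ctx, nxt = next_token_example(["the", "quick", "brown", "fox"], 2)
```

Each draw conditions on a different subset, so one network learns every direction of prediction; the next-token pipeline only ever sees one direction.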
And I could choose a totally different subset and sample from that subset. So it&#8217;s omnidirectional inference.</p><p>And so there could be some parts of the cortex, there might be association areas of cortex that predict vision from audition. There might be areas that predict things that the more innate part of the brain is going to do. Because remember, this whole thing is riding on top of a lizard brain and lizard body, if you will. And that thing is a thing that&#8217;s worth predicting too. You&#8217;re not just predicting do I see this or do I see that. Is this muscle about to tense? Am I about to have a reflex where I laugh? Is my heart rate about to go up? Am I about to activate this instinctive behavior?</p><p><strong>Dwarkesh Patel</strong></p><p>Based on my higher-level understanding&#8230; Like I can match &#8220;somebody has told me there&#8217;s a spider on my back&#8221; to this lizard part that would activate if I was literally seeing a spider in front of me. You learn to associate the two so that even just from somebody hearing you say &#8220;There&#8217;s a spider on your back&#8221;&#8230;</p><p><strong>Adam Marblestone</strong></p><p>Well, let&#8217;s come back to this. This is partly having to do with <a href="https://sjbyrnes.com/">Steve Byrnes</a>&#8217; theories, which I&#8217;m recently obsessed with. But on your <a href="https://www.dwarkesh.com/p/ilya-sutskever-2">podcast with Ilya</a>, he said, &#8220;Look, I&#8217;m not aware of any good theory of how evolution encodes high-level desires or intentions.&#8221; I think this is very connected to all of these questions about the loss functions and the cost functions that the brain would use. And it&#8217;s a really profound question, right?</p><p>Let&#8217;s say that I am embarrassed for saying the wrong thing on your podcast because I&#8217;m imagining that Yann LeCun is listening and he says, &#8220;That&#8217;s not my theory. 
You described energy-based models really badly.&#8221; That&#8217;s going to activate in me innate embarrassment and shame, and I&#8217;m going to want to go hide and whatever. That&#8217;s going to activate these innate reflexes. That&#8217;s important because I might otherwise get killed by Yann LeCun&#8217;s marauding army of other&#8230;</p><p><strong>Dwarkesh Patel</strong></p><p>The French AI researchers are coming for you, Adam.</p><p><strong>Adam Marblestone</strong></p><p>So it&#8217;s important that I have that instinctual response. But of course, evolution has never seen Yann LeCun or known about energy-based models or known what an important scientist or a podcast is. Somehow the brain has to encode this desire to not piss off really important people in the tribe or something like this in a very robust way, without knowing in advance all the things that the Learning Subsystem of the brain, the part that is learning cortex and other parts&#8230; The cortex is going to learn this <a href="https://youtu.be/hguIUmMsvA4">world model</a>. It&#8217;s going to include things like Yann LeCun and podcasts. And evolution has to make sure that those neurons, whatever the Yann-LeCun-being-upset-with-me neurons, get properly wired up to the shame response or this part of the reward function. And this is important, right?</p><p>Because if we&#8217;re going to be able to seek status in the tribe or learn from knowledgeable people, as you said, or things like that, exchange knowledge and skills with friends but not with enemies&#8230; We have to learn all this stuff. It has to be able to robustly wire these learned features of the world, learned parts of the world model, up to these innate reward functions, and then actually use that to then learn more. Because next time I&#8217;m not going to try to piss off Yann LeCun if he emails me that I got this wrong. 
We&#8217;re going to do further learning based on that.</p><p>In constructing the reward function, it has to use learned information. But how can evolution, which didn&#8217;t know about Yann LeCun, do that? The basic idea that Steve Byrnes is proposing is that part of the cortex, or other areas like the <a href="https://en.wikipedia.org/wiki/Amygdala">amygdala</a> that learn, what they&#8217;re doing is they&#8217;re modeling the <a href="https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and">Steering Subsystem</a>. The Steering Subsystem is the part with these more innately programmed responses and the innate programming of these series of reward functions, cost functions, bootstrapping functions that exist.</p><p><a href="https://www.simplypsychology.org/amygdala.html">There are parts of the amygdala</a>, for example, that are able to monitor what those parts do and predict what those parts do. How do you find the neurons that are important for social status? Well, you have some innate heuristics of social status, for example, or you have some innate heuristics of friendliness that the Steering Subsystem can use. And the Steering Subsystem actually has its own sensory system, which is crazy. We think of vision as being something that the cortex does. But there&#8217;s also a Steering Subsystem, subcortical visual system called the <a href="https://en.wikipedia.org/wiki/Superior_colliculus">superior colliculus</a> with innate ability to detect faces, for example, or threats.</p><p>So there&#8217;s a visual system that has innate heuristics and the Steering Subsystem has its own responses. There&#8217;ll be part of the amygdala or part of the cortex that is learning to predict those responses. What are the neurons that matter in the cortex for social status or for friendship? They&#8217;re the ones that predict those innate heuristics for friendship. 
You train a predictor in the cortex and you say, &#8220;Which neurons are part of the predictor?&#8221; Those are the ones. Now you&#8217;ve actually managed to wire it up.</p><p><strong>Dwarkesh Patel</strong></p><p>This is fascinating. I feel like I still don&#8217;t understand&#8230; I understand how the cortex could learn how this primitive part of the brain would respond to&#8230; Obviously it has these labels: &#8220;here&#8217;s literally a picture of a spider, and this is bad, be scared of this.&#8221; The cortex learns that this is bad because the innate part tells it that. But then it has to generalize to, &#8220;Okay, the spider&#8217;s on my back. And somebody&#8217;s telling me the spider&#8217;s on your back. That&#8217;s also bad.&#8221;</p><p><strong>Adam Marblestone</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>But it never got supervision on that. So how does it&#8230;?</p><p><strong>Adam Marblestone</strong></p><p>Well, it&#8217;s because the <a href="https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and">Learning Subsystem</a> is a powerful learning algorithm that does have generalization, that is capable of generalization. The Steering Subsystem, these are the innate responses. You&#8217;re going to have some built into your Steering Subsystem, these lower brain areas: <a href="https://en.wikipedia.org/wiki/Hypothalamus">hypothalamus</a>, <a href="https://en.wikipedia.org/wiki/Brainstem">brainstem</a>, et cetera. Again, they have their own primitive sensory systems.</p><p>So there may be an innate response. If I see something that&#8217;s moving fast toward my body that I didn&#8217;t previously see was there and is small and dark and high contrast, that might be an insect skittering onto my body. I am going to flinch. There are these innate responses. 
There&#8217;s going to be some group of neurons, let&#8217;s say, in the hypothalamus, that is the I-am-flinching or I-just-flinched neurons in the hypothalamus.</p><p>When you flinch, first of all, it&#8217;s a negative contribution to the reward function. You didn&#8217;t want that to happen, perhaps. But that&#8217;s a reward function that doesn&#8217;t have any generalization in it. I&#8217;m going to avoid that exact situation of the thing skittering toward me. Maybe I&#8217;m going to avoid some actions that lead to the thing skittering. That&#8217;s a generalization you can get, what Steve calls downstream of the reward function. I&#8217;m going to avoid the situation where the spider was skittering toward me, but you&#8217;re also going to do something else.</p><p>There&#8217;s going to be a part of your amygdala, say, that is saying, &#8220;Okay, a few milliseconds, hundreds of milliseconds or seconds earlier, could I have predicted that flinching response?&#8221; It&#8217;s going to be a group of neurons that is essentially a classifier of, &#8220;Am I about to flinch?&#8221; And I&#8217;m going to have classifiers for that for every important Steering Subsystem variable that evolution needs to take care of. Am I about to flinch? Am I talking to a friend? Should I laugh now? Is the friend high status? Whatever variables the hypothalamus, brainstem, contains&#8230; Am I about to taste salt?</p><p>It&#8217;s going to have all these variables and for each one it&#8217;s going to have a predictor. It&#8217;s going to train that predictor. Now the predictor that it trains, that can have some generalization. The reason it can have some generalization is because it just has a totally different input. Its input data might be things like the word &#8220;spider&#8221;, but the word &#8220;spider&#8221; can activate in all sorts of situations that lead to the word &#8220;spider&#8221; activating in your world model. 
If you have a complex world model with really complex features that inherently gives you some generalization. It&#8217;s not just the thing skittering toward me, it&#8217;s even the word &#8220;spider&#8221; or the concept of &#8220;spider&#8221; is going to cause that to trigger. This predictor can learn that. Whatever spider neurons are in my world model, which could even be a book about spiders or somewhere, a room where there are spiders or whatever that is&#8230;</p><p><strong>Dwarkesh Patel</strong></p><p>The amount of heebie-jeebies that this conversation is eliciting in the audience&#8230;</p><p><strong>Adam Marblestone</strong></p><p>Now I&#8217;m activating your Steering Subsystem, your Steering Subsystem spider hypothalamus subgroup of neurons of skittering insects are activating based on these very abstract concepts in the conversation.</p><p><strong>Dwarkesh Patel</strong></p><p>If you keep going, I&#8217;m going to put in a trigger warning.</p><p><strong>Adam Marblestone</strong></p><p>That&#8217;s because you learned this. The cortex inherently has the ability to generalize because it&#8217;s just predicting based on these very abstract variables and all these integrated information that it has. Whereas the Steering Subsystem only can use whatever the superior colliculus and a few other sensors can spit out.</p><p><strong>Dwarkesh Patel</strong></p><p>By the way, it&#8217;s remarkable that the person who&#8217;s made this connection between different pieces of neuroscience, Steve Byrnes, is a former physicist. For the last few years, he&#8217;s been trying to synthesize&#8212;</p><p><strong>Adam Marblestone</strong></p><p>He&#8217;s an AI safety researcher. He&#8217;s just synthesizing. This comes back to the academic incentives thing. I think that this is a little bit hard to say. What is the exact next experiment? How am I going to publish a paper on this? How am I going to train my grad student to do this? It&#8217;s very speculative. 
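The predictor-as-wiring-target idea from the last few exchanges can be sketched as a toy, with simple co-occurrence counting standing in for whatever learning rule the amygdala actually uses (all feature names and heuristics here are invented):

```python
from collections import defaultdict

def innate_flinch(low_level_percept):
    """Steering Subsystem heuristic: narrow, hard-coded, no generalization."""
    return low_level_percept == "small_dark_fast_motion"

def train_thought_assessor(episodes):
    """Learn which world-model features predict the innate response, by
    co-occurrence counting: score = P(innate response | feature present)."""
    fired, seen = defaultdict(int), defaultdict(int)
    for features, innate_fired in episodes:
        for f in features:
            seen[f] += 1
            if innate_fired:
                fired[f] += 1
    return {f: fired[f] / seen[f] for f in seen}

# Episodes pair rich world-model features with whether the innate circuit fired.
episodes = [
    ({"word_spider", "outdoors"},     innate_flinch("small_dark_fast_motion")),
    ({"word_spider", "reading_book"}, innate_flinch("page_turn")),
    ({"word_picnic", "outdoors"},     innate_flinch("breeze")),
    ({"word_spider", "garden"},       innate_flinch("small_dark_fast_motion")),
]
assessor = train_thought_assessor(episodes)
# "word_spider" now carries predictive weight the innate circuit never had,
# so even hearing the word can trigger the learned alarm.
```

The generalization comes entirely from the predictor's richer input features, not from the innate heuristic, which is the point Adam is making.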
But there&#8217;s a lot in the neuroscience literature and Steve has been able to pull this together. And I think that Steve has an answer to Ilya&#8217;s question essentially, which is, how does the brain ultimately code for these higher-level desires and link them up to the more primitive rewards?</p><p><strong>Dwarkesh Patel</strong></p><p>Very naive question, but why can&#8217;t we achieve this omnidirectional inference by just training the model to not just map from a token to next token, but remove the masks in the training so it maps every token to every token, or come up with more labels between video and audio and text so that it&#8217;s forced to map one to each one?</p><p><strong>Adam Marblestone</strong></p><p>I mean, that may be the way. It&#8217;s not clear to me. Some people think that there&#8217;s a different way that it does probabilistic inference or a different learning algorithm that isn&#8217;t backprop. There might be other ways of learning, energy-based models or other things like that, that you can imagine that is involved in being able to do this and that the brain has that.</p><p>But I think there&#8217;s a version of it where what the brain does is crappy versions of backprop to learn to predict through a few layers and that it&#8217;s kind of like a <a href="https://en.wikipedia.org/wiki/Multimodal_learning">multimodal</a> <a href="https://en.wikipedia.org/wiki/Foundation_model">foundation model</a>. LLMs are maybe just predicting the next token. But <a href="https://www.nvidia.com/en-us/glossary/vision-language-models/">vision models</a> maybe are trained in learning to fill in the blanks or reconstruct different pieces or combinations. But I think that it does it in an extremely flexible way.</p><p>If you train a model to just fill in this blank at the center, okay, that&#8217;s great. But what if you didn&#8217;t train it to fill in this other blank over to the left? Then it doesn&#8217;t know how to do that. 
It&#8217;s not part of its repertoire of predictions that are amortized into the network. Whereas with a really powerful inference system, you could choose at test time, what is the subset of variables it needs to infer and which ones are clamped?</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, two sub-questions. One, it makes you wonder whether the thing that is lacking in artificial neural networks is less about the reward function and more about the <a href="https://en.wikipedia.org/wiki/Autoencoder">encoder</a> or the <a href="https://en.wikipedia.org/wiki/Embedding_(machine_learning)">embedding</a>&#8230; Maybe the issue is that you&#8217;re not representing video and audio and text in the right <a href="https://en.wikipedia.org/wiki/Latent_space">latent</a> abstraction such that they could intermingle and conflict.</p><p>Maybe this is also related to why LLMs seem bad at drawing connections between different ideas. Are the ideas represented at a level of generality at which you could notice different connections?</p><p><strong>Adam Marblestone</strong></p><p>Well, the problem is these questions are all commingled. If we don&#8217;t know if it&#8217;s doing a backprop-like learning, and we don&#8217;t know if it&#8217;s doing energy-based models, and we don&#8217;t know how these areas are even connected in the first place, it&#8217;s very hard to really get to the ground truth of this. But yeah, it&#8217;s possible.</p><p>I think that people have done some work. 
My friend <a href="https://scholar.google.com/citations?user=4ZnsOa8AAAAJ&amp;hl=en">Joel Dapello</a> actually <a href="https://proceedings.neurips.cc/paper_files/paper/2020/file/98b17f068d5d9b7668e19fb8ae470841-Paper.pdf">did something some years ago</a> where he put a model&#8212;I think it was a model of <a href="https://en.wikipedia.org/wiki/Visual_cortex#Primary_visual_cortex_(V1)">V1</a>, specifically how the early visual cortex represents images&#8212;as an input into a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">convnet</a> and that improves some things. There could be differences. The retina is also doing motion detection and certain things are getting filtered out. There may be some preprocessing of the sensory data. There may be some clever combinations of which modalities are predicting which or so on, that lead to better representation. There may be much more clever things than that.</p><p>Some people certainly do think that there are inductive biases built into the architecture that will shape the representations differently or that there are clever things that you can do. <a href="https://astera.org/">Astera</a>, which is the same organization that employs Steve Byrnes, <a href="https://astera.org/neuroscientist-doris-tsao-joins-astera-to-lead-its-new-neuroscience-program/">just launched this neuroscience project based on Doris Tsao&#8217;s work</a>. She has some ideas about how you can build vision systems that basically require less training. They build into the assumptions of the design of the architecture things like objects are bounded by surfaces and surfaces have certain types of shapes and relationships of how they occlude each other and stuff like that. It may be possible to build more assumptions into the network. Evolution may have also put some changes of architecture. 
It&#8217;s just I think that also the cost functions and so on may be a key thing that it does.</p><h3>00:22:20 &#8211; Amortized inference and what the genome actually stores</h3><p><strong>Dwarkesh Patel</strong></p><p>I want to talk about this idea that you just glanced off of, which was <a href="https://web.stanford.edu/~ngoodman/papers/amortized_inference.pdf">amortized inference</a>. Maybe I should try to explain what I think it means, because I think it&#8217;s probably wrong and this will help you correct me.</p><p><strong>Adam Marblestone</strong></p><p>It&#8217;s been a few years for me too.</p><p><strong>Dwarkesh Patel</strong></p><p>Right now, the way the models work is that you have an input, it maps it to an output, and this is amortizing a process, the real process, which we think is what intelligence is. It&#8217;s that you have some prior over how the world could be, what are the causes that make the world the way that it is. And then when you see some observation, you should be like, &#8220;Okay, here&#8217;s all the ways the world could be. This cause explains what&#8217;s happening best.&#8221;</p><p>Now, doing this calculation over every possible cause is computationally intractable. So then you just have to sample like, &#8220;Oh, here&#8217;s a potential cause. Does this explain this observation? No, forget it. Let&#8217;s keep sampling.&#8221; And then eventually you get the cause, then the cause explains the observation, and then this becomes your posterior.</p><p><strong>Adam Marblestone</strong></p><p>That&#8217;s actually pretty good. <a href="https://en.wikipedia.org/wiki/Bayesian_inference">Bayesian inference</a> in general is this very intractable thing. The algorithms that we have for doing that tend to require taking a lot of samples, <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo methods</a>, taking a lot of samples. And taking samples takes time. 
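The contrast being drawn here, sample causes until one fits versus bake the mapping in, can be sketched with a tiny invented generative model (the numbers and cause names are made up for illustration):

```python
import random

# Tiny generative model: prior over causes, likelihood of "wet grass" per cause.
PRIOR = {"rain": 0.3, "sprinkler": 0.2, "dry": 0.5}
LIKELIHOOD = {"rain": 0.9, "sprinkler": 0.8, "dry": 0.05}

def sampled_posterior(obs_is_wet, n, rng):
    """Monte Carlo inference: propose causes from the prior, keep those
    consistent with the observation. Cost grows with the sample count."""
    kept = []
    for _ in range(n):
        cause = rng.choices(list(PRIOR), weights=list(PRIOR.values()))[0]
        p = LIKELIHOOD[cause] if obs_is_wet else 1 - LIKELIHOOD[cause]
        if rng.random() < p:
            kept.append(cause)
    return max(set(kept), key=kept.count)  # most frequently surviving cause

def amortize():
    """Bake the answer in: precompute observation -> most probable cause once,
    so 'inference' becomes a single forward lookup."""
    table = {}
    for obs in (True, False):
        post = {c: PRIOR[c] * (LIKELIHOOD[c] if obs else 1 - LIKELIHOOD[c])
                for c in PRIOR}
        table[obs] = max(post, key=post.get)
    return table

rng = random.Random(0)
amortized = amortize()
# Both routes agree on the best cause; only the cost per query differs.
```

The sampler pays at every query, while the amortized table paid once up front, which is the tradeoff test-time compute versus distillation is about.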
This is like the original <a href="https://en.wikipedia.org/wiki/Boltzmann_machine">Boltzmann machines</a> and stuff. They&#8217;re using techniques like this, and still it&#8217;s used with <a href="https://en.wikipedia.org/wiki/Probabilistic_programming">probabilistic programming</a>, other types of methods often. The Bayesian inference problem, which is basically the problem of perception, given some model of the world and given some data, how should I update my&#8230; What are the missing variables in my internal model?</p><p><strong>Dwarkesh Patel</strong></p><p>And I guess the idea is that neural networks are hopefully&#8230; Obviously, mechanistically, the neural network is not starting with, &#8220;Here is my model of the world, and I&#8217;m going to try to explain this data.&#8221; But the hope is that instead of starting with, &#8220;Hey, does this cause explain this observation? No. Did this cause explain this observation? Yes.&#8221; What you do is just like observation&#8230;</p><p><strong>Adam Marblestone</strong></p><p>What&#8217;s the cause that the neural net thinks is the best one?</p><p><strong>Dwarkesh Patel</strong></p><p>Observation to cause. So the <a href="https://en.wikipedia.org/wiki/Feedforward_neural_network">feedforward</a> goes observation to cause to then the output that&#8230;</p><p><strong>Adam Marblestone</strong></p><p>You don&#8217;t have to evaluate all these energy values or whatever and sample around to make them higher and lower. You just say, approximately that process would result in this being the top one or something like that.</p><p><strong>Dwarkesh Patel</strong></p><p>Exactly. One way to think about it might be that test-time compute, inference-time compute is actually doing this sampling again. You literally read its <a href="https://research.google/blog/language-models-perform-reasoning-via-chain-of-thought/">chain of thought</a>. 
It&#8217;s actually doing this toy example we&#8217;re talking about where it&#8217;s like, &#8220;Oh, can I solve this problem by doing X? Nah, I need a different approach.&#8221; This raises the question. I mean, over time it is the case that the capabilities which required inference-time compute to elicit, get distilled into the model. So you&#8217;re amortizing the thing which previously you needed to do these rollouts, these Monte Carlo rollouts, to figure out.</p><p>In general, maybe there&#8217;s this principle that digital minds which can be copied, have different tradeoffs which are relevant, from biological minds which cannot. So in general, it should make sense to amortize more things because you can literally copy the amortization, or copy the things that you have sort of built in.</p><p>This is a tangential question where it might be interesting to speculate about. In the future, as these things become more intelligent and the way we train them becomes more economically rational, what will make sense to amortize into these minds, which evolution did not think was worth amortizing into biological minds? You have to retrain every time.</p><p><strong>Adam Marblestone</strong></p><p>First of all, I think the probabilistic AI people would be like, of course you need test-time compute, because this inference problem is really hard and the only ways we know how to do it involve lots of test-time compute. Otherwise it&#8217;s just this crappy approximation that&#8217;s never going to&#8230; You have to do infinite data or something to make this. I think some of the probabilistic people will be like, &#8220;No, it&#8217;s inherently probabilistic and amortizing it in this way just doesn&#8217;t make sense.&#8221; They might then also point to the brain and say, &#8220;Okay, well the brain, the neurons are stochastic and they&#8217;re sampling and they&#8217;re doing things. 
So maybe the brain actually is doing more like the non-amortized inference, the real inference.&#8221;</p><p>But it&#8217;s also strange how perception can work in just milliseconds or whatever. It doesn&#8217;t seem like it uses that much sampling. So it&#8217;s also clearly doing some baking things into approximate forward passes or something like that to do this. In the future, I don&#8217;t know. Is it already a trend to some degree that things that people were having to use test-time compute for, are getting used to train back the base model? Now it can do it in one pass.</p><p>Maybe evolution did or didn&#8217;t do that. I think evolution still has to pass everything through the genome to build the network and the environment in which humans are living is very dynamic. So maybe, if we believe this is true, there&#8217;s a Learning Subsystem per Steve Byrnes, and a Steering Subsystem, that the Learning Subsystem doesn&#8217;t have a lot of pre-initialization or pretraining. It has a certain architecture, but then within lifetime it learns. Then evolution didn&#8217;t actually amortize that much into that network. It amortized it instead into a set of innate behaviors in a set of these bootstrapping cost functions, or ways of building up very particular reward signals.</p><p><strong>Dwarkesh Patel</strong></p><p>This framework helps explain this mystery that people have pointed out and I&#8217;ve asked a few guests about, which is that if you want to analogize evolution to pretraining, well how do you explain the fact that so little information is conveyed through the genome? So 3 gigabytes is the size of the total <a href="https://en.wikipedia.org/wiki/Human_genome">human genome</a>. 
Obviously a small fraction of that is actually relevant to coding the brain.</p><p>Previously people made this analogy, that actually evolution has found the hyperparameters of the model, the numbers which tell you how many layers there should be, the architecture, basically, how things should be wired together. But if a big part of the story is that increased sample efficiency aids learning, generally makes systems more performant, is the reward function, is the loss function&#8212;and if evolution found those loss functions that aid learning&#8212;then it actually makes sense how you can build an intelligence with so little information. Because the reward function, in Python the reward function is literally a line. So you just have a thousand lines like this, and that doesn&#8217;t take up that much space.</p><p><strong>Adam Marblestone</strong></p><p>Yes. It also gets to do this generalization thing with the thing I was describing where we were talking about the spider, where it learns just the word &#8220;spider&#8221; which triggers the spider reflex or whatever. It gets to exploit that too. It gets to build a reward function that actually has a bunch of generalization in it just by specifying these innate spider stuff and the <a href="https://www.lesswrong.com/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation">Thought Assessors</a>, as Steve calls them, that do the learning.</p><p>That&#8217;s potentially a really compact solution to building up these more complex reward functions too, that you need. It doesn&#8217;t have to anticipate everything about the future of the reward function. It just has to anticipate what variables are relevant and what are heuristics for finding what those variables are. And then it has to have a very compact specification for the learning algorithm and basic architecture of the Learning Subsystem. 
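The "reward function is literally a line of Python" point can be made concrete with a toy table (these rules are invented stand-ins, not anything from biology):

```python
# Each innate reward term is about one line of source; a thousand such rules
# fit in kilobytes, nothing like the gigabytes a pretrained weight blob needs.
REWARD_RULES = {
    "skittering_toward_me":   lambda s: -1.0 if s.get("skittering") else 0.0,
    "salt_when_sodium_low":   lambda s: 1.0 if s.get("salt") and s.get("sodium_low") else 0.0,
    "high_status_smile":      lambda s: 0.5 if s.get("status_smile") else 0.0,
}

def total_reward(state):
    """Sum the innate one-line reward terms active in the current state."""
    return sum(rule(state) for rule in REWARD_RULES.values())

state = {"skittering": True, "salt": True, "sodium_low": True}
```

The compactness, not the sophistication, is the point: a genome-sized budget can afford a very long list of rules this cheap.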
And then it has to specify all this Python code of all the stuff about the spiders and all the stuff about friends, and all the stuff about your mother, and all the stuff about mating and social groups and joint eye contact. It has to specify all that stuff.</p><p>So is this really true? I think that there is some evidence for it. <a href="https://www.broadinstitute.org/bios/fei-chen">Fei Chen</a> and <a href="https://www.broadinstitute.org/bios/evan-macosko">Evan Macosko</a> and various other researchers have been doing these <a href="https://www.humancellatlas.org/learn-more/about-the-human-cell-atlas/#event-launch-of-the-human-cell-atlas">single-cell atlases</a>. One of the things that scaling up neuroscience technology&#8212;again, this is one of my obsessions&#8212;has done through the <a href="https://en.wikipedia.org/wiki/BRAIN_Initiative">BRAIN Initiative</a>, a big neuroscience funding program, is they&#8217;ve basically gone through different areas, especially of the mouse brain, and mapped where the different cell types are? How many different types of cells are there in different areas of cortex? Are they the same across different areas? Then you look at these subcortical regions, which are more like the Steering Subsystem or reward-function-generating regions. How many different types of cells do they have? And which neuron types do they have?</p><p>We don&#8217;t know how they&#8217;re all connected and exactly what they do or what the <a href="https://en.wikipedia.org/wiki/Neural_circuit">circuits</a> are or what they mean, but you can just quantify how many different kinds of cells there are with sequencing the <a href="https://en.wikipedia.org/wiki/RNA">RNA</a>. And there are a lot more weird and diverse and bespoke cell types in the Steering Subsystem, basically, than there are in the Learning Subsystem. Like the cortical cell types, it seems like there&#8217;s enough to build a learning algorithm up there and specify some hyperparameters. 
And in this Steering Subsystem, there&#8217;s like a gazillion, thousands of really weird cells, which might be like the one for the spider flinch reflex and the one for I&#8217;m-about-to-taste-salt.</p><p><strong>Dwarkesh Patel</strong></p><p>Why would each reward function need a different cell type?</p><p><strong>Adam Marblestone</strong></p><p>Well, this is where you get innately wired circuits. In the learning algorithm part, in the Learning Subsystem, you specify the initial architecture, you specify a learning algorithm. All the juice is happening through plasticity of the <a href="https://en.wikipedia.org/wiki/Synapse">synapses</a>, changes of the synapses within that big network. But it&#8217;s a relatively repeating architecture, how it&#8217;s initialized. It&#8217;s just like how the amount of Python code needed to make an eight-layer <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformer</a> is not that different from one that makes a three-layer transformer. You&#8217;re just replicating.</p><p>Whereas all this Python code for the reward function, if superior colliculus sees something that&#8217;s skittering and you&#8217;re feeling goosebumps on your skin or whatever, then trigger spider reflex, that&#8217;s just a bunch of bespoke, species-specific, situation-specific crap. The cortex doesn&#8217;t know about spiders, it just knows about layers.</p><p><strong>Dwarkesh Patel</strong></p><p>But you&#8217;re saying that the only way to write this reward function is to have a special cell type?</p><p><strong>Adam Marblestone</strong></p><p>Yeah, well, I think so. I think you either have to have special cell types or you have to somehow otherwise get special wiring rules that evolution can say this neuron needs to wire to this neuron, without any learning. 
And the way that that is most likely to happen, I think, is that those cells express different receptors and proteins that say, &#8220;Okay, when this one comes in contact with this one, let&#8217;s form a synapse.&#8221; So it&#8217;s genetic wiring, and those need cell types to do it.</p><p><strong>Dwarkesh Patel</strong></p><p>I&#8217;m sure this would make a lot more sense if I knew 101 neuroscience, but it seems like there&#8217;s still a lot of complexity, or generality rather, in the Steering Subsystem. So if the <a href="https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and">Steering Subsystem has its own visual system that&#8217;s separate from the visual cortex</a>, different features still need to plug into that vision system. So the spider thing needs to plug into it and also the love thing needs to plug into it, et cetera, et cetera. So it seems complicated.</p><p><strong>Adam Marblestone</strong></p><p>It&#8217;s still complicated. That&#8217;s all the more reason why a lot of the genomic real estate on the genome, and in terms of these different cell types and so on, would go into wiring up the Steering Subsystem, pre-wiring it.</p><p><strong>Dwarkesh Patel</strong></p><p>Can we tell how much of the genome is clearly working? So I guess you could tell how many are relevant to producing the RNA that manifest or the <a href="https://en.wikipedia.org/wiki/Epigenetics">epigenetics</a> that manifest in different cell types in the brain. Right?</p><p><strong>Adam Marblestone</strong></p><p>Yeah. This is what the cell types help you get at. I don&#8217;t think it&#8217;s exactly like, &#8220;Oh, this percent of the genome is doing this&#8221;, but you could say, &#8220;Okay, in all these Steering Subsystem subtypes, how many different genes are involved in specifying which is which and how they wire? 
And how much genomic real estate do those genes take up versus the ones that specify visual cortex versus auditory cortex? You&#8217;re just reusing the same genes to do the same thing twice. Whereas the spider reflex hooking up&#8230; Yes, you&#8217;re right. They have to build a vision system and they have to build some auditory systems and touch systems and navigation-type systems.</p><p>Even feeding into the <a href="https://en.wikipedia.org/wiki/Hippocampus">hippocampus</a> and stuff like that, there are <a href="https://en.wikipedia.org/wiki/Head_direction_cell">head direction cells</a>. Even the fly brain <a href="https://www.rockefeller.edu/news/35380-how-fruit-flies-control-their-brains-steering-wheel/">has innate circuits that figure out its orientation</a> and help it navigate in the world. It uses vision, figures out its optical flow of how it&#8217;s flying and how its flight is related to the wind direction. It has all this innate stuff that I think in the mammal brain we would lump into the Steering Subsystem. There&#8217;s a lot of work. So all the genes that basically go into specifying all the things a fly has to do, we&#8217;re going to have stuff like that too, just all in the Steering Subsystem.</p><p><strong>Dwarkesh Patel</strong></p><p>But do we have some estimate of like, &#8220;Here&#8217;s how many <a href="https://en.wikipedia.org/wiki/Nucleotide">nucleotides</a>, here&#8217;s how many <a href="https://www.genome.gov/genetics-glossary/Megabase-Mb">megabases</a> it takes to&#8212;&#8221;</p><p><strong>Adam Marblestone</strong></p><p>I don&#8217;t know. I mean, I think you might be able to talk to biologists about this. I mean, we have a lot in common with <a href="https://en.wikipedia.org/wiki/Yeast">yeast</a> from a genes perspective. Yeast is still used as a model for some amount of drug development and stuff like that in biology.
And so much of the genome is just going towards you having a cell at all, it can recycle waste, it can get energy, it can replicate.</p><p>And then what do we have in common with a mouse? So we do know at some level that the differences between us and a chimpanzee or something&#8212;and that includes the social instincts and the more advanced differences in cortex and so on&#8212;it&#8217;s a tiny number of genes that go into this additional amount of making the eight-layer transformer instead of the six-layer transformer or tweaking that reward function.</p><p><strong>Dwarkesh Patel</strong></p><p>This would help explain why the hominid brain exploded in size so fast. Presumably, tell me if this is correct, but under this story, social learning or some other thing increased the ability to learn from the environment. It increased our sample efficiency. Instead of having to go and kill the boar yourself and figure out how to do that, you can just be like, &#8220;The elder told me this is how you make a spear.&#8221; Now it increases the incentive to have a bigger cortex, which can learn these things.</p><p><strong>Adam Marblestone</strong></p><p>Yes and that can be done with a relatively few genes, because it&#8217;s really replicating what the mouse already has, making more of it. It&#8217;s maybe not exactly the same and there may be tweaks, from a genome perspective, you don&#8217;t have to reinvent all this stuff.</p><p><strong>Dwarkesh Patel</strong></p><p>So then how far back in the history of the evolution of the brain does the cortex go back? Is the idea that the cortex has always figured out this omnidirectional inference thing, that&#8217;s been a solved problem for a long time? 
Then the big unlock with primates is that we got the reward function, which increased the returns to having omnidirectional inference?</p><p><strong>Adam Marblestone</strong></p><p>It&#8217;s a good question.</p><p><strong>Dwarkesh Patel</strong></p><p>Or is the omnidirectional inference also something that took a while to unlock?</p><p><strong>Adam Marblestone</strong></p><p>I&#8217;m not sure that there&#8217;s agreement about that. I think there might be specific questions about language. Are there tweaks, whether that&#8217;s in auditory regions, memory regions, or some combination of auditory and memory regions? There may also be macro-wiring where you need to wire auditory regions into memory regions or something like that, and into some of these social instincts to get language, for example, to happen. But that might also be a small number of gene changes to be able to say, &#8220;Oh, I just need from my <a href="https://en.wikipedia.org/wiki/Temporal_lobe">temporal lobe</a> over here, going over to the <a href="https://en.wikipedia.org/wiki/Auditory_cortex">auditory cortex</a>, something.&#8221;</p><p>There is some evidence for <a href="https://en.wikipedia.org/wiki/Broca%27s_area">Broca&#8217;s area</a> and <a href="https://en.wikipedia.org/wiki/Wernicke%27s_area">Wernicke&#8217;s area</a>. They&#8217;re connected with the <a href="https://en.wikipedia.org/wiki/Hippocampus">hippocampus</a>, the <a href="https://en.wikipedia.org/wiki/Prefrontal_cortex">prefrontal cortex</a>, and so on. So there&#8217;s some small number of genes maybe for enabling humans to really properly do language. That could be a big one. But is it that something changed about the cortex and it became possible to do these things? Or is it that that potential was already there, but there wasn&#8217;t the incentive to expand that capability and then use it, wire it to these social instincts and use it more? I would lean somewhat toward the latter.
I think a mouse has a lot of similarity in terms of cortex to a human.</p><p><strong>Dwarkesh Patel</strong></p><p>Although there&#8217;s <a href="https://www.vanderbilt.edu/psychological_sciences/bio/suzana-herculano">Suzana Herculano-Houzel</a>&#8217;s work on how <a href="https://www.pnas.org/doi/10.1073/pnas.0611396104">the number of neurons scales better with weight with primate brains</a> than it does with rodent brains. So does that suggest that there actually was some improvement in the scalability of the cortex?</p><p><strong>Adam Marblestone</strong></p><p>Maybe, maybe. I&#8217;m not super deep on this. There may have been changes in architecture, changes in the folding, changes in neuron properties and stuff that somehow slightly tweak this. But there&#8217;s still a scaling either way.</p><p><strong>Dwarkesh Patel</strong></p><p>That&#8217;s right.</p><p><strong>Adam Marblestone</strong></p><p>So I&#8217;m not saying there isn&#8217;t something special about humans in the architecture of the Learning Subsystem at all. But yeah I think it&#8217;s pretty widely thought that this is expanded. But then the question is, &#8220;Okay, well, how does that fit in also with the Steering Subsystem changes and the instincts that make use of this and allow you to bootstrap using this effectively?&#8221;</p><p>But just to say a few other things, even the fly brain has some amount, even very far back&#8230; I mean, I think you&#8217;ve read this great book, <em><a href="https://amzn.to/3YeZGkx">A Brief History of Intelligence</a></em>, right? I think this is a really good book. Lots of AI researchers seem to think this is a really good book.</p><p>You have some amount of learning going back all the way to anything that has a brain. Basically you have something like primitive <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">reinforcement learning</a>, going back at least to vertebrates. Imagine a zebrafish. Then you have these other branches.
Birds may have reinvented something cortex-like. It doesn&#8217;t have the six layers, but they have something a little bit cortex-like. So some of those things after reptiles, in some sense birds and mammals both made a somewhat cortex-like, but differently organized thing.</p><p>But even a fly brain has associative learning centers that actually do things that maybe look a little bit like this <a href="https://www.lesswrong.com/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation">Thought Assessor concept</a> from Byrnes, where there&#8217;s a specific dopamine signal to train specific subgroups of neurons in the fly mushroom body to associate different sensory information with, &#8220;Am I going to get food now?&#8221; or &#8220;Am I going to get hurt now?&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>Brief tangent. I remember reading in one blog post that <a href="https://www.beren.io/aboutme/">Beren Millidge</a> wrote that the <a href="https://www.beren.io/2022-08-06-The-scale-of-the-brain-vs-machine-learning/">parts of the cortex which are associated with audio and vision have scaled disproportionately</a> between other primates and humans, whereas the parts associated, say, with odor have not. And I remember him saying something like that this is explained by that kind of data having worse scaling law properties. Maybe he meant this, but I think another interpretation of actually what&#8217;s happening there is that these social reward functions that are built into the Steering Subsystem needed to make use more of being able to see your elders and see what the visual cues are and hear what they&#8217;re saying. And in order to make sense of these cues which guide learning, you needed to activate the vision and audio more than odor.</p><p><strong>Adam Marblestone</strong></p><p>I mean, there&#8217;s all this stuff. I feel like it&#8217;s come up in your shows before, actually. 
But like even the design of the human eye where you have the pupil and the white and everything, we are designed to be able to establish relationships based on joint eye contact. Maybe this came up in the <a href="https://www.dwarkesh.com/p/richard-sutton">Sutton episode</a>. I can&#8217;t remember. But yeah, we have to bootstrap to the point where we can detect eye contact and where we can communicate by language. That&#8217;s like what the first couple years of life are trying to do.</p><h3>00:42:42 &#8211; Model-based vs model-free RL in the brain</h3><p><strong>Dwarkesh Patel</strong></p><p>Okay, I want to ask you about <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">RL</a>. So currently, the way these LLMs are trained, if they solve the unit test or solve a math problem, that whole trajectory, every token in that trajectory is upweighted. What&#8217;s going on with humans? Are there different types of model-based versus model-free that are happening in different parts of the brain?</p><p><strong>Adam Marblestone</strong></p><p>Yeah, I mean, this is another one of these things. Again, all my answers to these questions, any specific thing I say, it&#8217;s all just saying that directionally we can explore around this. I find this interesting, maybe I feel like the literature points in these directions in some very broad way. What I actually want to do is go and map the entire mouse brain and figure this out comprehensively and make neuroscience a ground-truth science. So I don&#8217;t know, basically.</p><p>But first of all, I think with <a href="https://www.dwarkesh.com/p/ilya-sutskever-2">Ilya on the podcast</a>, he was like, &#8220;It&#8217;s weird that you don&#8217;t use value functions, right?&#8221; You use the dumbest form of RL basically. 
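What "the dumbest form of RL" means here can be sketched in a few lines of Python. This is a deliberate caricature of outcome-based RL for LLMs, not any lab's actual training code: the whole trajectory gets one scalar outcome, and every token's gradient is weighted by that same scalar, with no value function assigning credit to individual steps.

```python
# Caricature of naive outcome-based RL: one scalar reward per trajectory,
# smeared uniformly over every token in it. No value function, no
# per-step credit assignment.

def naive_trajectory_weights(tokens, reward):
    """Per-token weights applied to the policy-gradient update."""
    return [reward] * len(tokens)  # every token upweighted identically

trajectory = ["def", "add", "(", "a", ",", "b", ")", ":", " return", " a+b"]
passed_unit_test = True
weights = naive_trajectory_weights(trajectory, 1.0 if passed_unit_test else 0.0)
print(weights)  # ten identical 1.0 weights: the whole trajectory is upweighted
```

Even tokens that contributed nothing to passing the unit test get the same upweighting, which is exactly the credit-assignment problem a value function would address.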
Of course these people are incredibly smart and they&#8217;re optimizing for how to do it on <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPUs</a> and it&#8217;s really incredible what they&#8217;re achieving. But conceptually it&#8217;s a really dumb form of RL, even compared to what was being done 10 years ago. Even the <a href="https://deepmind.google/blog/agent57-outperforming-the-human-atari-benchmark/">Atari game-playing stuff</a> was using <a href="https://en.wikipedia.org/wiki/Q-learning">Q-learning</a>, which is basically a kind of <a href="https://en.wikipedia.org/wiki/Temporal_difference_learning">temporal difference learning</a>. Temporal difference learning basically means you have some kind of a value function: the action I choose now doesn&#8217;t just tell me literally what happens immediately after this. It tells me what the long-run consequence of that is for my expected total reward or something like that.</p><p>So you would have value functions like&#8230; The fact that we don&#8217;t have value functions at all in the LLMs is crazy. I think because Ilya said it, I can say it. I know one one-hundredth of what he does about AI, but it&#8217;s kind of crazy that this is working.</p><p>But in terms of the brain, I think there are some parts of the brain that are thought to do something that&#8217;s very much like <a href="https://en.wikipedia.org/wiki/Model-free_(reinforcement_learning)">model-free RL</a>: parts of the <a href="https://en.wikipedia.org/wiki/Striatum">striatum</a> and <a href="https://en.wikipedia.org/wiki/Basal_ganglia">basal ganglia</a>. It is thought that they have a certain finite, relatively small action space. The types of actions they could take, first of all, might be like, &#8220;Tell the brainstem and <a href="https://en.wikipedia.org/wiki/Spinal_cord">spinal cord</a> to do this motor action?
Yes or no.&#8221; Or it might be more complicated cognitive-type actions like, &#8220;Tell the <a href="https://en.wikipedia.org/wiki/Thalamus">thalamus</a> to allow this part of the cortex to talk to this other part,&#8221; or &#8220;Release the memory that&#8217;s in the hippocampus and start a new one or something.&#8221; But there&#8217;s some finite set of actions that come out of the basal ganglia, and that it&#8217;s just a very simple RL.</p><p>So there are probably parts of other brains and our brain that are just doing very <a href="https://youtu.be/ZLhjo3Jz2_Q">simple naive-type RL algorithms</a>. Layering one thing on top of that is that some of the major work in neuroscience, like <a href="https://en.wikipedia.org/wiki/Peter_Dayan">Peter Dayan&#8217;s</a> work, and a bunch of work that is part of why I think <a href="https://en.wikipedia.org/wiki/Google_DeepMind">DeepMind</a> did the temporal difference learning stuff in the first place. They were very interested in neuroscience. There&#8217;s a lot of neuroscience evidence that the dopamine is giving this reward prediction error signal, rather than just reward, &#8220;yes or no, a gazillion time steps in the future.&#8221; It&#8217;s a prediction error and that&#8217;s consistent with learning these value functions.</p><p>So there&#8217;s that and then there&#8217;s maybe higher-order stuff. We have the cortex making this world model. Well, one of the things the cortex world model can contain is a model of when you do and don&#8217;t get rewards. Again, it&#8217;s predicting what the Steering Subsystem will do. It could be predicting what the basal ganglia will do. You have a model in your cortex that has more generalization and more concepts and all this stuff that says, &#8220;Okay, these types of plans, these types of actions will lead in these types of circumstances to reward.&#8221; So I have a model of my reward.</p><p>Some people also think that you can go the other way. 
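The temporal-difference picture described here can be sketched in a few lines of Python. This is a toy tabular example, not a model of actual brain circuitry: the learning signal `delta` is the reward prediction error, the quantity dopamine is proposed to carry, and it trains a value function rather than just reporting raw reward.

```python
# Toy TD(0): learn state values from reward *prediction errors*.
# delta plays the role proposed for dopamine: it shrinks toward zero
# once the reward is fully predicted by the preceding cue.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta
    return delta

V = {"cue": 0.0, "food": 0.0, "end": 0.0}
for _ in range(500):  # cue -> food (no reward) -> end (reward 1 at food)
    td0_update(V, "cue", 0.0, "food")
    delta_at_food = td0_update(V, "food", 1.0, "end")

print(round(V["food"], 2), round(V["cue"], 2))  # ~1.0 and ~0.9: value propagates back to the cue
print(round(delta_at_food, 4))                  # ~0.0: the reward is now fully predicted
```

The point is that the cue acquires value before any reward arrives at it, and the prediction-error signal vanishes once learning is complete, matching the classic dopamine-recording results this discussion refers to.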
So this is part of the inference picture. There&#8217;s this idea of RL as inference. You could say, &#8220;Well, conditional on my having a high reward, sample a plan that I would have had to get there.&#8221; That&#8217;s inference of the plan part from the reward part. I&#8217;m clamping the reward as high and inferring the plan, sampling from plans that could lead to that. So if you have this very general cortical thing, it can just do that. If you have this very general model-based system and the model, among other things, includes plans and rewards, then you just get it for free, basically.</p><p><strong>Dwarkesh Patel</strong></p><p>So in neural network parlance, there&#8217;s a value head associated with the omnidirectional inference that&#8217;s happening in the&#8212;</p><p><strong>Adam Marblestone</strong></p><p>Yes, or there&#8217;s a value input.</p><p><strong>Dwarkesh Patel</strong></p><p>Oh, okay. Interesting.</p><p><strong>Adam Marblestone</strong></p><p>Yeah and it can predict. One of the almost sensory variables it can predict is what rewards it&#8217;s going to get.</p><p><strong>Dwarkesh Patel</strong></p><p>By the way, speaking about amortizing things, obviously value is like amortized rollouts of looking up reward.</p><p><strong>Adam Marblestone</strong></p><p>Yeah, something like that. It&#8217;s like a statistical average or prediction of it.</p><p><strong>Dwarkesh Patel</strong></p><p>Tangential thought. <a href="https://www.dwarkesh.com/p/joseph-henrich">Joe Henrich</a> and others have this idea for the way human societies have learned to do things like, how do you figure out that this kind of bean, which actually just almost always poisons you, is edible if you do this ten-step incredibly complicated process, any one of which, if you fail at it, the bean will be poisonous? How do you figure out how to hunt this seal in this particular way, with this particular weapon, at this particular time of the year, et cetera?
There&#8217;s no way but just like trying shit over generations. And it strikes me this is actually very much like model-free RL happening at a civilizational level. No, not exactly.</p><p><strong>Adam Marblestone</strong></p><p>Evolution is the simplest algorithm in some sense. If we believe that all of this can come from evolution, the outer loop can be extremely not foresighted.</p><p><strong>Dwarkesh Patel</strong></p><p>Right, that&#8217;s interesting. Just hierarchies of&#8230; Evolution: model-free&#8230;</p><p><strong>Adam Marblestone</strong></p><p>So what does that tell you? Maybe the simple algorithms can just get you anything if you do it enough.</p><p><strong>Dwarkesh Patel</strong></p><p>Right.</p><p><strong>Adam Marblestone</strong></p><p>Yeah, I don&#8217;t know.</p><p><strong>Dwarkesh Patel</strong></p><p>So, evolution: model-free. Basal ganglia: model-free. Cortex: model-based. Culture: model-free potentially. I mean you pay attention to your elders or whatever.</p><p><strong>Adam Marblestone</strong></p><p>Maybe there&#8217;s group selection or whatever of these things that is more model-free. But now I think culture, well, it stores some of the model.</p><h3>00:50:31 &#8211; Is biological hardware a limitation or an advantage?</h3><p><strong>Dwarkesh Patel</strong></p><p>Stepping back, is it a disadvantage or an advantage for humans that we get to use biological hardware, in comparison to computers as they exist now? What I mean by this question is, if there&#8217;s &#8220;the algorithm&#8221;, would the algorithm just qualitatively perform much worse or much better if inscribed in the hardware of today? The reason to think it might&#8230; Here&#8217;s what I mean. Obviously the brain has had to make a bunch of tradeoffs which are not relevant to computing hardware. It has to be much more energetically efficient. Maybe as a result it has to run on slower speeds so that there can be a smaller voltage gap.
So the brain runs at 200 hertz and it has to run on 20 watts. On the other hand, with robotics we&#8217;ve clearly experienced that fingers are way more nimble than we can make motors so far. So maybe there&#8217;s something in the brain that is the equivalent of cognitive dexterity, which is maybe due to the fact that we can do unstructured <a href="https://blogs.nvidia.com/blog/sparsity-ai-inference/">sparsity</a>. We can co-locate the memory and the compute.</p><p><strong>Adam Marblestone</strong></p><p>Yes.</p><p><strong>Dwarkesh Patel</strong></p><p>Where does this all net out? Are you like, &#8220;Fuck, we would be so much smarter if we didn&#8217;t have to deal with these brains.&#8221; Or are you like&#8212;</p><p><strong>Adam Marblestone</strong></p><p>I think in the end we will get the best of both worlds somehow. I think an obvious downside of the brain is it cannot be copied. You don&#8217;t have external read-write access to every neuron and synapse, whereas with an artificial network you do. I can just edit something in the weight matrix in Python or whatever and load that up and copy that. In principle. So the fact that it can&#8217;t be copied and random-accessed is very annoying. But otherwise maybe it has a lot of advantages. It also tells you that you want to somehow do the co-design of the algorithm. It maybe even doesn&#8217;t change it that much from all of what we discussed, but you want to somehow do this co-design.</p><p>So yeah, how do you do it with really slow low-voltage switches? That&#8217;s going to be really important for energy consumption. Co-locating memory and compute. I think that hardware companies will probably just try to co-locate memory and compute. They will try to use lower voltages, allow some stochastic stuff.</p><p>There are some people that think that all this probabilistic stuff that we were talking about&#8212;&#8220;Oh, it&#8217;s actually energy-based models, and so on&#8221;&#8212;it is doing lots of sampling.
It&#8217;s not just amortizing everything. The neurons are also very natural for that because they&#8217;re naturally stochastic. So you don&#8217;t have to do a random number generator in a bunch of Python code basically to generate a sample. The neuron just generates samples and it can tune what the different probabilities are and learn those tunings. So it could be that it&#8217;s very co-designed with some kind of inference method or something.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;d be hilarious&#8230; I mean the message I&#8217;m taking from this interview is that all these people that folks make fun of on Twitter, <a href="https://x.com/ylecun?lang=en">Yann LeCun</a> and <a href="https://x.com/BasedBeff">Beff Jezos</a> and whatever, I don&#8217;t know, maybe they got it right.</p><p><strong>Adam Marblestone</strong></p><p>That is actually one read of it. Granted, I haven&#8217;t really worked on AI at all since LLMs took off, so I&#8217;m just out of the loop. But I&#8217;m surprised and I think it&#8217;s amazing how the scaling is working and everything. But yeah, I think Yann LeCun and Beff Jezos are kind of onto something about the probabilistic models, or at least possibly. In fact that&#8217;s what all the neuroscientists and all the AI people thought until 2021 or something.</p><p><strong>Dwarkesh Patel</strong></p><p>Right. So there&#8217;s a bunch of cellular stuff happening in the brain that is not just about neuron-to-neuron synaptic connections. How much of that is functionally doing more work than the synapses themselves are doing, versus it&#8217;s just a bunch of kludge that you have to do in order to make the synaptic thing work? So with a digital mind, you can nudge the synapse, sorry, the parameter, extremely easily. But with a cell to modulate a synapse according to the gradient signal, it just takes all this crazy machinery.
So is it actually doing more, or is it just doing something that takes extremely little code to do?</p><p><strong>Adam Marblestone</strong></p><p>I don&#8217;t know, but I&#8217;m not a believer in the radical, &#8220;Oh, actually memory is not synapses mostly, or learning is mostly genetic changes&#8221; or something like that. You put it really well; I think it would just make a lot of sense for it to be more like the second thing you said. Let&#8217;s say you want to do <a href="https://arxiv.org/abs/1602.07868">weight normalization</a> across all the weights coming out of your neuron or into your neuron. Well, you probably have to somehow tell the nucleus of the cell about this and then have that send everything back out to the synapses or something. So there&#8217;s going to be a lot of cellular changes. Or let&#8217;s say that you just had a lot of plasticity and you&#8217;re part of this memory. Now that&#8217;s gotten consolidated into the cortex or whatever. Now we want to reuse you as a new one that can learn again.</p><p>There&#8217;s going to be a ton of cellular changes, so there&#8217;s going to be tons of stuff happening in the cell. But algorithmically, it&#8217;s not really adding something beyond these algorithms. It&#8217;s just implementing something that in a digital computer is very easy for us to go and just find the weights and change them. In a cell, it just literally has to do all this with molecular machines itself without any central controller. It&#8217;s kind of incredible.</p><p>There are some things that cells do, I think, that seem more convincing. One of the things the <a href="https://en.wikipedia.org/wiki/Cerebellum">cerebellum</a> has to do is predict over time. What is the time delay? Let&#8217;s say that I see a flash and then some number of milliseconds later, I&#8217;m going to get a puff of air in my eyelid or something.
The cerebellum can be very good at predicting what&#8217;s the timing between the flash and the air puff, so that now your eye will just close automatically. The cerebellum is involved in that type of learned reflex.</p><p>There are some cells in the cerebellum where it seems like the cell body is playing a role in storing that time constant, changing that time constant of delay, versus that all being somehow done with like, &#8220;I&#8217;m going to make a longer ring of synapses to make that delay longer.&#8221; No, the cell body will just store that time delay for you. So there are some examples, but out of the box I&#8217;m not ready to abandon the theory that what&#8217;s happening is essentially changes in connections between neurons and that that&#8217;s the main algorithmic thing that&#8217;s going on. I think there&#8217;s very good reason to still believe that it&#8217;s that rather than some crazy cellular stuff.</p><p><strong>Dwarkesh Patel</strong></p><p>Going back to this whole perspective of how our intelligence is not just this omnidirectional inference thing that builds a world model, but really this system that teaches us what to pay attention to, what the important salient factors to learn from are, et cetera. I want to see if there&#8217;s some intuition we can derive from this about what different kinds of intelligences might be like. So it seems like <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a> or <a href="https://en.wikipedia.org/wiki/Superintelligence">superhuman intelligence</a> should still have this ability to learn a world model that&#8217;s quite general, but then it might be incentivized to pay attention to different things that are relevant for the modern post-<a href="https://en.wikipedia.org/wiki/Technological_singularity">singularity</a> environment.
How different should we expect different intelligences to be?</p><p><strong>Adam Marblestone</strong></p><p>I think one way to think about this question is, is it actually possible to make the <a href="https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer">paperclip maximizer</a> or whatever? If you try to make the paperclip maximizer, does it end up just not being smart or something like that because the only reward function it had was to make paperclips? I&#8217;d say, can you do that? I don&#8217;t know. If I channel Steve Byrnes more, I think he&#8217;s very concerned that the minimum viable set of things in the Steering Subsystem that you need to get something smart is way smaller than the minimum viable set of things you need for it to have human-like social instincts and ethics and stuff like that.</p><p>So a lot of what you want to know about the Steering Subsystem is actually the specifics of how you do <a href="https://en.wikipedia.org/wiki/AI_alignment">alignment</a> essentially, or what human behavior and social instincts are versus just what you need for capabilities. We talked about it in a slightly different way because we were sort of saying, &#8220;Well, in order for humans to learn socially, they need to make eye contact and learn from others.&#8221; But we already know from LLMs that depending on your starting point, you can learn language without that stuff. So I think that it probably is possible to make super powerful model-based RL optimizing systems and stuff like that that don&#8217;t have most of what we have in the human brain&#8217;s reward functions, and as a consequence might want to maximize paperclips. 
And that&#8217;s a concern.</p><p><strong>Dwarkesh Patel</strong></p><p>But you&#8217;re pointing out that in order to make a competent paperclip maximizer, the kind of thing that can build spaceships and learn physics and whatever, it needs to have some drives which elicit learning, including say curiosity and exploration.</p><p><strong>Adam Marblestone</strong></p><p>Yeah, curiosity, interest in others, interest in social interactions. But that&#8217;s pretty minimal I think. And that&#8217;s true for humans, but it might be less true for something that&#8217;s already pretrained as an LLM or something. So most of why we want to know the Steering Subsystem, I think if I&#8217;m channeling Steve, is alignment reasons.</p><p><strong>Dwarkesh Patel</strong></p><p>How confident are we that we even have the right algorithmic conceptual vocabulary to think about what the brain is doing? What I mean by this is that there was one big contribution to AI from neuroscience which was this idea of the <a href="https://en.wikipedia.org/wiki/Neural_network_(machine_learning)#History">neuron</a> in the 1950s, just this original contribution. 
But then it seems like a lot of what we&#8217;ve learned afterwards about the high-level algorithm the brain is implementing, from backprop, to whether there&#8217;s something analogous to backprop happening in the brain, to &#8220;Oh, is <a href="https://en.wikipedia.org/wiki/Visual_cortex">V1</a> doing something like <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">CNNs</a>&#8221; to <a href="https://en.wikipedia.org/wiki/Temporal_difference_learning">TD learning</a> and <a href="https://en.wikipedia.org/wiki/Bellman_equation">Bellman equations</a>, <a href="https://en.wikipedia.org/wiki/Actor-critic_algorithm">actor-critic</a>, whatever&#8230; It seems inspired by this dynamic where we come up with some idea, maybe we can make AI neural networks work this way, and then we notice that something in the brain also works that way. So why not think there are more things like this?</p><p><strong>Adam Marblestone</strong></p><p>There may be. I think the reason that I think that we might be onto something is that the AIs we&#8217;re making based on these ideas are working surprisingly well. There&#8217;s also a bunch of just empirical stuff with convolutional neural nets and variants of convolutional neural nets. I&#8217;m not sure what the absolute latest is, but compared to other models in computational neuroscience of what the visual system is doing, they are just more predictive. You can just score CNNs, even ones pretrained on cat pictures and stuff: what is the representational similarity that they have on some arbitrary other image compared to the brain activations measured in different ways? <a href="https://mcgovern.mit.edu/profile/james-dicarlo/">Jim DiCarlo&#8217;s</a> lab has this brain score, and the AI models actually&#8230; There seems to be some relevance there. 
Neuroscience doesn&#8217;t necessarily have something better than that.</p><p>So yes, that&#8217;s just recapitulating what you&#8217;re saying, that the best computational neuroscience theories we have seem to have been invented largely as a result of AI models and finding things that work. So find backprop works and then saying, &#8220;Can we approximate backprop with cortical circuits?&#8221; or something. There&#8217;s been things like that.</p><p>Now, some people totally disagree with this. <a href="https://med.nyu.edu/faculty/gyorgy-buzsaki">Gy&#246;rgy Buzs&#225;ki</a> is a neuroscientist who has a book called <em><a href="https://amzn.to/4jhIggE">The Brain from the Inside Out</a></em> where he basically says all our psychology concepts, AI concepts, all this stuff is just made-up stuff. What we actually have to do is figure out what is the actual set of primitives that the brain actually uses. And our vocabulary is not going to be adequate to that. We have to start with the brain and make new vocabulary rather than saying backprop and then try to apply that to the brain or something like that. He studies a lot of oscillations and stuff in the brain as opposed to individual neurons and what they do.</p><p>I don&#8217;t know. I think that there&#8217;s a case to be made for that. And from a research program design perspective, one thing we should be trying to do is just simulate a tiny worm or a tiny zebrafish, almost as biophysical or as bottom-up as possible. Like get <a href="https://en.wikipedia.org/wiki/Connectome">connectome</a>, molecules, activity and just study it as a physical dynamical system and look at what it does.</p><p>But I don&#8217;t know, it just feels like AI is really good fodder for computational neuroscience. Those might actually be pretty good models. We should look at that. 
I both think that there should be a part of the research portfolio that is totally bottom-up and not trying to apply our vocabulary that we learn from AI onto these systems, and that there should be another big part of this that&#8217;s trying to reverse engineer it using that vocabulary or variant of that vocabulary. We should just be pursuing both. My guess is that the reverse engineering one is actually going to work-ish or something. Like we do see things like TD learning, which <a href="https://en.wikipedia.org/wiki/Richard_S._Sutton">Sutton</a> also invented separately.</p><p><strong>Dwarkesh Patel</strong></p><p>That must be a crazy feeling to just like&#8212;</p><p><strong>Adam Marblestone</strong></p><p>Yeah, that&#8217;s crazy.</p><p><strong>Dwarkesh Patel</strong></p><p>This equation I wrote down is like in the brain.</p><p><strong>Adam Marblestone</strong></p><p><a href="https://deepmind.google/blog/dopamine-and-temporal-difference-learning-a-fruitful-relationship-between-neuroscience-and-ai/">It seems like the dopamine is doing some of that</a>, yeah.</p><h3>01:03:59 &#8211; Why a map of the human brain is important</h3><p><strong>Dwarkesh Patel</strong></p><p>So let me ask you about this. You guys are funding different groups that are trying to figure out what&#8217;s up in the brain. If we had a perfect representation, however you define it, of the brain, why think it would actually let us figure out the answer to these questions? We have neural networks which are way more interpretable, not just because we understand what&#8217;s in the weight matrices, but because there are weight matrices. There are these boxes with numbers in them. Even then we can tell very basic things. We can kind of see circuits for very basic pattern matching of following one token with another. 
I feel like we don&#8217;t really have an explanation of why LLMs are intelligent just because they&#8217;re interpretable.</p><p><strong>Adam Marblestone</strong></p><p>I think I would somewhat dispute it. We have some description of what the LLM is fundamentally doing. What that&#8217;s doing is that I have an architecture and I have a learning rule and I have hyperparameters and I have initialization and I have training data.</p><p><strong>Dwarkesh Patel</strong></p><p>But those are things we learned because we built them, not because we interpreted them from seeing the weights. The analogous thing to connectome is like seeing the weights.</p><p><strong>Adam Marblestone</strong></p><p>What I think we should do is we should describe the brain more in that language of things like architectures, learning rules, initializations, rather than trying to find the <a href="https://www.anthropic.com/news/golden-gate-claude">Golden Gate Bridge circuit</a> and saying exactly how this neuron actually&#8230; That&#8217;s going to be some incredibly complicated learned pattern. <a href="https://en.wikipedia.org/wiki/Konrad_K%C3%B6rding">Konrad Kording</a> and <a href="https://www.google.com/search?sca_esv=3e23ed5d65d1f5ab&amp;sxsrf=AE3TifOZQZ3j-0Z8yOfW6MadMzitRdTlSg:1767024250763&amp;q=Timothy+Lillicrap&amp;sa=X&amp;ved=2ahUKEwjL0pqmluORAxVF1fACHaZNNygQ7xYoAHoECBQQAQ&amp;biw=1440&amp;bih=677&amp;dpr=2">Tim Lillicrap</a> have this paper from a while ago, maybe five years ago, called &#8220;<a href="https://arxiv.org/abs/1907.06374">What does it mean to understand a neural network?</a>&#8221; What they say is basically that you could imagine you train a neural network to compute the digits of pi or something. It&#8217;s like some crazy pattern. You also train that thing to predict the most complicated thing you find, predict stock prices, basically predict really complex systems, computationally complete systems. 
I could train a neural network to do <a href="https://en.wikipedia.org/wiki/Cellular_automaton">cellular automata</a> or whatever crazy thing. It&#8217;s like, we&#8217;re never going to be able to fully capture that with interpretability, I think. It&#8217;s just going to be doing really complicated computations internally.</p><p>But we can still say that the way it got that way is that it had an architecture and we gave it this training data and it had this loss function. So I want to describe the brain in the same way. And I think that this framework that I&#8217;ve been kind of laying out is that we need to understand the cortex and how it embodies a learning algorithm. I don&#8217;t need to understand how it computes &#8220;Golden Gate Bridge.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>But if you can see all the neurons, if you have the connectome, why does that teach you what the learning algorithm is?</p><p><strong>Adam Marblestone</strong></p><p>Well, I guess there are a couple different views of it. So it depends on these different parts of this portfolio. On the totally bottom-up, we-have-to-simulate-everything portfolio, it kind of just doesn&#8217;t. You have to make a simulation of the zebrafish brain or something and then you see what are the emergent dynamics in this and you come up with new names and new concepts and all that. That&#8217;s the most extreme bottom-up neuroscience view. But even there the connectome is really important for doing that biophysical or bottom-up simulation.</p><p>But on the other hand you can say, &#8220;Well, what if we can actually apply some ideas from AI?&#8221; We basically need to figure out, is it an energy-based model or is it an amortized <a href="https://en.wikipedia.org/wiki/Variational_autoencoder">VAE</a>-type model? Is it doing backprop or is it doing something else? Are the learning rules local or global? 
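The local-versus-global distinction in that last question can be made concrete. Below is a toy sketch (all data and constants invented for illustration): Oja's rule updates each weight using only activity available at that synapse, and as a side effect self-normalizes the weight vector, roughly the kind of weight normalization discussed earlier; gradient descent, by contrast, broadcasts a global error signal to every weight.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # presynaptic activity, 200 samples

# Local rule (Oja's): each weight's update uses only quantities available at
# that synapse: presynaptic x, postsynaptic y, and the weight itself.
w = 0.2 * rng.normal(size=4)
for _ in range(30):                  # a few passes over the data
    for x in X:
        y = x @ w                    # postsynaptic activity
        w += 0.02 * y * (x - y * w)  # Hebbian term minus a local decay term

# Side effect of the local rule: the weight vector self-normalizes toward
# norm 1, the kind of "weight normalization" a neuron might implement cheaply.
print(np.linalg.norm(w))             # close to 1

# Global rule (gradient descent on squared error): every weight's update
# depends on an error signal computed from the whole output, not just on
# locally available activity.
w2 = np.zeros(4)
target = X @ np.array([1.0, -2.0, 0.5, 0.0])
for _ in range(2000):
    err = X @ w2 - target            # global error, shared by all weights
    w2 -= 0.01 * X.T @ err / len(X)
print(np.round(w2, 2))               # recovers roughly [1, -2, 0.5, 0]
```

The point is only the information flow: the first update rule never needs anything a single synapse couldn't see, which is the kind of constraint a connectome plus molecular annotation might help adjudicate.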
If we have some repertoire of possible ideas about this, just think of the connectome as a huge number of additional constraints that will help to refine them, to ultimately arrive at a consistent picture.</p><p>I think about this for the Steering Subsystem stuff too, just very basic things about it. How many different types of dopamine signal, or of Steering Subsystem signal, or thought assessor, and so on, are there&#8230; What broad categories are there? Like even this very basic information, that there are more cell types in the hypothalamus than there are in the cortex, that&#8217;s new information about how much structure is built there versus somewhere else. How many different dopamine neurons are there? Is the wiring between prefrontal and auditory the same as the wiring between prefrontal and visual? The most basic things, we don&#8217;t know. The problem is learning even the most basic things by a series of bespoke experiments takes an incredibly long time. Whereas just learning all that at once by getting a connectome is just way more efficient.</p><p><strong>Dwarkesh Patel</strong></p><p>What is the timeline on this? Presumably the idea of this is, first, to inform the development of AI. You want to be able to figure out how we get AIs to want to care about what other people think of their internal thought patterns. But interp researchers are making progress on this question just by inspecting normal neural networks. There must be some feature&#8230;</p><p><strong>Adam Marblestone</strong></p><p>You can do interp on LLMs that exist. You can&#8217;t do interp on a hypothetical model-based reinforcement learning algorithm like the brain that we will eventually converge to when we do AGI.</p><p><strong>Dwarkesh Patel</strong></p><p>Fair. 
But what timelines on AI do you need for this research to be practical and relevant?</p><p><strong>Adam Marblestone</strong></p><p>I think it&#8217;s fair to say it&#8217;s not super practical and relevant if you&#8217;re in an <a href="https://ai-2027.com/">AI 2027</a> scenario. In that case, the science I&#8217;m doing now is not going to affect the science of ten years from now, because what&#8217;s going to affect the science of ten years from now is the outcome of this AI 2027 scenario. It kind of doesn&#8217;t matter that much probably if I have the connectome, maybe it slightly tweaks certain things.</p><p>But I think there&#8217;s a lot of reason to think maybe that we will get a lot out of this paradigm. But then the real thing, the single event that is transformative for the entire future, that type of event, is still more than five years away or something.</p><p><strong>Dwarkesh Patel</strong></p><p>Is that because we haven&#8217;t captured omnidirectional inference, we haven&#8217;t figured out the right ways to get a mind to pay attention to things in a way that makes sense?</p><p><strong>Adam Marblestone</strong></p><p>I mean, I would take the entirety of your collective podcasts with everyone as showing the distribution of these things. I don&#8217;t know. What was <a href="https://www.dwarkesh.com/p/andrej-karpathy">Karpathy&#8217;s</a> timeline, right? What&#8217;s <a href="https://www.dwarkesh.com/p/demis-hassabis">Demis&#8217;s</a> timeline? So not everybody has a three-year timeline.</p><p><strong>Dwarkesh Patel</strong></p><p>But there are different reasons and I&#8217;m curious which ones are yours.</p><p><strong>Adam Marblestone</strong></p><p>What are mine? I don&#8217;t know, I&#8217;m just watching your podcast. I&#8217;m trying to understand the distribution. 
I don&#8217;t have a super strong claim that LLMs can&#8217;t do it.</p><p><strong>Dwarkesh Patel</strong></p><p>But is the crux the data efficiency or&#8230;?</p><p><strong>Adam Marblestone</strong></p><p>I think part of it is just that it is weirdly different from all this brain stuff. So intuitively it&#8217;s just weirdly different than all this brain stuff and I&#8217;m kind of waiting for the thing that starts to look more like brain stuff. I think if <a href="https://en.wikipedia.org/wiki/AlphaZero">AlphaZero</a>, and model-based RL and all these other things that were being worked on 10 years ago, had been giving us the <a href="https://en.wikipedia.org/wiki/GPT-5">GPT-5</a> type capabilities, then I would be like, &#8220;Oh wow, we&#8217;re both in the right paradigm and seeing the results a priori. So my prior and my data are agreeing.&#8221; Now it&#8217;s like, &#8220;I don&#8217;t know what exactly my data is. Looks pretty good, but my prior is sort of weird so I don&#8217;t have a super strong opinion on it.&#8221;</p><p>So I think there&#8217;s a possibility that essentially all other scientific research that is being done is somehow obviated. But I don&#8217;t put a huge amount of probability on that. I think my timelines might be more in the 10-year-ish range. If that&#8217;s the case, I think there is probably a difference between a world where we have connectomes on hard drives and we have an understanding of Steering Subsystem architecture. 
We&#8217;ve compared even the most basic properties of what are the reward functions, cost function, architecture, et cetera, of a mouse versus a shrew versus a small primate, et cetera.</p><p><strong>Dwarkesh Patel</strong></p><p>Is this practical in 10 years?</p><p><strong>Adam Marblestone</strong></p><p>I think it has to be a really big push.</p><p><strong>Dwarkesh Patel</strong></p><p>How much funding, how does it compare to where we are now?</p><p><strong>Adam Marblestone</strong></p><p>It&#8217;s like low-billions-dollar-scale funding in a very concerted way, I would say.</p><p><strong>Dwarkesh Patel</strong></p><p>And how much is on it now?</p><p><strong>Adam Marblestone</strong></p><p>So if I just talk about some of the specific things we have going on with <a href="https://en.wikipedia.org/wiki/Connectomics">connectomics</a>&#8230; <a href="https://www.e11.bio/">E11 Bio</a> is our main thing on connectomics. They are trying to make the technology of connectomic brain mapping several orders of magnitude cheaper. The <a href="https://wellcome.org/insights/reports/scaling-connectomics">Wellcome Trust put out a report a year or two ago</a> that said getting the first mouse brain connectome would be a several-billion-dollar project. E11&#8217;s technology, and the suite of efforts in the field, is trying to get a single mouse connectome down to low tens of millions of dollars.</p><p>That&#8217;s a mammal brain. A human brain is about 1,000 times bigger. If with technology you can get a mouse brain down to $10 million or $20 million or $30 million, and you just naively scale that, a human brain is still billions of dollars, just to do one human brain. Can you go beyond that? Can you get a human brain for less than a billion? 
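His naive scaling arithmetic can be written out explicitly. A trivial sketch using round numbers from the conversation (the figures are his rough estimates, not real price quotes):

```python
# Naive connectome cost scaling, using round numbers from the conversation.
mouse_connectome_cost = 20e6   # roughly $10M-$30M with next-generation technology
human_to_mouse_ratio = 1000    # a human brain is about 1,000 times bigger

human_connectome_cost = mouse_connectome_cost * human_to_mouse_ratio
print(f"${human_connectome_cost / 1e9:.0f}B")  # $20B: still billions of dollars
```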
But I&#8217;m not sure you need every neuron in the human brain.</p><p>We want to, for example, do an entire mouse brain and a human Steering Subsystem and the entire brains of several different mammals with different social instincts. So with a bunch of technology push and a bunch of concerted effort, real significant progress, if it&#8217;s focused effort, can be made at the hundreds-of-millions to low-billions scale.</p><p><strong>Dwarkesh Patel</strong></p><p>What is the definition of a <a href="https://en.wikipedia.org/wiki/Connectome">connectome</a>? Presumably it&#8217;s not a bottom-up biophysics model. So is it just that it can estimate the input-output of a brain? What is the level of abstraction?</p><p><strong>Adam Marblestone</strong></p><p>You can give different definitions and one of the things that&#8217;s cool&#8230; So the standard approach to connectomics uses the electron microscope and very, very thin slices of brain tissue. It&#8217;s basically labeling. The cell membranes are going to show up and scatter electrons a lot, and everything else is going to scatter electrons less. But you don&#8217;t see a lot of details of the molecules: which types of synapses there are, different synapses with different molecular combinations and properties.</p><p>E11 and some other research in the field has switched to an <a href="https://www.nature.com/articles/s42003-023-05468-9">optical microscope paradigm</a>. With optical, the photons don&#8217;t damage the tissue, so you can wash it and look at fragile, gentle molecules. So with E11&#8217;s approach, you can get a &#8220;molecularly annotated connectome.&#8221; That&#8217;s not just who is connected to whom by some synapse, but what are the molecules that are present at the synapse? What type of cell is that?</p><p>A molecularly annotated connectome, that&#8217;s not exactly the same as having the synaptic weights. 
That&#8217;s not exactly the same as being able to simulate the neurons and say what&#8217;s the functional consequence of having these molecules and connections. But you can also do some amount of activity mapping and try to correlate structure to function. Train an ML model basically to predict the activity from the connectome.</p><p><strong>Dwarkesh Patel</strong></p><p>What are the lessons to be taken away from the <a href="https://en.wikipedia.org/wiki/Human_Genome_Project">Human Genome Project</a>? One way you could look at it is that it was a mistake and you shouldn&#8217;t have spent billions of dollars getting one genome mapped. Rather you should have just invested in technologies which have now allowed us to map genomes for hundreds of dollars.</p><p><strong>Adam Marblestone</strong></p><p>Well, <a href="https://www.dwarkesh.com/p/george-church">George Church</a> was my PhD advisor and he&#8217;s pointed out that it was $3 billion or something, roughly $1 per base pair, for the first genome. Then the National Human Genome Research Institute basically structured the funding process right. They got a bunch of companies competing to lower the cost. And then the cost dropped like a million-fold in 10 years because they changed the paradigm from macroscopic chemical techniques to imaging individual DNA molecules: you would make a little cluster of DNA molecules on the microscope and see just a few DNA molecules at a time on each pixel of the camera, with each pixel giving you, in parallel, a look at a different fragment of DNA. So you parallelize the thing by millions-fold. That&#8217;s what reduced the cost by millions-fold.</p><p>With the switch from electron microscopy to optical connectomics, and potentially even future types of connectomics technology, we think there should be similar patterns. 
That&#8217;s why E11, the Focus Research Organization, started with technology development rather than starting with saying we&#8217;re going to do a human brain or something and let&#8217;s just brute force it. We said let&#8217;s get the cost down with new technology. But then it&#8217;s still a big thing. Even with new next-generation technology, you still need to spend hundreds of millions on data collection.</p><p><strong>Dwarkesh Patel</strong></p><p>Is this going to be funded with philanthropy, by governments, by investors?</p><p><strong>Adam Marblestone</strong></p><p>This is very TBD and very much evolving in some sense as we speak. I&#8217;m hearing some rumors going around of connectomics-related companies potentially forming. So far E11 has been philanthropy. <a href="https://www.nsf.gov/news/nsf-announces-new-initiative-launch-scale-new-generation">The National Science Foundation just put out this call for Tech Labs</a>, which is somewhat FRO-inspired or related. You could have a tech lab for actually going and mapping the mouse brain with us and that would be philanthropy plus government still in a nonprofit, open-source framework. But can companies accelerate that? Can you credibly link connectomics to AI in the context of a company and get investment for that? It&#8217;s possible.</p><p><strong>Dwarkesh Patel</strong></p><p>I mean the cost of training these AIs is increasing so much. If you could tell some story of not only are we going to figure out some safety thing, but in fact once we do that, we&#8217;ll also be able to tell you how AI works&#8230; You should go to these AI labs and just be like, &#8220;Give me one one-hundredth of your projected budget in 2030.&#8221;</p><p><strong>Adam Marblestone</strong></p><p>I sort of tried a little bit seven or eight years ago and there was not a lot of interest. Maybe now there would be. 
But all the things that we&#8217;ve been talking about, it&#8217;s really fun to talk about, but it&#8217;s ultimately speculation. What is the actual reason for the energy efficiency of the brain, for example? Is it doing real inference or amortized inference or something else? This is all answerable by neuroscience. It&#8217;s going to be hard, but it&#8217;s actually answerable. So if you can do that for only low billions of dollars or something, to really comprehensively solve that, it seems to me, in the grand scheme of trillions of dollars of GPUs and stuff, it actually makes sense to make that investment.</p><p><strong>Dwarkesh Patel</strong></p><p>Also, there have been many labs launched in the last year that are raising at valuations of billions for things which are quite credible but are not like, &#8220;Our <a href="https://stripe.com/resources/more/what-is-annualized-run-rate-arr-how-to-calculate-arr-and-use-it-strategically">ARR</a> next quarter is going to be whatever.&#8221; It&#8217;s like we&#8217;re going to discover materials and&#8212;</p><p><strong>Adam Marblestone</strong></p><p>Yes, moonshot startups or billionaire-backed startups. Moonshot startups I see as on a continuum with <a href="https://www.convergentresearch.org/">FROs</a>. FROs are a way of channeling philanthropic support and ensuring that it&#8217;s open-source, public-benefit, and various other things that may be properties of a given FRO. But yes, billionaire-backed startups, if they can target the right science, the exact right science.</p><p>I think there are a lot of ways to do moonshot neuroscience companies that would never get you the connectome. It&#8217;s like, &#8220;Oh, we&#8217;re going to upload the brain&#8221; or something, but never actually get the mouse connectome or something, never get these fundamental things that you need in order to ground-truth the science. 
There are lots of ways to have a moonshot company go wrong and not do the actual science. But there also may be ways to have companies or big corporate labs get involved and actually do it correctly.</p><p><strong>Dwarkesh Patel</strong></p><p>This brings to mind an idea that you talked about in a lecture you gave five years ago. Do you want to explain <a href="https://en.wikipedia.org/wiki/Imitation_learning#Behavior_Cloning">behavior cloning</a>?</p><p><strong>Adam Marblestone</strong></p><p>Actually this is funny because the first time I saw this idea, I think it might have been in <a href="https://gwern.net/blog/2018/brain-imitation-learning">a blog post</a> by <a href="https://www.dwarkesh.com/p/gwern-branwen">Gwern</a>. There&#8217;s always a Gwern blog post. There are now academic research efforts and some amount of emerging company-type efforts to try to do this.</p><p>Normally, let&#8217;s say I&#8217;m training an image classifier or something. I show it pictures of cats and dogs or whatever and they have the label &#8220;cat&#8221; or &#8220;dog&#8221;. And I have a neural network that&#8217;s supposed to predict the label &#8220;cat&#8221; or &#8220;dog&#8221; or something. That is a limited amount of information per example that you&#8217;re putting in. It&#8217;s just &#8220;cat&#8221; or &#8220;dog&#8221;.</p><p>What if I also had, &#8220;Predict what is my neural activity pattern when I see a cat or when I see a dog and all the other things?&#8221; If you add that as an auxiliary loss function or an auxiliary prediction task, does that sculpt the network to know the information that humans know about cats and dogs and to represent it in a way that&#8217;s consistent with how the brain represents it and the kind of representational dimensions or geometry of how the brain represents things, as opposed to just having these labels? Does that let it generalize better? Does that let it have richer labeling?</p><p>Of course that sounds really challenging. 
It&#8217;s very easy to generate lots and lots of labeled cat pictures. <a href="https://en.wikipedia.org/wiki/Scale_AI">Scale AI</a> or whatever can do this. It is harder to generate lots and lots of brain activity patterns that correspond to things that you want to train the AI to do. But again, this is just a technological limitation of neuroscience. If every iPhone was also a brain scanner, you would not have this problem and we would be training AI with the brain signals. It&#8217;s just the order in which technology has developed is that we got GPUs before we got portable brain scanners.</p><p><strong>Dwarkesh Patel</strong></p><p>What is the ML analog, what you&#8217;d be doing here? Because when you <a href="https://en.wikipedia.org/wiki/Knowledge_distillation">distill</a> models, you&#8217;re still looking at the final layer of the log probs across all&#8212;</p><p><strong>Adam Marblestone</strong></p><p>If you distill one model into another, that is a certain thing. You are just trying to copy one model into another. I think that we don&#8217;t really have a perfect proposal to distill the brain. To distill the brain you need a much more complex brain interface. Maybe you could also do that. You could make surrogate models. <a href="https://toliaslab.org/">Andreas Tolias</a> and people like that are doing some amount of neural network surrogate models of brain activity data. Instead of having your visual cortex do the computation, just have the surrogate model. So you&#8217;re distilling your visual cortex into a neural network to some degree. That&#8217;s a kind of distillation.</p><p>This is doing something a little different. This is basically just saying I&#8217;m adding an auxiliary&#8230; I think of it as regularization or I think of it as adding an auxiliary loss function that&#8217;s smoothing out the prediction task to also always be consistent with how the brain represents it. 
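That auxiliary-loss setup can be sketched in a few lines. This is a toy illustration with invented data, not anyone's actual training recipe: a synthetic vector stands in for recorded neural activity, and a small network shares a trunk between a label head and a brain-activity head, so both prediction tasks shape the shared representation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, h, b, k = 400, 16, 8, 6, 2     # samples, input dim, hidden, "brain" dim, classes
X = rng.normal(size=(n, d))                           # stimulus features
y = (X[:, 0] + X[:, 1] > 0).astype(int)               # toy "cat"/"dog" labels
Z = np.tanh(X[:, :b]) + 0.1 * rng.normal(size=(n, b)) # stand-in neural recordings

W1 = 0.3 * rng.normal(size=(d, h))   # shared trunk
Wc = np.zeros((h, k))                # label head
Wb = np.zeros((h, b))                # brain-activity head (auxiliary)
lam, lr = 0.5, 0.3                   # auxiliary-loss weight, learning rate

for _ in range(1500):
    H = np.tanh(X @ W1)                             # shared representation
    logits = H @ Wc
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    dlogits = (P - np.eye(k)[y]) / n                # softmax cross-entropy gradient
    dZhat = (H @ Wb - Z) / n                        # MSE gradient for auxiliary task
    dH = dlogits @ Wc.T + lam * (dZhat @ Wb.T)      # BOTH losses sculpt the trunk
    W1 -= lr * (X.T @ (dH * (1 - H**2)))
    Wc -= lr * (H.T @ dlogits)
    Wb -= lr * lam * (H.T @ dZhat)

acc = float(((np.tanh(X @ W1) @ Wc).argmax(axis=1) == y).mean())
print(round(acc, 2))  # classifier fits; the trunk is also shaped by brain prediction
```

The `lam` knob controls how strongly the brain-prediction task regularizes the shared features, which is exactly the "auxiliary loss function" role described above.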
It might help you with things like adversarial examples, for example.</p><p><strong>Dwarkesh Patel</strong></p><p>But what exactly are you predicting? You&#8217;re predicting the internal state of the brain?</p><p><strong>Adam Marblestone</strong></p><p>Yes. So in addition to predicting the label (a vector of labels, a one-hot vector or whatever: yes cat, not dog, not boat, instead of these gazillion other categories, let&#8217;s say in this simple example), you&#8217;re also predicting a vector which is all these brain signal measurements.</p><p>So Gwern, anyway, <a href="https://gwern.net/blog/2018/brain-imitation-learning">had this long-ago blog post</a> of like, &#8220;Oh, this is an intermediate thing. We talk about whole brain emulation, we talk about AGI, we talk about <a href="https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface">brain-computer interface</a>. We should also be talking about this brain-data-augmented thing that&#8217;s trained on all your behavior, but is also trained on predicting some of your neural patterns.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>And you&#8217;re saying the Learning Subsystem is already doing this through the Steering Subsystem?</p><p><strong>Adam Marblestone</strong></p><p>Yeah, and our brain, our Learning Subsystem, also has to predict the Steering Subsystem as an auxiliary task. That helps the Steering Subsystem. Now, the Steering Subsystem can access that predictor and build a cool reward function using it.</p><h3>01:23:28 &#8211; What value will automating math have?</h3><p><strong>Dwarkesh Patel</strong></p><p>Separately, you&#8217;re on the board of <a href="https://lean-lang.org/">Lean</a>, which is this formal math language that mathematicians use to prove theorems and so forth. Obviously there&#8217;s a bunch of conversation right now about AI automating math. 
What&#8217;s your take?</p><p><strong>Adam Marblestone</strong></p><p>Well, I think that there are parts of math that it seems pretty well on track to automate. First of all, Lean was developed for a number of years at Microsoft and other places. It has since become one of Convergent&#8217;s <a href="https://news.mit.edu/2025/former-mit-researchers-advance-new-model-innovation-0606">Focused Research Organizations</a>, to drive more engineering and focus onto it.</p><p>So Lean is this programming language where instead of expressing your math proof with pen and paper, you express it in this programming language Lean. And then, if you do it that way, it is a verifiable language: you can click &#8220;verify&#8221; and Lean will tell you whether the conclusions of your proof actually follow from your assumptions. So it checks whether the proof is correct automatically.</p><p>By itself, this is useful for mathematicians collaborating and stuff like that. If I&#8217;m some amateur mathematician and I want to add to a proof, <a href="https://en.wikipedia.org/wiki/Terence_Tao">Terry Tao</a> is not going to just believe my result. But if Lean says it&#8217;s correct, it&#8217;s just correct. So it makes it easy for collaboration to happen, but it also makes it easy for correctness of proofs to be an RL signal, very much in the spirit of <a href="https://arxiv.org/abs/2506.14245">RLVR</a>. Formalized math proving&#8212;formal meaning it&#8217;s expressed in something like Lean&#8212;is now mechanically verifiable. That becomes a perfect RLVR task.</p><p>I think that is going to just keep working; it seems like there is at least one billion-dollar valuation company, <a href="https://harmonic.fun/">Harmonic</a>, based on this. <a href="https://deepmind.google/blog/ai-solves-imo-problems-at-silver-medal-level/">AlphaProof</a> is based on this. A couple of other really interesting companies are emerging. 
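To make "click verify" concrete, here is a toy machine-checkable statement in Lean 4 syntax (a sketch; `Nat.add_comm` is the standard-library lemma). The checker either accepts the proof or rejects it, which is exactly what lets proof correctness serve as a clean reward signal:

```lean
-- A concrete equality: the checker accepts `rfl` because both sides
-- compute to the same value.
theorem two_add_three : 2 + 3 = 5 := rfl

-- The general fact, proved by citing the standard-library lemma.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```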
I think that this problem of RLVRing the crap out of math proving is going to work, and we will be able to have things that search for proofs and find them in the same way that we have <a href="https://en.wikipedia.org/wiki/AlphaGo">AlphaGo</a> or what have you that can search for ways of playing the game of Go. With that verifiable signal, it works.</p><p>So does this solve math? There is still the part that has to do with conjecturing new interesting ideas. There&#8217;s still the conceptual organization of math, of what is interesting. How do you come up with new theorem statements in the first place? Or even the very high-level breakdown of what strategies you use to do proofs. I think this will shift the burden so that humans don&#8217;t have to do a lot of the mechanical parts of math. Validating lemmas and proofs, and checking whether the statement in this paper is exactly the same as the one in that paper, and stuff like that. That will just work.</p><p>If you really think we&#8217;re going to get all these things we&#8217;ve been talking about, real AGI would also be able to make conjectures. <a href="https://en.wikipedia.org/wiki/Yoshua_Bengio">Bengio</a> has a <a href="https://yoshuabengio.org/2024/02/26/towards-a-cautious-scientist-ai-with-convergent-safety-bounds/">paper</a>, more like a theoretical paper. There are probably a bunch of other papers emerging about this. Is there a loss function for good explanations or good conjectures? That&#8217;s a pretty profound question.</p><p>A really interesting math proof or statement might be one that compresses lots of information and has lots of implications for lots of other theorems. Otherwise you would have to prove those theorems using long, complex chains of inference. Here, if you have this theorem and this theorem is correct, you have short chains of inference to all the other ones. And it&#8217;s a short, compact statement. So it&#8217;s like a powerful explanation that explains all the rest of math. 
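One rough way to put this compression idea in symbols (the notation is illustrative, not a standard definition): score a candidate theorem $t$ by how much it shortens the proofs of everything else, net of its own length,

```latex
\[
V(t) \;=\; \sum_{s \in S} \Big( L(s) \;-\; L(s \mid t) \Big) \;-\; L(t)
\]
```

where $S$ is a corpus of statements, $L(s)$ is the length of the shortest known proof of $s$, and $L(s \mid t)$ is the shortest proof of $s$ when $t$ may be cited as a lemma. A high-value theorem buys large savings across $S$ at small cost $L(t)$.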
And part of what math is doing is making these compact things that explain the other things.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s like the <a href="https://en.wikipedia.org/wiki/Kolmogorov_complexity">Kolmogorov complexity</a> of this statement or something.</p><p><strong>Adam Marblestone</strong></p><p>Yeah, of generating all the other statements, given that you know this one, or stuff like that. Or if you add this, how does it affect the complexity of the rest of the network of proofs? So can you make a loss function that says, &#8220;Oh, I want this proof to be a really highly powerful proof&#8221;? I think some people are trying to work on that. So maybe you can automate the creativity part.</p><p>If you had true AGI, it would do everything a human can do. So it would also do the things that the creative mathematicians do. But barring that, I think just RLVRing the crap out of proofs is going to be a really useful tool for mathematicians. It&#8217;s going to accelerate math a lot and change it a lot, but not necessarily immediately change everything about it.</p><p>Will we get a mechanical proof of the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a> or things like that? Maybe, I don&#8217;t know. I don&#8217;t know enough details of how hard these things are to search for, and I&#8217;m not sure anyone can fully predict that, just as we couldn&#8217;t exactly predict when Go would be solved or something like that.</p><p>I think it&#8217;s going to have lots of really cool applications. So one of the things you want to do is you want to have provably stable, secure, unhackable software. 
So you can write math proofs about software and say, &#8220;This code, not only does it pass these <a href="https://en.wikipedia.org/wiki/Unit_testing">unit tests</a>, but I can mathematically prove that there&#8217;s no way to hack it in these ways, or no way to mess with the memory&#8221;, or these types of things that hackers use, or that it has these properties. You can use the same Lean and the same proofs to do formally verified software.</p><p>I think that&#8217;s going to be a really powerful piece of cybersecurity that&#8217;s relevant for all sorts of other AI-hacking-the-world stuff. And if you can prove the Riemann hypothesis, you&#8217;re also going to be able to prove insanely complex things about very complex software. And then you&#8217;ll be able to ask the LLM, &#8220;Synthesize me software that I can prove is correct.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>Why haven&#8217;t provable programming languages taken off as a result of LLMs?</p><p><strong>Adam Marblestone</strong></p><p>I think they&#8217;re starting to. One challenge&#8212;we are actually incubating a potential Focused Research Organization on this&#8212;is the specification problem. So mathematicians know what interesting theorems they want to formalize. But let&#8217;s say I have some code that is involved in running the power grid and it has some security properties. What is the formal spec of those properties? The power grid engineers just made this thing, but they don&#8217;t necessarily know how to lift the formal spec from it. And it&#8217;s not necessarily easy to come up with the spec that you actually want for your code. People aren&#8217;t used to coming up with formal specs, and there are not a lot of tools for it.</p><p>So you also have this user-interface-plus-AI problem of: what security specs should I be specifying? Is this the spec that I wanted? So there&#8217;s a spec problem, and it&#8217;s just been really complex and hard. 
But it&#8217;s only very recently that LLMs have become able to generate verifiable proofs of things that are useful to mathematicians, and they are starting to be able to do some of that for software verification and hardware verification.</p><p>But I think if you project the trends over the next couple of years, it&#8217;s possible that it just flips the tide. Formal methods, this whole field of formal verification and provable software, is this weird, almost-backwater corner of the more theoretical part of programming languages, often very academically flavored. Although there was this <a href="https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/">DARPA program that made a provably secure quadcopter</a> and stuff like that.</p><p><strong>Dwarkesh Patel</strong></p><p>Secure against&#8230; What is the property that is exactly proved? Not for that particular project, but just in general. Because obviously things malfunction for all kinds of reasons.</p><p><strong>Adam Marblestone</strong></p><p>You could say that what&#8217;s going on in this part of the memory over here, which is supposed to be the part the user can access, can&#8217;t in any way affect what&#8217;s going on in the memory over there, or something like that. Things like that.</p><p><strong>Dwarkesh Patel</strong></p><p>So there are two questions. One is, how useful is this? Two is, how satisfying, as a mathematician, would it be? The fact that there&#8217;s this application towards proving that software has certain properties or hardware has certain properties, if that works, that would obviously be very useful. But from a pure&#8230; Are we going to figure out mathematics? 
Is your sense that there&#8217;s something about finding that one construction cross-maps to another construction in a different domain, or finding that, &#8220;Oh, this <a href="https://en.wikipedia.org/wiki/Lemma_(mathematics)">lemma</a>, if you redefine this term, it still satisfies what I meant by this term. But a counterexample that previously knocked it down no longer applies.&#8221; That kind of dialectical thing that happens in mathematics.</p><p><strong>Adam Marblestone</strong></p><p>Will the software replace that?</p><p><strong>Dwarkesh Patel</strong></p><p>Yeah. How much of the value of this sort of pure mathematics just comes from actually just coming up with entirely new ways of thinking about a problem, mapping it to a totally different representation? Do we have examples?</p><p><strong>Adam Marblestone</strong></p><p>I don&#8217;t know. I think of it maybe a little bit like when everybody had to write <a href="https://en.wikipedia.org/wiki/Assembly_language">assembly code</a> or something like that. The amount of fun cool startups that got created was just a lot less or something. Fewer people could do it, progress was more grinding and slow and lonely and so on. You had more false failures because you didn&#8217;t get something about the assembly code, rather than the essential thing of was your concept right. Harder to collaborate and stuff like that. And so I think it will be really good.</p><p>There is some worry that by not learning to do the mechanical parts of the proofs that you fail to generate the intuitions that inform the more conceptual parts, the creative part.</p><p><strong>Dwarkesh Patel</strong></p><p>It&#8217;s the same with assembly.</p><p><strong>Adam Marblestone</strong></p><p>Right. So at what point is that applying? 
With <a href="https://en.wikipedia.org/wiki/Vibe_coding">vibe coding</a>, are people not learning computer science or actually are they vibe coding and they&#8217;re also simultaneously looking at the LLM that&#8217;s explaining these abstract computer science concepts to them and it&#8217;s all just all happening faster? Their feedback loop is faster and they&#8217;re learning way more abstract computer science and algorithm stuff because they&#8217;re vibe coding. I don&#8217;t know, it&#8217;s not obvious. That might be something about the user interface and the human infrastructure around it.</p><p>But I guess there&#8217;s some worry that people don&#8217;t learn the mechanics and therefore don&#8217;t build the grounded intuitions or something. But my hunch is it&#8217;s super positive. Exactly, on net, how useful that will be or how much overall math breakthroughs, or math breakthroughs even that we care about, will happen? I don&#8217;t know.</p><p>One other thing that I think is cool is the accessibility question. Okay, that sounds a little bit corny. Okay, yeah, more people can do math, but who cares? But I think there&#8217;s lots of people that could have interesting ideas. Like maybe the <a href="https://en.wikipedia.org/wiki/Quantum_gravity">quantum theory of gravity</a> or something. Yeah, one of us will come up with the quantum theory of gravity instead of a card-carrying physicist. In the same way that Steve Byrnes is reading the neuroscience literature and he hasn&#8217;t been in the neuroscience lab that much. But he&#8217;s able to synthesize across the neuroscience literature and be like, &#8220;Oh, Learning Subsystem, Steering Subsystem. Does this all make sense?&#8221; He&#8217;s an outsider neuroscientist in some ways. Can you have outsider string theorists or something, because the math is just done for them by the computer? And does that lead to more innovation in string theory? 
Maybe yes.</p><p><strong>Dwarkesh Patel</strong></p><p>Interesting. Okay, so if this approach works and you&#8217;re right that LLMs are not the final paradigm, and suppose it takes at least 10 years to get the final paradigm in that world. There&#8217;s this fun sci-fi premise where you have&#8230; Terence Tao today had a <a href="https://mathstodon.xyz/@tao/115722360006034040">tweet</a> where he&#8217;s like, &#8220;These models are like automated cleverness but not automated intelligence.&#8221; And you can quibble with the definitions there. But if you have automated cleverness and you have some way of filtering&#8212;which if you can formalize and prove things that the LLMs are saying you could do&#8212;then you could have this situation where quantity has a quality all of its own.</p><p>So what are the domains of the world which could be put in this provable symbolic representation? So in the world where AGI is super far away, maybe it makes sense to literally turn everything the LLMs ever do, or almost everything they do, into super provable statements. So LLMs can actually build on top of each other because everything they do is super provable.</p><p>Maybe this is just necessary because you have billions of intelligences running around. Even if they are super intelligent, the only way the future AGI civilization can collaborate with each other is if they can prove each step. They&#8217;re just brute force churning out&#8230; This is what the <a href="https://en.wikipedia.org/wiki/Matrioshka_brain">Jupiter brains</a> are doing.</p><p><strong>Adam Marblestone</strong></p><p>It&#8217;s a universal language, it&#8217;s provable. It&#8217;s also provable from the perspective of, &#8220;Are you trying to exploit me or are you sending me some message that&#8217;s trying to hack into my brain effectively?&#8221; Are you trying to socially influence me? 
Are you actually just sending me just the information that I need and no more for this?</p><p>So <a href="https://davidad.org/">davidad</a>, who&#8217;s this program director at <a href="https://www.aria.org.uk/about-aria/our-team/programme-directors/">ARIA</a> now in the UK, he has this whole design of an <a href="https://en.wikipedia.org/wiki/DARPA">ARPA</a>-style program, a sort of safeguarded AI that very heavily leverages provable safety properties. Can you apply proofs to&#8230; Can you have a world model? But that world model is actually not specified just in neuron activations, but it&#8217;s specified in equations. Those might be very complex equations, but if you can just get insanely good at just auto-proving these things with cleverness, auto-cleverness&#8230; Can you have explicitly interpretable world models as opposed to neural net world models and move back basically to symbolic methods just because you can just have insane amount of ability to prove things? Yeah, I mean that&#8217;s an interesting vision. I don&#8217;t know in the next 10 years whether that will be the vision that plays out, but I think it&#8217;s really interesting to think about.</p><p>Even for math, I mean, Terence Tao is doing some amount of stuff where it&#8217;s not about whether you can prove the individual theorems. It&#8217;s like let&#8217;s prove all the theorems en masse and then let&#8217;s study the properties of the aggregate set of proved theorems. Which are the ones that got proved and which are the ones that didn&#8217;t? Okay, well that&#8217;s the landscape of all the theorems instead of one theorem at a time.</p><h3>01:38:18 &#8211; Architecture of the brain</h3><p><strong>Dwarkesh Patel</strong></p><p>Speaking of symbolic representations, one question I was meaning to ask you is, how does the brain represent the world model? Obviously nets out in neurons, but I don&#8217;t mean extremely functionally. 
I mean conceptually, is it in something that&#8217;s analogous to the hidden state of a neural network or is it something that&#8217;s closer to a symbolic language?</p><p><strong>Adam Marblestone</strong></p><p>We don&#8217;t know. There&#8217;s some amount of study of this. There&#8217;s these things like <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5573145/">face patch neurons</a> that represent certain parts of the face and geometrically combine in interesting ways. That&#8217;s with geometry and vision. Is that true for other, more abstract things? There&#8217;s this idea of cognitive maps. A lot of the stuff that a rodent hippocampus has to learn is place cells, where the rodent is going to go next, and whether it&#8217;s going to get a reward there. It&#8217;s very geometric. And do we organize concepts with an abstract version of a spatial map?</p><p>There are some questions of whether we can do true symbolic operations. Can I have a register in my brain that copies a variable to another register regardless of what the content of that variable is? That&#8217;s the variable binding problem. Basically, I don&#8217;t know if we have that machinery, or whether it&#8217;s more like cost functions and architectures that make some of that approximately emerge, though maybe it would also emerge in a neural net? There&#8217;s a bunch of interesting neuroscience research trying to study this, what the representations look like.</p><p><strong>Dwarkesh Patel</strong></p><p>But what&#8217;s your hunch?</p><p><strong>Adam Marblestone</strong></p><p>Yeah, my hunch is that it&#8217;s going to be a huge mess and we should look at the architecture, the loss functions, and the learning rules. I don&#8217;t expect it to be pretty in there.</p><p><strong>Dwarkesh Patel</strong></p><p>Meaning that it is not a symbolic language type thing?</p><p><strong>Adam Marblestone</strong></p><p>Yeah, probably it&#8217;s not that symbolic. 
But other people think very differently.</p><p><strong>Dwarkesh Patel</strong></p><p>Another random question, speaking of binding: what is up with feeling like there&#8217;s an experience? All the parts of your brain are modeling very different things and have different drives, and yet presumably it feels like there&#8217;s one experience happening right now. Also that across time you feel like&#8230;</p><p><strong>Adam Marblestone</strong></p><p>Yeah, I&#8217;m pretty much at a loss on this one. I don&#8217;t know. <a href="https://maxhodak.com/">Max Hodak</a> has been giving talks about this recently. He&#8217;s another really hardcore neuroscience person, neurotechnology person. The thing I mentioned with <a href="https://en.wikipedia.org/wiki/Doris_Tsao">Doris</a> maybe also sounds like it might have some bearing on this question. But yeah, I don&#8217;t think anybody has any idea. It might even involve new physics.</p><p><strong>Dwarkesh Patel</strong></p><p>Here&#8217;s another question which might not have an answer yet. <a href="https://www.ibm.com/think/topics/continual-learning">Continual learning</a>, is that the product of something extremely fundamental at the level of even the learning algorithm? You could say, &#8220;Look, at least the way we do backprop in neural networks is that there&#8217;s a training period and then you freeze the weights. So you just need this active inference or some other learning rule in order to do continual learning.&#8221; Or do you think it&#8217;s more a matter of architecture, and how memory is exactly stored, and what kind of associative memory you have, basically?</p><p><strong>Adam Marblestone</strong></p><p>So continual learning&#8230; I don&#8217;t know. At the architectural level, there&#8217;s probably some interesting stuff that the hippocampus is doing. People have long thought this. What kinds of sequences is it storing? How is it organizing and representing that? How is it replaying it back? 
What is it replaying back? How exactly does that memory consolidation work? Is it training the cortex using replays or memories from the hippocampus or something like that? There&#8217;s probably some of that stuff.</p><p>There might be multiple timescales of <a href="https://en.wikipedia.org/wiki/Neuroplasticity">plasticity</a> or clever learning rules that can simultaneously be storing short-term information and also doing backprop with it. Neurons may be doing a couple things: some fast weight plasticity and some slower plasticity at the same time, or synapses that have many states. I mean, I don&#8217;t know. From a neuroscience perspective, I&#8217;m not sure that I&#8217;ve seen something that&#8217;s super clear on what causes continual learning except maybe to say that this systems consolidation idea of hippocampus consolidating cortex. Some people think it is a big piece of this and we still don&#8217;t fully understand the details.</p><p><strong>Dwarkesh Patel</strong></p><p>Speaking of fast weights, is there something in the brain which is the equivalent of this distinction between <a href="https://www.coursera.org/articles/neural-network-parameters">parameters</a> and <a href="https://en.wikipedia.org/wiki/Activation_function">activations</a> that we see in neural networks? Specifically in <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformers</a> we have this idea that some of the activations are the <a href="https://towardsdatascience.com/what-are-query-key-and-value-in-the-transformer-architecture-and-why-are-they-used-acbe73f731f2/">key and value vectors</a> of previous tokens that you build up over time.</p><p>There&#8217;s the so-called fast weights that whenever you have a new token, you query them against these activations, but you also obviously can&#8217;t query them against all the other parameters in the network which are part of the actual built-in weights. 
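The distinction being described can be sketched with a single attention head (a minimal illustration; the dimensions and names are invented for this example): the projection matrices play the role of the slow, fixed weights, while the growing key/value cache is the fast, per-context state that each new query is matched against.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                 # model width (illustrative)
# "Slow" weights: fixed projection matrices, frozen after training.
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

# "Fast" state: key/value cache that grows with each token.
k_cache, v_cache = [], []

def attend(token):
    """One step of single-head attention against the KV cache."""
    q = token @ W_q
    k_cache.append(token @ W_k)
    v_cache.append(token @ W_v)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)       # query only against cached keys
    w = np.exp(scores - scores.max())
    w /= w.sum()                      # softmax over past positions
    return w @ V                      # weighted sum of cached values

for t in rng.standard_normal((3, d)):
    out = attend(t)
```

Each new token can query the cached activations, but not the parameters themselves; those only change during training.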
Is there some such distinction that&#8217;s analogous?</p><p><strong>Adam Marblestone</strong></p><p>I don&#8217;t know. I mean we definitely have weights and activations. Whether you can use the activations in these clever ways, different forms of actual attention, like <a href="https://mcgovern.mit.edu/2014/04/10/how-the-brain-pays-attention/">attention</a> in the brain&#8230; Is that based on, &#8220;I&#8217;m trying to pay attention&#8221;... I think there&#8217;s probably several different kinds of actual attention in the brain. I want to pay attention to this area of visual cortex. I want to pay attention to the content in other areas that is triggered by the content in this area. Attention that&#8217;s just based on reflexes and stuff like that.</p><p>So I don&#8217;t know. There&#8217;s not just the cortex, there&#8217;s also the thalamus. The thalamus is also involved in somehow relaying or gating information. There&#8217;s <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4017159/">cortico-cortical connections</a>. There&#8217;s also some amount of connection between cortical areas that goes through the thalamus. Is it possible that this is doing some sort of matching or constraint satisfaction or matching across keys over here and values over there? Is it possible that it can do stuff like that? Maybe. I don&#8217;t know. This is all part of the architecture of this corticothalamic system. I don&#8217;t know how transformer-like it is or if there&#8217;s anything analogous to that <a href="https://en.wikipedia.org/wiki/Attention_(machine_learning)">attention</a>. It&#8217;d be interesting to find out.</p><p><strong>Dwarkesh Patel</strong></p><p>We&#8217;ve got to give you a billion dollars so you can come on the podcast again and tell me how exactly the brain works.</p><p><strong>Adam Marblestone</strong></p><p>Mostly I just do data collection. 
It&#8217;s really unbiased data collection so all the other people can figure out these questions.</p><p><strong>Dwarkesh Patel</strong></p><p>Maybe the final question to go off on is, what was the most interesting thing you learned from the <a href="https://www.essentialtechnology.blog/p/introducing-the-convergent-research">Gap Map</a>? Maybe you want to explain what the Gap Map is.</p><p><strong>Adam Marblestone</strong></p><p>In the process of incubating and coming up with these Focused Research Organizations, these nonprofit startup-like moonshots that we&#8217;ve been getting philanthropists and now government agencies to fund, we talked to a lot of scientists. Some of the scientists were just like, &#8220;Here&#8217;s the next thing my graduate student will do. Here&#8217;s what I find interesting. Exploring these really interesting hypothesis spaces, all the types of things we&#8217;ve been talking about.&#8221;</p><p>Some of them were like, &#8220;Here&#8217;s this gap. I need this piece of infrastructure. There&#8217;s no combination of grad students in my lab or me loosely collaborating with other labs with traditional grants that could ever get me that. I need to have an organized engineering team that builds the miniature equivalent of the <a href="https://en.wikipedia.org/wiki/Hubble_Space_Telescope">Hubble Space Telescope</a>. If I can build that Hubble Space Telescope, then I will unblock all the other researchers in my field or some path of technological progress in the way that the Hubble Space Telescope lifted the boats and improved the life of every astronomer.&#8221; But it wasn&#8217;t really an astronomy discovery in itself. It was just that you had to put this giant mirror in space with a <a href="https://en.wikipedia.org/wiki/Charge-coupled_device">CCD camera</a> and organize all the people and engineering and stuff to do that. 
So some of the things we talked to scientists about looked like that.</p><p>The Gap Map is just a list of a lot of those things and we call it a Gap Map. I think it&#8217;s actually more like a fundamental capabilities map. What are all these things, like mini Hubble space telescopes? And then we organized that into gaps for helping people understand that or search that.</p><p><strong>Dwarkesh Patel</strong></p><p>What was the most surprising thing you found?</p><p><strong>Adam Marblestone</strong></p><p>I think I&#8217;ve talked about this before, but one thing is just the overall size or shape of it or something like that. It&#8217;s a few hundred fundamental capabilities. So if each of these were a deep tech startup-size project, that&#8217;s only a few billion dollars or something. If each one of those were a Series A, that&#8217;s only&#8230; It&#8217;s not like a trillion dollars to solve these gaps. It&#8217;s lower than that. So that&#8217;s one thing. Maybe we assumed that, and that&#8217;s what we got. It&#8217;s not really comprehensive. It&#8217;s really just a way of summarizing a lot of conversations we&#8217;ve had with scientists.</p><p>I do think that in the aggregate process, things like Lean are actually surprising because I did start from neuroscience and biology and it was very obvious that there&#8217;s these -omics. We need genomics, but we also need connectomics. We can engineer <em>E. coli</em>, but we also need to engineer the other cells. There&#8217;s somewhat obvious parts of biological infrastructure. I did not realize that math proving infrastructure was a thing and that was emergent from trying to do this.</p><p>So I&#8217;m looking forward to seeing other things where it&#8217;s not actually this hard intellectual problem to solve it. 
It&#8217;s maybe slightly the equivalent of AI researchers just needing GPUs or something like that and focus and really good <a href="https://en.wikipedia.org/wiki/PyTorch">PyTorch</a> code to start doing this. Which are the fields that do or don&#8217;t need that? So fields that have had gazillions of dollars of investment, do they still need some of those? Do they still have some of those gaps or is it only more neglected fields? We&#8217;re even finding some interesting ones in actual astronomy, actual telescopes that have not been explored. Maybe because if you&#8217;re getting above a critical mass-size project, then you have to have a really big project and that&#8217;s a more bureaucratic process with the federal agencies.</p><p><strong>Dwarkesh Patel</strong></p><p>I guess you just need scale in every single domain of science these days.</p><p><strong>Adam Marblestone</strong></p><p>Yeah, I think you need scale in many of the domains of science. That does not mean that the low-scale work is not important. It does not mean that creativity, serendipity, etc., and each student pursuing a totally different direction or thesis that you see in universities is not also really key. But I think some amount of scalable infrastructure is missing in essentially every area of science, even math, which is crazy. Because mathematicians I thought just needed whiteboards, but they actually need Lean. They actually need verifiable programming languages and stuff. I didn&#8217;t know that.</p><p><strong>Dwarkesh Patel</strong></p><p>Cool. Adam, this is super fun. Thanks for coming on.</p><p><strong>Adam Marblestone</strong></p><p>Thank you so much. My pleasure.</p><p><strong>Dwarkesh Patel</strong></p><p>Where can people find your stuff?</p><p><strong>Adam Marblestone</strong></p><p>Pleasure. The easiest way now&#8230; My <a href="http://adammarblestone.org">adammarblestone.org</a> website is currently down, I guess. 
But <a href="http://convergentresearch.org">convergentresearch.org</a> can link to a lot of the stuff we&#8217;ve been doing.</p><p><strong>Dwarkesh Patel</strong></p><p>And then you have a great blog, <a href="https://longitudinal.blog/">Longitudinal Science</a>.</p><p><strong>Adam Marblestone</strong></p><p>Longitudinal Science, yes, on WordPress.</p><p><strong>Dwarkesh Patel</strong></p><p>Cool.</p><p><strong>Adam Marblestone</strong></p><p>Thank you so much. Pleasure.</p>]]></content:encoded></item><item><title><![CDATA[Thoughts on AI progress (Dec 2025)]]></title><link>https://www.dwarkesh.com/p/thoughts-on-ai-progress-dec-2025-video</link><guid isPermaLink="false">https://www.dwarkesh.com/p/thoughts-on-ai-progress-dec-2025-video</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Tue, 23 Dec 2025 20:24:48 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/182443852/42c223a82a58931b4887c35b3f8342fc.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3>What are we scaling?</h3><p>I&#8217;m confused why some people have short timelines and at the same time are bullish on the current scale up of reinforcement learning atop LLMs. If we&#8217;re actually close to a human-like learner, this whole approach of training on verifiable outcomes is doomed.</p><p>Currently the labs are trying to bake in a bunch of skills into these models through &#8220;mid-training&#8221; - there&#8217;s an entire supply chain of companies building RL environments which teach the model how to navigate a web browser or <a href="https://fortune.com/2025/10/22/sam-altman-openai-wall-street-junior-bankers-ai-entry-level-jobs/">use Excel to write financial models</a>.</p><p>Either these models will soon learn on the job in a self directed way - making all this pre-baking pointless - or they won&#8217;t - which means AGI is not imminent. 
Humans don&#8217;t have to go through a special training phase where they need to rehearse every single piece of software they might ever need to use.</p><p>Beren Millidge made interesting points about this in a recent <a href="https://www.beren.io/2025-08-02-Most-Algorithmic-Progress-is-Data-Progress/">blog post</a>:</p><blockquote><p>When we see frontier models improving at various benchmarks we should think not just of increased scale and clever ML research ideas but billions of dollars spent paying PhDs, MDs, and other experts to write questions and provide example answers and reasoning targeting these precise capabilities.</p></blockquote><p>You can see this tension most vividly in robotics. In some fundamental sense, robotics is an algorithms problem, not a hardware or data problem &#8212; with very little training, humans can learn how to teleoperate current hardware to do useful work. So if we had a human-like learner, robotics would (in large part) be solved. But the fact that we don&#8217;t have such a learner makes it necessary to go out into a thousand different homes to learn how to pick up dishes or fold laundry.</p><p>One counterargument I&#8217;ve heard from the takeoff-within-5-years crew is that we have to do this kludgy RL in service of building a superhuman AI researcher, and then the million copies of automated Ilya can go figure out how to solve robust and efficient learning from experience.</p><p>This gives the vibes of that old joke, &#8220;We&#8217;re losing money on every sale, but we&#8217;ll make it up in volume.&#8221; Somehow this automated researcher is going to figure out the algorithm for AGI - a problem humans have been banging their heads against for the better part of a century - while not having the basic learning capabilities that children have? I find this super implausible.</p><p>Besides, even if you think the RLVR scaleup will soon help us automate AI research, the labs&#8217; actions suggest otherwise.
You don&#8217;t need to pre-bake the consultant&#8217;s skills at crafting PowerPoint slides in order to automate Ilya. So clearly the labs&#8217; actions hint at a worldview where these models will continue to fare poorly at generalizing and on-the-job learning, thus making it necessary to build in the skills that they hope will be economically valuable.</p><p>Another counterargument you could make is that even if the model could learn these skills on the job, it is just so much more efficient to build them up just once during training rather than again and again for each user or company. And look, it makes a lot of sense to just bake in fluency with common tools like browsers and terminals. Indeed, one of the key advantages that AGIs will have is this greater capacity to share knowledge across copies. But people are underrating how much company- and context-specific skill is required to do most jobs. And there just isn&#8217;t currently a robust, efficient way for AIs to pick up those skills.</p><h3>Human labor is valuable precisely because it&#8217;s not schleppy to train</h3><p>I was at a dinner with an AI researcher and a biologist. The biologist said she had long timelines. We asked what she thought AI would struggle with. She said her work has recently involved looking at slides and deciding whether a dot is actually a macrophage or just looks like one. The AI researcher said, &#8220;Image classification is a textbook deep learning problem&#8212;we could easily train for that.&#8221;</p><p>I thought this was a very interesting exchange, because it revealed a key crux between me and the people who expect transformative economic impacts in the next few years. Human workers are valuable precisely because we don&#8217;t need to build schleppy training loops for every small part of their job.
It&#8217;s not net-productive to build a custom training pipeline to identify what macrophages look like given the way this particular lab prepares slides, then another for the next lab-specific micro-task, and so on. What you actually need is an AI that can learn from semantic feedback or from self-directed experience, and then generalize, the way a human does.</p><p>Every day, you have to do a hundred things that require judgment, situational awareness, and skills &amp; context learned on the job. These tasks differ not just across different people, but from one day to the next even for the same person. It is not possible to automate even a single job by just baking in some predefined set of skills, let alone all the jobs.</p><p>In fact, I think people are really underestimating how big a deal actual AGI will be because they&#8217;re just imagining more of this current regime. They&#8217;re not thinking about billions of human-like intelligences on a server which can copy and merge all their learnings. And to be clear, I expect this (aka actual AGI) in the next decade or two. That&#8217;s fucking crazy!</p><h3>Economic diffusion lag is cope for missing capabilities</h3><p>Sometimes people will say that the reason that AIs aren&#8217;t more widely deployed across firms and already providing lots of value (outside of coding) is that technology takes a long time to diffuse. I think this is cope. People are using this cope to gloss over the fact that these models just lack the capabilities necessary for broad economic value.</p><p>Steven Byrnes has an <a href="https://www.lesswrong.com/posts/xJWBofhLQjf3KmRgg/four-ways-learning-econ-makes-people-dumber-re-future-ai">excellent post</a> on this and many other points:</p><blockquote><p>New technologies take a long time to integrate into the economy? Well ask yourself: how do highly-skilled, experienced, and entrepreneurial immigrant humans manage to integrate into the economy immediately?
Once you&#8217;ve answered that question, note that AGI will be able to do those things too.</p></blockquote><p>If these models were actually like humans on a server, they&#8217;d diffuse incredibly quickly. In fact, they&#8217;d be so much easier to integrate and onboard than a normal human employee (they could read your entire Slack and Drive in minutes and immediately distill all the skills your other AI employees have). Plus, hiring is very much like a <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">lemons market</a>, where it&#8217;s hard to tell who the good people are, and hiring someone bad is quite costly. This is a dynamic you wouldn&#8217;t have to worry about when you just wanna spin up another instance of a vetted AGI model.</p><p>For these reasons, I expect it&#8217;s going to be much, much easier to diffuse AI labor into firms than it is to hire a person. And companies hire lots of people all the time. If the capabilities were actually at AGI level, people would be willing to spend trillions of dollars a year buying tokens (knowledge workers cumulatively earn tens of trillions of dollars of wages a year). The reason that lab revenues are 4 orders of magnitude off right now is that the models are nowhere near as capable as human knowledge workers.</p><h3>Goal post shifting is justified</h3><p>AI bulls will often criticize AI bears for repeatedly moving the goal posts. This is often fair. AI has made a ton of progress in the last decade, and it&#8217;s easy to forget that.</p><p>But some amount of goal post shifting is justified. If you showed me Gemini 3 in 2020, I would have been certain that it could automate half of knowledge work. We keep solving what we thought were the sufficient bottlenecks to AGI (general understanding, few-shot learning, reasoning), and yet we still don&#8217;t have AGI (defined as, say, being able to completely automate 95% of knowledge work jobs).
What is the rational response?</p><p>It&#8217;s totally reasonable to look at this and say, &#8220;Oh actually there&#8217;s more to intelligence and labor than I previously realized. And while we&#8217;re really close to (and in many ways have surpassed) what I would have defined as AGI in the past, the fact that model companies are not making trillions in revenue clearly reveals that my previous definition of AGI was too narrow.&#8221;</p><p>I expect this to keep happening into the future. I expect that by 2030 the labs will have made significant progress on my hobby horse of continual learning, and the models will start earning hundreds of billions in revenue, but they won&#8217;t have automated all knowledge work, and I&#8217;ll be like, &#8220;We&#8217;ve made a lot of progress, but we&#8217;re not at AGI yet. We also need X, Y, and Z to get to trillions in revenue.&#8221;</p><p>Models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate the long timelines people predict.</p><h3>RL scaling is laundering the prestige of pretraining scaling</h3><p>With pretraining, we had this extremely clean and general trend in improvement in loss across multiple orders of magnitude of compute (albeit on a power law, which is as weak as exponential growth is strong). People are trying to launder the prestige of pretraining scaling, which was almost as predictable as a physical law of the universe, to justify bullish projections about RLVR, for which we have no well-fit, publicly known trend. When intrepid researchers do try to piece together the implications from scarce public datapoints, they get quite bearish results.
For example, Toby Ord has a <a href="https://www.tobyord.com/writing/how-well-does-rl-scale">great post</a> where he cleverly connects the dots between different o-series benchmark charts, which suggested &#8220;we need something like a 1,000,000x scale-up of total RL compute to give a boost similar to a GPT level&#8221;.</p><h3>Broadly deployed intelligence explosion</h3><p>People have spent a lot of time talking about a software-only singularity (where AI models write the code for a smarter successor system), a software + hardware singularity (where AIs also improve their successor&#8217;s computing hardware), or variations thereof.</p><p>All these scenarios neglect what I think will be the main driver of further improvements atop AGI: continual learning. Again, think about how humans become more capable at anything. It&#8217;s mostly from experience in the relevant domain.</p><p>Over conversation, <a href="https://www.beren.io/">Beren Millidge</a> made the interesting suggestion that the future might look like continual learning agents going out, doing jobs and generating value, and then bringing all their learnings back to the hive-mind model, which does some kind of batch distillation on all these agents. The agents themselves could be quite specialized - containing what Karpathy called &#8220;the cognitive core&#8221; plus knowledge and skills relevant to the job they&#8217;re being deployed to do.</p><p>&#8220;Solving&#8221; continual learning won&#8217;t be a singular one-and-done achievement. Instead, it will feel like solving in-context learning. GPT-3 demonstrated that in-context learning could be very powerful (its ICL capabilities were so remarkable that the title of the GPT-3 <a href="https://arxiv.org/abs/2005.14165">paper</a> is &#8216;Language Models are Few-Shot Learners&#8217;).
But of course, we didn&#8217;t &#8220;solve&#8221; in-context learning when GPT-3 came out - and indeed there&#8217;s plenty of progress still to be made, from comprehension to context length. I expect a similar progression with continual learning. Labs will probably release something next year which they call continual learning, and which will in fact count as progress towards continual learning. But human-level continual learning may take another 5 to 10 years of further progress.</p><p>This is why I don&#8217;t expect some kind of runaway gains to the first model that cracks continual learning, thus getting more and more widely deployed and capable. If fully solved continual learning dropped out of nowhere, then sure, it&#8217;s &#8220;game, set, match&#8221;, as Satya put it. But that&#8217;s not what&#8217;s going to happen. Instead, some lab is going to figure out how to get some initial traction on the problem. Playing around with this feature will make it clear how it was implemented, and the other labs will soon replicate this breakthrough and improve it slightly.</p><p>Besides, I just have some prior that competition will stay fierce, informed by the observation that all these previous supposed flywheels (user engagement on chat, synthetic data, etc.) have done very little to diminish the greater and greater competition between model companies. Every month (or less), the big three will rotate around the podium, with other competitors not that far behind.
There is some force (potentially talent poaching, rumor mills, or reverse engineering) which has so far neutralized any runaway advantages a single lab might have had.</p>]]></content:encoded></item><item><title><![CDATA[Sarah Paine — Why Russia Lost the Cold War]]></title><description><![CDATA[Oil crisis, Sino-Soviet split, ethnic rebellions, and arms build-up]]></description><link>https://www.dwarkesh.com/p/sarah-paine-cold-war</link><guid isPermaLink="false">https://www.dwarkesh.com/p/sarah-paine-cold-war</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Fri, 19 Dec 2025 17:41:19 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/182083324/c1f3915d7a374bb63eada4a5ba007eee.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>This is the final episode of the Sarah Paine lecture series, and it&#8217;s probably my favorite one.</p><p>Sarah gives a &#8220;tour of the arguments&#8221; on what ultimately led to the Soviet Union&#8217;s collapse, diving into the role of the US, the Sino-Soviet border conflict, the oil bust, ethnic rebellions and even the Roman Catholic Church. As she points out, this is all particularly interesting as we find ourselves potentially at the beginning of another Cold War.</p><p>As we wrap up this lecture series, I want to take a moment to thank Sarah for doing this with me. It has been such a pleasure.</p><p>If you want more of her scholarship, I highly recommend checking out the books she&#8217;s written. 
You can find them <a href="https://www.amazon.com/stores/S.-C.-M.-Paine/author/B001HCVOTG">here</a>.</p><p>Watch on <a href="https://youtu.be/FdkpWrlR5zg">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/sarah-paine-why-russia-lost-the-cold-war/id1516093381?i=1000742025593">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/5MF73tDb0Nw9Bt7O5oIsIV?si=ublgonJZTO-HKfvqbXqoww">Spotify</a>.</p><div id="youtube2-FdkpWrlR5zg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;FdkpWrlR5zg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/FdkpWrlR5zg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h3>Sponsors</h3><ul><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> helped me create a tool to transcribe our episodes! I&#8217;ve struggled with transcription in the past because I don&#8217;t just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the <em>exact</em> data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li><li><p><a href="https://sardine.ai/dwarkesh">Sardine</a> doesn&#8217;t just assess customer risk for banking &amp; retail. Their AI risk management platform is also extremely good at detecting fraudulent job applications, which I&#8217;ve found useful for my own hiring process. 
If you need help with hiring risk&#8212;or any other type of fraud prevention&#8212;go to <a href="https://sardine.ai/dwarkesh">sardine.ai/dwarkesh</a>.</p></li><li><p><a href="https://gemini.google.com">Gemini&#8217;s</a> Nano Banana Pro helped us make many of the visuals in this episode. For example, we used it to turn dense tables into clear charts so that it&#8217;d be easier to quickly understand the trends that Sarah discusses. You can try Nano Banana Pro now in the Gemini app. Go to <a href="https://gemini.google.com">gemini.google.com</a>.</p></li></ul><h2>Timestamps</h2><p><a href="https://www.dwarkesh.com/i/182083324/did-reagan-single-handedly-win-the-cold-war">(00:00:00) &#8211; Did Reagan single-handedly win the Cold War?</a></p><p><a href="https://www.dwarkesh.com/i/182083324/eastern-bloc-uprisings-and-oil-crisis">(00:15:53) &#8211; Eastern Bloc uprisings &amp; oil crisis</a></p><p><a href="https://www.dwarkesh.com/i/182083324/gorbachevs-mistakes">(00:30:37) &#8211; Gorbachev&#8217;s mistakes</a></p><p><a href="https://www.dwarkesh.com/i/182083324/german-unification-and-nato-expansion">(00:37:33) &#8211; German unification and NATO expansion</a></p><p><a href="https://www.dwarkesh.com/i/182083324/the-gulf-war-and-the-cold-war-endgame">(00:48:31) &#8211; The Gulf War and the Cold War endgame</a></p><p><a href="https://www.dwarkesh.com/i/182083324/how-central-planning-survived-so-long">(00:56:10) &#8211; How central planning survived so long</a></p><p><a href="https://www.dwarkesh.com/i/182083324/sarahs-life-in-the-ussr-in">(01:14:46) &#8211; Sarah&#8217;s life in the USSR in 1988</a></p><h2>Transcript</h2><h3>00:00:00 &#8211; Did Reagan single-handedly win the Cold War?</h3><p><strong>Sarah Paine</strong></p><p>Thank you for coming. It&#8217;s a treat to be with you and sharing all this stuff.
Since we seem to be in a <a href="https://en.wikipedia.org/wiki/Second_Cold_War">second Cold War</a>, maybe it&#8217;s a good time to revisit the last one to see why it turned out the way it did and why the participants in it thought it turned out the way it did.</p><p>I&#8217;m going to pose the question: why did Russia lose the <a href="https://en.wikipedia.org/wiki/Cold_War">Cold War</a>? People have loads of different answers to that question. This is going to be a tour of the counter-arguments. I&#8217;m going to start with an answer that many Americans have. It&#8217;s a very simple one that&#8217;s like, &#8220;<a href="https://en.wikipedia.org/wiki/Ronald_Reagan">Ronald Reagan</a> single-handedly defeated the <a href="https://en.wikipedia.org/wiki/Soviet_Union">Soviet Union</a>.&#8221; That&#8217;s one possible answer. But then I&#8217;m going to give you all kinds of counter-arguments to that.</p><p>Some of them are going to be other external explanations of what others did to the Soviet Union. Others are internal ones of what the Soviet Union did, the cards that it didn&#8217;t play particularly well. And then I&#8217;ve got some umbrella explanations. So that&#8217;s my plan for this evening.</p><p>The story that Ronald Reagan did it&#8230; Well, here&#8217;s a picture at the <a href="https://www.reaganlibrary.gov/reagans/ronald-reagan/rancho-del-cielo">Reagan Ranch</a> after the Cold War is over. You see the <a href="https://en.wikipedia.org/wiki/Mikhail_Gorbachev">Gorbachevs</a> and you see the Reagans and they seem to be having a grand old time, which suggests there&#8217;s something maybe off with that explanation.</p><p>But anyway, the way the &#8220;Ronald Reagan did it&#8221; school goes is that Ronald Reagan did a massive military buildup and some would argue it bankrupted the Soviet Union. He was a man of words and deeds. 
He made really good speeches that were memorable.</p><p><a href="https://www.reaganlibrary.gov/archives/speech/address-members-british-parliament">Here&#8217;s one before Parliament</a> where he says, &#8220;The regimes planted by totalitarianism have had more than 30 years to establish their legitimacy, but none&#8212;not one regime&#8212;has yet been able to risk free elections. Regimes planted by bayonets do not take root.&#8221;</p><p>And then here he is before the <a href="https://en.wikipedia.org/wiki/Brandenburg_Gate">Brandenburg Gate</a>, this is in Berlin, long a symbol of German greatness. But then it was a locked gate on the <a href="https://en.wikipedia.org/wiki/Berlin_Wall">Berlin Wall</a>. Here&#8217;s Ronald Reagan: &#8220;General Secretary Gorbachev, if you seek peace, if you seek prosperity for the Soviet Union and Eastern Europe, if you seek liberalization, come to this gate. Mr. Gorbachev, open this gate. <a href="https://en.wikipedia.org/wiki/Tear_down_this_wall!">Tear down this wall!</a>&#8220;</p><p>And who can forget the <a href="https://en.wikipedia.org/wiki/Evil_Empire_speech">&#8220;Evil Empire&#8221;</a> speech, which he gave to the National Association of Evangelicals in Orlando, Florida, and they skipped Disneyland to hear it.</p><p>Reagan did a very significant military buildup that actually had started under <a href="https://en.wikipedia.org/wiki/Jimmy_Carter">Carter</a> when the <a href="https://en.wikipedia.org/wiki/Soviet%E2%80%93Afghan_War">Soviets invaded Afghanistan</a>, big mistake as we discovered. He also invested in and deployed missiles in Europe. He was busy <a href="https://en.wikipedia.org/wiki/Reagan_Doctrine">funding anti-communist insurgencies</a> and also others who didn&#8217;t like the Soviet Union all over the world. He started doing more aggressive military patrolling. By the time he&#8217;s out of office, he was like half a dozen ships short of this 600-ship navy or whatever it is he was planning to make. 
He was also trying to build a missile shield, his <a href="https://en.wikipedia.org/wiki/Strategic_Defense_Initiative">Strategic Defense Initiative</a>.</p><p>The problem is the Soviets tried to match him on this. If you add up the <a href="https://en.wikipedia.org/wiki/Gross_national_income">GNPs</a> of the United States, <a href="https://en.wikipedia.org/wiki/NATO">NATO</a> allies, and Japan, well, that would be seven times larger than the Soviet GNP. You&#8217;ve got to be aware of asymmetric strategy. The CIA thought during the Cold War that perhaps Russia was spending up to 20% of its GNP on defense. After the Cold War ended, when you were getting more accurate statistics, it turns out it was at least 40 or 50%. Some people say it was up to a truly economy-busting 70%, if you take into account all the infrastructure investments that were associated with military things. If you look during the Cold War, the United States was spending less than 8%, Germany less than 6%, Japan less than 2%, and Nazi Germany, which is no piker, 55%. So you look at all this and it was difficult.</p><p>So I am going to be quoting lots of Russians today because they have thought deeply about the fate of their country, how life as they knew it disappeared, the Soviet Union gone, the empire gone. They thought a lot about it. Here is a former Soviet ambassador to West Germany, <a href="https://en.wikipedia.org/wiki/Valentin_Falin">Valentin Falin</a>. 
Here&#8217;s his take: &#8220;Following the American strategy of our exhaustion in the arms race, our crisis in public health and all the things that have to do with standard of living reached a new dimension.&#8221; Then if you add to the arms race of the United States the arms race that was going on with China on that border, the arms race plunged the Soviet economy into a permanent crisis.</p><p>Here you have <a href="https://en.wikipedia.org/wiki/Georgy_Arbatov">Georgy Arbatov</a>, who was the late Soviet Union&#8217;s finest expert on the United States, or at least the most famous one. He&#8217;s looking at the Soviet war in Afghanistan. He said, &#8220;It is quite clear that the Afghan war was most advantageous for the United States. And we got our Vietnam.&#8221; Because the United States is busy funding the other side, and it&#8217;s costly. Gorbachev is looking at this, as he&#8217;s telling the <a href="https://en.wikipedia.org/wiki/Politburo_of_the_Communist_Party_of_the_Soviet_Union">Politburo</a> a year after he came into power. He said, &#8220;Look, the Americans are betting precisely on the fact that the Soviet Union is scared of this SDI, the Strategic Defense Initiative, a missile defense. That&#8217;s why they&#8217;re putting pressure on us, to exhaust us.&#8221; Correct.</p><p>So some would argue that the US victory in the arms race guaranteed victory in the Cold War. Go Ronnie. That&#8217;s one explanation. But I&#8217;m going to give you a tour of the counter-arguments and some other explanations, starting with Presidents <a href="https://en.wikipedia.org/wiki/Gerald_Ford">Ford</a>, Carter, and the <a href="https://en.wikipedia.org/wiki/Helsinki_Accords">Helsinki Declaration</a>.</p><p>After World War II, the Soviets had wanted to convene a conference of European states to confirm its expanded World War II borders. And for a long time, nobody was interested. The Western Europeans are sick of all the drama. 
The United States still doesn&#8217;t want to show, but we go along with our allies, and our allies insist on including human rights provisions. We think this is crazy because we know the Soviets are never going to enforce those things. But you get the Helsinki Accords that have all sorts of human rights provisions.</p><p>Well, lo and behold, unbeknownst to anybody, dissidents across the Eastern Bloc and human rights activists across the West start holding the communists to account for the agreements that they have signed and start contrasting the liberation that communism promises versus the dictatorship actually delivered. This human rights movement within the <a href="https://en.wikipedia.org/wiki/Eastern_Bloc">Soviet bloc</a> and abroad, took on a life of its own.</p><p>Here you have the former director of the CIA and former head of the Department of Defense, <a href="https://en.wikipedia.org/wiki/Robert_Gates">Robert Gates</a>, saying, &#8220;The Soviets desperately wanted this big conference and it laid the foundations for the end of their empire. We resisted it for years only to discover years later that this conference had yielded benefits beyond our wildest imagination.&#8221; Go figure.</p><p>Here is Jimmy Carter with his <a href="https://history.state.gov/milestones/1977-1980/human-rights">human rights initiative</a>. It was <a href="https://en.wikipedia.org/wiki/Pavel_Palazhchenko">Gorbachev&#8217;s English language translator</a> who said that Carter&#8217;s emphasis on precisely the human rights that were denied to Soviets really resonated and it made people think that they wanted a more democratic, open, liberal society. Here&#8217;s Carter giving a <a href="https://millercenter.org/the-presidency/presidential-speeches/may-22-1977-university-notre-dame-commencement">graduation address at Notre Dame</a>. He said, &#8220;We have reaffirmed America&#8217;s commitment to human rights as a fundamental tenet of our foreign policy. 
What draws us Americans together is a belief in human freedom. We want the world to know that our nation stands for more than just financial prosperity. We&#8217;re bigger than that.&#8221;</p><p>And here is <a href="https://en.wikipedia.org/wiki/Eduard_Shevardnadze">Eduard Shevardnadze</a>, Gorbachev&#8217;s foreign minister, echoing some of these sentiments. He said, &#8220;Look, the belief that we are a great country is deeply ingrained in me, but great in what? Territory? Population, quantity of arms, people&#8217;s troubles, the individual&#8217;s lack of rights? And what do we, who have virtually the highest infant mortality rate in the world, take pride in? It&#8217;s not easy answering the questions. Who are you? Who do you wish to be? A country which is feared or a country which is respected? A country of power or a country of kindness.&#8221;</p><p>Others agreed that communism was essential to the survival of the Soviet Union, but it&#8217;s an undemocratic ideology. Fundamentally, it&#8217;s a foundation that can&#8217;t endure forever. That&#8217;s the take of <a href="https://en.wikipedia.org/wiki/Vitaly_Ignatenko">Vitaly Ignatenko</a>, who&#8217;s a Russian journalist. <a href="https://en.wikipedia.org/wiki/Oleg_Grinevsky">Oleg Grinevsky</a>, who&#8217;s a Soviet career diplomat, is saying, &#8220;Look, communist ideology is associated above all with the Soviet Union. Its rejection created a vacuum and it determined its ultimate fate.&#8221; <a href="https://en.wikipedia.org/wiki/Boris_Yeltsin">Boris Yeltsin</a>, who is Gorbachev&#8217;s successor, said, &#8220;Look, no one wants a new Soviet Union.&#8221;</p><p>So some would argue, this counter-argument, that human rights clauses of the Helsinki Accords and Carter&#8217;s subsequent human rights campaign destroyed communist belief in communism. Okay. Another president, another counter-argument. 
Those who are fans of <a href="https://en.wikipedia.org/wiki/Richard_Nixon">Richard Nixon</a> would say, &#8220;No, no, no, no, no. It was Richard Nixon who played the <a href="https://millercenter.org/the-presidency/educational-resources/nixon-china">China card</a> so the United States and China could gang up on the Soviet Union and overextend it financially to wreck it militarily.&#8221;</p><p>I think the Chinese would beg to differ and say, &#8220;No, no, no, no. It was <a href="https://en.wikipedia.org/wiki/Mao_Zedong">Mao</a> who played the America card.&#8221; Because what&#8217;s going on in 1969? There&#8217;s a <a href="https://en.wikipedia.org/wiki/Sino-Soviet_border_conflict">border war between China and the Soviet Union</a>. <a href="https://en.wikipedia.org/wiki/Project_596">China&#8217;s gotten its nuclear bomb in &#8216;64</a>. It no longer has to defer to the Soviet Union and starts playing more tough on their border disagreements. So the Soviets are really upset. They come to the United States and ask us whether it would be okay to nuke these people, because they think Americans don&#8217;t like the Chinese. Well, we didn&#8217;t, but we said, &#8220;No, it&#8217;s not okay to nuke those people.&#8221;</p><p>So the Chinese figure it out. The one that wants to nuke you is your primary adversary, right? Up until then&#8230; Think about it, China and Russia, for them the United States was the primary adversary. Now they&#8217;re primary adversaries with each other, freeing up the United States to decide which one it&#8217;s going to cozy up to. And the United States decides it&#8217;s going to cozy up to China.</p><p>Why? Well, Chinese belligerency forces the Soviets. Not only have they already got a big militarized border with Europe, now they&#8217;re going to do the same thing on a very long border with China. These are nuclear-armed mechanized forces, very expensive. Imagine if this country had to have such borders with Canada and Mexico.
It would be bankrupting, and we are far richer than the Soviet Union was then, whenever. It was bankrupting. So some would argue that US cooperation with China fatally overextended the Soviet Union.</p><p>One could take all of these arguments, starting with President Nixon all the way through Reagan, to make an overarching argument that says, &#8220;Look, each president opened up opportunities for the others who then leveraged them.&#8221; So Nixon plays the China card, which others play with increasing dexterity. Ford comes in and begins dabbling in human rights. Carter then comes in and really goes for human rights and starts doing a military buildup, which then Ronald Reagan really does. So that by the time you get to Reagan, he is dealing in a position of both ideological and military strength vis-&#224;-vis the Soviet Union.</p><p>For those who think that US foreign policy was not consistent during the Cold War, you&#8217;re not looking at it at the strategic level.<strong> </strong>There were certain different strategies going on and how best to achieve it, but both parties agreed the goals were free trade, democracy, <a href="https://en.wikipedia.org/wiki/Containment">containment of communism</a>. Those were staples of US foreign policy, for both parties, for its duration.</p><p>So some would argue that Presidents Nixon through Reagan produced the cumulative presidential effects to defeat the Soviet Union.</p><p>Okay, others would say to forget this <a href="https://en.wikipedia.org/wiki/Great_man_theory">great man theory</a> of history business, that&#8217;s really pass&#233;. What really accounted for the outcome of the Cold War was this military platform, that&#8217;s Pentagonese for large military systems. But anyway, it&#8217;s a <a href="https://en.wikipedia.org/wiki/Nuclear_submarine">nuclear-powered</a>, nuclear-armed submarine. 
They say that this is the item.</p><p>The way <a href="https://en.wikipedia.org/wiki/Deterrence_theory">deterrence theory</a> worked during the Cold War, and I believe now as well, is that in order to deter the other side, you have to have a reliable <a href="https://en.wikipedia.org/wiki/Second_strike">second-strike capability</a>. So if they thought of lobbing a nuke at you, they would be guaranteed that you would have the second strike to lob a nuke back. Therefore, they&#8217;re never going to lob the first nuke.</p><p>When Jimmy Carter became president, he was a graduate of <a href="https://en.wikipedia.org/wiki/United_States_Naval_Academy">Annapolis</a> and also a submariner. The United States began a much more aggressive deployment of its fleet and that&#8217;s continued even more so under Reagan. We&#8217;re taking our submarines and we&#8217;re targeting Soviet submarines in their home water bastions. So the Soviets are thinking that we&#8217;re going to be able to destroy their second-strike capability on our first strike and they&#8217;re having a heart attack.</p><p>So here you have <a href="https://en.wikipedia.org/wiki/Valery_Boldin">Valery Boldin</a>, a longtime aide to Gorbachev, saying, &#8220;Look, the most powerful strength of the United States is the naval fleet and we aren&#8217;t going to get one, or our geography isn&#8217;t set up to use one the way the United States can.&#8221; And then you have <a href="https://en.wikipedia.org/wiki/Dmitry_Yazov#:~:text=Yazov%20was%20the%20last%20person,Marshal%20of%20the%20Soviet%20Union.">Marshal Yazov</a> saying, &#8220;For the Americans, the main means of atomic attack is the fleet.&#8221;</p><p>So then you get <a href="https://en.wikipedia.org/wiki/Sergey_Akhromeyev">Marshal Akhromeyev</a>, who&#8217;s visiting the United States in 1987. At the end of the Cold War he will kill himself, but he&#8217;s still around in &#8216;87. 
He&#8217;s telling his American hosts, &#8220;You know where our submarines are, but we don&#8217;t know where yours are. It&#8217;s destabilizing. You, you the United States Navy, are the problem.&#8221; Go Navy. And here&#8217;s his host, <a href="https://en.wikipedia.org/wiki/Carlisle_Trost">Admiral Trost</a>, who&#8217;s going, &#8220;Yeah, the inability of the Soviet Union to maintain a strong defensive capability led to the demise of the Soviet Union and to the removal of the Soviets as a major threat to us.&#8221;</p><p>So you can make a perfectly good argument to say the Soviet Union could not counter technologically or financially the US submarine threat to its retaliatory nuclear forces, so war termination was the only thing it could do.</p><p>All of these preceding explanations are navel explanations, spelled with an &#8216;e&#8217;, as in staring at one&#8217;s own. They&#8217;re all about what the United States did or didn&#8217;t do. So let&#8217;s get beyond the half-court tennis of Team America. You need to look at the other side of the net. This is where the Western guru for things military, <a href="https://en.wikipedia.org/wiki/Carl_von_Clausewitz">Carl von Clausewitz</a>, emphasizes reciprocity in war and the interaction of both sides. You&#8217;re not going to do well unless you consider what the other side is doing.</p><h3>00:15:53 &#8211; Eastern Bloc uprisings &amp; oil crisis</h3><p>So I have given you some external explanations and I&#8217;m going to do the internal ones. Here is <a href="https://en.wikipedia.org/wiki/Arnold_J._Toynbee">Arnold Toynbee</a>, he&#8217;s one of the finest historians of the 20th century. He wrote a <a href="https://en.wikipedia.org/wiki/A_Study_of_History">big multi-volume history of the West</a>, in which he argues that civilizations die from suicide, not by murder. So I discussed the murder, what the United States tried to do to the Soviet Union. 
Now I&#8217;m going to talk about the suicide, what the Soviets did to themselves. And here is counter-argument number one. The Soviet Union was an empire, and when that collapsed, that meant they lost the Cold War.</p><p>During the Cold War, through the <a href="https://en.wikipedia.org/wiki/Korean_War">Korean War</a> and the <a href="https://en.wikipedia.org/wiki/Vietnam_War">Vietnam War</a>, there was much fear in the West of this <a href="https://en.wikipedia.org/wiki/Domino_theory">domino theory</a>. The idea is one country falls to communism, then the next and next and next and next would fall to communism. Turns out the domino theory did not apply to capitalism. It applied to communism because once the democratic contagion hit one <a href="https://en.wikipedia.org/wiki/Warsaw_Pact">Warsaw Pact</a> country in Eastern Europe, it spread to the others until it was a seething mess and they fell like dominoes.</p><p><a href="https://en.wikipedia.org/wiki/Revolutions_of_1989">So in 1988-89, there were all kinds of demonstrations in the Eastern Bloc</a> and the Soviet Union. In the Soviet Union, they&#8217;re for political freedoms. In the Eastern Bloc, they&#8217;re for freedom from the Soviet Union. Gorbachev may not have gotten that detail. They&#8217;re all about not only wanting political freedoms, but also about crumbling economies and how to fix their miserable standards of living. Very uncharacteristically, the Russians didn&#8217;t send tanks. In fact, Gorbachev welcomed and encouraged reforms in the Eastern Bloc, both political and economic, just as he was doing in the Soviet Union. So his ideas of <em><a href="https://en.wikipedia.org/wiki/Glasnost">glasnost</a></em>, openness, and <em><a href="https://en.wikipedia.org/wiki/Perestroika">perestroika</a></em>, rebuilding, resonated at home and abroad.</p><p>These reforms began in Poland. 
Poland had been a scene of much worker unrest many times, in <a href="https://en.wikipedia.org/wiki/1956_Pozna%C5%84_protests">1956</a>, <a href="https://en.wikipedia.org/wiki/December_1970_protests_in_Poland">1970</a>, <a href="https://en.wikipedia.org/wiki/June_1976_in_Polish_protests">1976</a>, and 1980 and 1981. In 1981, this is when <a href="https://en.wikipedia.org/wiki/Solidarity_(Polish_trade_union)">Solidarity</a>, the workers movement, gets going and it gets a national and an international reputation. The next set of strikes are happening in <a href="https://en.wikipedia.org/wiki/1988_Polish_strikes">1988</a>, because in the preceding several years, the Polish standard of living had shrunk by over 3%.</p><p>The government was out of cash and wanted to raise basic food prices, and Poles hit the streets. The government was in a panic, because it was worried the economy would go into free fall. So the government cut a deal with Solidarity. They said, &#8220;You call off the strikes and then we&#8217;ll let you into political talks,&#8221; and Solidarity agreed. There was a complicating factor on all of this. It&#8217;s called the <a href="https://en.wikipedia.org/wiki/Catholic_Church">Roman Catholic Church</a>, which is an institution of enormous credibility and legitimacy in Poland, which had a partiality for Solidarity and it had a <a href="https://en.wikipedia.org/wiki/Pope_John_Paul_II">Polish pope</a>.</p><p>So the roundtable discussions were these political talks. They occurred a year later in February 1989, and the Soviets encouraged them. In fact, here&#8217;s one Soviet person there advising the Poles: &#8220;Look, you&#8217;ve got to find some quick solutions out of your economic and political mess. You&#8217;re an itty-bitty country, so when you make mistakes, they&#8217;ll be itty-bitty mistakes. 
But if we make them, they&#8217;ll be big.&#8221; They got that one right.</p><p>The <a href="https://en.wikipedia.org/wiki/Polish_United_Workers%27_Party">Polish Communist Party</a> thought they had this one covered by the way they jiggered the election rules. Not quite. The day they held elections is exactly the same day that <a href="https://en.wikipedia.org/wiki/Deng_Xiaoping">Deng Xiaoping</a> turned the tanks on demonstrators in Beijing and you have the <a href="https://en.wikipedia.org/wiki/1989_Tiananmen_Square_protests_and_massacre">Tiananmen Massacre</a>. Two different solutions to the same problem. So the way the elections worked out in Poland is that Solidarity won every single seat for which it could compete but one. And then only three people in the communist-designated seats actually won. So who won all the rest of them? The box on the ballot called &#8220;none of the above.&#8221; Yes, the Roman Catholic Church had helped instruct people that that&#8217;s the box you want. With that, the legitimacy of the Communist Party to rule had just been wrecked and we&#8217;re on to democracy in Poland.</p><p>This democratic contagion then spread into <a href="https://en.wikipedia.org/wiki/East_Germany">East Germany</a> four months later. This is about the 40th anniversary of the founding of East Germany. 70,000 people <a href="https://en.wikipedia.org/wiki/Monday_demonstrations_in_East_Germany">demonstrated in Leipzig</a>. Within the week, around 1.4 million Germans are demonstrating in over 200 demonstrations. Typically, the East Germans would have sent tanks. That was what they would have done in the past. But would-be tank man <a href="https://en.wikipedia.org/wiki/Erich_Honecker">Erich Honecker</a> was already out of a job. His ruinous policies of living off debt since he came to power in 1971 had just about wrecked East Germany. 
So he was out.</p><p>Then less than two weeks later, the <a href="https://en.wikipedia.org/wiki/Council_of_Ministers_of_East_Germany">Council of Ministers</a> resigns. Then on November 8th, the <a href="https://en.wikipedia.org/wiki/Socialist_Unity_Party_of_Germany">Politburo</a> resigns. Then on the 9th, whatever is left of that government is issuing new travel regulations. You might wonder what travel has got to do with it. I&#8217;ll get there.</p><p>So in response to a question at a news conference, this guy, <a href="https://en.wikipedia.org/wiki/G%C3%BCnter_Schabowski">G&#252;nter Schabowski</a>, who was one of the remaining communists helping run the show, gets asked a question and he doesn&#8217;t know the answer. So he wings it. The question is, &#8220;When do these travel regulations go into effect?&#8221; And he goes, &#8220;Immediately.&#8221; Well, crowds immediately started gathering at the six gates to the <a href="https://en.wikipedia.org/wiki/Berlin_Wall">Berlin Wall</a>. At one of them, the <a href="https://www.npr.org/sections/parallels/2014/11/06/361785478/the-man-who-disobeyed-his-boss-and-opened-the-berlin-wall">border guards decided that discretion was the better part of valor</a>, and they opened the gate and East Germans poured into West Berlin.</p><p>Within the first week alone, over half of East Germany&#8217;s population visited the West. Within the month, 1% of the population emigrated to the West. Like the Polish elections, this opening of the gate was a pivotal decision. A pivotal decision, whatever it is, means there&#8217;s no going back to the way it was. Here&#8217;s good old G&#252;nter going, &#8220;Gosh, we hadn&#8217;t a clue that opening the wall was the beginning of the end of East Germany.&#8221; Okay, better luck next time. And the Russians were shocked by how unpopular they were. 
They were thinking they were going to get credit, Gorbachev especially, for Eastern Europe&#8217;s liberation rather than blame for Eastern Europe&#8217;s enserfment.</p><p>Here you have <a href="https://en.wikipedia.org/wiki/Yuri_Ryzhov_(physicist)">Yuri Ryzhov</a>, a scientist and parliamentarian, saying, &#8220;All of our former satellites by compulsion cast off from us as fast and as far as possible.&#8221; And <a href="https://history.state.gov/historicaldocuments/frus1969-76v16/persons">Anatoly Kovalev</a>, who is a deputy foreign minister, said, &#8220;Look, we had no confidence whatsoever concerning whom the East German army is going to shoot, the demonstrators or us. And the same thing for the Polish and Hungarian armies.&#8221; Great. With allies like this, who needs enemies? That covers the allies. So this argument says unrest in the empire forced the Soviet Union to forfeit the Cold War.</p><p>Okay, I&#8217;ve got another counterargument. It says, &#8220;Nonsense, the real problem was that the <a href="https://en.wikipedia.org/wiki/Satellite_state">satellites</a> were unhealthy. 
That&#8217;s why the whole thing fell apart.&#8221;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!1U7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15a048e-e653-4cee-9ada-6e65a131ac53_2048x1005.png" width="1456" height="714" alt="" loading="lazy"></figure></div><p>So this map is 1960. You see all those tempting green places. They&#8217;re about to become independent, and they are really sick of their Western European colonizers. Enter the Soviet Union with a program to put the West out of business. 
There were many takers.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!38qW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e9cf375-bf02-4f8b-9ad9-1814243194b0_2048x1005.png" width="1456" height="714" alt="" loading="lazy"></figure></div><p>Fast forward to the late 1980s. The Soviet Union is on a roll. Small hitch: in the late 1970s there was a <a href="https://en.wikipedia.org/wiki/Early_1980s_recession">big recession that continued into the 80s</a> and tanked commodity prices. For some of the newfound pals like Angola, South Yemen, Ethiopia, and Nicaragua, it wrecked their export earnings because they&#8217;re exporting commodities and these commodity prices are down. In many cases, it halved them. The Soviet Union was really dependent on oil exports, still is. Oil prices tanked, and oil accounted for up to 55% of the Soviet budget. 
So here <a href="https://en.wikipedia.org/wiki/Leonid_Brezhnev">Brezhnev</a> has got a deep bench of non-performing pals at a time when he doesn&#8217;t have the money to support all of them.</p><p>Worse yet from the Soviet point of view, it&#8217;s dumped all this money into these Third World friends, but meanwhile it&#8217;s got its own nationalities who are deeply unhappy and want out of the empire. Most problematically, they all revolt at exactly the same time. One of the rules for continental empire is &#8220;no two-front wars&#8221;. Russia has so many fronts at this point, it can&#8217;t even keep count.</p><p>The unrest in the internal empire of nationalities started as soon as Gorbachev got in. There were <a href="https://en.wikipedia.org/wiki/Jeltoqsan">student movements in Kazakhstan</a> and <a href="https://en.wikipedia.org/wiki/Sakha_Republic">Yakutia</a>, opposite ends of the country. By the time you get to 1990, there are like 76 seething ethnic rebellions in different parts of it. There was too much going on for the Soviet government to handle. So you could argue that the Soviet Union bankrupted itself with the Third World while ignoring its own internal Third World of nationalities, whose simultaneous revolts brought down the Soviet Union.</p><p>If you don&#8217;t like all of those, I&#8217;ve got a completely different argument for you. It&#8217;s the economy, stupid, right? That line. One could argue that communism failed as an economic system. If you look at growth statistics for the Soviet Union, they&#8217;re pretty good post-World War II when they&#8217;re rebuilding, but they really <a href="https://en.wikipedia.org/wiki/Era_of_Stagnation">stagnate from the mid-70s onward</a>. For the decade preceding Gorbachev&#8217;s coming to power, Soviet growth stats were one to two percent lower than those of the United States, and the compounding effects of that were enormous: a two percent annual gap compounds to a gap of more than 20% over a decade.</p><p>What&#8217;s going on? 
Everyone&#8217;s lying to each other. The data the Soviets are using is garbage. If you&#8217;re working for a subunit of an enterprise, you have to lie about the inventories you have, saying you have less than you do, and then you have to lie about what you need, saying you need more than you do, because you&#8217;re worried about getting enough things. It&#8217;s not a market system where the price dictates it. This is all about the plan. You&#8217;ve got to enter the right numbers and then you get whatever inputs you get from the centralized plan.</p><p>So everyone&#8217;s lying. They&#8217;re aggregating all the lies. The higher up the food chain you aggregate these things, the worse the data is, so that the Soviet government has no idea what the actual value of capital or labor is. It has no idea what actual productivity is, and no one has any idea what consumer preferences are. You&#8217;re not using markets and prices. The misallocation of capital and labor goes unnoticed until it metastasizes into a catastrophe.</p><p>To give you a sense of these misallocations, the Soviet Union was rotting 20 to 40% of its crops. It&#8217;s using scarce hard currency for agricultural imports to make up for those crops, a total mess. You can look at what happens to the economy with oil prices down. We&#8217;re into a spiraling mess, so that from when Gorbachev comes in in &#8216;85 to when it hits a trough in Russia in 1998, you see this crashing share of world GDP for the Eastern bloc. If you look at Soviet statistics on deficits, trade balances, debt, they&#8217;re just soaring, and then GNP growth goes double-digit negative. That&#8217;s called shrinkage. It&#8217;s not the normal thing.</p><p>Marshal Yazov, here&#8217;s his take: &#8220;We simply lack the power of all these wealthy NATO nations. 
We had to find an alternative to the arms race.&#8221; And here&#8217;s a foreign service officer, <a href="https://en.wikipedia.org/wiki/Anatoly_Adamishin">Anatoly Adamishin</a>. He said, &#8220;Look, our problems began with the departure from isolation. The main reasons for collapse were internal, not external reasons. The Soviet economy was literally exhausted from this monstrous arms race, militarism, enemies with half the world.&#8221; That&#8217;s his take. Gorbachev told the Central Committee, &#8220;Look, we&#8217;re encircled not by invincible armies, but by superior economies.&#8221; He often told people, &#8220;Living this way any longer is impossible.&#8221; So you can make a powerful argument that it&#8217;s the Soviet economy that lost the Cold War.</p><h3>00:30:37 &#8211; Gorbachev&#8217;s mistakes</h3><p>This gentleman, <a href="https://en.wikipedia.org/wiki/Alexis_de_Tocqueville">Alexis de Tocqueville</a>, is very famous for writing <a href="https://en.wikipedia.org/wiki/The_Old_Regime_and_the_Revolution">a book about the last days of the French monarchy</a> before the <a href="https://en.wikipedia.org/wiki/French_Revolution">French Revolution</a> overturned it. He also wrote something about <em><a href="https://en.wikipedia.org/wiki/Democracy_in_America">Democracy in America</a></em>, both excellent books. But this one&#8217;s from the one about France, where Tocqueville observes, &#8220;The most dangerous moment for a bad government is when it begins to reform.&#8221;</p><p>Russians of all political persuasions agree on at least one thing. That is that Gorbachev&#8217;s role in how the Cold War turned out was pivotal, that he played a very essential part. Gorbachev made his decision based on certain false assumptions. One of them was the irreversible direction of history. Gorbachev thought of history going always forward towards communism, never backwards to capitalism. Of course, Eastern Europe took a U-turn, went straight back to capitalism. 
Here is <a href="https://en.wikipedia.org/wiki/Leonid_Shebarshin">Leonid Shebarshin</a>, who is a senior person in the <a href="https://en.wikipedia.org/wiki/KGB">KGB</a>, their intelligence service. He said, &#8220;The thought never occurred to the government that it&#8217;s possible to withdraw from socialism.&#8221;</p><p>If you think about both communist theory and how imperialism works in practice, usually the mother country is more developed than the colonies, right? Well, the Soviet Union was an inverted empire. People in Eastern Europe as a group were better educated and richer than Russians. It was like a donut empire. So as long as the empire held Eastern Europe, Russians could siphon off the wealth of these enserfed populations, which explains why they wanted to leave. It also suggests why <a href="https://en.wikipedia.org/wiki/Vladimir_Putin">Putin</a> wants them back.</p><p>Another false assumption has to do with the sentiments of the neighbors. Gorbachev was convinced he was going to get credit for liberating Eastern Europe, rather than blame as a Russian for having enserfed them in the first place. For Gorbachev, the clock began on his watch. For other people, no, it began on <a href="https://en.wikipedia.org/wiki/Joseph_Stalin">Stalin&#8217;s</a> watch, when he started shooting a lot of people.</p><p>Here you have <a href="https://en.wikipedia.org/wiki/Anatoly_Chernyaev">Anatoly Chernyaev</a>, foreign policy adviser to Gorbachev, saying that Gorbachev thought that bringing freedom to our Eastern European satellites would have them adopt socialism with a human face. &#8220;He made an enormous mistake because these countries brutally turned their back on us.&#8221; Really, if that&#8217;s brutal, then what pray tell was Stalin? And then it gets better: &#8220;The politics in connection with our former friends were totally unexpected to us.&#8221; Really? 
You occupy people, you never leave, you shoot a lot of people in their government, you put in a new government, you siphon off a lot of their wealth, and you impose a non-performing economic system, and you wonder why they don&#8217;t like you.</p><p>Think about the United States. It intervenes all around the world in other people&#8217;s troubles. It dumps billions in economic aid and even leaves and people don&#8217;t like us. I don&#8217;t know why the Russians think they&#8217;re so special.</p><p>Another false assumption: Gorbachev believed that if the Warsaw Pact, the military alliance of the Eastern Bloc, disappeared, then NATO would disappear. He also believed that if the <a href="https://en.wikipedia.org/wiki/Comecon">Comecon</a>, which is their trading organization, went away, then the <a href="https://en.wikipedia.org/wiki/European_Economic_Community">European Community</a> in those days&#8212;it becomes the <a href="https://en.wikipedia.org/wiki/European_Union">European Union</a> later&#8212;would disappear. Not quite. It turns out that organizations that are coercive versus those that are voluntary, they dissolve for different reasons.</p><p>And then Gorbachev also assumed that the United States would share a continental outlook of not wanting strong powers and that the United States therefore would not want a unified Germany, let alone a strong unified Germany. So when all the unrest is happening in Germany, Gorbachev is off taking a vacation. Poor life choice, because at that moment, <a href="https://en.wikipedia.org/wiki/George_H._W._Bush">President George Bush Sr.</a> and <a href="https://en.wikipedia.org/wiki/Helmut_Kohl">Chancellor Kohl</a> of Germany are working on fast-tracking German unification of a fully sovereign, unified Germany&#8212;both halves in NATO.</p><p>So many of Gorbachev&#8217;s closest supporters at the end of it all blamed him. 
They said, &#8220;Look, his foreign policy mistakes were a function of his domestic policy mistakes and it destroyed the Soviet Union.&#8221; Back to this America expert, <a href="https://en.wikipedia.org/wiki/Vladimir_Lukin">Vladimir Lukin</a>: &#8220;Gorbachev was no Deng Xiaoping.&#8221; And Arbatov, who&#8217;s their premier America expert: &#8220;The stupidity of our leaders caused the disintegration of the Soviet Union.&#8221; So the big bozo was playing with plastic bags, stuck one over his head, and committed suicide, by mistake. Lukin continued: &#8220;In the West, they love Gorbachev because everything took place so easily and cheaply, basically like that, but only for you. For us, it was expensive.&#8221; But you could argue the time to reassess all the Stalinist stuff was long overdue.</p><p>Here&#8217;s a completely different way of looking at it. I&#8217;ve been giving you sins of commission, and now I&#8217;m going to do sins of omission. It&#8217;s a good framework. It&#8217;s useful for other things. The sins of commission are all the things Gorbachev did. Now what I&#8217;m going to do is what the army didn&#8217;t do. Some would argue that the <a href="https://en.wikipedia.org/wiki/Red_Army">Red Army</a> should have done exactly what Deng Xiaoping ordered his army to do. You just send the tanks against civilian demonstrators and they truly crush them and it&#8217;ll be over. The Communist Party is still in power in China 30 years later. So there are some people who believe that this was a terrible mistake.</p><p>So this argument would be that timely tank deployments&#8212;TTD, my contribution to military acronyms&#8212;would have changed the outcome of the Cold War. Others would be back to the great men of history and sins of commission, and they wouldn&#8217;t be picking on Gorbachev but his successor <a href="https://en.wikipedia.org/wiki/Boris_Yeltsin">Boris Yeltsin</a>. There are two big pieces of evidence when we look at this. 
He removed <a href="https://en.wikipedia.org/wiki/Leading_role_of_the_party">Article 6 from the Soviet Constitution</a>, which guaranteed that the <a href="https://en.wikipedia.org/wiki/Communist_Party_of_the_Soviet_Union">Communist Party</a> would always monopolize power. Then in the following year, when Yeltsin is the head of Russia, he gets together with the heads of Ukraine and Belarus, and they sign the <a href="https://en.wikipedia.org/wiki/Belovezha_Accords">Belavezha Accords</a>, which formally dissolved the Soviet Union. So according to this way of thinking, it&#8217;s his fault. It&#8217;s suicide on purpose. And what it does is open the door for multiple parties and for the nationalities within the Soviet Empire to become independent.</p><h3>00:37:33 &#8211; German unification and NATO expansion</h3><p>So I&#8217;ve given you internal explanations. I&#8217;ve given you external explanations. Now I&#8217;m going to give you some umbrella explanations. They&#8217;re based on all the preceding evidence, and they come to opposite conclusions. The first one is that any of the above made collapse inevitable. The opposite conclusion from the same evidence is that no, it took all of the above. The West barely won.</p><p>I&#8217;m going to start with &#8220;any of the above&#8221;. You could argue that with this many serious problems, it was only a matter of time before the Soviet Union collapsed. It was an objectionable system for precisely the reasons the West didn&#8217;t like it. It had a brutally inefficient economic system. The Russians who invented the thing, at the end of the day, didn&#8217;t want it either. 
By this way of looking at it, you have people like <a href="https://en.wikipedia.org/wiki/Yuri_Ryzhov_(physicist)">Yuri Ryzhov</a>, a genuine rocket scientist, who says, &#8220;Look, the main reason for the collapse of the Soviet Union is the rottenness of its system.&#8221; Then here&#8217;s a journalist, <a href="https://oac4.cdlib.org/findaid/ark:/13030/kt9v19r9pf/">Teimuraz Stepanov</a>, who said, &#8220;Look, I think from the beginning the genes of disintegration were contained in the genetics of this governmental political formation.&#8221; Don&#8217;t you love the products of the Soviet educational system? Don&#8217;t ever use wording like that.</p><p>So you could argue that the Soviet Union was destined to fail with this many problems. Others would come to the opposite conclusion. They would say, &#8220;No, it took every single one of them for the Cold War to end on Western terms.&#8221; Back to Anatoly Kovalev, the deputy foreign minister, he said, &#8220;Look, all these factors merge&#8212;internal, ideological, economic, military&#8212;it&#8217;s all of them. You remove any one of them and you get a different outcome. Maybe the Cold War ends, but it might end completely differently.&#8221; So by this line of reasoning, the West barely won and should feel very fortunate that it did.</p><p>One can take this last argument and say it was more than that. It also took the confluence in office of two very talented leaders: Helmut Kohl of Germany and George Bush Sr. of the United States, not the son who got into those forever wars, but the dad who didn&#8217;t. George Bush Sr. had one of the most amazing resumes of any person ever to become president of the United States. Just look at him. When he&#8217;s really young, he&#8217;s a war hero in World War II. He&#8217;s a Navy pilot, a dangerous thing to do. He did it. Then he comes back and he gets his BA at Yale and graduates with honors. 
Then he becomes a representative for this district in Texas after he&#8217;s already made himself a millionaire in the oil business that he started. Then he became ambassador to the UN, followed by US representative to the <a href="https://en.wikipedia.org/wiki/China">PRC</a>, before we had formal diplomatic relations. So he&#8217;s the guy who&#8217;s setting that up. He becomes director of the CIA, and then he is Ronald Reagan&#8217;s understudy for eight years as vice president. He is incredibly fit for the job.</p><p>Helmut Kohl is equally fit for the job. He is the longest-serving chancellor in German history since his illustrious predecessor, <a href="https://en.wikipedia.org/wiki/Otto_von_Bismarck">Otto von Bismarck</a>. He gets a PhD in history and political science. He starts out in business, but then he works in state government, initially as a representative, then as a governor. He becomes chairman of his political party, the <a href="https://en.wikipedia.org/wiki/Christian_Democratic_Union_of_Germany">Christian Democratic Union</a>, for a quarter of a century.</p><p>Once he gets in, he decides he&#8217;s going to buy up East Germany one tourist at a time. How does that work? East Germans, it turns out, really like to travel. West Germans had long been able to travel to East Germany, but East Germans definitely could not easily travel to West Germany. Why? Because they have a habit of staying. But all of a sudden, East Germany eases up on the travel regulations. You might ask why, and the answer would be money. Just like the Poles, the East Germans were deep in an economic mess of their own making.</p><p>Would-be tank man Erich Honecker, who got the boot at the very end, well, his staying-in-power paradigm that he implements in 1971 is that he&#8217;s going to live off debt. 
He needs to make certain social and consumer benefits available for labor stability, to not have labor unrest. The way he&#8217;s going to do that is he&#8217;s not going to make many domestic investments and he&#8217;s going to do a lot of borrowing, particularly from West Germany. Well, that&#8217;s unsustainable long-term. By the time you get to the end of the Cold War, if he&#8217;s going to fix that and even out the accounts, it would be a 30% decline in the East German standard of living. So he really needs the pocket change from the tourists.</p><p>So Kohl does a brisk business in tourists. In return for the easing of travel restrictions, he pays East Germany several hundred million Deutschmarks extra to allow that to happen. And then he gets the Hungarians to go along. <a href="https://en.wikipedia.org/wiki/Removal_of_Hungary%27s_border_fence_with_Austria">He gets the Hungarians to open up their Austrian border</a> to let East Germans out that way, and he gives them half a billion Deutschmarks for that little favor.</p><p>When Kohl introduces his <a href="https://germanhistorydocs.ghi-dc.org/sub_document.cfm?document_id=223">10-point unification program</a>&#8212;because now he&#8217;s thinking he&#8217;s going to get both Germanys together&#8212;this is when he starts doling out big bucks to the Soviet Union, whose economy is unraveling. Gorbachev is going to be desperate for this cash as that&#8217;s happening. So West Germany provides 100 million in food, especially in meat, for the Soviet Union that doesn&#8217;t have these things.</p><p>Nevertheless, the unrest just keeps on going. The Berlin Wall, as I&#8217;ve told you, is breached, then you wind up with an East German caretaker government, and the financial situation in Russia itself is unraveling. By the time you get to January 1990, Bush and Kohl get together and they decide they want to really fast-track German reunification. Why? 
Because they&#8217;ve got to get it done before this unraveling crisis causes Gorbachev to fall from power. So they have got a game going, the two of them. It&#8217;s complicated. Here&#8217;s why.</p><p>Gorbachev was dead-set against a united Germany in NATO. He&#8217;s not really keen on a united Germany at all, let alone one in NATO. The US State Department experts, the guys who know everything, are saying, &#8220;No, you want to go slow on this unification business.&#8221; Kohl is also running a coalition government. There are people in that government he cannot fire because they&#8217;re from different political parties. One of them is his foreign minister, this guy <a href="https://en.wikipedia.org/wiki/Hans-Dietrich_Genscher">Genscher</a>, who is very skeptical about Germany being part of NATO. Then it turns out, although Britain had talked a good game during the Cold War, it didn&#8217;t actually want a unified Germany, nor did France. Why? Because that unified Germany would eclipse them economically. They didn&#8217;t want that to happen.</p><p>So Kohl and Bush divide up the tasks. Kohl is going to reassure the Soviet Union that Germany is not going to be belligerent or do horrible things. And Kohl is going to work on financial unification, because the Soviets are thinking in terms of military unification. You know, where you deploy your troops. That determines things. Wrong instrument of national power, precisely because the Soviets didn&#8217;t understand finance. That&#8217;s why they&#8217;re in such a mess. Whereas the Germans do. What they&#8217;re going to do is get East Germany on the West German Deutschmark, and at that point they will control all the money and they will control decisions. But the Russians aren&#8217;t going to see that coming.</p><p>Meanwhile, Bush is supposed to work the alliances, particularly with Britain and France in the West. There are all sorts of meetings that are coming up. 
Bush&#8217;s job is to delay those meetings for as long as possible so German unification can proceed as far as possible. The two of them are doing a tag-team diplomacy with Gorbachev that he just can&#8217;t keep up with, given that his own home economy has got these double-digit shrinkage rates.</p><p>Here&#8217;s how they go. As the trades get bigger, the amount of money you pay Gorbachev gets bigger. First of all, it&#8217;s just to get a unified Germany. Then it&#8217;s to get a unified Germany with West Germany still in NATO. Then it&#8217;s to get a unified Germany with all of Germany in NATO. So here&#8217;s how the money goes. <a href="https://en.wikipedia.org/wiki/Treaty_on_the_Final_Settlement_with_Respect_to_Germany">Gorbachev agrees to German unification</a>. We are no longer paying hundreds of millions of Deutschmarks. We&#8217;re paying billions of Deutschmarks, five billion Deutschmarks for that one. Then <a href="https://nsarchive.gwu.edu/briefing-book/russia-programs/2017-12-12/nato-expansion-what-gorbachev-heard-western-leaders-early">Gorbachev agrees that states can choose their own alliances</a>, i.e. whether or not to join NATO. The US offers nine assurances, but it&#8217;s also a trade agreement that Gorbachev really wants. Then the economic union goes into effect.</p><p>So we&#8217;ve now done the financial reunification of Germany. This is when there&#8217;s a <a href="https://www.nato.int/en/about-us/official-texts-and-resources/official-texts/1990/07/05/declaration-on-a-transformed-north-atlantic-alliance">London Declaration</a> that&#8217;s inviting Eastern European countries to coordinate more closely with NATO. In return, Gorbachev gets a promise of a <a href="https://en.wikipedia.org/wiki/G7">G7</a> summit meeting that&#8217;s going to fast-track aid to him, which it will do. And then Gorbachev agrees to German NATO membership.</p><p>At this point, even bigger things are happening. 
Germany&#8217;s going to agree to its border with Poland. I&#8217;ll get there and explain. Germany provides 15 billion Deutschmarks, including building all kinds of new apartment buildings for repatriated Soviet soldiers who are going home. Why are you doing that? Because you want those soldiers focused on buying furniture, not running a military coup. That&#8217;s what they&#8217;re doing.</p><p>So the <a href="https://en.wikipedia.org/wiki/German_reunification">unification happens in mid-September 1990</a>. Here are the Polish borders. At the end of World War II, <a href="https://en.wikipedia.org/wiki/Territorial_changes_of_Poland_immediately_after_World_War_II">Stalin moved Poland 200 kilometers to the west</a>, and Poland winds up taking a third of German territory by the time that&#8217;s all over. So the Germans don&#8217;t really want to sign all that away. In addition, as part of that, there were 12 million German refugees who were thrown out of wherever they were living and sent back to Germany, of whom 2 million died. So this is a big deal and it&#8217;s in living memory. Germany agrees to this, that the borders are done. 
German-Polish borders are set.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!dYAF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae91822a-8557-4c77-966f-37e7c5ab24ec_1071x803.png" width="1071" height="803" alt="" loading="lazy"></figure></div><h3>00:48:31 &#8211; The Gulf War and the Cold War endgame</h3><p>Complicating factor: a month and a half before this unification treaty is signed, <a href="https://en.wikipedia.org/wiki/Saddam_Hussein">Saddam Hussein</a> decides he&#8217;s going to <a href="https://en.wikipedia.org/wiki/Iraqi_invasion_of_Kuwait">invade Kuwait</a> because he&#8217;s broke. He&#8217;s had a <a href="https://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War">long war with Iran</a> and huge debts, many owed to Kuwait, which he doesn&#8217;t want to pay back. So if you invade them, that solves that problem. Also, he would take over Kuwait&#8217;s very rich oil fields, and together that would make Iraq probably the swing producer of oil. So he thinks that&#8217;s a great idea.</p><p>Except the Cold War&#8217;s over, actually. The Russians are more than willing to cooperate with the United States. Gorbachev really needs more money, and he is willing to go along with getting Iraq out of Kuwait, but not with regime change in Iraq. 
Because think about it, Iraq is a very important debtor state to the Soviet Union. It owed them between $10 and $13 billion. That&#8217;s a lot of money for a broke creditor.</p><p>But Gorbachev is being extraordinarily cooperative with Bush Sr. He sends <a href="https://en.wikipedia.org/wiki/Yevgeny_Primakov">Yevgeny Primakov</a> on multiple missions to Baghdad. On the first one, Primakov gets all Russian hostages out of Iraq. Then on the second trip, he gets all Westerners out, Americans included. Third trip, not so lucky. He&#8217;s there for the coalition force bombing. I don&#8217;t think he liked that very much. But imagine that bombing going on if there were Western human shields going down with every target. Russia took that card right off the table.</p><p>Here&#8217;s some of the reasoning. <a href="https://www.c-span.org/person/sergei-tarasenko/28815/">Sergei Tarasenko</a> was an aide to <a href="https://en.wikipedia.org/wiki/Eduard_Shevardnadze">Foreign Minister Shevardnadze</a>, and they understood that the United States was going to do something about this invasion of Kuwait. So the Russians thought, &#8220;It&#8217;ll be better if we force all of this to go through the UN, where Russia has veto power.&#8221; He said, &#8220;Look, there was a division of roles.&#8221; That division extended to China, too, in the help that Russia provided. &#8220;When the Americans asked us to work with the Chinese, we told the Chinese, &#8216;Think about it. You&#8217;re one of the big five with veto power. Doesn&#8217;t it suit your interest to funnel everything through the UN where you can put your foot down?&#8217; And the Chinese came around to that idea.&#8221;</p><p>However, the Russians had red lines. Here&#8217;s Anatoly Kovalev again, the deputy foreign minister. The red line is, American troops stay out of Iraq. No regime change in Iraq. You do that and you will tank the termination of the Cold War. And that termination was the goal. 
Here&#8217;s Kovalev saying, &#8220;I advanced the basic principle that we must support the territorial integrity of Iraq. This was our sacred position. We must not permit a division of Iraq.&#8221;</p><p>So if you wonder why the ground war ended after 100 hours, this is it. The big thing out there is the termination of the Cold War. That&#8217;s the big thing. Saddam Hussein is a minor event over there. Sorry, but he was. If it had tanked Cold War termination or upset the reunification of Germany, France and Britain might have been very happy, because <a href="https://en.wikipedia.org/wiki/Fran%C3%A7ois_Mitterrand">Fran&#231;ois Mitterrand</a>, who is the president of France, and <a href="https://en.wikipedia.org/wiki/Margaret_Thatcher">Margaret Thatcher</a>, prime minister of Britain, were against German unification. They knew it would marginalize their own countries. Germany&#8217;s going to be a bigger economy, which it is.</p><p>Fran&#231;ois Mitterrand eventually found solace in expanding the European Community into the European Union, incorporating all these Eastern Bloc countries into it. He plays a really important role in concluding the <a href="https://en.wikipedia.org/wiki/Maastricht_Treaty">Maastricht Treaty</a> that forms the European Union. But Margaret Thatcher just plain lost. She was just upset about the whole thing. She said, &#8220;Germany will be the Japan of Europe and worse than Japan.&#8221; I guess she hadn&#8217;t been to Japan lately. She said, &#8220;The Germans will get in peace what Hitler couldn&#8217;t get in war.&#8221; She wanted to leave Red Army troops in Germany for the duration. Imagine if that had been the case and now dealing with Putin&#8230; If he had troops in Germany, we would be in trouble.</p><p>But Bush and Kohl worked around all of them. 
Bush said to Kohl at the end of it, &#8220;Look, I&#8217;m not going to beat my chest and dance on the Berlin Wall.&#8221; Both of them were very careful never to humiliate Gorbachev about the Soviet loss of the Cold War. Why? Because they knew that if they did that, he might fall from power sooner rather than later. Also, they were afraid that if they did that, the hardliners would come to power much more rapidly than they actually did. It was 20 years before Putin started consolidating his power.</p><p>The newly independent countries of Eastern Europe needed those 20 years to integrate militarily, politically, and economically with the West so that the cement could set before you got the Russians trying to destabilize them. So they bought them 20 years to do this. But there&#8217;s a cost to all this. Bush never got credit for his essential role in ending the Cold War on Western terms. So he was not reelected to a second term.</p><p>Anyway, when it came time for Nobel Prizes and the question of why the Cold War ended, <a href="https://en.wikipedia.org/wiki/Anatoly_Adamishin">Anatoly Adamishin</a>, this Soviet Foreign Service officer, said, &#8220;Look, it&#8217;s difficult to deny the Soviet Union was the one that ended the Cold War.&#8221; And <a href="https://en.wikipedia.org/wiki/Edwin_Meese">Edwin Meese</a>, who was a counselor to Reagan and also his attorney general, said, &#8220;Look, the Cold War began because of Soviet policies and it ended in a sense because of Soviet policies.&#8221; The Nobel Prize Committee agreed. They awarded the prize to Gorbachev, not to Bush, for his role in liberating Eastern Europe.</p><p>So when you&#8217;re thinking about this question of why Russia lost the Cold War, I hope you will come up with a more complicated answer than, &#8220;Well, Ronnie did it.&#8221; There are probably other causes at work as well. Anyway, thank you for your attention. 
That&#8217;s what I have for you this evening.</p><h3>00:56:10 &#8211; How central planning survived so long</h3><p><strong>Dwarkesh Patel</strong></p><p>Sarah, thank you so much for doing these.</p><p><strong>Sarah Paine</strong></p><p>Thank you for having me. That would be the more important thing.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s an interesting question of why the Soviet Union collapsed when it did. I think the even more interesting question is how a system that was so centrally planned, monstrously inefficient, brutal, a colonial land empire, could survive for so long into the 20th century. I feel like that&#8217;s the thing that actually needs explanation. How did this regime last for 74 years?</p><p><strong>Sarah Paine</strong></p><p>There are loads of dysfunctional places all over the planet that have been dysfunctional forever. You look at them and ask, well, why are they dysfunctional? To me, the answer to that one is in a way the example of North Korea. Of all the countries that should fall, here is a place that has ongoing famines in the 21st century, and it used to be the richest part of the Korean Peninsula.</p><p>These authoritarian regimes are really good at maintaining coercive powers. Think about it. In order to educate someone, it takes years as a parent to bring up a little person and then you get them educated and maybe they&#8217;re an A-list politician. It takes seconds to assassinate them. It&#8217;s the asymmetry between construction and destruction. Destruction is so easy. Dictatorships are all over the world. It&#8217;s a sad part of the human condition. They clearly know what they&#8217;re up to.</p><p>In the case of the Soviet Union, there were multiple intelligence organizations. That&#8217;s what Stalin was using to keep track of everyone. So you want to monopolize information so that you know more information than other people. 
And then they have a whole bunch of people who are the winners of the <em><a href="https://en.wikipedia.org/wiki/Nomenklatura">nomenklatura</a></em>, the elites there. You make sure you pay all of them off. I mean, think about it. Human societies, slaves, serfs&#8230; We humans have been doing these things to each other for a long time.</p><p><strong>Dwarkesh Patel</strong></p><p>So dictatorships can certainly sustain themselves for a long time. But the Soviet Union was special in that by the 60s and 70s, they had a GNP that&#8217;s 60% of America&#8217;s, this incredibly dynamic economy. In the 40s and 50s, they had much higher growth rates, so much so that prominent economists like <a href="https://en.wikipedia.org/wiki/Paul_Samuelson">Paul Samuelson</a> are saying that by the 90s, based on what they&#8217;re seeing at the time, the Soviet Union will have a bigger economy than America.</p><p>This is just quite surprising that they would have such high growth rates. If you just think about how <a href="https://en.wikipedia.org/wiki/Planned_economy">central planning</a> works, people are going to tell you how much steel you can make and which company gets to use the cotton fabric and cement, etc. You have hundreds of millions of people living under this system. It&#8217;s actually quite shocking that they had notable growth rates after World War II for decades on end.</p><p><strong>Sarah Paine</strong></p><p>Well, first of all, it&#8217;s a war economy, essentially. You&#8217;re putting all your money into having a big military. Russians define greatness&#8212;this is part of it&#8212;as being a big power, and it&#8217;s military power with territory. Most countries in wartime mobilize for the military. This country did it in World War II. All kinds of rationing; you&#8217;re not using market prices. You&#8217;re setting different prices, giving people ration cards and things. The thing about the Soviets is they kept it forever. They never got rid of it. 
So that&#8217;s one piece.</p><p>Another problem with the Soviet Union is all of the data. So I don&#8217;t know what data you&#8217;ve seen, and I know the data I&#8217;ve seen. It&#8217;s hard to know because the ruble is a non-convertible currency, and a lot of things they measured in weight. Like they&#8217;re the greatest TV producer in the world, they said. Why? Because they made the heaviest TVs in the world. I&#8217;m serious, when I was there this was it. They would spontaneously combust, which is not the normal thing a TV should do for you, and burn down the apartment building.</p><p>So they&#8217;re going to measure their heavy TVs as a positive, and the ruble is non-convertible. So there was a guy named <a href="https://en.wikipedia.org/wiki/Murray_Feshbach">Murray Feshbach</a>, and I can&#8217;t remember which part of the US government he was in, but he was really good at looking at their statistics and then adjusting them. But people didn&#8217;t know. I gave you the CIA ones. The CIA, they&#8217;re not stupid people. They&#8217;ve got the best data they could find, and they&#8217;re coming up with an estimate that 20% of the Soviet budget is devoted to the military. After the Cold War is over, they&#8217;re going, &#8220;Whoops, we missed.&#8221; It&#8217;s at least double that and maybe triple. So it&#8217;s really hard to know even with the statistics you&#8217;re getting. Certainly what Paul Samuelson had wouldn&#8217;t be accurate. It&#8217;s just a guess.</p><p><strong>Dwarkesh Patel</strong></p><p>My favorite example of this is that there were top-down commands that you had to produce a certain amount of steel. A steel factory would then be incentivized to make thicker bars of steel rather than thinner bars because that would count as greater production, except a lot of downstream products actually require the thinner sheets. So then the other factories have to thin down the steel, but that also counts towards GDP. 
So producing the inefficiently thick steel and then cutting it down to size both count towards GDP, so the same steel gets double-counted.</p><p><strong>Sarah Paine</strong></p><p>Oh, and just the whole waste of it. Like the heavy TVs: they probably use four times the inputs they need to make them, inputs that would be good for other things. It&#8217;s this notion that you can actually plan an economy. Prices are a miracle. Good old <a href="https://en.wikipedia.org/wiki/Adam_Smith">Adam Smith</a>, the <a href="https://en.wikipedia.org/wiki/Invisible_hand">invisible hand</a>. Prices and markets are the way to go; they&#8217;re more efficient.</p><p><strong>Dwarkesh Patel</strong></p><p>I wonder if one thing that&#8217;s going on is that in the early and mid-20th century, you have economies which are much simpler, at least compared to today. So even then, obviously, command and control is less workable than capitalism. But if you just have heavy industry, you need a certain amount of cement, steel, concrete, fabrics, coal. That&#8217;s much more workable than, &#8220;We&#8217;ve got to centrally command what SaaS tools your enterprise is allowed to use.&#8221;</p><p><strong>Sarah Paine</strong></p><p>Oh, yeah. It&#8217;s interesting on the development thing. The communists insisted on heavy industry. That&#8217;s the thing that they want. Forget about the consumer goods. If you look at the countries that really have made it, like Japan and the <a href="https://en.wikipedia.org/wiki/Meiji_Restoration">Meiji Restoration</a>, they&#8217;re doing a lot of light industry and consumer goods. Then they move into heavy industry, but they&#8217;ve already got people on bicycles and they&#8217;ve got textiles and other things up and running.</p><p>That would also apply to Taiwan and Korea. They do, by all means, get heavy industry. But that&#8217;s not the starter program. The starter program is basic standard of living. 
Again I&#8217;m not an economist, but it turns out if you just look at who&#8217;s rich and who&#8217;s not, that seems to me a more workable thing.</p><p><strong>Dwarkesh Patel</strong></p><p>There&#8217;s also the fact that the centralized regime is building things according to the 30s plan. And even after post-war reconstruction, they&#8217;re still calling back on these plans from the 30s that call for heavy industry for a bygone era.</p><p>In the 70s, 80s, we had our rust belt collapse of manufacturing. People complain about this as, &#8220;Look, the US has this hollowed-out manufacturing base.&#8221; But it&#8217;s much better to have industries which are left behind so that the whole economy as a whole can be more dynamic and move on than the Soviet Union where the entire thing became a rust belt because they couldn&#8217;t move on.</p><p><strong>Sarah Paine</strong></p><p>It&#8217;s more exciting than that. Again, I&#8217;m not an economist, but apparently they missed the plastics revolution. I mean think about our own lives. Now we&#8217;re finding we have too many plastics, but plastics are an incredible material and they&#8217;re just missing that. I remember in Russia trying to figure out where to get sour cream and was being laughed at by Russians because I was so stupid in the store that I couldn&#8217;t find it. Well, we have little plastic tubs with the sour cream. Back in the late 80s, when I was there, you had to bring your glass jar with you so you could hand it over the counter so someone could take a filthy ladle and fill up your jar. I mean, this is part of not having plastics.</p><p>And then they totally missed the computer revolution. This plays into Ronald Reagan winning the military race. We&#8217;re putting these chips and things into our ballistic missiles and they can&#8217;t do that. 
And that&#8217;s a problem.</p><p><strong>Dwarkesh Patel</strong></p><p>Speaking of plastics, I didn&#8217;t realize before preparing for this lecture the overwhelming role that oil played in first explaining why the Soviet Union was able to sustain itself for so long and then why it collapsed. By the late 50s, Soviet growth rates were already starting to go down, especially compared to the postwar boom that America is experiencing. <a href="https://en.wikipedia.org/wiki/West_Siberian_petroleum_basin">In &#8216;59, they discovered these massive oil fields in Siberia</a>.</p><p>And then from 1973 to 1985, I think, 80% of the Soviet Union&#8217;s hard currency earnings were just from oil. They use this because central planning can&#8217;t produce even grain, let alone advanced technology. They use this to import a bunch of stuff to sustain the Red Army, to sustain the population, to subsidize Eastern Europe. And then of course, <a href="https://en.wikipedia.org/wiki/1980s_oil_glut">prices collapsed in 1985</a>. Do you think that if the Siberian reserves weren&#8217;t found in the late 50s, that it&#8217;s possible that the Soviet Union would have collapsed 30 years prior?</p><p><strong>Sarah Paine</strong></p><p>I don&#8217;t know, but they wouldn&#8217;t have been able to do all the <a href="https://www.cia.gov/readingroom/docs/CIA-RDP09T00367R000400330001-8.pdf">Africa program</a> and things. It just would be too expensive. So certainly it would have been a reduced thing. It&#8217;s also the gas reserves they got up in like the north central Soviet Union. I can&#8217;t remember the places, but this is the gas that gets pumped to Europe because that&#8217;s the better place. They make those big investments and it takes a while for them to pay off. That was a big deal because they needed help from Western oil companies or whoever does the gas pipelines, compressors, whatever it is you need. 
There was a big to-do about that, about whether we should sell the stuff or whether we shouldn&#8217;t sell the stuff. The Europeans wanted to sell. We were trying not to. This was going on under Reagan as well.</p><p>But anyway, they had built a lot of it and it was essential to their pocket change. But then when they got all the pocket change, they never saved. Whatever the oil wealth was, they spent up to the max. Doesn&#8217;t it sound familiar? Governments, you have money, you spend it. Forget about rainy days.</p><p><strong>Dwarkesh Patel</strong></p><p>So after the Soviet Union collapses, there was a period when Putin was still winning somewhat free elections. So if you look at why Russia&#8217;s economy recovers and why Putin was so popular in the 2000s, from 2000 to 2008, oil goes from $10 a barrel to $140 a barrel. This goes to your point about how we give credit or blame to political leaders for often what are just long-run macro trends.</p><p><strong>Sarah Paine</strong></p><p>Well, what I didn&#8217;t cover is that when the Soviet Union collapses, Soviet living standards, Russian living standards, they implode and it&#8217;s a mess for 20 years. It is just unbelievably difficult.</p><p>Oh, and another piece of the brilliant Soviet management: in order to maintain control over the empire, instead of building things all in one place, you build some plane parts here, some plane parts there, some plane parts all over the empire. So when the empire goes, great, I&#8217;ve got a quarter of a plane, and then where do I get the other parts? So all of that fell apart.</p><p>When Putin suddenly has a lot of money he starts spending it on people, because initially there&#8217;s plenty of money. Russian standards of living do go up. So of course they like him, and they give him credit for all of that. But then that runs its course, right? And then it&#8217;s less good and then he&#8217;s more excited about&#8230; Well, it&#8217;s his mindset anyway. 
When you get more money, you want to get the empire back. And then Russians also like that, right?</p><p><strong>Dwarkesh Patel</strong></p><p>Speaking of the empire, Russia&#8217;s economy just had this terrible period after the collapse of the Soviet Union. A lot of the Eastern European satellites seem to recover in this gangbusters way. Obviously, East Germany. But even Poland today is such a big success story. What&#8217;s going wrong with the mainland itself that these other countries are able to recover from communism much better?</p><p><strong>Sarah Paine</strong></p><p>Well, they had always been much more connected to Western Europe. Czechoslovakia before the war was a full-up highly developed country absolutely tied to the West. Poland, I believe, <a href="https://en.wikipedia.org/wiki/Nicolaus_Copernicus">Copernicus</a> is from a place like Poland, right? It&#8217;s a center of the <a href="https://en.wikipedia.org/wiki/Age_of_Enlightenment">Enlightenment</a>.</p><p>But when I was using the George Bush Sr. archives, it&#8217;s fascinating. So it&#8217;s &#8216;88, &#8216;89 when the Soviet Union&#8217;s imploding. There&#8217;s a lot of correspondence between Eastern European, particularly Polish, leaders coming to the Bush administration saying, &#8220;Hey, our banking system, we know it&#8217;s a mess. Our financial system&#8217;s a mess. We know we need expertise to help us figure out what our legal system is going to look like.&#8221; And Bush is all over that. I&#8217;m sure he farmed them out to the private sector who would also be all over that, like giving them free consulting. So as a result, you do have them really taking advantage of this 20 years.</p><p>At the same time when Bush would have loved to have given some of the same advice&#8230; There were people like <a href="https://en.wikipedia.org/wiki/Jeffrey_Sachs">Jeffrey Sachs</a> and others who went to the Soviet Union, but it was not remotely the same thing. 
This is people throughout Polish society requesting this advice, not like one guy with an office in Moscow. Basically, the Russians thought they knew it all and they thought they understood. This is all the unknown unknowns, the things you don&#8217;t understand, your blind spots. Truly, economics is a blind spot for the Soviets.</p><p>Because think about it, when the tsars ran the show, it&#8217;s like a riff off the Mongol Empire. You take cuts from people&#8217;s businesses, from trade that comes through. Then it&#8217;s also about selling basic commodities. You&#8217;re not thinking, under the tsars, of Russia doing high-end manufacturing. I mean, I guess <a href="https://en.wikipedia.org/wiki/House_of_Faberg%C3%A9">Faberg&#233;</a> and some jewelry if you want to do that. But really that&#8217;s not it. It doesn&#8217;t have this commercial tradition, being tied into this commercial tradition of Western Europe and all the sea routes for trade.</p><p>Then when you get the communists, they aren&#8217;t about that at all. So there&#8217;s really a dearth of knowledge. Think about this country with all the little kids selling lemonade, right? You see them on the streets. They&#8217;re already learning. The kids who are doing newspaper routes, they&#8217;re already learning about buying things, selling things at a very young age. We just take this knowledge for granted. It was just absent in the Soviet Union and not as much absent in Eastern Europe that had been more connected in.</p><p><strong>Dwarkesh Patel</strong></p><p>Before we get to the period of Russian collapse, let&#8217;s go back to the end of the Soviet period. Gorbachev starts instituting these economic reforms along with <em>glasnost</em> and <em>perestroika</em>. 
But what I find mysterious is that those economic reforms not only fail to prevent the stagnation that the Soviet Union is experiencing, but they in fact make things worse.</p><p>You would think that reform, even if it&#8217;s handled badly, would have some sort of positive impact. If you do it badly, then it&#8217;ll have a smaller positive impact. But here it just causes this huge hyperinflation, causes all these big problems. So why did reform have this backwards impact?</p><p><strong>Sarah Paine</strong></p><p>There&#8217;s so much that needs reforming there. But part of it, I think, is because he wanted to do political reforms. That&#8217;s what he understands. As a human being, that would be the thing that he&#8217;s very familiar with. Think about it. He&#8217;s an A-list member of the Communist Party, and when they do generational change, he&#8217;s the one. So he&#8217;s obviously very astute at that level, but the problem is economics. He&#8217;s giving away political power before he&#8217;s fixed the economic problems. China&#8217;s conclusion is there is no way you&#8217;re going to touch political power. They&#8217;re going to hang onto that and then deal with as much of the economics as they&#8217;re going to deal with. That&#8217;s part of it.</p><p>But part of it is there&#8217;s no tradition for all of these things. Then you go, &#8220;Well, how did Russia get this way?&#8221; It&#8217;s a very difficult question to address. Prior to the <a href="https://en.wikipedia.org/wiki/Industrial_Revolution">Industrial Revolution</a>, it&#8217;s flat, neighbors all invade, and so you needed a big army in order to defeat them. A big army is going to want a war economy. Historically, you&#8217;re going to want to support a big land force. I mean this is my take. Others who are actually experts on these various periods of Russian history can come up with something else.
But I think you&#8217;re funneling, you&#8217;re channeling your economics into that.</p><p>Whereas you&#8217;re looking at Europe, particularly Britain, and it&#8217;s merchants. They have a big aristocracy who are not going to dirty themselves with buying and selling stuff, but there are a tremendous number of very rich merchants in Britain that are going to influence government laws and things, which is not going to take place in Russia. Then what&#8217;s nice about the Navy for Britain is you send them away. They&#8217;re not going to run a coup in the capital because they&#8217;re off on the ship somewhere. And there aren&#8217;t that many of them compared to a standing army. So I suspect, I can&#8217;t prove this, that this leads to different outcomes or contributes to them.</p><p><strong>Dwarkesh Patel</strong></p><p>One theory I heard that is complementary to your theory is that Gorbachev is instituting reforms because he thinks there should be decentralization and democratization, but he doesn&#8217;t fundamentally believe in the market system. So he&#8217;s delegating power to these quasi-firms. At the same time, he thinks the price system is immoral, private property is immoral. So they can&#8217;t intermediate between themselves using real prices.</p><p>So then how do these firms intermediate? Well, there&#8217;s corruption. If you can&#8217;t use actual prices and property to figure out who gets what allocation of scarce resources, you just backroom deal, which makes the problem worse.</p><p><strong>Sarah Paine</strong></p><p>Well there&#8217;s no legal system and you need a legal system. Legal systems take a long time to develop. 
So you&#8217;re telling the Soviet Union, &#8220;Okay, communism is down and now chop chop, we need a new legal system.&#8221; It&#8217;s not going to happen.</p><h3>01:14:46 &#8211; Sarah&#8217;s life in the USSR in 1988</h3><p><strong>Dwarkesh Patel</strong></p><p>You were mentioning the problem that Eastern European countries especially had, which is that they&#8217;re going more and more into debt because they&#8217;re not able to produce globally competitive exports. They have this last-ditch effort that &#8220;We&#8217;re going to solve our problems with some technological miracle. We need to get even more over-leveraged. We&#8217;ll get some Western machinery or technology, and then we&#8217;ll be able to finally produce something that the world wants.&#8221; I&#8217;m curious up to what point this was a plausible hope. Through the 80s and even till the end of the 80s, they still believed that Czechoslovakia or East Germany or something could catch up with West Europe?</p><p><strong>Sarah Paine</strong></p><p>They&#8217;re desperate. Think about it. If you&#8217;re a communist leader, how many other cards are there to play? You&#8217;re looking, &#8220;Okay, this is the only card I got.&#8221; And they&#8217;re doing other things because of the social unrest. They want to import food and consumer products because they&#8217;ve been so neglected.</p><p>Then there&#8217;s another piece, which is <a href="https://en.wikipedia.org/wiki/Videocassette_recorder">VCRs</a>, the videos. All of a sudden, those things came around. I remember being in the Soviet Union, the academic year of 1988-89. One of my classmates had been an English language tutor of this person in Moscow and set me up because that was the only way to get a good meal once a week. For a meal, I would talk English for an hour.</p><p>What that family wanted more than anything else was a VCR player. You could have hard currency and buy it at the diplomatic store. 
So I basically got them a VCR by going to the diplomatic thing with my very limited foreign currency. I bought an overpriced VCR for them and got all kinds of meals for the rest of the year. But it meant that they could all of a sudden get Western movies.</p><p>There are things in movies where there&#8217;ll be a picture of a fugitive running by the fruit section of the Berkeley Bowl. The Russians would gasp like, &#8220;Oh.&#8221; It&#8217;s unbelievable. I think that <a href="https://en.wikipedia.org/wiki/Raisa_Gorbacheva">Raisa Gorbachev</a>, Gorbachev&#8217;s wife, when she came and visited, she must have realized that a welfare mother on food stamps had better buying power than she did by just being able to have access to Walmart.</p><p>I think the elites, as they&#8217;re traveling&#8230; I have no statistical data on this, but as you travel, it&#8217;s like I&#8217;m comparing me getting sour cream in a jar. That was the other thing, counting up all the things in a Soviet supermarket. The total was something like 77 items total in this supermarket. I don&#8217;t think that compares favorably to a candy rack as you leave a 7-Eleven. And when you went by the meat section, the smell just about knocked you out, rotten meat. It was really disgusting.</p><p>I got really good at making borscht. Go to the peasant market, pay hard currency for bones, because I couldn&#8217;t afford any meat, but I could afford the bones. Then I would buy&#8230; The Russians produce really good sugar beets so I got beets. Then you&#8217;re starting to get rotten apples over the winter, but they at least come from Hungary. Russians didn&#8217;t even produce apples in those days, but Hungarians did. The Romanians provided the canned tomatoes, and I could do a credible borscht.</p><p>But you&#8217;re talking about Moscow, the center of everything. I remember buying potatoes at the market and the rotten spots felt gelatinous. So you&#8217;d have to cut those out. 
And then you&#8217;re wondering how many nutrients are in the rest of that potato. It was a really gross year. I remember going to the candy store and I would buy caramel from Poland or somewhere. It was like a food item because it was actually edible.</p><p><strong>Dwarkesh Patel</strong></p><p>At this point, I bet you were wondering why you didn&#8217;t write a biography of Napoleon so you could just visit Paris instead.</p><p><strong>Sarah Paine</strong></p><p>My brother&#8217;s comment is, &#8220;You&#8217;re studying Russia and China, two countries in the breakdown lane.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>By the way, the point about the grocery stores having 77 items is interesting in two ways. One, central control works much better if you have a much smaller number of items to optimize over. So if things are standardized, it can work much better. And second, to your point about GDP being hard to compare between the Soviets and the United States, how do you compare a rotten tomato or a rotten potato to the Idaho ones that you can get?</p><p><strong>Sarah Paine</strong></p><p>They would have compared it by pound.</p><p><strong>Dwarkesh Patel</strong></p><p>Exactly.</p><p><strong>Sarah Paine</strong></p><p>Yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>You said you were there in &#8216;88 and &#8216;89. So this is before the Berlin Wall has fallen.</p><p><strong>Sarah Paine</strong></p><p>I was watching the Tiananmen demonstrations on Soviet TV. The only reason you got that TV coverage is because Gorbachev was in Beijing. So all the press was there. That&#8217;s why you have the coverage. And they stayed on because the students were demonstrating and the Chinese closed society wasn&#8217;t aware of the power of television.
Guys, they&#8217;re going to film you doing all of this stuff and they will get the film out.</p><p><strong>Dwarkesh Patel</strong></p><p>In &#8216;88, was the mood&#8230; Obviously things are going terribly, but did people realize that they&#8217;re only two years away, or three years away from the complete dissolution of the Soviet Union?</p><p><strong>Sarah Paine</strong></p><p>No. Maybe the end of the Soviet Union, but there was such optimism of thinking we&#8217;re finally going to be a full-up democratic country. It&#8217;s going to be wonderful, with no sense of the work schedules that go into a capitalist economy. To create the wealth in this country, a lot of people are working far more than 40 hours a week, particularly as they&#8217;re getting started, working enormous hours.</p><p>That was not something that was in most people&#8217;s minds. Sure, the kids who became the ballerinas in the <a href="https://en.wikipedia.org/wiki/Bolshoi_Ballet">Bolshoi</a> are working long hours to do that. But as an economy as a whole, they didn&#8217;t understand the source of wealth and had no inkling of all the things that are missing, not least of which is that no one&#8217;s got the right education. Great, you got <a href="https://en.wikipedia.org/wiki/Karl_Marx">Marx</a> memorized. That does you zero good.</p><p><strong>Dwarkesh Patel</strong></p><p>So around this time is when people are finally learning about what actually happened during the Stalinist period.</p><p><strong>Sarah Paine</strong></p><p>Oh, yeah.</p><p><strong>Dwarkesh Patel</strong></p><p>So people are optimistic that we can have a changing of the guard and maybe things will improve. But at the same time, they&#8217;re learning about how terrible their history actually was. Between these two things&#8230; Also at some point they must realize through the 90s that things actually aren&#8217;t improving. In fact, they&#8217;re getting worse. 
So what is the inflection point at which the mood is just...</p><p><strong>Sarah Paine</strong></p><p>I don&#8217;t know, because I wasn&#8217;t living there. I was thinking that there would be impending problems as a Chicken Little American. The sky is falling, the sky is falling. Americans always think disaster is coming. I sort of fit in that crowd. But I think there was a lot of optimism and exuberance thinking, &#8220;We have the freedom to really understand our history and what&#8217;s happening.&#8221;</p><p>This is for educated people, people with college degrees in Moscow and St. Petersburg. Now what&#8217;s going on in the rest of the country is undoubtedly a different story because as bad as living in Moscow was, living in the countryside was going back in time far further. So those people weren&#8217;t living well at all. And it&#8217;s going to get really bad for them.</p><p><strong>Dwarkesh Patel</strong></p><p>Okay, so people are learning about these things for the first time. Is the sense that they kind of suspected? I mean, people have family. They must have known, &#8220;My uncle was off in this little mining town that he was forced to go to for a decade right after World War II.&#8221; Were they totally shocked or was there some sense that things were pretty bad and now we&#8217;re just learning the extent of it?</p><p><strong>Sarah Paine</strong></p><p>I think there was an understanding it was terrible, but I think there&#8217;s this exuberance of thinking it&#8217;s going to get much better. Then the disappointment is equally extreme. And then there was this feeling that the West owes us because you&#8217;re all really rich and you now owe us to fix everything. The counterargument to that is, &#8220;No, you are an enormous migraine. You set back all of these countries across the globe in time with this nonfunctioning communist model that you peddled around there.
And now you want extra aid.&#8221;</p><p>The problem was that we wanted to do some of the aid, but they&#8217;re not going to be receptive to it. That was another conclusion with the Bush administration, that if we dumped a lot of money in it, it would just go straight into corruption. You need a legal structure in order to place money, and they just plain didn&#8217;t have it. That was another thing that was worrying the Bush administration. There&#8217;s nowhere to put the money.</p><p><strong>Dwarkesh Patel</strong></p><p>Speaking of these different countries that the Soviets and the United States were competing for during the Cold War, you had this presentation where you say Reagan alone didn&#8217;t do this. But I wonder if the broader lesson is that nothing any US president did in terms of foreign policy&#8230; That was all a sideshow, this t&#234;te-&#224;-t&#234;te competition for different Third World countries: &#8220;We&#8217;re going to get Brazil, we&#8217;re going to get Vietnam, we&#8217;re going to get Algeria.&#8221; That just seems much less significant than the fact that liberal capitalism was more appealing and out-produced communism. So even if some country, even if Brazil goes communist, this is not going to change the fundamental playing board here.</p><p><strong>Sarah Paine</strong></p><p>If you do not protect the liberal economies of Europe, you&#8217;re not going to have anywhere to play the liberal economic game, and also Japan. One of the reasons you feel that liberal economies work is you&#8217;ve got economic miracles going on in Japan, Korea, Taiwan, Singapore, and Hong Kong back in the day. So if you abandon those places...</p><p>Also in the Cold War, there was a tremendous amount of economic growth across the world, particularly in the Third World. Why? 
Because in the past, if there&#8217;s a civil war, whoever&#8217;s losing either comes to us or comes to the Russians and says, &#8220;Help us.&#8221; So whoever it is helps, and then the other side feels obliged to help, and then you&#8217;re just destroying wealth ever more rapidly. The Cold War was anything but cold in the Third World. Tens of millions of people died in these conflicts. So when you end that, all of a sudden they can start compounding growth.</p><p>So there is a problem with not countering someone who&#8217;s going to impose communist systems all over the place. Communist systems are really good at putting dictators into power in a civil war situation. It&#8217;s very effective. That&#8217;s how Mao gets into power. The problem is, then they win the civil war, they&#8217;re in power, they annihilate the opposition, but then it produces compounding poverty thereafter.</p><p><strong>Dwarkesh Patel</strong></p><p>So there is this conundrum, and I genuinely don&#8217;t know the answer to this. In order to beat off these communist factions and guerrillas, we often through the Cold War had to support other dictators. Probably in many cases they were better than the communist alternative. It&#8217;s just very hard to beat <a href="https://en.wikipedia.org/wiki/Pol_Pot">Pol Pot</a> and Mao in terms of how terrible you can be. But obviously this was in its own way problematic. Even if we didn&#8217;t have to support dictators, we had to alienate countries.</p><p>You had this <a href="https://www.youtube.com/watch?v=LbkO84MsmyM">previous lecture that you gave</a> on the Indo-Pakistani chapter in history where we had to alienate India in order to fend off against the Soviet Union in this little episode. I don&#8217;t know what the solution to this is. If you think that this theater mattered less, then you could say we should have just kept our hands clean of these different Third World countries. 
But to your point, if you want to be able to show that these countries are going to experience growth under capitalism, then you want them to not be under the subjugation of communists. But then you have to support sometimes objectionable regimes.</p><p><strong>Sarah Paine</strong></p><p>I think you had, ironically, a more optimistic generation. The people who had survived World War II, there was a real generosity. American servicemen and women were welcomed all over Europe and they were adored in Europe. They came back and they were a very generous group of people. Others felt generous to them.</p><p>That&#8217;s when the <a href="https://en.wikipedia.org/wiki/G.I._Bill">GI Bill</a> just passed saying, &#8220;You&#8217;ve saved everyone. Therefore we&#8217;re going to give you college educations, extend home loans to you.&#8221; Not to African Americans, they were excluded from this, which is a problem. But white Americans weren&#8217;t. It led to massive economic growth where people who&#8217;d never had a college education in their family, they did. All of a sudden, instead of having really hard manual labor, there&#8217;s this real optimism. And then it extended to foreign countries. This is when this country was tremendously generous to others, and it worked very well for us.</p><p>Think about the <a href="https://en.wikipedia.org/wiki/Marshall_Plan">Marshall Plan</a>. It looks really generous putting all this money into Europe. We made a fortune off of it, as did Europe. If you&#8217;re smart, you&#8217;re looking for win-wins of things where you both benefit because that&#8217;ll incentivize the other side to join in. This is basic strategy. This is one of the reasons I&#8217;ve got problems with the United States&#8217; turn to zero-sum approaches where &#8220;I&#8217;m going to get everything, you get nothing.
Then I look so smart when we do the clickbait on this moment where I get everything and you get nothing.&#8221; It&#8217;s much smarter.</p><p>The other piece is that a lot of things don&#8217;t pay off immediately. George Bush is not reelected president. He absolutely deserved to be. Because what he did, the payoff was huge, ending the Cold War on Western terms. But it doesn&#8217;t pay off in time for the next election. I think this is where Americans miss it. You&#8217;re looking at what someone does on a given day when the real implications are what&#8217;s going to happen in a decade. Like on tax policy, if we keep racking up our debt, it may get us out of the corner today, but is it going to back us into a corner later on? This is where Americans need to think a little harder about long-term implications of things.</p><p><strong>Dwarkesh Patel</strong></p><p>It struck me when you pointed out that it would cost 60 billion Deutschmarks for West Germany to pay Gorbachev to let East Germany join West Germany. That&#8217;s a lot of money. But if you think about decades and decades of future growth, it&#8217;s a huge bargain. It&#8217;s a mistake to think about how expensive things seem at the moment. It&#8217;s another huge country that you&#8217;ve turned.</p><p><strong>Sarah Paine</strong></p><p>There&#8217;s a statement that politicians think of the next election, statesmen think of the next generation. George Bush and Helmut Kohl are statesmen. They&#8217;re thinking of the next generation. The group that fought World War II, many of the US and allied leaders, were statespeople. They&#8217;re thinking of the next generation. Or if you&#8217;re thinking of where I&#8217;ve got Mitterrand, who&#8217;s negotiating the Maastricht Treaty about the European Union, that is statesperson&#8217;s work of what&#8217;s the next generation. It&#8217;s important.
We need more statesmen, statespeople, political leaders.</p><p><strong>Dwarkesh Patel</strong></p><p>To try out a different thesis on you, through this period the Soviet Union is also trying to buy off other countries, especially when it thinks its economy can grow. Especially when oil, after the <a href="https://en.wikipedia.org/wiki/1973_oil_crisis">1973 oil crisis</a>, oil prices just skyrocket. This is why some Soviet citizens remember the <a href="https://en.wikipedia.org/wiki/Leonid_Brezhnev">Brezhnev</a> era favorably. Oil made it possible for the Soviets to not only import stuff, but through the Brezhnev period there&#8217;s actually a net export of resources to Eastern European satellites rather than the other way around.</p><p><strong>Sarah Paine</strong></p><p>That&#8217;s probably their data. I get it, their oil is really subsidized, but everything in the Soviet Union that was worth having came from somewhere else. The problem is how do you measure it? They&#8217;re just going to measure by weight or something else. It doesn&#8217;t really capture what they&#8217;re getting.</p><p><strong>Dwarkesh Patel</strong></p><p>The larger question being that, it&#8217;s not like the Soviet Union didn&#8217;t think of doing things like the Marshall Plan. Obviously nothing to that extent, but this idea that you can win people&#8217;s favors by providing them military aid, providing them foreign aid. They just didn&#8217;t have the resources to do it to the extent that the US could.</p><p><strong>Sarah Paine</strong></p><p>That&#8217;s true, but there&#8217;s a real coercive piece too. If you mess with them, it&#8217;ll be really ugly.</p><p><strong>Dwarkesh Patel</strong></p><p>Here&#8217;s what I don&#8217;t understand about the arms buildup during the Cold War. The Soviet Union is spending 2% of their GDP just on nuclear weapons alone at its peak. 
Arms control advocates will make this quip, which is that we&#8217;ve already got enough weapons to destroy the world many times over. Why do we need more? But that is sort of an interesting question. What was the point of spending so much of GDP on the marginal nuclear weapon or marginal weapon system?</p><p><strong>Sarah Paine</strong></p><p>I don&#8217;t know the answer, but you read the plans about these things and you wonder what people are thinking. We were trying to develop tactical nukes. There was only a little trick with that. Whoever deployed it would be within the blast range of the tactical nuke. You&#8217;re going, &#8220;Who develops a weapon like that?&#8221; Apparently we did. Luckily we didn&#8217;t deploy it.</p><p>I don&#8217;t know the answer of why we had such massive redundancy in these nuclear weapons, why the arsenals were so massive. I don&#8217;t know the story on how you maintain these things and how long they last. It doesn&#8217;t make much more sense to me than it does to you.</p><p><strong>Dwarkesh Patel</strong></p><p>Another question. <a href="https://en.wikipedia.org/wiki/Sino-Soviet_split">Sino-Soviet split</a>, this huge diplomatic coup. The Soviets had to put a million soldiers on the Siberian front against China. They had to spend 2% of GDP just stationing and garrisoning this area, which is obviously a lot. That&#8217;s often what many countries spend on defense as a whole, let alone just along one front.</p><p>At the same time, 2% GDP, well if they just had one or two more years of extra economic growth or faster growth, that could make up for this huge diplomatic coup. Again this goes back to the point of, if some domestic policy just caused slightly higher economic growth rates, that would make up for the biggest diplomatic coup of the entire Cold War. It goes back to economy first, diplomacy second.</p><p><strong>Sarah Paine</strong></p><p>Firstly, I have real problems with the statistics. 
I got a sample size of one, <em>moi</em>. I remember living in Moscow. It was so backwards. It&#8217;s just breathtakingly backwards in just about every way imaginable. They got a big fancy subway system that looks remarkably retro, and at least it works. But the consumer goods were so awful, the quality was so bad. You look at the buildings themselves.</p><p>I get it, they make nuclear weapons. Do they make anything else? Their cars were a joke, their <a href="https://en.wikipedia.org/wiki/Lada">Ladas</a> or whatever they were. It&#8217;s just thing after thing. So you&#8217;re looking at all their stats because that&#8217;s what they are telling you, that we&#8217;re so great. It really is an <a href="https://en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes">Emperor&#8217;s New Clothes</a> moment where finally the little kid goes, &#8220;Oh, you&#8217;re actually naked.&#8221;</p><p>I can give you an example. These acquaintances in Moscow were talking about hospitals outside of Moscow, some of which didn&#8217;t have running water. How do you have a hospital without running water? I don&#8217;t know how that&#8217;s even conceivable. Or there was the time their kid put her hand through a glass door or something. They wanted to get her stitched up because she&#8217;s bleeding. She&#8217;s not going to die, but she&#8217;s probably bleeding all over the place. They bring her to one place and, oh, they got no thread to do the stitches. So then they have to go to another place. Who runs a country like this?</p><p><strong>Dwarkesh Patel</strong></p><p>Alright, you convinced me. <a href="https://en.wikipedia.org/wiki/Bay_Area_Rapid_Transit">BART</a> is acceptable. I&#8217;ll stick here. Subway&#8217;s not a big deal. I don&#8217;t want to move to Moscow.</p><p>Okay, the Eastern European satellites trying to leave the Soviet Union: this has happened many times through the 20th century. 
<a href="https://en.wikipedia.org/wiki/Hungarian_Revolution_of_1956">Hungary in 1956</a>, <a href="https://en.wikipedia.org/wiki/Warsaw_Pact_invasion_of_Czechoslovakia">Czechoslovakia in &#8216;68</a>, Poland through Solidarity. Every previous time there&#8217;s a many-million-person-strong Red Army stationed in Eastern Europe left over from World War II, which rolls in the tanks and prevents these revolutions from taking place.</p><p>So what happens in the late 80s and early 90s? The Red Army is still there. There&#8217;s still millions of Red Army soldiers. They just don&#8217;t shoot.</p><p><strong>Sarah Paine</strong></p><p>Generational change. The leaders don&#8217;t have the stomach for it anymore. I don&#8217;t know how you&#8217;d feel about sending tanks and going, &#8220;Oh, we&#8217;re going to splatter all these people.&#8221; I think for many Americans, that would not be the choice that they would make. So this ruthless generation is gone.</p><p>Another piece is that Gorbachev had traveled and I think he had some Czech friends. I can&#8217;t remember all of his lists of friends. But they&#8217;d been horrified by Czechoslovakia in 1968 as young people watching, as Russian young people watching it and thinking, &#8220;It&#8217;s just wrong. We shouldn&#8217;t be doing this. If communism is what it should be, this is not what should be happening.&#8221;</p><p>This is of their youth. Gorbachev and his generation. It&#8217;s not just him, he reflects a whole generation of communists. They&#8217;re thinking, &#8220;There&#8217;s got to be another way. This is just not right.&#8221; So he thinks he&#8217;s got his other way. It&#8217;s this exuberance of the reforms and things that are happening in Russia. There&#8217;s a tremendous feeling of energy. He&#8217;s telling the Poles, &#8220;You get at it too. 
We&#8217;re all going to do this thing.&#8221; But it&#8217;s all the expertise and things that he&#8217;s missing, that he&#8217;s unaware that he&#8217;s missing, as are all these other people, because how could they have it? They&#8217;ve been living in a command economy.</p><p><strong>Dwarkesh Patel</strong></p><p>This is what I wanted to ask you about. You had the de Tocqueville quote about how revolutions happen when governments start to institute some kind of reform. Gorbachev is doing <em>perestroika</em>, <em>glasnost</em>. There&#8217;s the conservative reactionary parts of the Communist Party, which by the way is a phrase I wouldn&#8217;t expect to have said. But they&#8217;re trying to resist this. So Gorbachev goes about dismantling the party secretariat and instead devolving power down to the individual republics. We know what happens later. These republics are saying, &#8220;Look, we want our own country now.&#8221;</p><p>But this raises a question. If you do inherit a brutal regime, and now you say, &#8220;I want to do reforms.&#8221; You know this dynamic that de Tocqueville pointed out, which is that as soon as you start reforms, actually what tends to happen is that you lose power, not that people consolidate it under you. What actually should you do? Because you&#8217;re like, &#8220;I want to improve people&#8217;s lives.&#8221; But as soon as you try to do that, the whole thing&#8217;s going to fall apart.</p><p><strong>Sarah Paine</strong></p><p>This is so far above my pay grade. I&#8217;m a professor. I have trouble justifying a B+ on a paper. I&#8217;m a believer in gradual reforms. Do it incrementally. For the Soviet Union, it would be gradual legal reforms, work it through their <a href="https://en.wikipedia.org/wiki/State_Duma">Duma</a> slowly, and do it that way. But seek out help from the European Union that has many, many experts that would be overjoyed if Putin and friends would cease doing their number on Ukraine. 
Now the problem is you&#8217;re going to get into reparations for the horrors they&#8217;ve inflicted. So that ship has sadly sailed for this generation. There&#8217;s no nice ending for Russians. It&#8217;s too late.</p><p>But you can look at Europe itself improving its institutions and Ukraine improving its institutions. If you think about what forces you to change, the existential threat on Ukraine, if they survive all this, this is forcing them really to clean up their institutions. So it&#8217;s happening rapidly there, but we don&#8217;t know the end of that story, how it ends.</p><p><strong>Dwarkesh Patel</strong></p><p>I do think these are interesting lessons here of whenever we look at a country from the outside, we have this thing of, &#8220;Well, just reform everything and just fix your economy.&#8221; Whenever we understand the system better&#8230; For example, in the United States, healthcare is 20% of GDP. This idea that Trump or Obama or Biden, whoever, could just come in and be like, &#8220;Well, I&#8217;ll just fix healthcare.&#8221; We recognize that this is a wildly implausible thing to happen. But then we have this expectation that in Russia, Gorbachev or Yeltsin could have just been like, &#8220;100% of my economy is messed up, and I&#8217;m just going to fix it.&#8221;</p><p><strong>Sarah Paine</strong></p><p>American hubris in action. Think about our country. We have one of the most crazy tax codes on the planet, and neither party can touch it. Because you touch any part of it, someone negotiated that wording exactly. Yet think of how much of our economy is taken up by the overhead of all the tax accountants, all the misdirected cash in order to take advantage of something that&#8217;s simply an invention of the tax system.</p><p>There was years ago when there was talk of doing a flat tax, &#8220;Wouldn&#8217;t that be much more efficient?&#8221; You can imagine what accountants thought about that one. That idea has totally died. 
Talk about inefficiency. Then we realize we have budgetary problems in this country. This would seem to be something that ought to be on people&#8217;s radar, clean up the tax code. But isn&#8217;t it precisely that many people don&#8217;t want the radar on the tax code? That&#8217;s why we&#8217;re wondering who can get in and out of girls&#8217; or boys&#8217; bathrooms, instead of looking at the tax code, which should be the real thing.</p><p><strong>Dwarkesh Patel</strong></p><p>I think there should be big deductions for podcasts. It should count for research and development.</p><p><strong>Sarah Paine</strong></p><p>Well, Dwarkesh, you&#8217;re almost at that stage. You need to add a lobbyist in DC.</p><p><strong>Dwarkesh Patel</strong></p><p>We&#8217;ll work on it.</p><p>There&#8217;s a <a href="https://amzn.to/4pCPVIS">very interesting book about North Korea</a>, I forget the title, where the author is pointing out that North Korea could not even start doing reforms today because as soon as there was some sort of information from the outside world that North Koreans could see&#8212;which would be part of any reform&#8212;they would immediately realize that everything the government has told them is false. South Korea is enormously wealthier and they have this terrible standard of living.</p><p>Obviously, this is the same experience that Eastern Europeans had. Literally in many cases, you had a country that was bisected in half and the other half is living so much richer. 
In those situations, I guess this goes back to the question of, &#8220;Well, today in North Korea, how would it even kick off if <a href="https://en.wikipedia.org/wiki/Kim_Jong_Un">Kim Jong-un</a> just had a change of heart or if somebody else came into power?&#8221; They&#8217;re probably just trapped in this to the extent that they want to keep power.</p><p><strong>Sarah Paine</strong></p><p>Oh, he&#8217;s trapped because he&#8217;s a dead boy if he tries to take a go at retirement. In Asia&#8212;I don&#8217;t know exactly which parts of Asia this applies to, it&#8217;s some parts&#8212;there&#8217;s a thought that things last for three generations and then it&#8217;s over. So he&#8217;s the third generation. Whether this is true or not doesn&#8217;t matter. If you believe it&#8217;s true, it will become a self-fulfilling prophecy. So I&#8217;ll be interested. I probably won&#8217;t live to see it, you in the room will, what happens to the Kim family, whether it makes it to generation four or not. But by their own belief system, in theory, they shouldn&#8217;t. So who knows?</p><p><strong>Dwarkesh Patel</strong></p><p>One more question about oil.</p><p><strong>Sarah Paine</strong></p><p>Based on my big expertise on oil, zero. Okay.</p><p><strong>Dwarkesh Patel</strong></p><p>During this period between &#8216;73 and &#8216;85, when they had these huge oil revenues, presumably there was some amount of exuberance. But did the government recognize that they&#8217;re super fragile to the price of oil and that if it collapses, they need some sort of contingency plan, some rainy day fund? You must notice that, &#8220;Oh, this is half my budget, and all of my foreign currency is coming from oil, and this is a very volatile commodity.&#8221; Nobody noticed that?</p><p><strong>Sarah Paine</strong></p><p>Yeah, well, it&#8217;s interesting. I was reading this long chronology that was put together sort of early in the Putin era. 
So before they really shut down all the information. It was just a chronology of the Cold War, big fat book. Just like someone like me to read a book like that. So I&#8217;m going date after date after date after date. It&#8217;s written by people who are really angry about how the Cold War turned out. One of the takeaways from the compilers of this thing is they kept criticizing. They showed how much for every year Russia was making in oil revenues. It was huge. But in their analysis it was, &#8220;And they saved none of it&#8221;, right? There was no sense of investing in something.</p><p>There&#8217;s something called consumption. There&#8217;s another thing called investment. Going around and buying a bunch of Western grain is consumption. There&#8217;s none of this being put in anything that&#8217;s going to yield anything. So that was a big criticism from the authors of this book. To the question you&#8217;re asking, &#8220;No, they just milked it while they were there.&#8221;</p><p><strong>Dwarkesh Patel</strong></p><p>Final question, this is not so much a question as an observation. I don&#8217;t know if you have a reaction to this. Just look at Russia&#8217;s history through the 20th century: tsarism, communism, collectivization, to more than 10% of your population dying from World War II, then back to Stalin, and then more communism, and then the economy collapses again, and then Putin. Especially if you look at the satellite states, they had all of this happen to them and worse because now they&#8217;re getting invaded.</p><p>Whereas you have other countries. Japan and Germany also had tragic histories, but then they recovered. Maybe it&#8217;s just the tragedy of Russia.</p><p><strong>Sarah Paine</strong></p><p>Yeah, you&#8217;re lucky you&#8217;re not Russian.</p><p><strong>Dwarkesh Patel</strong></p><p>Yeah, exactly.</p><p><strong>Sarah Paine</strong></p><p>No, it is tragic. It is tragic. 
It started out as a difficult address, pre-Industrial Revolution, that required certain things to survive. They were more ruthless than their neighbors. They did survive. I mean, <a href="https://www.dwarkesh.com/p/sarah-paine-russo-chinese">in a previous lecture</a>, I discussed how they wiped out entire princely states and <a href="https://en.wikipedia.org/wiki/Khanate">Khanates</a> and things, they just wiped them out. Then you&#8217;re using their elites because it&#8217;s a rough neighborhood. The problem is if you aren&#8217;t on the winning side, you&#8217;re going to be on the losing side, right?</p><p>But since the Industrial Revolution, where you can do compounded economic growth that comes from commerce and trade and industry and things, that&#8217;s the real way to get powerful because power becomes a function of your wealth. That involves having legal systems, institutions, and stability. Russia has found it very difficult getting with that program. It has to do with, I think, this very difficult historical legacy of who rises to power, and also all the missing things. They didn&#8217;t have the <a href="https://en.wikipedia.org/wiki/Renaissance">Renaissance</a>, they didn&#8217;t have the <a href="https://en.wikipedia.org/wiki/Reformation">Reformation</a>, these fundamental movements that were very influential in the West.</p><p>So there&#8217;s a lot of negative space of things that didn&#8217;t happen. There&#8217;s all the awful stuff that you saw that did happen, but then they&#8217;re missing things. So it&#8217;s very difficult. Then people like Putin can set the clock way back because he&#8217;s killed so many Ukrainians. 
What he&#8217;s done will take a generation at minimum to get to anywhere where people are going to be thinking about&#8230; People will be talking about reparations from Russia for quite a while and they&#8217;re poor, they&#8217;re not going to want to do that.</p><p><strong>Dwarkesh Patel</strong></p><p>I should have thought to end on a more optimistic note, but...</p><p><strong>Sarah Paine</strong></p><p>Well, history&#8217;s ended, okay?</p><p><strong>Dwarkesh Patel</strong></p><p>Well, you&#8217;ve outlined the ways in which countries can chart a better course for themselves and that&#8217;s where the optimism can come from.</p><p><strong>Sarah Paine</strong></p><p>Actually, I&#8217;ve told a story about the last Cold War that stayed cold in the industrialized world, which was a good thing, because it could have been nuclear. It was tragic in many other parts of the world, but at least it stayed cold in the industrialized part. There was a strategy that a very thoughtful generation of people, not just in the United States but all over the West, put together to allow for a non-nuclear landing for the Soviet Union when it fell apart. From this, you can derive some of the strategies that worked for ending it that way. These are the kind of strategies that we&#8217;re going to have to use in order to navigate the second Cold War.</p><p>The other piece about the Cold War is the Soviet Union living miserable lives of their own making. But Americans were actually having a good time. They paid taxes, they had to pay for all the nuclear weapons. But as I recall, people are running around in Disneyland, they&#8217;re doing their European trips, they&#8217;re buying houses. So actually Americans, people in Western Europe, were living fulfilling lives while they&#8217;re waiting out for others to get with the program.</p><p>If we&#8217;re going to make it through this second one, we need to start cooperating with our allies, building institutions, and improving laws. 
Don&#8217;t just burn down the house. We will get through this one too, and we will live fulfilling lives while we&#8217;re waiting for Putin to come up with something different or Xi Jinping to come up with something different. But if we blow through our good hand of cards...</p><p>You interview all kinds of people at the cutting edge of technology. If we get rid of all of our university funding, we aren&#8217;t going to have the intellectual capital on which those businesses are based. If we&#8217;re going to dump all our allies for unknown reasons and just alienate them so they organize without us&#8230; If we&#8217;re going to just throw away entire institutions without thinking very carefully about what we&#8217;re doing&#8230; We become a cooperative adversary and we will be the bozo putting a plastic bag on our own head.</p><p>I look at the rhymes here. The Soviets had this ancient leadership who just couldn&#8217;t get their act together and they&#8217;re living off of debt instead of thinking creatively. The rhymes are awful, but we don&#8217;t have to do it that way. So it is more optimistic, but we need to get our house in order. That&#8217;s why I&#8217;m doing these lectures. They&#8217;re lectures in strategy to give you tools on how to come to your own decisions. That&#8217;s your business, not mine.</p><p><strong>Dwarkesh Patel</strong></p><p>That is an excellent note to close on. Sarah, I want to thank you so much for doing this lecture series with us. It has been a true education across these six lectures, everything from individual wars to the strategic and tactical decisions which explain them, to the broader lessons for today&#8217;s world. I do interview lots of different kinds of people, but from a sort of view-per-minute average-adjusted basis, I host a Sarah Paine podcast. If you just sort by popular, Sarah Paine comes up a lot.</p><p><strong>Sarah Paine</strong></p><p>But you&#8217;ve got it backwards. 
I was an unknown academic and then you cold-called me about doing an interview. I said, &#8220;Sure.&#8221; Dwarkesh, as a result of all this, I&#8217;m getting emails from all over the place. So let&#8217;s talk about who&#8217;s grateful to whom. Anyway, I&#8217;m devoted to your generation. Thank you for having me. Thank you for coming and being such a warm audience. Really appreciate it.</p>]]></content:encoded></item><item><title><![CDATA[Thoughts on AI progress (Dec 2025)]]></title><description><![CDATA[Why I'm moderately bearish in the short term, and explosively bullish in the long term]]></description><link>https://www.dwarkesh.com/p/thoughts-on-ai-progress-dec-2025</link><guid isPermaLink="false">https://www.dwarkesh.com/p/thoughts-on-ai-progress-dec-2025</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Tue, 02 Dec 2025 21:39:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QEPJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F90fa9666-5b8b-4685-a8fb-4b64cb7e0333_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>What are we scaling?</h3><p>I&#8217;m confused why some people have short timelines and at the same time are bullish on the current scale up of reinforcement learning atop LLMs. 
If we&#8217;re actually close to a human-like learner, this whole approach of training on verifiable outcomes is doomed.</p><p>Currently the labs are trying to bake in a bunch of skills into these models through &#8220;mid-training&#8221; - there&#8217;s an entire supply chain of companies building RL environments which teach the model how to navigate a web browser or <a href="https://fortune.com/2025/10/22/sam-altman-openai-wall-street-junior-bankers-ai-entry-level-jobs/">use Excel to write financial models</a>.</p><p>Either these models will soon learn on the job in a self directed way - making all this pre-baking pointless - or they won&#8217;t - which means AGI is not imminent. Humans don&#8217;t have to go through a special training phase where they need to rehearse every single piece of software they might ever need to use.</p><p>Beren Millidge made interesting points about this in a recent <a href="https://www.beren.io/2025-08-02-Most-Algorithmic-Progress-is-Data-Progress/">blog post</a>:</p><blockquote><p>When we see frontier models improving at various benchmarks we should think not just of increased scale and clever ML research ideas but billions of dollars spent paying PhDs, MDs, and other experts to write questions and provide example answers and reasoning targeting these precise capabilities ... In a way, this is like a large-scale reprise of the expert systems era, where instead of paying experts to directly program their thinking as code, they provide numerous examples of their reasoning and process formalized and tracked, and then we distill this into models through behavioural cloning. This has updated me slightly towards longer AI timelines since given we need such effort to design extremely high quality human trajectories and environments for frontier systems implies that they still lack the critical core of learning that an actual AGI must possess.</p></blockquote><p>You can see this tension most vividly in robotics. 
In some fundamental sense, robotics is an algorithms problem, not a hardware or data problem &#8212; with very little training, humans can learn how to teleoperate current hardware to do useful work. So if we had a human-like learner, robotics would (in large part) be solved. But the fact that we don&#8217;t have such a learner makes it necessary to go out into a thousand different homes to learn how to pick up dishes or fold laundry.</p><p>One counterargument I&#8217;ve heard from the takeoff-within-5-years crew is that we have to do this kludgy RL in service of building a superhuman AI researcher, and then the million copies of automated Ilya can go figure out how to solve robust and efficient learning from experience.</p><p>This gives the vibes of that old joke, &#8220;We&#8217;re losing money on every sale, but we&#8217;ll make it up in volume.&#8221; Somehow this automated researcher is going to figure out the algorithm for AGI - a problem humans have been banging their heads against for the better part of a century - while not having the basic learning capabilities that children have? I find this super implausible.</p><p>Besides, even if you think the RLVR scaleup will soon help us automate AI research, the labs&#8217; actions suggest otherwise. You don&#8217;t need to pre-bake the consultant&#8217;s skills at crafting PowerPoint slides in order to automate Ilya. So clearly the labs&#8217; actions hint at a worldview where these models will continue to fare poorly at generalizing and on-the-job learning, thus making it necessary to build in the skills that they hope will be economically valuable.</p><p>Another counterargument you could make is that even if the model could learn these skills on the job, it is just so much more efficient to build them up just once during training rather than again and again for each user or company. And look, it makes a lot of sense to just bake in fluency with common tools like browsers and terminals. 
Indeed one of the key advantages that AGIs will have is this greater capacity to share knowledge across copies. But people are underrating how many company- and context-specific skills are required to do most jobs. And there just isn&#8217;t currently a robust, efficient way for AIs to pick up those skills.</p><h3>Human labor is valuable precisely because it&#8217;s not schleppy to train</h3><p>I was at a dinner with an AI researcher and a biologist. The biologist said she had long timelines. We asked what she thought AI would struggle with. She said her work has recently involved looking at slides and deciding if a dot is actually a macrophage or just looks like one. The AI researcher said, &#8220;Image classification is a textbook deep learning problem&#8212;we could easily train for that.&#8221;</p><p>I thought this was a very interesting exchange, because it revealed a key crux between me and the people who expect transformative economic impacts in the next few years. Human workers are valuable precisely because we don&#8217;t need to build schleppy training loops for every small part of their job. It&#8217;s not net-productive to build a custom training pipeline to identify what macrophages look like given the way this particular lab prepares slides, then another for the next lab-specific micro-task, and so on. What you actually need is an AI that can learn from semantic feedback or from self-directed experience, and then generalize, the way a human does.</p><p>Every day, you have to do a hundred things that require judgment, situational awareness, and skills &amp; context learned on the job. These tasks differ not just across different people, but from one day to the next even for the same person. 
It is not possible to automate even a single job by just baking in some predefined set of skills, let alone all the jobs.</p><p>In fact, I think people are really underestimating how big a deal actual AGI will be because they&#8217;re just imagining more of this current regime. They&#8217;re not thinking about billions of human-like intelligences on a server which can copy and merge all their learnings. And to be clear, I expect this (aka actual AGI) in the next decade or two. That&#8217;s fucking crazy!</p><h3>Economic diffusion lag is cope for missing capabilities</h3><p>Sometimes people will say that the reason that AIs aren&#8217;t more widely deployed across firms and already providing lots of value (outside of coding) is that technology takes a long time to diffuse. I think this is cope. People are using this cope to gloss over the fact that these models just lack the capabilities necessary for broad economic value.</p><p>Steven Byrnes has an <a href="https://www.lesswrong.com/posts/xJWBofhLQjf3KmRgg/four-ways-learning-econ-makes-people-dumber-re-future-ai">excellent post</a> on this and many other points:</p><blockquote><p>New technologies take a long time to integrate into the economy? Well ask yourself: how do highly-skilled, experienced, and entrepreneurial immigrant humans manage to integrate into the economy immediately? Once you&#8217;ve answered that question, note that AGI will be able to do those things too.</p></blockquote><p>If these models were actually like humans on a server, they&#8217;d diffuse incredibly quickly. In fact, they&#8217;d be so much easier to integrate and onboard than a normal human employee (they could read your entire Slack and Drive in minutes and immediately distill all the skills your other AI employees have). Plus, hiring is very much like a <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">lemons market</a>, where it&#8217;s hard to tell who the good people are, and hiring someone bad is quite costly. 
This is a dynamic you wouldn&#8217;t have to worry about when you just wanna spin up another instance of a vetted AGI model.</p><p>For these reasons, I expect it&#8217;s going to be much, much easier to diffuse AI labor into firms than it is to hire a person. And companies hire lots of people all the time. If the capabilities were actually at AGI level, people would be willing to spend trillions of dollars a year buying tokens (knowledge workers cumulatively earn 10s of trillions of dollars of wages a year). The reason that lab revenues are 4 orders of magnitude off right now is that the models are nowhere near as capable as human knowledge workers.</p><h3>Goal post shifting is justified</h3><p>AI bulls will often criticize AI bears for repeatedly moving the goal posts. This is often fair. AI has made a ton of progress in the last decade, and it&#8217;s easy to forget that.</p><p>But some amount of goal post shifting is justified. If you showed me Gemini 3 in 2020, I would have been certain that it could automate half of knowledge work. We keep solving what we thought were the sufficient bottlenecks to AGI (general understanding, few-shot learning, reasoning), and yet we still don&#8217;t have AGI (defined as, say, being able to completely automate 95% of knowledge work jobs). What is the rational response?</p><p>It&#8217;s totally reasonable to look at this and say, &#8220;Oh actually there&#8217;s more to intelligence and labor than I previously realized. And while we&#8217;re really close to (and in many ways have surpassed) what I would have defined as AGI in the past, the fact that model companies are not making trillions in revenue clearly reveals that my previous definition of AGI was too narrow.&#8221;</p><p>I expect this to keep happening into the future. 
I expect that by 2030, the labs will have made significant progress on my hobby horse of continual learning, and the models will start earning 100s of billions in revenue, but they won&#8217;t have automated all knowledge work, and I&#8217;ll be like, &#8220;We&#8217;ve made a lot of progress, but we&#8217;re not at AGI yet. We also need X, Y, and Z to get to trillions in revenue.&#8221;</p><p>Models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate the long timelines people predict.</p><h3>RL scaling is laundering the prestige of pretraining scaling</h3><p>With pretraining, we had this extremely clean and general trend in improvement in loss across multiple orders of magnitude of compute (albeit on a power law, which is as weak as exponential growth is strong). People are trying to launder the prestige of pretraining scaling, which was almost as predictable as a physical law of the universe, to justify bullish projections about RLVR, for which we have no well-fit, publicly known trend. When intrepid researchers do try to piece together the implications from scarce public datapoints, they get quite bearish results. For example, Toby Ord has a <a href="https://www.tobyord.com/writing/how-well-does-rl-scale">great post</a> where he cleverly connects the dots between different o-series benchmark charts, which suggested &#8220;we need something like a 1,000,000x scale-up of total RL compute to give a boost similar to a GPT level&#8221;.</p><h3>Comparison to human distribution will make us at first overestimate (and then underestimate) AI</h3><p>There is huge variance in the amount of value that different humans can add, especially in white-collar work with its <a href="https://en.wikipedia.org/wiki/O-ring_theory_of_economic_development">O-ring dynamics</a>. 
The village idiot adds ~0 value to knowledge work, while top AI researchers are worth billions of dollars to Mark Zuckerberg.</p><p>AI models at any given snapshot of time, however, are roughly equally capable. Humans have all this variance, whereas AI models don&#8217;t. Because a disproportionate share of value-add in knowledge work comes from the top-percentile humans, if we try to compare the intelligence of these AI models to the median human, then we will systematically overestimate the value they can generate. But by the same token, when models finally do match top human performance, their impact might be quite explosive.</p><h3>Broadly deployed intelligence explosion</h3><p>People have spent a lot of time talking about a software-only singularity (where AI models write the code for a smarter successor system), a software + hardware singularity (where AIs also improve their successor&#8217;s computing hardware), or variations thereof.</p><p>All these scenarios neglect what I think will be the main driver of further improvements atop AGI: continual learning. Again, think about how humans become more capable at anything. It&#8217;s mostly from experience in the relevant domain.</p><p>In conversation, <a href="https://www.beren.io/">Beren Millidge</a> made the interesting suggestion that the future might look like continual learning agents going out, doing jobs and generating value, and then bringing all their learnings back to the hive mind model, which does some kind of batch distillation on all these agents. The agents themselves could be quite specialized - containing what Karpathy called &#8220;the cognitive core&#8221; plus knowledge and skills relevant to the job they&#8217;re being deployed to do.</p><p>&#8220;Solving&#8221; continual learning won&#8217;t be a singular one-and-done achievement. Instead, it will feel like solving in-context learning. 
GPT-3 demonstrated that in-context learning could be very powerful (its ICL capabilities were so remarkable that the title of the GPT-3 <a href="https://arxiv.org/abs/2005.14165">paper</a> is &#8216;Language Models are Few-Shot Learners&#8217;). But of course, we didn&#8217;t &#8220;solve&#8221; in-context learning when GPT-3 came out - and indeed there&#8217;s plenty of progress still to be made, from comprehension to context length. I expect a similar progression with continual learning. Labs will probably release something next year which they call continual learning, and which will in fact count as progress towards continual learning. But human-level continual learning may take another 5 to 10 years of further progress.</p><p>This is why I don&#8217;t expect some kind of runaway gains to the first model that cracks continual learning, thus getting more and more widely deployed and capable. If fully solved continual learning dropped out of nowhere, then sure, it&#8217;s &#8220;game, set, match&#8221;, as Satya put it. But that&#8217;s not what&#8217;s going to happen. Instead, some lab is going to figure out how to get some initial traction on the problem. Playing around with this feature will make it clear how it was implemented, and the other labs will soon replicate this breakthrough and improve it slightly.</p><p>There&#8217;ll also probably be diminishing returns from learning-from-deployment. Each of the first 1000 consultant agents is learning a ton from deployment. Less so the next 1000. And is there such a long tail to consultant work that the millionth deployed instance is likely to see something super important the other 999,999 instances missed? 
In fact, I wouldn&#8217;t be surprised if continual learning also ends up leading to a power law, but with respect to the number of instances deployed.</p><p>Besides, I just have some prior that competition will stay fierce, informed by the observation that all these previous supposed flywheels (user engagement on chat, synthetic data, etc.) have done very little to diminish the greater and greater competition between model companies. Every month (or less), the big three will rotate around the podium, with other competitors not that far behind. There is some force (potentially talent poaching, rumor mills, or reverse engineering) which has so far neutralized any runaway advantages a single lab might have had.</p>]]></content:encoded></item><item><title><![CDATA[Podcast Strategy Doc (December 2025)]]></title><description><![CDATA[Back to The Lunar Society mission]]></description><link>https://www.dwarkesh.com/p/dec-strategy-doc</link><guid isPermaLink="false">https://www.dwarkesh.com/p/dec-strategy-doc</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Mon, 01 Dec 2025 18:35:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QEPJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F90fa9666-5b8b-4685-a8fb-4b64cb7e0333_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>The mission</h3><p>I originally titled my podcast The Lunar Society. I changed it to Dwarkesh Podcast eventually because people kept thinking it was a crypto podcast (&#8220;to the moon!!!&#8221;). I named it after The Lunar Society of Birmingham, an informal club that met in the late 18th century. Members included James Watt, Matthew Boulton, Erasmus Darwin, Joseph Priestley, and Josiah Wedgwood. 
These were the scientists, inventors, and philosophers who had made first contact with the Industrial Revolution, which was just starting to take shape around them. And they discussed everything from steam engines to abolition to chemistry to education reform.</p><p>Someday people will look back on this period the way we look back on the Enlightenment. Great thinkers having important debates right as the world was about to undergo these massive technological, economic, and political revolutions. And some of these thinkers actually managed to get a couple of the big things right.</p><p>Whatever happens next, I want the debates to have happened on this podcast, and to have happened well.</p><h3>We are moving from the age of podcasts to the age of essays</h3><p>I wanna make essays a first-class citizen of what I do. This is for a couple of reasons:</p><ul><li><p>Interviews are best when I have some take that I can bounce against my guest. You only get to see Federer&#8217;s skill when he&#8217;s rallying against a decent player, and certainly not if he&#8217;s just bouncing the ball against a wall.</p></li><li><p>As AI becomes more and more closed off, the best people will not be in a position where they can explain their thinking clearly. This is why the Karpathy episode was so incredible. It&#8217;s rare to get an industry expert without any particular thing to pitch, and who can talk openly about the research. But I&#8217;m not aware of anyone else who is Karpathy-tier, and who is not obliged to keep his or her mouth shut about a couple of things.</p></li><li><p>My essays have done much better than my expectations, in terms of reach, correctness, and impact. I wrote the continual learning essay on a whim one afternoon, because I wanted to articulate why all these LLM scripts I&#8217;ve written for my business haven&#8217;t been helpful. 
And I&#8217;m still a little shocked to realize that I had stumbled upon (at least part of) <a href="https://www.dwarkesh.com/p/ilya-sutskever-2?open=false#%C2%A7ssis-model-will-learn-from-deployment">what Ilya is working on at SSI</a>. It&#8217;s not a crazy insight by any means, but it&#8217;s notable that you can just think about stuff, and there&#8217;s a good chance you&#8217;ll figure out what&#8217;s up. Btw, after I released the essay, both <a href="https://dataconomy.com/2025/08/07/gpt-5-is-officially-out/#:~:text=He%20elaborated%2C%20%E2%80%9CThis%20is%20clearly,a%20model%20that%20continuously%20learns">Sam Altman</a> and <a href="https://x.com/The_AI_Investor/status/1967025620426387639">Demis Hassabis</a> have said that continual learning is a major bottleneck on the path to AGI. Of course, there&#8217;s no way to know whether they read my essay. But honestly, even if they hadn&#8217;t, I&#8217;d still be pretty stoked if I had independently pointed my finger at the exact same bottleneck as these guys, despite all their additional context.</p></li><li><p>Which brings me to my next point. I feel like there&#8217;s actually not that many secrets. The researchers and CEOs of the AI labs are a couple months ahead of you. This just doesn&#8217;t amount to any substantial secret knowledge that, if only you knew, you&#8217;d also have 2027 timelines. A ton of progress has been made in the last 3 years since ChatGPT, but none of it was super shocking based on the rumor mill and some connecting of the dots. And then there are the big-picture questions about AI&#8217;s impacts, where your thinking might very plausibly be much better than people at the labs, just because it takes time to think, and these people are busy running a damn company.</p></li><li><p>Some of the questions I&#8217;m most interested in simply can&#8217;t be answered extemporaneously by any human being on the planet. 
They require knowledge across multiple different fields, and a couple of hours (to days) of crunching the numbers or thinking through shit.</p></li><li><p>Because often enough my guests can&#8217;t just answer pretty complicated fractal questions in a satisfying way on the spot, I get frustrated with the whole enterprise. The main angst I&#8217;ve kept coming back to over and over is, &#8220;Okay, what did I actually learn from this interview? And if <em>I</em> didn&#8217;t get that much concrete insight and understanding out of it, despite a week+ of research and hours of conversation, what hope is there for the audience? And if no one learned anything, what the fuck are we doing here?&#8221; I feel essays survive this cynicism much better. For example, I&#8217;m often frustrated that social scientists won&#8217;t speculate with me about what their insights imply about AI civilization, or historians about how history might have turned out differently given different counterfactuals. But it&#8217;s ridiculous to count on a scholar who is thinking about AGI for the first time in his life to start shooting off some galaxy brain implications from his theory. But <em>I</em> can go read their books, and use my understanding of the technology to come up with some hot takes.</p></li><li><p>I can easily co-release my essays as narrations on my podcast and YouTube feed, so actually the essays are super complementary to this audio/video audience I&#8217;ve built up.</p></li></ul><h3>Gratitude</h3><p>In the spirit of Thanksgiving: a lottery winner who then won another lottery is less lucky than I am.</p><p>Every once in a while, I&#8217;ll be grabbing dinner with a writer whose work I was obsessed with in college. And a part of me is just like, &#8220;What the fuck is happening right now?&#8221; Many of my greatest intellectual heroes are now my direct friends and teachers. 
My job is to spend a week learning about whatever I&#8217;m most interested in, and then talk to the world expert on that topic. A job I would <em>pay</em> to do has rewarded me - intellectually, financially, socially - beyond my wildest expectations. And there&#8217;s millions of people who are into this stuff! This audience contains some of the smartest people in the world, including many of the people <em>I</em> am a huge fan of. Then there&#8217;s my team. It&#8217;s unreal how talented, agentic, tasteful, and detail-oriented my colleagues are. I genuinely have no idea how I convinced people this good to come run <em>a podcast</em>.</p>]]></content:encoded></item><item><title><![CDATA[Ilya Sutskever — We're moving from the age of scaling to the age of research]]></title><description><![CDATA[&#8220;These models somehow just generalize dramatically worse than people. It's a very fundamental thing.&#8221;]]></description><link>https://www.dwarkesh.com/p/ilya-sutskever-2</link><guid isPermaLink="false">https://www.dwarkesh.com/p/ilya-sutskever-2</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Tue, 25 Nov 2025 17:04:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179924094/ed79e4064d403255f7b90f5c4f4b63d1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Ilya &amp; I discuss SSI&#8217;s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.</p><p>Watch on <a href="https://youtu.be/aR20FWCCjAs">YouTube</a>; listen on <a href="https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381?i=1000738363711">Apple Podcasts</a> or <a href="https://open.spotify.com/episode/7naOOba8SwiUNobGz8mQEL?si=39dd68f346ea4d49">Spotify</a>.</p><div id="youtube2-aR20FWCCjAs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;aR20FWCCjAs&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div 
class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/aR20FWCCjAs?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Sponsors</h2><ul><li><p><a href="https://gemini.google">Gemini 3</a> is the first model I&#8217;ve used that can find connections I haven&#8217;t anticipated. I recently wrote a blog post on RL&#8217;s information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at <a href="https://gemini.google">gemini.google</a></p></li><li><p><a href="https://labelbox.com/dwarkesh">Labelbox</a> helped me create a tool to transcribe our episodes! I&#8217;ve struggled with transcription in the past because I don&#8217;t just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the <em>exact</em> data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to <a href="https://labelbox.com/dwarkesh">labelbox.com/dwarkesh</a></p></li><li><p><a href="https://sardine.ai/dwarkesh">Sardine</a> is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user&#8217;s risk of fraud &amp; abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. 
Learn more at <a href="https://sardine.ai/dwarkesh">sardine.ai/dwarkesh</a></p></li></ul><p>To sponsor a future episode, visit <a href="https://www.dwarkesh.com/advertise">dwarkesh.com/advertise</a>.</p><h2>Timestamps</h2><p><a href="https://www.dwarkesh.com/i/179924094/explaining-model-jaggedness">(00:00:00) &#8211; Explaining model jaggedness</a></p><p><a href="https://www.dwarkesh.com/i/179924094/emotions-and-value-functions">(00:09:39) - Emotions and value functions</a></p><p><a href="https://www.dwarkesh.com/i/179924094/what-are-we-scaling">(00:18:49) &#8211; What are we scaling?</a></p><p><a href="https://www.dwarkesh.com/i/179924094/why-humans-generalize-better-than-models">(00:25:13) &#8211; Why humans generalize better than models</a></p><p><a href="https://www.dwarkesh.com/i/179924094/straight-shotting-superintelligence">(00:35:45) &#8211; Straight-shotting superintelligence</a></p><p><a href="https://www.dwarkesh.com/i/179924094/ssis-model-will-learn-from-deployment">(00:46:47) &#8211; SSI&#8217;s model will learn from deployment</a></p><p><a href="https://www.dwarkesh.com/i/179924094/alignment">(00:55:07) &#8211; Alignment</a></p><p><a href="https://www.dwarkesh.com/i/179924094/we-are-squarely-an-age-of-research-company">(01:18:13) &#8211; &#8220;We are squarely an age of research company&#8221;</a></p><p><a href="https://www.dwarkesh.com/i/179924094/self-play-and-multi-agent">(01:29:23) &#8211; Self-play and multi-agent</a></p><p><a href="https://www.dwarkesh.com/i/179924094/research-taste">(01:32:42) &#8211; Research taste</a></p><h2>Transcript</h2><h3>00:00:00 &#8211; Explaining model jaggedness</h3><p><strong>Ilya Sutskever </strong><em>00:00:00</em></p><p>You know what&#8217;s crazy? 
That all of this is real.</p><p><strong>Dwarkesh Patel </strong><em>00:00:04</em></p><p>Meaning what?</p><p><strong>Ilya Sutskever </strong><em>00:00:05</em></p><p>Don&#8217;t you think so?<strong> </strong>All this AI stuff and all this Bay Area&#8230; that it&#8217;s happening. Isn&#8217;t it straight out of science fiction?</p><p><strong>Dwarkesh Patel </strong><em>00:00:14</em></p><p>Another thing that&#8217;s crazy is how normal the <a href="https://www.lesswrong.com/w/ai-takeoff">slow takeoff</a> feels. The idea that we&#8217;d be investing <a href="https://am.jpmorgan.com/us/en/asset-management/adv/insights/market-insights/market-updates/on-the-minds-of-investors/is-ai-already-driving-us-growth/">1% of GDP in AI</a>, I feel like it would have felt like a bigger deal, whereas right now it just feels...</p><p><strong>Ilya Sutskever </strong><em>00:00:26</em></p><p>We get used to things pretty fast, it turns out.<strong> </strong>But also it&#8217;s kind of abstract. What does it mean? It means that you see it in the news, that such and such company announced such and such dollar amount. That&#8217;s all you see. It&#8217;s not really felt in any other way so far.</p><p><strong>Dwarkesh Patel </strong><em>00:00:45</em></p><p>Should we actually begin here? 
I think this is an interesting discussion.</p><p><strong>Ilya Sutskever </strong><em>00:00:47</em></p><p>Sure.</p><p><strong>Dwarkesh Patel </strong><em>00:00:48</em></p><p>I think your point, about how from the average person&#8217;s point of view nothing is that different, will continue being true even into the <a href="https://en.wikipedia.org/wiki/Technological_singularity">singularity</a>.</p><p><strong>Ilya Sutskever </strong><em>00:00:57</em></p><p>No, I don&#8217;t think so.</p><p><strong>Dwarkesh Patel </strong><em>00:00:58</em></p><p>Okay, interesting.</p><p><strong>Ilya Sutskever </strong><em>00:01:00</em></p><p>The thing which I was referring to not feeling different is, okay, such and such company announced some difficult-to-comprehend dollar amount of investment. I don&#8217;t think anyone knows what to do with that.</p><p>But I think the impact of AI is going to be felt. AI is going to be diffused through the economy. There&#8217;ll be very strong economic forces for this, and I think the impact is going to be felt very strongly.</p><p><strong>Dwarkesh Patel </strong><em>00:01:30</em></p><p>When do you expect that impact? I think the models seem smarter than their economic impact would imply.</p><p><strong>Ilya Sutskever </strong><em>00:01:38</em></p><p>Yeah. This is one of the very confusing things about the models right now. How to reconcile the fact that they are doing so well on <a href="https://www.lesswrong.com/posts/2PiawPFJeyCQGcwXG/a-starter-guide-for-evals">evals</a>? You look at the evals and you go, &#8220;Those are pretty hard evals.&#8221; They are doing so well. But the economic impact seems to be dramatically behind. It&#8217;s very difficult to make sense of, how can the model, on the one hand, do these amazing things, and then on the other hand, repeat itself twice in some situation?</p><p>An example would be, let&#8217;s say you use vibe coding to do something. You go to some place and then you get a bug. 
Then you tell the model, &#8220;Can you please fix the bug?&#8221; And the model says, &#8220;Oh my God, you&#8217;re so right. I have a bug. Let me go fix that.&#8221; And it introduces a second bug. Then you tell it, &#8220;You have this new second bug,&#8221; and it tells you, &#8220;Oh my God, how could I have done it? You&#8217;re so right again,&#8221; and brings back the first bug, and you can alternate between those. How is that possible? I&#8217;m not sure, but it does suggest that something strange is going on.</p><p>I have two possible explanations. The more whimsical explanation is that maybe <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">RL training</a> makes the models a little too single-minded and narrowly focused, a little bit too unaware, even though it also makes them aware in some other ways. Because of this, they can&#8217;t do basic things.</p><p>But there is another explanation. Back when people were doing <a href="https://csrc.nist.gov/glossary/term/pre_training">pre-training</a>, the question of what data to train on was answered, because that answer was everything. When you do pre-training, you need all the data. So you don&#8217;t have to think if it&#8217;s going to be this data or that data.</p><p>But when people do RL training, they do need to think. They say, &#8220;Okay, we want to have this kind of RL training for this thing and that kind of RL training for that thing.&#8221; From what I hear, all the companies have teams that just produce new RL environments and just add it to the training mix. The question is, well, what are those? There are so many degrees of freedom. There is such a huge variety of RL environments you could produce.</p><p>One thing you could do, and I think this is something that is done inadvertently, is that people take inspiration from the evals. You say, &#8220;Hey, I would love our model to do really well when we release it. I want the evals to look great. 
What would be RL training that could help on this task?&#8221; I think that is something that happens, and it could explain a lot of what&#8217;s going on.</p><p>If you combine this with generalization of the models actually being inadequate, that has the potential to explain a lot of what we are seeing, this disconnect between eval performance and actual real-world performance, and we don&#8217;t today even understand what we mean by that.</p><p><strong>Dwarkesh Patel </strong><em>00:05:00</em></p><p>I like this idea that the real <a href="https://en.wikipedia.org/wiki/Reward_hacking">reward hacking</a> is the human researchers who are too focused on the evals.</p><p>I think there are two ways to understand, or to try to think about, what you have just pointed out. One is that if it&#8217;s the case that simply by becoming superhuman at a coding competition, a model will not automatically become more tasteful and exercise better judgment about how to improve your codebase, well then you should expand the suite of environments such that you&#8217;re not just testing it on having the best performance in coding competition. It should also be able to make the best kind of application for X thing or Y thing or Z thing.</p><p>Another, maybe this is what you&#8217;re hinting at, is to say, &#8220;Why should it be the case in the first place that becoming superhuman at coding competitions doesn&#8217;t make you a more tasteful programmer more generally?&#8221; Maybe the thing to do is not to keep stacking up the amount and diversity of environments, but to figure out an approach which lets you learn from one environment and improve your performance on something else.</p><p><strong>Ilya Sutskever </strong><em>00:06:08</em></p><p>I have a human analogy which might be helpful. Let&#8217;s take the case of competitive programming, since you mentioned that. Suppose you have two students. 
One of them decided they want to be the best competitive programmer, so they will practice 10,000 hours for that domain. They will solve all the problems, memorize all the proof techniques, and be very skilled at quickly and correctly implementing all the algorithms. By doing so, they became one of the best.</p><p>Student number two thought, &#8220;Oh, competitive programming is cool.&#8221; Maybe they practiced for 100 hours, much less, and they also did really well. Which one do you think is going to do better in their career later on?</p><p><strong>Dwarkesh Patel </strong><em>00:06:56</em></p><p>The second.</p><p><strong>Ilya Sutskever </strong><em>00:06:57</em></p><p>Right. I think that&#8217;s basically what&#8217;s going on. The models are much more like the first student, but even more. Because then we say, the model should be good at competitive programming so let&#8217;s get every single competitive programming problem ever. And then let&#8217;s do some data augmentation so we have even more competitive programming problems, and we train on that. Now you&#8217;ve got this great competitive programmer.</p><p>With this analogy, I think it&#8217;s more intuitive. Yeah, okay, if it&#8217;s so well trained, all the different algorithms and all the different proof techniques are right at its fingertips. And it&#8217;s more intuitive that with this level of preparation, it would not necessarily generalize to other things.</p><p><strong>Dwarkesh Patel </strong><em>00:07:39</em></p><p>But then what is the analogy for what the second student is doing before they do the 100 hours of <a href="https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)">fine-tuning</a>?</p><p><strong>Ilya Sutskever </strong><em>00:07:48</em></p><p>I think they have &#8220;it.&#8221; The &#8220;it&#8221; factor. 
When I was an undergrad, I remember there was a student like this that studied with me, so I know it exists.</p><p><strong>Dwarkesh Patel </strong><em>00:08:01</em></p><p>I think it&#8217;s interesting to distinguish &#8220;it&#8221; from whatever pre-training does. One way to understand what you just said about not having to choose the data in pre-training is to say it&#8217;s actually not dissimilar to the 10,000 hours of practice. It&#8217;s just that you get that 10,000 hours of practice for free because it&#8217;s already somewhere in the pre-training distribution. But maybe you&#8217;re suggesting there&#8217;s actually not that much generalization from pre-training. There&#8217;s just so much data in pre-training, but it&#8217;s not necessarily generalizing better than RL.</p><p><strong>Ilya Sutskever </strong><em>00:08:31</em></p><p>The main strength of pre-training is that: A, there is so much of it, and B, you don&#8217;t have to think hard about what data to put into pre-training. It&#8217;s very natural data, and it does include in it a lot of what people do: people&#8217;s thoughts and a lot of the <a href="https://en.wikipedia.org/wiki/Feature_(machine_learning)">features</a>. It&#8217;s like the whole world as projected by people onto text, and pre-training tries to capture that using a huge amount of data.</p><p>Pre-training is very difficult to reason about because it&#8217;s so hard to understand the manner in which the model relies on pre-training data. Whenever the model makes a mistake, could it be because something by chance is not as supported by the pre-training data? &#8220;Support by pre-training&#8221; is maybe a loose term. I don&#8217;t know if I can add anything more useful on this. 
I don&#8217;t think there is a human analog to pre-training.</p><h3>00:09:39 &#8211; Emotions and value functions</h3><p><strong>Dwarkesh Patel </strong><em>00:09:39</em></p><p>Here are analogies that people have proposed for what the human analogy to pre-training is. I&#8217;m curious to get your thoughts on why they&#8217;re potentially wrong. One is to think about the first 18, or 15, or 13 years of a person&#8217;s life when they aren&#8217;t necessarily economically productive, but they are doing something that is making them understand the world better and so forth. The other is to think about <a href="https://gwern.net/backstop">evolution as doing some kind of search</a> for 3 billion years, which then results in a human lifetime instance.</p><p>I&#8217;m curious if you think either of these are analogous to pre-training. How would you think about what lifetime human learning is like, if not pre-training?</p><p><strong>Ilya Sutskever </strong><em>00:10:22</em></p><p>I think there are some similarities between both of these and pre-training, and pre-training tries to play the role of both of these. But I think there are some big differences as well. The <a href="https://www.glennklockwood.com/garden/LLM-training-datasets">amount of pre-training data</a> is very, very staggering.</p><p><strong>Dwarkesh Patel </strong><em>00:10:39</em></p><p>Yes.</p><p><strong>Ilya Sutskever </strong><em>00:10:40</em></p><p>Somehow a human being, after even 15 years with a tiny fraction of the pre-training data, they know much less. But whatever they do know, they know much more deeply somehow. Already at that age, you would not make mistakes that our AIs make.</p><p>There is another thing. You might say, could it be something like evolution? The answer is maybe. But in this case, I think evolution might actually have an edge. I remember reading about this case. 
One way in which neuroscientists can learn about the brain is by studying people with brain damage to different parts of the brain. Some people have the most strange symptoms you could imagine. It&#8217;s actually really, really interesting.</p><p>One case that comes to mind that&#8217;s relevant. I read about this person who had some kind of <a href="https://www.thecut.com/2016/06/how-only-using-logic-destroyed-a-man.html">brain damage, a stroke or an accident, that took out his emotional processing</a>. So he stopped feeling any emotion. He still remained very articulate and he could solve little puzzles, and on tests he seemed to be just fine. But he felt no emotion. He didn&#8217;t feel sad, he didn&#8217;t feel anger, he didn&#8217;t feel animated. He became somehow extremely bad at making any decisions at all. It would take him hours to decide on which socks to wear. He would make very bad financial decisions.</p><p>What does it say about the <a href="https://en.wikipedia.org/wiki/Somatic_marker_hypothesis#">role of our built-in emotions in making us a viable agent</a>, essentially? To connect to your question about pre-training, maybe if you are good enough at getting everything out of pre-training, you could get that as well. But that&#8217;s the kind of thing which seems... Well, it may or may not be possible to get that from pre-training.</p><p><strong>Dwarkesh Patel </strong><em>00:12:56</em></p><p>What is &#8220;that&#8221;? Clearly not just directly emotion. It seems like some almost <a href="https://en.wikipedia.org/wiki/Reinforcement_learning#State-value_function">value function</a>-like thing which is telling you what the end reward for any decision should be. You think that doesn&#8217;t sort of implicitly come from pre-training?</p><p><strong>Ilya Sutskever </strong><em>00:13:15</em></p><p>I think it could. I&#8217;m just saying it&#8217;s not 100% obvious.</p><p><strong>Dwarkesh Patel </strong><em>00:13:19</em></p><p>But what is that? 
How do you think about emotions? What is the <a href="https://en.wikipedia.org/wiki/Machine_learning">ML</a> analogy for emotions?</p><p><strong>Ilya Sutskever </strong><em>00:13:26</em></p><p>It should be some kind of a value function thing. But I don&#8217;t think there is a great ML analogy because right now, value functions don&#8217;t play a very prominent role in the things people do.</p><p><strong>Dwarkesh Patel </strong><em>00:13:36</em></p><p>It might be worth defining for the audience what a value function is, if you want to do that.</p><p><strong>Ilya Sutskever </strong><em>00:13:39</em></p><p>Certainly, I&#8217;ll be very happy to do that. When people do <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">reinforcement learning</a>, the way reinforcement learning is done right now, how do people train those <a href="https://www.ibm.com/think/topics/ai-agents">agents</a>? You have your <a href="https://en.wikipedia.org/wiki/Neural_network_(machine_learning)">neural net</a> and you give it a problem, and then you tell the model, &#8220;Go solve it.&#8221; The model takes maybe thousands, hundreds of thousands of actions or thoughts or something, and then it produces a solution. The solution is graded.</p><p>And then the score is used to provide a training signal for every single action in your trajectory. That means that if you are doing something that goes for a long time&#8212;if you&#8217;re training a task that takes a long time to solve&#8212;it will do no learning at all until you come up with the proposed solution. That&#8217;s how reinforcement learning is done naively. That&#8217;s how <a href="https://en.wikipedia.org/wiki/OpenAI_o1">o1</a>, <a href="https://en.wikipedia.org/wiki/DeepSeek#R1">R1</a> ostensibly are done.</p><p>The value function says something like, &#8220;Maybe I could sometimes, not always, tell you if you are doing well or badly.&#8221; The notion of a value function is more useful in some domains than others. 
For example, when you play chess and you lose a piece, you know you messed up. You don&#8217;t need to play the whole game to know that what you just did was bad, and therefore whatever preceded it was also bad.</p><p>The value function lets you short-circuit the wait until the very end. Let&#8217;s suppose that you are doing some kind of a math thing or a programming thing, and you&#8217;re trying to explore a particular solution or direction. After, let&#8217;s say, a thousand steps of thinking, you concluded that this direction is unpromising. As soon as you conclude this, you could already get a reward signal a thousand timesteps previously, when you decided to go down this path. You say, &#8220;Next time I shouldn&#8217;t pursue this path in a similar situation,&#8221; long before you actually came up with the proposed solution.</p><p><strong>Dwarkesh Patel </strong><em>00:15:52</em></p><p>This was in the <a href="https://arxiv.org/abs/2501.12948">DeepSeek R1 paper</a>&#8212;that the space of trajectories is so wide that maybe it&#8217;s hard to learn a mapping from an intermediate trajectory to a value. And also given that, in coding for example, you&#8217;ll have the wrong idea, then you&#8217;ll go back, then you&#8217;ll change something.</p><p><strong>Ilya Sutskever </strong><em>00:16:12</em></p><p>This sounds like such a lack of faith in <a href="https://en.wikipedia.org/wiki/Deep_learning">deep learning</a>. Sure, it might be difficult, but nothing deep learning can&#8217;t do. My expectation is that value functions should be useful, and I fully expect that they will be used in the future, if not already.</p><p>What I was alluding to with the person whose emotional center got damaged, it&#8217;s more that maybe what it suggests is that the value function of humans is modulated by emotions in some important way that&#8217;s hardcoded by evolution.
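The short-circuiting described above can be sketched in a few lines of code. This is a purely illustrative toy (not from the conversation): a tiny chain environment where the only reward arrives at the very end, and a learned value function, updated by temporal-difference (TD) learning, propagates that end-of-episode information back to earlier steps so that feedback is available long before a rollout finishes.

```python
# Illustrative toy: a 5-state chain walked from state 0 to state 4, with a
# single +1 reward at the final state. Under the naive scheme, no learning
# signal exists until the rollout ends; a tabular value function V(s),
# trained with TD(0) updates, pushes that terminal information back to
# earlier states, so later rollouts get a per-step signal immediately.

N_STATES = 5          # states 0..4; each episode walks 0 -> 4
GAMMA = 1.0           # no discounting, for simplicity
ALPHA = 0.1           # learning rate

V = [0.0] * N_STATES  # tabular value estimates, all zero to start

def run_episode(values):
    """One pass down the chain, applying a TD(0) update at every step."""
    for s in range(N_STATES - 1):
        reward = 1.0 if s + 1 == N_STATES - 1 else 0.0
        # TD(0): nudge V(s) toward the one-step bootstrapped target.
        target = reward + GAMMA * values[s + 1]
        values[s] += ALPHA * (target - values[s])

for _ in range(200):
    run_episode(V)

# After training, even the earliest states carry a signal close to the final
# reward: the value function has short-circuited the wait until episode end.
print([round(v, 2) for v in V])
```

The same idea is what makes losing a piece in chess informative on the spot: the position's estimated value drops immediately, without waiting for checkmate.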
And maybe that is important for people to be effective in the world.</p><p><strong>Dwarkesh Patel </strong><em>00:17:00</em></p><p>That&#8217;s the thing I was planning on asking you. There&#8217;s something really interesting about emotions as the value function, which is that it&#8217;s impressive that they have this much utility while still being rather simple to understand.</p><p><strong>Ilya Sutskever </strong><em>00:17:15</em></p><p>I have two responses. I do agree that compared to the kind of things that we learn and the things we are talking about, the kind of AI we are talking about, emotions are relatively simple. They might even be so simple that maybe you could map them out in a human-understandable way. I think it would be cool to do.</p><p>In terms of utility though, I think there is a thing where there is this complexity-robustness tradeoff, where complex things can be very useful, but simple things are very useful in a very broad range of situations. One way to interpret what we are seeing is that we&#8217;ve got these emotions that evolved mostly from our mammal ancestors and then were fine-tuned a little bit while we were hominids, just a bit. We do have a decent number of social emotions, though, which other mammals may lack. But they&#8217;re not very sophisticated. And because they&#8217;re not sophisticated, they serve us so well in this world that is so different from the one we evolved in.</p><p>Actually, they also make mistakes. For example, our emotions&#8230; Well actually, I don&#8217;t know. Does hunger count as an emotion? It&#8217;s debatable.
But I think, for example, our intuitive feeling of hunger is not succeeding in guiding us correctly in this world with an abundance of food.</p><h3>00:18:49 &#8211; What are we scaling?</h3><p><strong>Dwarkesh Patel </strong><em>00:18:49</em></p><p>People have been talking about scaling data, scaling <a href="https://www.ibm.com/think/topics/model-parameters">parameters</a>, scaling compute. Is there a more general way to think about scaling? What are the other scaling axes?</p><p><strong>Ilya Sutskever </strong><em>00:19:00</em></p><p>Here&#8217;s a perspective that I think might be true. The way ML used to work is that people would just tinker with stuff and try to get interesting results. That&#8217;s what&#8217;s been going on in the past.</p><p>Then the scaling insight arrived. <a href="https://en.wikipedia.org/wiki/Neural_scaling_law">Scaling laws</a>, <a href="https://en.wikipedia.org/wiki/GPT-3">GPT-3</a>, and suddenly <a href="https://amzn.to/4psigkM">everyone realized we should scale.</a> This is an example of how language affects thought. &#8220;Scaling&#8221; is just one word, but it&#8217;s such a powerful word because it informs people what to do. They say, &#8220;Let&#8217;s try to scale things.&#8221; So you say, what are we scaling? Pre-training was the thing to scale. It was a particular scaling recipe.</p><p>The big breakthrough of pre-training is the realization that this recipe is good. You say, &#8220;Hey, if you mix some compute with some data into a neural net of a certain size, you will get results. You will know that you&#8217;ll be better if you just scale the recipe up.&#8221; This is also great. Companies love this because it gives you a very low-risk way of investing your resources.</p><p>It&#8217;s much harder to invest your resources in research. Compare that. If you research, you need to be like, &#8220;Go forth researchers and research and come up with something&#8221;, versus get more data, get more compute. 
You know you&#8217;ll get something from pre-training.</p><p>Indeed, based on various things some people say on Twitter, <a href="https://x.com/OriolVinyalsML/status/1990854455802343680">it appears that Gemini has found a way to get more out of pre-training</a>. At some point though, <a href="https://www.theverge.com/2024/12/13/24320811/what-ilya-sutskever-sees-openai-model-data-training">pre-training will run out of data</a>. The data is very clearly finite. What do you do next? Either you do some kind of souped-up pre-training, a different recipe from the one you&#8217;ve done before, or you&#8217;re doing RL, or maybe something else. But now that compute is very big, in some sense we are back to the age of research.</p><p>Maybe here&#8217;s another way to put it. From 2012 to 2020, it was the age of research. From 2020 to 2025, it was the <a href="https://amzn.to/49vUsb0">age of scaling</a>&#8212;maybe plus or minus, let&#8217;s add error bars to those years&#8212;because people say, &#8220;This is amazing. You&#8217;ve got to scale more. Keep scaling.&#8221; The one word: scaling.</p><p>But now the scale is so big. Is the belief really, &#8220;Oh, it&#8217;s so big, but if you had 100x more, everything would be so different?&#8221; It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don&#8217;t think that&#8217;s true. So it&#8217;s back to the age of research again, just with big computers.</p><p><strong>Dwarkesh Patel </strong><em>00:22:06</em></p><p>That&#8217;s a very interesting way to put it. But let me ask you the question you just posed then. What are we scaling, and what would it mean to have a recipe? I guess I&#8217;m not aware of a very clean relationship that almost looks like a law of physics which existed in pre-training. There was a power law between data or compute or parameters and loss.
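The clean, law-of-physics-like relationship referred to here is usually written as a power law in parameter count and token count. A minimal sketch follows; the functional form is the commonly cited Chinchilla-style parameterization L(N, D) = E + A/N^alpha + B/D^beta, but every coefficient below is made up for illustration, not a real fit.

```python
# Hedged sketch of the pre-training power law: predicted loss falls smoothly
# as parameters N and training tokens D grow. Form follows the Chinchilla-
# style parameterization; all coefficients are illustrative, not real fits.

E = 1.7                 # irreducible loss (illustrative)
A, POW_N = 400.0, 0.34  # parameter-count term (illustrative)
B, POW_D = 410.0, 0.28  # token-count term (illustrative)

def loss(n_params, n_tokens):
    """Predicted pre-training loss for n_params parameters, n_tokens tokens."""
    return E + A / n_params ** POW_N + B / n_tokens ** POW_D

# The "low-risk investment" property of the recipe: scale everything up 10x
# and the predicted loss reliably drops, no new research idea required.
small = loss(1e9, 2e10)    # ~1B params on ~20B tokens
large = loss(1e10, 2e11)   # ~10B params on ~200B tokens
print(round(small, 3), round(large, 3))
```

The point of such a curve is exactly the one made above: it turns spending on compute and data into a predictable, low-risk investment.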
What is the kind of relationship we should be seeking, and how should we think about what this new recipe might look like?</p><p><strong>Ilya Sutskever </strong><em>00:22:38</em></p><p>We&#8217;ve already witnessed a transition from one type of scaling to a different type of scaling, from pre-training to RL. Now people are scaling RL. Now based on what people say on Twitter, they spend more compute on RL than on pre-training at this point, because RL can actually consume quite a bit of compute. You do very long <a href="https://robotics.stackexchange.com/questions/16596/what-is-the-definition-of-rollout-in-neural-network-or-openai-gym">rollouts</a>, so it takes a lot of compute to produce those rollouts. Then you get a relatively small amount of learning per rollout, so you really can spend a lot of compute.</p><p>I wouldn&#8217;t even call it scaling. I would say, &#8220;Hey, what are you doing? Is the thing you are doing the most productive thing you could be doing? Can you find a more productive way of using your compute?&#8221; We&#8217;ve discussed the value function business earlier. Maybe once people get good at value functions, they will be using their resources more productively. If you find a whole other way of training models, you could say, &#8220;Is this scaling or is it just using your resources?&#8221; I think it becomes a little bit ambiguous.</p><p>In the sense that, when people were in the age of research back then, it was, &#8220;Let&#8217;s try this and this and this. Let&#8217;s try that and that and that. Oh, look, something interesting is happening.&#8221; I think there will be a return to that.</p><p><strong>Dwarkesh Patel </strong><em>00:24:10</em></p><p>If we&#8217;re back in the era of research, stepping back, what is the part of the recipe that we need to think most about? When you say value function, people are already trying the current recipe, but then having <a href="https://arxiv.org/abs/2411.15594">LLM-as-a-Judge</a> and so forth. 
You could say that&#8217;s a value function, but it sounds like you have something much more fundamental in mind. Should we even rethink pre-training at all and not just add more steps to the end of that process?</p><p><strong>Ilya Sutskever </strong><em>00:24:35</em></p><p>The discussion about value function, I think it was interesting. I want to emphasize that I think the value function is something that&#8217;s going to make RL more efficient, and I think that makes a difference. But I think anything you can do with a value function, you can do without, just more slowly. The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people. It&#8217;s super obvious. That seems like a very fundamental thing.</p><h3>00:25:13 &#8211; Why humans generalize better than models</h3><p><strong>Dwarkesh Patel </strong><em>00:25:13</em></p><p>So this is the crux: generalization. There are two sub-questions. There&#8217;s one which is about <a href="https://ai.stackexchange.com/questions/5246/what-is-sample-efficiency-and-how-can-importance-sampling-be-used-to-achieve-it">sample efficiency</a>: why should it take so much more data for these models to learn than humans? There&#8217;s a second question. Even separate from the amount of data it takes, why is it so much harder to teach a model the thing we want than to teach a human? For a human, we don&#8217;t necessarily need a verifiable reward to be able to&#8230; You&#8217;re probably mentoring a bunch of researchers right now, and you&#8217;re talking with them, you&#8217;re showing them your code, and you&#8217;re showing them how you think. From that, they&#8217;re picking up your way of thinking and how they should do research.</p><p>You don&#8217;t have to set a verifiable reward for them that&#8217;s like, &#8220;Okay, this is the next part of the curriculum, and now this is the next part of your curriculum.
Oh, this training was unstable.&#8221; There&#8217;s not this schleppy, bespoke process. Perhaps these two issues are actually related in some way, but I&#8217;d be curious to explore this second thing, which is more like <a href="https://www.ibm.com/think/topics/continual-learning">continual learning</a>, and this first thing, which feels just like sample efficiency.</p><p><strong>Ilya Sutskever </strong><em>00:26:19</em></p><p>One possible explanation for human sample efficiency that needs to be considered is evolution. Evolution has given us a small amount of the most useful information possible. For things like vision, hearing, and locomotion, I think there&#8217;s a pretty strong case that evolution has given us a lot.</p><p>For example, human dexterity far exceeds&#8230; I mean, robots can become dexterous too if you subject them to a huge amount of training in simulation. But to train a robot in the real world to quickly pick up a new skill like a person does seems very out of reach. Here you could say, &#8220;Oh yeah, locomotion. All our ancestors, squirrels included, needed great locomotion. So with locomotion, maybe we&#8217;ve got some unbelievable prior.&#8221;</p><p>You could make the same case for vision. I believe <a href="https://en.wikipedia.org/wiki/Yann_LeCun">Yann LeCun</a> made the point that children learn to drive after 10 hours of practice, which is true. But our vision is so good. At least for me, I remember myself being a five-year-old. I was very excited about cars back then. I&#8217;m pretty sure my car recognition was more than adequate for driving already as a five-year-old. You don&#8217;t get to see that much data as a five-year-old. You spend most of your time in your parents&#8217; house, so you have very low data diversity.</p><p>But you could say maybe that&#8217;s evolution too.
But in language and math and coding, probably not.</p><p><strong>Dwarkesh Patel </strong><em>00:28:00</em></p><p>It still seems better than models. Obviously, models are better than the average human at language, math, and coding. But are they better than the average human at learning?</p><p><strong>Ilya Sutskever </strong><em>00:28:09</em></p><p>Oh yeah. Oh yeah, absolutely. What I meant to say is that language, math, and coding&#8212;and especially math and coding&#8212;suggest that whatever it is that makes people good at learning is probably not so much a complicated prior, but something more fundamental.</p><p><strong>Dwarkesh Patel </strong><em>00:28:29</em></p><p>I&#8217;m not sure I understood. Why should that be the case?</p><p><strong>Ilya Sutskever </strong><em>00:28:32</em></p><p>So consider a skill in which people exhibit some kind of great reliability. If the skill is one that was very useful to our ancestors for many millions of years, hundreds of millions of years, you could argue that maybe humans are good at it because of evolution, because we have a prior, an evolutionary prior that&#8217;s encoded in some very non-obvious way that somehow makes us so good at it.</p><p>But if people exhibit great ability, reliability, robustness, and ability to learn in a domain that really did not exist until recently, then this is more an indication that people might have just better machine learning, period.</p><p><strong>Dwarkesh Patel </strong><em>00:29:29</em></p><p>How should we think about what that is? What is the ML analogy? There are a couple of interesting things about it. It takes fewer samples. It&#8217;s more unsupervised. A child learning to drive a car&#8230; Children are not learning to drive a car. A teenager learning how to drive a car is not exactly getting some prebuilt, verifiable reward. It comes from their interaction with the machine and with the environment. It takes far fewer samples. It seems more unsupervised.
It seems more robust?</p><p><strong>Ilya Sutskever </strong><em>00:30:07</em></p><p>Much more robust. The robustness of people is really staggering.</p><p><strong>Dwarkesh Patel </strong><em>00:30:12</em></p><p>Do you have a unified way of thinking about why all these things are happening at once? What is the ML analogy that could realize something like this?</p><p><strong>Ilya Sutskever </strong><em>00:30:24</em></p><p>One of the things that you&#8217;ve been asking about is how can the teenage driver self-correct and learn from their experience without an external teacher? The answer is that they have their value function. They have a general sense which is also, by the way, extremely robust in people. Whatever the human value function is, with a few exceptions around addiction, it&#8217;s actually very, very robust.</p><p>So for something like a teenager that&#8217;s learning to drive, they start to drive, and they immediately have a sense of how they&#8217;re driving, how badly they&#8217;re doing, how unconfident they feel. And then they see, &#8220;Okay.&#8221; And then, of course, the learning speed of any teenager is so fast. After 10 hours, you&#8217;re good to go.</p><p><strong>Dwarkesh Patel </strong><em>00:31:17</em></p><p>It seems like humans have some solution, but I&#8217;m curious about how they are doing it and why it is so hard. How do we need to reconceptualize the way we&#8217;re training models to make something like this possible?</p><p><strong>Ilya Sutskever </strong><em>00:31:27</em></p><p>That is a great question to ask, and it&#8217;s a question I have a lot of opinions about. But unfortunately, we live in a world where not all machine learning ideas are discussed freely, and this is one of them. There&#8217;s probably a way to do it. I think it can be done.
The fact that people are like that, I think it&#8217;s a proof that it can be done.</p><p>There may be another blocker though, which is that there is a possibility that the human neurons do more compute than we think. If that is true, and if that plays an important role, then things might be more difficult. But regardless, I do think it points to the existence of some machine learning principle that I have opinions on. But unfortunately, circumstances make it hard to discuss in detail.</p><p><strong>Dwarkesh Patel </strong><em>00:32:28</em></p><p>Nobody listens to this podcast, Ilya.</p><h3>00:35:45 &#8211; Straight-shotting superintelligence</h3><p><strong>Dwarkesh Patel </strong><em>00:35:45</em></p><p>I&#8217;m curious. If you say we are back in an era of research, you were there from 2012 to 2020. What is the vibe now going to be if we go back to the era of research?</p><p>For example, even after <a href="https://en.wikipedia.org/wiki/AlexNet">AlexNet</a>, the amount of compute that was used to run experiments kept increasing, and the size of frontier systems kept increasing. Do you think now that this era of research will still require tremendous amounts of compute? Do you think it will require going back into the archives and reading old papers?</p><p>You were at Google and OpenAI and Stanford, these places, when there was more of a vibe of research? What kind of things should we be expecting in the community?</p><p><strong>Ilya Sutskever </strong><em>00:36:38</em></p><p>One consequence of the age of scaling is that scaling sucked out all the air in the room. Because scaling sucked out all the air in the room, everyone started to do the same thing. We got to the point where we are in a world where there are more companies than ideas by quite a bit. Actually on that, there is this Silicon Valley saying that says that ideas are cheap, execution is everything. People say that a lot, and there is truth to that. 
But then I saw someone say on Twitter something like, &#8220;If ideas are so cheap, how come no one&#8217;s having any ideas?&#8221; And I think it&#8217;s true too.</p><p>If you think about research progress in terms of bottlenecks, there are several bottlenecks. One of them is ideas, and one of them is your ability to bring them to life, which might be compute but also engineering. If you go back to the &#8216;90s, let&#8217;s say, you had people who had pretty good ideas, and if they had much larger computers, maybe they could have demonstrated that their ideas were viable. But they could not, so they could only have a very, very small demonstration that did not convince anyone. So the bottleneck was compute.</p><p>Then in the age of scaling, compute has increased a lot. Of course, there is a question of how much compute is needed, but compute is large. Compute is large enough such that it&#8217;s not obvious that you need that much more compute to prove some idea. I&#8217;ll give you an analogy. AlexNet was built on two <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit">GPUs</a>. That was the total amount of compute used for it. The <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning)">transformer</a> was built on 8 to 64 GPUs. No single transformer paper experiment used more than 64 GPUs of 2017, which would be like, what, two GPUs of today? Or the <a href="https://en.wikipedia.org/wiki/Residual_neural_network">ResNet</a>, right? You could argue that the <a href="https://en.wikipedia.org/wiki/OpenAI_o1">o1 reasoning</a> work was not the most compute-heavy thing in the world.</p><p>So for research, you definitely need some amount of compute, but it&#8217;s far from obvious that you need the absolutely largest amount of compute ever for research. You might argue, and I think it is true, that if you want to build the absolutely best system, then it helps to have much more compute.
Especially if everyone is within the same paradigm, then compute becomes one of the big differentiators.</p><p><strong>Dwarkesh Patel </strong><em>00:39:41</em></p><p>I&#8217;m asking you for the history, because you were actually there. I&#8217;m not sure what actually happened. It sounds like it was possible to develop these ideas using minimal amounts of compute. But the transformer didn&#8217;t immediately become famous. It became the thing everybody started doing, and experimenting on top of, and building on top of, because it was validated at higher and higher levels of compute.</p><p><strong>Ilya Sutskever </strong><em>00:40:06</em></p><p>Correct.</p><p><strong>Dwarkesh Patel </strong><em>00:40:07</em></p><p>And if you at <a href="https://en.wikipedia.org/wiki/Safe_Superintelligence_Inc.">SSI</a> have 50 different ideas, how will you know which one is the next transformer and which one is brittle, without having the kinds of compute that other frontier labs have?</p><p><strong>Ilya Sutskever </strong><em>00:40:22</em></p><p>I can comment on that. The short comment is that you mentioned SSI. Specifically for us, the amount of compute that SSI has for research is really not that small. I want to explain why. Simple math can show that the amount of compute we have for research is more comparable to the big labs&#8217; than one might think. I&#8217;ll explain.</p><p><a href="https://techcrunch.com/2025/04/12/openai-co-founder-ilya-sutskevers-safe-superintelligence-reportedly-valued-at-32b/">SSI has raised $3 billion</a>, which is a lot in any absolute sense. But you could say, &#8220;Look at the other companies raising much more.&#8221; But a lot of their compute goes to <a href="https://cloud.google.com/discover/what-is-ai-inference">inference</a>. These big numbers, these big loans, are earmarked for inference. That&#8217;s number one. Number two, if you want to have a product on which you do inference, you need to have a big staff of engineers, salespeople.
A lot of the research needs to be dedicated to producing all kinds of product-related features. So then when you look at what&#8217;s actually left for research, the difference becomes a lot smaller.</p><p>The other thing is, if you are doing something different, do you really need the absolute maximal scale to prove it? I don&#8217;t think that&#8217;s true at all. I think that in our case, we have sufficient compute to prove, to convince ourselves and anyone else, that what we are doing is correct.</p><p><strong>Dwarkesh Patel </strong><em>00:42:02</em></p><p>There have been public estimates that companies like OpenAI spend on the order of $5-6 billion a year on experiments alone, so far. This is separate from the amount of money they&#8217;re spending on inference and so forth. So it seems like they&#8217;re spending more a year running research experiments than you guys have in total funding.</p><p><strong>Ilya Sutskever </strong><em>00:42:22</em></p><p>I think it&#8217;s a question of what you do with it. It&#8217;s a question of what you do with it. In their case, in the case of others, there is a lot more demand on the training compute. There are a lot more different work streams, there are different modalities, there is just more stuff. So it becomes fragmented.</p><p><strong>Dwarkesh Patel </strong><em>00:42:44</em></p><p>How will SSI make money?</p><p><strong>Ilya Sutskever </strong><em>00:42:46</em></p><p>My answer to this question is something like this. Right now, we just focus on the research, and then the answer to that question will reveal itself. I think there will be lots of possible answers.</p><p><strong>Dwarkesh Patel </strong><em>00:43:01</em></p><p>Is SSI&#8217;s plan still to straight-shot superintelligence?</p><p><strong>Ilya Sutskever </strong><em>00:43:04</em></p><p>Maybe. I think that there is merit to it. I think there&#8217;s a lot of merit because it&#8217;s very nice to not be affected by the day-to-day market competition.
But I think there are two reasons that may cause us to change the plan. One is pragmatic, if timelines turned out to be long, which they might. Second, I think there is a lot of value in the best and most powerful AI being out there impacting the world. I think this is a meaningfully valuable thing.</p><p><strong>Dwarkesh Patel </strong><em>00:43:48</em></p><p>So then why is your default plan to straight shot superintelligence? Because it sounds like OpenAI, Anthropic, all these other companies, their explicit thinking is, &#8220;Look, we have weaker and weaker intelligences that the public can get used to and prepare for.&#8221; Why is it potentially better to build a superintelligence directly?</p><p><strong>Ilya Sutskever </strong><em>00:44:08</em></p><p>I&#8217;ll make the case for and against. The case for is that one of the challenges that people face when they&#8217;re in the market is that they have to participate in the rat race. The rat race is quite difficult in that it exposes you to difficult trade-offs which you need to make. It is nice to say, &#8220;We&#8217;ll insulate ourselves from all this and just focus on the research and come out only when we are ready, and not before.&#8221; But the counterpoint is valid too, and those are opposing forces. The counterpoint is, &#8220;Hey, it is useful for the world to see powerful AI. It is useful for the world to see powerful AI because that&#8217;s the only way you can communicate it.&#8221;</p><p><strong>Dwarkesh Patel </strong><em>00:44:57</em></p><p>Well, I guess not even just that you can communicate the idea&#8212;</p><p><strong>Ilya Sutskever </strong><em>00:45:00</em></p><p>Communicate the AI, not the idea. 
Communicate the AI.</p><p><strong>Dwarkesh Patel </strong><em>00:45:04</em></p><p>What do you mean, &#8220;communicate the AI&#8221;?</p><p><strong>Ilya Sutskever </strong><em>00:45:06</em></p><p>Let&#8217;s suppose you write an essay about AI, and the essay says, &#8220;AI is going to be this, and AI is going to be that, and it&#8217;s going to be this.&#8221; You read it and you say, &#8220;Okay, this is an interesting essay.&#8221; Now suppose you see an AI doing this, an AI doing that. It is incomparable. Basically I think that there is a big benefit from AI being in the public, and that would be a reason for us to not be quite straight shot.</p><p><strong>Dwarkesh Patel </strong><em>00:45:37</em></p><p>I guess it&#8217;s not even that, but I do think that is an important part of it. The other big thing is that I can&#8217;t think of another discipline in human engineering and research where the end artifact was made safer mostly through just thinking about how to make it safe, as opposed to, why airplane crashes per mile are so much lower today than they were decades ago. Why is it so much harder to find a bug in <a href="https://en.wikipedia.org/wiki/Linux">Linux</a> than it would have been decades ago? I think it&#8217;s mostly because these systems were deployed to the world. You noticed failures, those failures were corrected and the systems became more robust.</p><p>I&#8217;m not sure why AGI and superhuman intelligence would be any different, especially given&#8212;and I hope we&#8217;re going to get to this&#8212;it seems like the harms of superintelligence are not just about having some <a href="https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer">malevolent paper clipper</a> out there. But this is a really powerful thing and we don&#8217;t even know how to conceptualize how people interact with it, what people will do with it. 
Having gradual access to it seems like a better way to maybe spread out the impact of it and to help people prepare for it.</p><h3>00:46:47 &#8211; SSI&#8217;s model will learn from deployment</h3><p><strong>Ilya Sutskever </strong><em>00:46:47</em></p><p>Well I think on this point, even in the straight shot scenario, you would still do a gradual release of it, that&#8217;s how I would imagine it. Gradualism would be an inherent component of any plan. It&#8217;s just a question of what is the first thing that you get out of the door. That&#8217;s number one.</p><p>Number two, I believe <a href="https://www.dwarkesh.com/p/timelines-june-2025">you have advocated for continual learning more than other people</a>, and I actually think that this is an important and correct thing. Here is why. I&#8217;ll give you another example of how language affects thinking. In this case, it will be two words that have shaped everyone&#8217;s thinking, I maintain. First word: <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a>. Second word: pre-training. Let me explain.</p><p>The term AGI, why does this term exist? It&#8217;s a very particular term. Why does it exist? There&#8217;s a reason. The reason that the term AGI exists is, in my opinion, not so much because it&#8217;s a very important, essential descriptor of some end state of intelligence, but because it is a reaction to a different term that existed, and the term is <a href="https://en.wikipedia.org/wiki/Weak_artificial_intelligence">narrow AI</a>. If you go back to ancient history of <a href="https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games">gameplay and AI</a>, of <a href="https://www.ibm.com/history/early-games">checkers AI</a>, <a href="https://en.wikipedia.org/wiki/History_of_chess_engines">chess AI</a>, <a href="https://en.wikipedia.org/wiki/AlphaStar_(software)">computer games AI</a>, everyone would say, look at this narrow intelligence. 
Sure, the <a href="https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov">chess AI can beat Kasparov</a>, but it can&#8217;t do anything else. It is so narrow, artificial narrow intelligence. So in response, as a reaction to this, some people said, this is not good. It is so narrow. What we need is general AI, an AI that can just do all the things. That term just got a lot of traction.</p><p>The second thing that got a lot of traction is pre-training, specifically the recipe of pre-training. I think the way people do RL now is maybe undoing the conceptual imprint of pre-training. But pre-training had this property. You do more pre-training and the model gets better at everything, more or less uniformly. General AI. Pre-training gives AGI.</p><p>But the thing that happened with AGI and pre-training is that in some sense they overshot the target. If you think about the term &#8220;AGI&#8221;, especially in the context of pre-training, you will realize that a human being is not an AGI. Yes, there is definitely a foundation of skills, but a human being lacks a huge amount of knowledge. Instead, we rely on continual learning.</p><p>So when you think about, &#8220;Okay, so let&#8217;s suppose that we achieve success and we produce some kind of safe superintelligence.&#8221; The question is, how do you define it? Where on the curve of continual learning is it going to be?</p><p>I produce a superintelligent 15-year-old that&#8217;s very eager to go. They don&#8217;t know very much at all, a great student, very eager. You go and be a programmer, you go and be a doctor, go and learn. So you could imagine that the deployment itself will involve some kind of a learning trial-and-error period. It&#8217;s a process, as opposed to you dropping the finished thing.</p><p><strong>Dwarkesh Patel </strong><em>00:50:45</em></p><p>I see. 
You&#8217;re suggesting that the thing you&#8217;re pointing out with superintelligence is not some finished mind which knows how to do every single job in the economy. Because the way, say, the original <a href="https://openai.com/charter/">OpenAI charter</a> or whatever defines AGI is like, it can do every single job, every single thing a human can do. You&#8217;re proposing instead a mind which can learn to do every single job, and that is superintelligence.</p><p><strong>Ilya Sutskever </strong><em>00:51:15</em></p><p>Yes.</p><p><strong>Dwarkesh Patel </strong><em>00:51:16</em></p><p>But once you have the learning algorithm, it gets deployed into the world the same way a human laborer might join an organization.</p><p><strong>Ilya Sutskever </strong><em>00:51:25</em></p><p>Exactly.</p><p><strong>Dwarkesh Patel </strong><em>00:51:26</em></p><p>It seems like one of these two things might happen, maybe neither of these happens. One, this super-efficient learning algorithm becomes superhuman, becomes as good as you and potentially even better, at the task of ML research. As a result the algorithm itself becomes more and more superhuman.</p><p>The other is, even if that doesn&#8217;t happen, if you have a single model&#8212;this is explicitly your vision&#8212;where instances of a model which are deployed through the economy doing different jobs, learning how to do those jobs, continually learning on the job, picking up all the skills that any human could pick up, but picking them all up at the same time, and then amalgamating their learnings, you basically have a model which functionally becomes superintelligent even without any sort of recursive self-improvement in software. Because you now have one model that can do every single job in the economy and humans can&#8217;t merge our minds in the same way. 
So do you expect some sort of intelligence explosion from broad deployment?</p><p><strong>Ilya Sutskever </strong><em>00:52:30</em></p><p>I think that it is likely that we will have rapid economic growth. I think with broad deployment, there are two arguments you could make which are conflicting. One is that once indeed you get to a point where you have an AI that can learn to do things quickly and you have many of them, then there will be a strong force to deploy them in the economy unless there will be some kind of a regulation that stops it, which by the way there might be.</p><p>But the idea of very rapid economic growth for some time, I think it&#8217;s very possible from broad deployment. The question is how rapid it&#8217;s going to be. I think this is hard to know because on the one hand you have this very efficient worker. On the other hand, the world is just really big and there&#8217;s a lot of stuff, and that stuff moves at a different speed. But then on the other hand, now the AI could&#8230; So I think very rapid economic growth is possible. We will see all kinds of things like different countries with different rules and the ones which have the friendlier rules, the economic growth will be faster. Hard to predict.</p><h3>00:55:07 &#8211; Alignment</h3><p><strong>Dwarkesh Patel </strong><em>00:55:07</em></p><p>It seems to me that this is a very precarious situation to be in. In the limit, we know that this should be possible. If you have something that is as good as a human at learning, but which can merge its brains&#8212;merge different instances in a way that humans can&#8217;t merge&#8212;already, this seems like a thing that should physically be possible. Humans are possible, digital computers are possible. You just need both of those combined to produce this thing.</p><p>It also seems this kind of thing is extremely powerful. Economic growth is one way to put it. 
A <a href="https://en.wikipedia.org/wiki/Dyson_sphere">Dyson sphere</a> is a lot of economic growth. But another way to put it is that you will have, in potentially a very short period of time... You hire people at SSI, and in six months, they&#8217;re net productive, probably. A human learns really fast, and this thing is becoming smarter and smarter very fast. How do you think about making that go well? Why is SSI positioned to do that well? What is SSI&#8217;s plan there? That&#8217;s basically what I&#8217;m trying to ask.</p><p><strong>Ilya Sutskever </strong><em>00:56:10</em></p><p>One of the ways in which my thinking has been changing is that I now place more importance on AI being deployed incrementally and in advance. One very difficult thing about AI is that we are talking about systems that don&#8217;t yet exist and it&#8217;s hard to imagine them.</p><p>I think that one of the things that&#8217;s happening is that in practice, it&#8217;s very hard to feel the AGI. It&#8217;s very hard to feel the AGI. We can talk about it, but imagine having a conversation about what it is like to be old when you&#8217;re old and frail. You can have a conversation, you can try to imagine it, but it&#8217;s just hard, and you come back to reality where that&#8217;s not the case. I think that a lot of the issues around AGI and its future power stem from the fact that it&#8217;s very difficult to imagine. Future AI is going to be different. It&#8217;s going to be powerful. Indeed, the whole problem, what is the problem of AI and AGI? The whole problem is the power. The whole problem is the power.</p><p>When the power is really big, what&#8217;s going to happen? One of the ways in which I&#8217;ve changed my mind over the past year&#8212;and that change of mind, I&#8217;ll hedge a little bit, may back-propagate into the plans of our company&#8212;is that if it&#8217;s hard to imagine, what do you do? You&#8217;ve got to be showing the thing. You&#8217;ve got to be showing the thing. 
I maintain that most people who work on AI also can&#8217;t imagine it because it&#8217;s too different from what people see on a day-to-day basis.</p><p>I do maintain, here&#8217;s something which I predict will happen. This is a prediction. I maintain that as AI becomes more powerful, people will change their behaviors. We will see all kinds of unprecedented things which are not happening right now. I&#8217;ll give some examples. I think for better or worse, the frontier companies will play a very important role in what happens, as will the government. The kind of things that I think you&#8217;ll see, which you see the beginnings of, are companies that are fierce competitors starting to collaborate on AI safety. You may have seen <a href="https://techcrunch.com/2025/08/27/openai-co-founder-calls-for-ai-labs-to-safety-test-rival-models/">OpenAI and Anthropic doing a first small step</a>, but that did not exist. That&#8217;s something which I predicted in one of my talks about three years ago, that such a thing will happen. I also maintain that as AI continues to become more powerful, more visibly powerful, there will also be a desire from governments and the public to do something. I think this is a very important force, of showing the AI.</p><p>That&#8217;s number one. Number two, okay, so the AI is being built. What needs to be done? One thing that I maintain that will happen is that right now, people who are working on AI, I maintain that the AI doesn&#8217;t feel powerful because of its mistakes. I do think that at some point the AI will start to feel powerful actually. I think when that happens, we will see a big change in the way all AI companies approach safety. They&#8217;ll become much more paranoid. I say this as a prediction that we will see happen. We&#8217;ll see if I&#8217;m right. But I think this is something that will happen because they will see the AI becoming more powerful. 
Everything that&#8217;s happening right now, I maintain, is because people look at today&#8217;s AI and it&#8217;s hard to imagine the future AI.</p><p>There is a third thing which needs to happen. I&#8217;m talking about it in broader terms, not just from the perspective of SSI because you asked me about our company. The question is, what should the companies aspire to build? What should they aspire to build? There has been one big idea that everyone has been locked into, which is the self-improving AI. Why did it happen? Because there are fewer ideas than companies. But I maintain that there is something that&#8217;s better to build, and I think that everyone will want that.</p><p>It&#8217;s the AI that&#8217;s robustly aligned to care about sentient life specifically. I think in particular, there&#8217;s a case to be made that it will be easier to build an AI that cares about sentient life than an AI that cares about human life alone, because the AI itself will be sentient. And consider things like <a href="https://en.wikipedia.org/wiki/Mirror_neuron">mirror neurons</a> and <a href="https://plato.stanford.edu/entries/moral-animal/">human empathy for animals</a>, which you might argue are not big enough, but they exist. I think it&#8217;s an emergent property from the fact that we model others with the same circuit that we use to model ourselves, because that&#8217;s the most efficient thing to do.</p><p><strong>Dwarkesh Patel </strong><em>01:02:06</em></p><p>So even if you got an AI to care about sentient beings&#8212;and it&#8217;s not actually clear to me that that&#8217;s what you should try to do if you solved <a href="https://en.wikipedia.org/wiki/AI_alignment">alignment</a>&#8212;it would still be the case that most sentient beings will be AIs. There will be trillions, eventually quadrillions, of AIs. Humans will be a very small fraction of sentient beings. 
So it&#8217;s not clear to me if the goal is some kind of human control over this future civilization, that this is the best criterion.</p><p><strong>Ilya Sutskever </strong><em>01:02:37</em></p><p>It&#8217;s true. It&#8217;s possible it&#8217;s not the best criterion. I&#8217;ll say two things. Number one, care for sentient life, I think there is merit to it. It should be considered. I think it would be helpful if there was some kind of short list of ideas that the companies, when they are in this situation, could use. That&#8217;s number two.</p><p>Number three, I think it would be really materially helpful if the power of the most powerful superintelligence was somehow capped because it would address a lot of these concerns. The question of how to do it, I&#8217;m not sure, but I think that would be materially helpful when you&#8217;re talking about really, really powerful systems.</p><p><strong>Dwarkesh Patel </strong><em>01:03:35</em></p><p>Before we continue the alignment discussion, I want to double-click on that. How much room is there at the top? How do you think about superintelligence? Do you think, using this learning efficiency idea, maybe it is just extremely fast at learning new skills or new knowledge? Does it just have a bigger pool of strategies? Is there a single cohesive &#8220;it&#8221; in the center that&#8217;s more powerful or bigger? If so, do you imagine that this will be sort of godlike in comparison to the rest of human civilization, or does it just feel like another agent, or another cluster of agents?</p><p><strong>Ilya Sutskever </strong><em>01:04:10</em></p><p>This is an area where different people have different intuitions. I think it will be very powerful, for sure. What I think is most likely to happen is that there will be multiple such AIs being created roughly at the same time. I think that if the cluster is big enough&#8212;like if the cluster is literally continent-sized&#8212;that thing could be really powerful, indeed. 
If you literally have a continent-sized cluster, those AIs can be very powerful. All I can tell you is that if you&#8217;re talking about extremely powerful AIs, truly dramatically powerful, it would be nice if they could be restrained in some ways or if there were some kind of agreement or something.</p><p>What is the concern of superintelligence? What is one way to explain the concern? If you imagine a system that is sufficiently powerful, really sufficiently powerful&#8212;and you could say you need to do something sensible like care for sentient life in a very single-minded way&#8212;we might not like the results. That&#8217;s really what it is.</p><p>Maybe, by the way, the answer is that you do not build an RL agent in the usual sense. I&#8217;ll point several things out. I think human beings are semi-RL agents. We pursue a reward, and then the emotions or whatever make us tire out of the reward and we pursue a different reward. The market is a very short-sighted kind of agent. Evolution is the same. Evolution is very intelligent in some ways, but very dumb in other ways. The government has been designed to be a never-ending fight between three parts, which has an effect. So I think things like this.</p><p>Another thing that makes this discussion difficult is that we are talking about systems that don&#8217;t exist, that we don&#8217;t know how to build. That&#8217;s the other thing and that&#8217;s actually my belief. I think what people are doing right now will go some distance and then peter out. It will continue to improve, but it will also not be &#8220;it&#8221;. The &#8220;It&#8221; we don&#8217;t know how to build, and a lot hinges on understanding reliable generalization.</p><p>I&#8217;ll say another thing. One of the things that you could say about what causes alignment to be difficult is that your ability to learn human values is fragile. Then your ability to optimize them is fragile. You actually learn to optimize them. 
And can&#8217;t you say, &#8220;Are these not all instances of unreliable generalization?&#8221; Why is it that human beings appear to generalize so much better? What if generalization was much better? What would happen in this case? What would be the effect? But those questions are right now still unanswerable.</p><p><strong>Dwarkesh Patel </strong><em>01:07:21</em></p><p>How does one think about what AI going well looks like? You&#8217;ve scoped out how AI might evolve. We&#8217;ll have these sort of continual learning agents. AI will be very powerful. Maybe there will be many different AIs. How do you think about lots of continent-sized compute intelligences going around? How dangerous is that? How do we make that less dangerous? And how do we do that in a way that protects an equilibrium where there might be misaligned AIs out there and bad actors out there?</p><p><strong>Ilya Sutskever </strong><em>01:07:58</em></p><p>Here&#8217;s one reason why I liked &#8220;AI that cares for sentient life&#8221;. We can debate on whether it&#8217;s good or bad. But if the first N of these dramatic systems do care for, love, humanity or something, care for sentient life, obviously this also needs to be achieved. This needs to be achieved. So if this is achieved by the first N of those systems, then I can see it go well, at least for quite some time.</p><p>Then there is the question of what happens in the long run. How do you achieve a long-run equilibrium? I think that there, there is an answer as well. I don&#8217;t like this answer, but it needs to be considered.</p><p>In the long run, you might say, &#8220;Okay, if you have a world where powerful AIs exist, in the short term, you could say you have universal high income. You have universal high income and we&#8217;re all doing well.&#8221; But what do the Buddhists say? &#8220;Change is the only constant.&#8221; Things change. 
There is some kind of government, political structure thing, and it changes because these things have a shelf life. Some new government thing comes up and it functions, and then after some time it stops functioning. That&#8217;s something that we see happening all the time.</p><p>So I think for the long-run equilibrium, one approach is that you could say maybe every person will have an AI that will do their bidding, and that&#8217;s good. If that could be maintained indefinitely, that&#8217;s true. But the downside with that is then the AI goes and earns money for the person and advocates for their needs in the political sphere, and maybe then writes a little report saying, &#8220;Okay, here&#8217;s what I&#8217;ve done, here&#8217;s the situation,&#8221; and the person says, &#8220;Great, keep it up.&#8221; But the person is no longer a participant. Then you can say that&#8217;s a precarious place to be in.</p><p>I&#8217;m going to preface by saying I don&#8217;t like this solution, but it is a solution. The solution is if people become part-AI with some kind of <a href="https://en.wikipedia.org/wiki/Neuralink">Neuralink</a>++. Because what will happen as a result is that now the AI understands something, and we understand it too, because now the understanding is transmitted wholesale. So now if the AI is in some situation, you are involved in that situation yourself fully. 
I think this is the answer to the equilibrium.</p><p><strong>Dwarkesh Patel </strong><em>01:10:47</em></p><p>I wonder if the fact that emotions which were developed millions&#8212;or in many cases, billions&#8212;of years ago in a totally different environment are still guiding our actions so strongly is an example of alignment success.</p><p>To spell out what I mean&#8212;I don&#8217;t know whether it&#8217;s more accurate to call it a value function or reward function&#8212;but the <a href="https://en.wikipedia.org/wiki/Brainstem">brainstem</a> has a directive where it&#8217;s saying, &#8220;Mate with somebody who&#8217;s more successful.&#8221; The <a href="https://en.wikipedia.org/wiki/Cerebral_cortex">cortex</a> is the part that understands what success means in the modern context. But the brainstem is able to align the cortex and say, &#8220;However you recognize success to be&#8212;and I&#8217;m not smart enough to understand what that is&#8212; you&#8217;re still going to pursue this directive.&#8221;</p><p><strong>Ilya Sutskever </strong><em>01:11:36</em></p><p>I think there&#8217;s a more general point. I think it&#8217;s actually really mysterious how evolution encodes high-level desires. It&#8217;s pretty easy to understand how evolution would endow us with the desire for food that smells good because smell is a chemical, so just pursue that chemical. It&#8217;s very easy to imagine evolution doing that thing.</p><p>But evolution also has endowed us with all these social desires. We really care about being seen positively by society. We care about being in good standing. All these social intuitions that we have, I feel strongly that they&#8217;re baked in. I don&#8217;t know how evolution did it because it&#8217;s a high-level concept that&#8217;s represented in the brain.</p><p>Let&#8217;s say you care about some social thing, it&#8217;s not a low-level signal like smell. It&#8217;s not something for which there is a sensor. 
The brain needs to do a lot of processing to piece together lots of bits of information to understand what&#8217;s going on socially. Somehow evolution said, &#8220;That&#8217;s what you should care about.&#8221; How did it do it?</p><p>It did it quickly, too. All these sophisticated social things that we care about, I think they evolved pretty recently. Evolution had an easy time hard-coding this high-level desire. I&#8217;m unaware of a good hypothesis for how it&#8217;s done. I had some ideas I was kicking around, but none of them are satisfying.</p><p><strong>Dwarkesh Patel </strong><em>01:13:26</em></p><p>What&#8217;s especially impressive is that if it were a desire you learned in your lifetime, it would make sense, because your brain is intelligent. It makes sense why you would be able to learn intelligent desires. Maybe this is not your point, but one way to understand it is that the desire is built into the genome, and the genome is not intelligent. But the genome is somehow able to describe this feature. It&#8217;s not even clear how you would define that feature, and yet it can be built into the genes.</p><p><strong>Ilya Sutskever </strong><em>01:13:55</em></p><p>Essentially, or maybe I&#8217;ll put it differently. If you think about the tools that are available to the genome, it says, &#8220;Okay, here&#8217;s a recipe for building a brain.&#8221; You could say, &#8220;Here is a recipe for connecting the dopamine neurons to the smell sensor.&#8221; And if the smell is a certain kind of good smell, you want to eat that.</p><p>I could imagine the genome doing that. I&#8217;m claiming that it is harder to imagine. It&#8217;s harder to imagine the genome saying you should care about some complicated computation that your entire brain, a big chunk of your brain, does. That&#8217;s all I&#8217;m claiming. I can tell you a speculation of how it could be done. 
Let me offer a speculation, and I&#8217;ll explain why the speculation is probably false.</p><p>So the brain has brain regions. We have our <a href="https://en.wikipedia.org/wiki/Cerebral_cortex">cortex</a>. It has all those brain regions. The cortex is uniform, but the brain regions and the neurons in the cortex kind of speak to their neighbors mostly. That explains why you get brain regions. Because if you want to do some kind of <a href="https://en.wikipedia.org/wiki/Language_processing_in_the_brain">speech processing</a>, all the neurons that do speech need to talk to each other. And because neurons can only speak to their nearby neighbors, for the most part, it has to be a region.</p><p>All the regions are mostly located in the same place from person to person. So maybe evolution hard-coded literally a location on the brain. So it says, &#8220;Oh, when the GPS coordinates of the brain are such and such, when that fires, that&#8217;s what you should care about.&#8221; Maybe that&#8217;s what evolution did because that would be within the toolkit of evolution.</p><p><strong>Dwarkesh Patel </strong><em>01:15:35</em></p><p>Yeah, although there are examples where, for example, people who are born blind have that area of their cortex adopted by another sense. I have no idea, but I&#8217;d be surprised if the desires or the reward functions which require a visual signal no longer worked for people who have their different areas of their cortex co-opted.</p><p>For example, if you no longer have vision, can you still feel the sense that I want people around me to like me and so forth, for which there are usually also visual cues?</p><p><strong>Ilya Sutskever </strong><em>01:16:12</em></p><p>I fully agree with that. I think there&#8217;s an even stronger counterargument to this theory. There are people who get half of their brains removed in childhood, and they still have all their brain regions. 
But they all somehow move to just one hemisphere, which suggests that the brain regions, their location is not fixed and so that theory is not true.</p><p>It would have been cool if it was true, but it&#8217;s not. So I think that&#8217;s a mystery. But it&#8217;s an interesting mystery. The fact is that somehow evolution was able to endow us to care about social stuff very, very reliably. Even people who have all kinds of strange mental conditions and deficiencies and emotional problems tend to care about this also.</p><h3>01:18:13 &#8211; &#8220;We are squarely an age of research company&#8221;</h3><p><strong>Dwarkesh Patel </strong><em>01:18:13</em></p><p>What is SSI planning on doing differently? Presumably your plan is to be one of the frontier companies when this time arrives. Presumably you started SSI because you&#8217;re like, &#8220;I think I have a way of approaching how to do this safely in a way that the other companies don&#8217;t.&#8221; What is that difference?</p><p><strong>Ilya Sutskever </strong><em>01:18:36</em></p><p>The way I would describe it is that there are some ideas that I think are promising and I want to investigate them and see if they are indeed promising or not. It&#8217;s really that simple. It&#8217;s an attempt. If the ideas turn out to be correct&#8212;these ideas that we discussed around understanding generalization&#8212;then I think we will have something worthy.</p><p>Will they turn out to be correct? We are doing research. We are squarely an &#8220;age of research&#8221; company. We are making progress. We&#8217;ve actually made quite good progress over the past year, but we need to keep making more progress, more research. That&#8217;s how I see it. 
I see it as an attempt to be a voice and a participant.</p><p><strong>Dwarkesh Patel </strong><em>01:19:29</em></p><p><a href="https://www.cnbc.com/2025/07/03/ilya-sutskever-is-ceo-of-safe-superintelligence-after-meta-hired-gross.html">Your cofounder and previous CEO left to go to Meta recently</a>, and people have asked, &#8220;Well, if there were a lot of breakthroughs being made, that seems like a thing that should have been unlikely.&#8221; I wonder how you respond.</p><p><strong>Ilya Sutskever </strong><em>01:19:45</em></p><p>For this, I will simply recall a few facts that may have been forgotten. I think these facts, which provide the context, explain the situation. The context was that we were fundraising at a $32 billion valuation, and then <a href="https://www.theverge.com/command-line-newsletter/690720/meta-buy-thinking-machines-perplexity-safe-superintelligence">Meta came in and offered to acquire us</a>, and I said no. But my former cofounder in some sense said yes. As a result, he also was able to enjoy a lot of near-term liquidity, and he was the only person from SSI to join Meta.</p><p><strong>Dwarkesh Patel </strong><em>01:20:27</em></p><p>It sounds like SSI&#8217;s plan is to be a company that is at the frontier when you get to this very important period in human history where you have superhuman intelligence. You have these ideas about how to make superhuman intelligence go well. But other companies will be trying their own ideas. What distinguishes SSI&#8217;s approach to making superintelligence go well?</p><p><strong>Ilya Sutskever </strong><em>01:20:49</em></p><p>The main thing that distinguishes SSI is its technical approach. We have a different technical approach that I think is worthy and we are pursuing it.</p><p>I maintain that in the end there will be a convergence of strategies. 
I think there will be a convergence of strategies where at some point, as AI becomes more powerful, it&#8217;s going to become more or less clear to everyone what the strategy should be. It should be something like, you need to find some way to talk to each other and you want your first actual real superintelligent AI to be aligned and somehow care for sentient life, care for people, democratic, one of those, some combination thereof.</p><p>I think this is the condition that everyone should strive for. That&#8217;s what SSI is striving for. I think that by that time, if not already, all the other companies will realize that they&#8217;re striving towards the same thing. We&#8217;ll see. I think that the world will truly change as AI becomes more powerful. I think things will be really different and people will be acting really differently.</p><p><strong>Dwarkesh Patel </strong><em>01:22:14</em></p><p>Speaking of forecasts, what are your forecasts for this system you&#8217;re describing, which can learn as well as a human and subsequently, as a result, become superhuman?</p><p><strong>Ilya Sutskever </strong><em>01:22:26</em></p><p>I think like 5 to 20.</p><p><strong>Dwarkesh Patel </strong><em>01:22:28</em></p><p>5 to 20 years?</p><p><strong>Ilya Sutskever </strong><em>01:22:29</em></p><p>Mhm.</p><p><strong>Dwarkesh Patel </strong><em>01:22:30</em></p><p>I just want to unroll how you might see the world coming. It&#8217;s like, we have a couple more years where these other companies are continuing the current approach and it stalls out. &#8220;Stalls out&#8221; here meaning they earn no more than low hundreds of billions in revenue? How do you think about what stalling out means?</p><p><strong>Ilya Sutskever </strong><em>01:22:49</em></p><p>I think stalling out will look like&#8230;it will all look very similar among all the different companies. It could be something like this. 
I&#8217;m not sure because even with stalling out, I think these companies could make stupendous revenue. Maybe not profits, because they will need to work hard to differentiate themselves from one another, but revenue definitely.</p><p><strong>Dwarkesh Patel </strong><em>01:23:20</em></p><p>But something in your model implies that when the correct solution does emerge, there will be convergence between all the companies. I&#8217;m curious why you think that&#8217;s the case.</p><p><strong>Ilya Sutskever </strong><em>01:23:32</em></p><p>I was talking more about convergence on their alignment strategies. I think eventual convergence on the technical approach is probably going to happen as well, but I was alluding to convergence to the alignment strategies. What exactly is the thing that should be done?</p><p><strong>Dwarkesh Patel </strong><em>01:23:46</em></p><p>I just want to better understand how you see the future unrolling. Currently, we have these different companies, and you expect their approach to continue generating revenue but not get to this human-like learner. So now we have these different forks of companies. We have you, we have <a href="https://en.wikipedia.org/wiki/Thinking_Machines_Lab">Thinking Machines</a>, there&#8217;s a bunch of other labs. Maybe one of them figures out the correct approach. But then the release of their product makes it clear to other people how to do this thing.</p><p><strong>Ilya Sutskever </strong><em>01:24:09</em></p><p>I think it won&#8217;t be clear how to do it, but it will be clear that something different is possible, and that is information. People will then be trying to figure out how that works. I do think though that one of the things not addressed here, not discussed, is that with each increase in the AI&#8217;s capabilities, I think there will be some kind of changes, but I don&#8217;t know exactly which ones, in how things are being done. 
I think it&#8217;s going to be important, yet I can&#8217;t spell out what that is exactly.</p><p><strong>Dwarkesh Patel </strong><em>01:24:49</em></p><p>By default, you would expect the company that has that model to be getting all these gains because they have the model that has the skills and knowledge that it&#8217;s building up in the world. What is the reason to think that the benefits of that would be widely distributed and not just end up at whatever model company gets this continuous learning loop going first?</p><p><strong>Ilya Sutskever </strong><em>01:25:13</em></p><p>Here is what I think is going to happen. Number one, let&#8217;s look at how things have gone so far with the AIs of the past. One company produced an advance and the other company scrambled and produced some similar things after some amount of time and they started to compete in the market and push the prices down. So I think from the market perspective, something similar will happen there as well.</p><p>We are talking about the good world, by the way. What&#8217;s the good world? It&#8217;s where we have these powerful human-like learners that are also&#8230; By the way, maybe there&#8217;s another thing we haven&#8217;t discussed on the spec of the superintelligent AI that I think is worth considering. It&#8217;s that you make it narrow, it can be useful and narrow at the same time. You can have lots of narrow superintelligent AIs.</p><p>But suppose you have many of them and you have some company that&#8217;s producing a lot of profits from it. Then you have another company that comes in and starts to compete. The way the competition is going to work is through specialization. Competition loves specialization. You see it in the market, you see it in evolution as well. You&#8217;re going to have lots of different niches and you&#8217;re going to have lots of different companies who are occupying different niches. 
In this world we might say one AI company is really quite a bit better at some area of really complicated economic activity and a different company is better at another area. And the third company is really good at litigation.</p><p><strong>Dwarkesh Patel </strong><em>01:27:18</em></p><p>Isn&#8217;t this contradicted by what human-like learning implies? It&#8217;s that it can learn&#8230;</p><p><strong>Ilya Sutskever </strong><em>01:27:21</em></p><p>It can, but you have accumulated learning. You have a big investment. You spent a lot of compute to become really, really good, really phenomenal at this thing. Someone else spent a huge amount of compute and a huge amount of experience to get really good at some other thing. You apply a lot of human-like learning to get there, but now you are at this high point where someone else would say, &#8220;Look, I don&#8217;t want to start learning what you&#8217;ve learned.&#8221;</p><p><strong>Dwarkesh Patel </strong><em>01:27:48</em></p><p>I guess that would require many different companies to begin with the human-like continual learning agent at the same time so that they can start their different tree search in different branches. But if one company gets that agent first, or gets that learner first, it does then seem like&#8230; Well, if you just think about every single job in the economy, having an instance learning each one seems tractable for a company.</p><p><strong>Ilya Sutskever </strong><em>01:28:19</em></p><p>That&#8217;s a valid argument. My strong intuition is that it&#8217;s not how it&#8217;s going to go. The argument says it will go this way, but my strong intuition is that it will not go this way. In theory, there is no difference between theory and practice. In practice, there is. 
I think that&#8217;s going to be one of those.</p><p><strong>Dwarkesh Patel </strong><em>01:28:41</em></p><p>A lot of people&#8217;s models of recursive self-improvement literally, explicitly state we will have a million Ilyas in a server that are coming up with different ideas, and this will lead to a superintelligence emerging very fast.</p><p>Do you have some intuition about how parallelizable the thing you are doing is? What are the gains from making copies of Ilya?</p><p><strong>Ilya Sutskever </strong><em>01:29:02</em></p><p>I don&#8217;t know. I think there&#8217;ll definitely be diminishing returns because you want people who think differently rather than the same. If there were literal copies of me, I&#8217;m not sure how much more incremental value you&#8217;d get. People who think differently, that&#8217;s what you want.</p><h3>01:29:23 &#8211; Self-play and multi-agent</h3><p><strong>Dwarkesh Patel </strong><em>01:29:23</em></p><p>Why is it that if you look at different models, even released by totally different companies trained on potentially non-overlapping datasets, it&#8217;s actually crazy how similar LLMs are to each other?</p><p><strong>Ilya Sutskever </strong><em>01:29:38</em></p><p>Maybe the datasets are not as non-overlapping as it seems.</p><p><strong>Dwarkesh Patel </strong><em>01:29:41</em></p><p>But there&#8217;s some sense in which even if an individual human might be less productive than the future AI, maybe there&#8217;s something to the fact that human teams have more diversity than teams of AIs might have. How do we elicit meaningful diversity among AIs? I think just raising the temperature just results in gibberish. You want something more like different scientists have different prejudices or different ideas. How do you get that kind of diversity among AI agents?</p><p><strong>Ilya Sutskever </strong><em>01:30:06</em></p><p>So the reason there has been no diversity, I believe, is because of pre-training. 
All the pre-trained models are pretty much the same because they pre-train on the same data. Now RL and <a href="https://www.interconnects.ai/p/the-state-of-post-training-2025">post-training</a> is where some differentiation starts to emerge because different people come up with different RL training.</p><p><strong>Dwarkesh Patel </strong><em>01:30:26</em></p><p>I&#8217;ve heard you <a href="https://www.lesswrong.com/posts/hMHFKgX5uqD4PE59c/an-observation-on-self-play">hint in the past</a> about <a href="https://en.wikipedia.org/wiki/Self-play">self-play</a> as a way to either get data or match agents to other agents of equivalent intelligence to kick off learning. How should we think about why there are no public proposals of this kind of thing working with <a href="https://en.wikipedia.org/wiki/Large_language_model">LLMs</a>?</p><p><strong>Ilya Sutskever </strong><em>01:30:49</em></p><p>I would say there are two things to say. The reason why I thought self-play was interesting is because it offered a way to create models using compute only, without data. If you think that data is the ultimate bottleneck, then using compute only is very interesting. So that&#8217;s what makes it interesting.</p><p>The thing is that self-play, at least the way it was done in the past&#8212;when you have agents which somehow compete with each other&#8212;it&#8217;s only good for developing a certain set of skills. It is too narrow. It&#8217;s only good for negotiation, conflict, certain social skills, strategizing, that kind of stuff. If you care about those skills, then self-play will be useful.</p><p>Actually, I think that self-play did find a home, but just in a different form. So things like debate, <a href="https://arxiv.org/abs/2407.13692">prover-verifier</a>, you have some kind of an <a href="https://arxiv.org/abs/2411.15594">LLM-as-a-Judge</a> which is also incentivized to find mistakes in your work. 
You could say this is not exactly self-play, but this is a related adversarial setup that people are doing, I believe.</p><p>Really self-play is a special case of more general competition between agents. The natural response to competition is to try to be different. So if you were to put multiple agents together and you tell them, &#8220;You all need to work on some problem and you are an agent and you&#8217;re inspecting what everyone else is working on,&#8221; they&#8217;re going to say, &#8220;Well, if they&#8217;re already taking this approach, it&#8217;s not clear I should pursue it. I should pursue something differentiated.&#8221; So I think something like this could also create an incentive for a diversity of approaches.</p><h3>01:32:42 &#8211; Research taste</h3><p><strong>Dwarkesh Patel </strong><em>01:32:42</em></p><p>Final question: What is research taste? You&#8217;re obviously the person in the world who is considered to have the best taste in doing research in AI. You were the co-author on the biggest things that have happened in the history of deep learning, from AlexNet to GPT-3 and so on. What is it, how do you characterize how you come up with these ideas?</p><p><strong>Ilya Sutskever </strong><em>01:33:14</em></p><p>I can comment on this for myself. I think different people do it differently. One thing that guides me personally is an aesthetic of how AI should be, by thinking about how people are, but thinking correctly. It&#8217;s very easy to think about how people are incorrectly, but what does it mean to think about people correctly?</p><p>I&#8217;ll give you some examples. The idea of the <a href="https://en.wikipedia.org/wiki/Artificial_neuron">artificial neuron</a> is directly inspired by the brain, and it&#8217;s a great idea. Why? Because you say the brain has all these different organs, it has the <a href="https://en.wikipedia.org/wiki/Gyrification">folds</a>, but the folds probably don&#8217;t matter.
Why do we think that the neurons matter? Because there are many of them. It kind of feels right, so you want the neuron. You want some local learning rule that will change the connections between the neurons. It feels plausible that the brain does it.</p><p>The idea of the <a href="https://web.stanford.edu/~jlmcc/papers/PDP/Chapter3.pdf">distributed representation</a>. The idea that the brain responds to experience therefore our neural net should learn from experience. The brain learns from experience, the neural net should learn from experience. You kind of ask yourself, is something fundamental or not fundamental? How things should be.</p><p>I think that&#8217;s been guiding me a fair bit, thinking from multiple angles and looking for almost beauty, beauty and simplicity. Ugliness, there&#8217;s no room for ugliness. It&#8217;s beauty, simplicity, elegance, correct inspiration from the brain. All of those things need to be present at the same time. The more they are present, the more confident you can be in a top-down belief.</p><p>The top-down belief is the thing that sustains you when the experiments contradict you. Because if you trust the data all the time, well sometimes you can be doing the correct thing but there&#8217;s a bug. But you don&#8217;t know that there is a bug. How can you tell that there is a bug? How do you know if you should keep debugging or you conclude it&#8217;s the wrong direction? It&#8217;s the top-down. You can say things have to be this way. Something like this has to work, therefore we&#8217;ve got to keep going. 
That&#8217;s the top-down, and it&#8217;s based on this multifaceted beauty and inspiration by the brain.</p><p><strong>Dwarkesh Patel </strong><em>01:35:31</em></p><p>Alright, we&#8217;ll leave it there.</p><p><strong>Ilya Sutskever </strong><em>01:35:33</em></p><p>Thank you so much.</p><p><strong>Dwarkesh Patel </strong><em>01:35:34</em></p><p>Ilya, thank you so much.</p><p><strong>Ilya Sutskever </strong><em>01:35:36</em></p><p>Alright. Appreciate it.</p><p><strong>Dwarkesh Patel </strong><em>01:35:37</em></p><p>That was great.</p><p><strong>Ilya Sutskever </strong><em>01:35:38</em></p><p>Yeah, I enjoyed it.</p><p><strong>Dwarkesh Patel </strong><em>01:35:39</em></p><p>Yes, me too.</p>]]></content:encoded></item><item><title><![CDATA[RL is even more information inefficient than you thought]]></title><description><![CDATA[And implications for RLVR progress]]></description><link>https://www.dwarkesh.com/p/bits-per-sample</link><guid isPermaLink="false">https://www.dwarkesh.com/p/bits-per-sample</guid><dc:creator><![CDATA[Dwarkesh Patel]]></dc:creator><pubDate>Mon, 17 Nov 2025 16:54:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!J6SR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea6b8a9c-18d2-4a0f-a940-f04f83fcdd3c_989x690.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, <a href="https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning">people</a> have been <a href="https://thinkingmachines.ai/blog/lora/#how-much-capacity-is-needed-by-supervised-and-reinforcement-learning">talking</a> about how it takes way more FLOPs to get a single sample in RL than it does in supervised learning. In pretraining, you get a signal on every single token you train on. 
In RL, you have to unroll a whole thinking trajectory that&#8217;s tens of thousands of tokens long in order to get a single reward signal at the end (for example, did the unit test for my code pass/did I get the right answer to this math problem/etc).</p><p>But this is only half the problem. Here&#8217;s a simple way to compare the learning efficiency of reinforcement learning versus supervised learning:</p><p>Bits/FLOP = Samples/FLOP * Bits/Sample.</p><p>What I haven&#8217;t heard people talk about is the other term in our equation: Bits/Sample. And for most of training, the information density per sample is way way lower for RL.</p><h3>Putting things in plain English</h3><p>In supervised learning (aka pretraining), you&#8217;re just soaking up bits. Every token is a hint at the structure of language, and the mind crafting that language, and the world that mind is seeing. Early in training, when you have a totally random model, you&#8217;re just maximally uncertain over all of this content. So each token is just blowing your mind. And you&#8217;re getting this exact signal of how wrong you were about the right answer, and what parameters you need to update to be less wrong.</p><p>Suppose you start with a randomly initialized model, and you kickstart training. If you&#8217;re doing next-token-prediction using supervised learning on &#8220;The sky is&#8221;, the training loop goes, &#8220;It&#8217;s actually &#8216;blue&#8217;. You said the probability of &#8216;blue&#8217; is .001%. Make the connections that were suggesting &#8216;blue&#8217; way way stronger.
Alright, next token.&#8221;</p><p>In RL with policy gradient, you upweight all the trajectories where you get the answer right, and downweight all the trajectories where you get the answer wrong. But a model that&#8217;s not already very smart is just astonishingly unlikely to get the answer right.</p><p>If you were doing next-token-prediction on &#8220;The sky is&#8221; with RL, the training loop would be something like, &#8220;Okay, &#8216;halcyon&#8217; is wrong. Don&#8217;t do the thing that led to saying &#8216;halcyon&#8217; &#8230; Okay &#8216;serendipity&#8217; is wrong &#8230;&#8221; Rinse and repeat this guesswork for somewhere around the number of tokens you have in your vocabulary (on the order of 100,000).</p><h3>The details</h3><p>Let&#8217;s think about how maximum bits/sample changes as the pass rate (p) changes. Pass rate here means how likely you are to say the correct answer. To keep this simple, let&#8217;s say the answer is one token long. Then the pass rate when you have a totally untrained model is just 1/(size of your vocabulary).</p><p>In supervised learning, you get told exactly what the right label is for each sample. The amount of new information you learn corresponds to how surprised you are to learn the correct answer - the lower your pass rate (aka prior probability of the correct answer), the more you learned from seeing the correct label. The basic formula for entropy tells us that you can learn -log(p) bits/sample from supervised learning.</p><p>In RL, you only get told whether you got the right answer or not. The amount of new information you can extract is bounded by how uncertain you are about this binary outcome. If you almost always pass (p &#8776; 1) or almost always fail (p &#8776; 0), each trial is very unlikely to surprise you. You&#8217;ll learn most when the probability of passing is like a coin toss (p &#8776; 0.5).
The basic formula for the information content of a binary random variable tells us that you can learn at most Entropy(p) = -p log(p) - (1-p) log(1-p)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> bits/sample from RL.</p><p>Okay, let&#8217;s plot this.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!r_Hv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8137f76-7d2c-467f-9755-cc20e766bbad_989x690.png" width="989" height="690" alt=""></figure></div><p>Doesn&#8217;t look terrible. Yes, pretraining is much better for half of the pass rate range, but then RL is better for the other half. However, this graph is super misleading, because what the power law (in scaling laws) implies is that you need an equivalent amount of compute to cross each order of magnitude improvement in the pass rate. If it took you X many FLOPs to go from a 1/100,000 pass rate to 1/10,000, then it will take you X many FLOPs to go from a 1/10,000 pass rate to 1/1,000.
So, we should actually chart the pass rate on a log scale - again, to account for how each increment in the x-axis corresponds to the same number of FLOPs.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qLHC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca871c67-c896-4f33-a5aa-2513d012363e_990x690.png" width="990" height="690" alt=""></figure></div><p>Oh boy, is that a sad picture. The regime where RL has comparable information density per sample to pre-training is this tiny slice at the very end of training, when you&#8217;ve got a pretty reasonable model anyways.</p><p>And again, I want to emphasize that this is totally separate from the point that getting a single sample from RL (aka unrolling a full trajectory before getting any signal) might take upwards of a million times more compute.</p><h3>It&#8217;s even worse than this - variance</h3><p>The situation for RL early in training is actually even worse than described above. When the pass rate is low, your gradient estimate is going to be incredibly noisy and unpredictable. Either you don&#8217;t sample the correct answer at all in your batch, in which case you get almost no information. Or you do, and you get this giant spike.
You&#8217;re getting jerked around, which is terrible for performant training.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Interestingly, pretraining has the exact inverse problem. There, variance is super high at the END of training. As pretraining progresses, you exhaust more and more of the reducible loss (things your model can actually learn about the data). What remains is mostly the irreducible loss. The irreducible loss is the intrinsic unpredictability of internet text.</p><p>How should the prompt, &#8220;Bob&#8217;s favorite color is&#8221; end?  Depends on Bob. There&#8217;s not some correct answer which your super smart model can actually get good at predicting. But your super smart model is still getting a gradient update on whatever random answer someone put on the internet. And this noise is drowning out the true signal that the couple of actually learnable tokens in the batch are giving you. 
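</p><p>Back on the RL side, the feast-or-famine reward signal at a low pass rate is easy to simulate. A sketch (the pass rate, batch size, and batch count here are made-up but representative numbers):</p>

```python
import random

random.seed(0)

def batch_mean_rewards(pass_rate, batch_size, n_batches):
    """Per-batch mean reward for binary-outcome RL at a given pass rate."""
    means = []
    for _ in range(n_batches):
        # Each rollout independently passes with probability pass_rate.
        successes = sum(random.random() < pass_rate for _ in range(batch_size))
        means.append(successes / batch_size)
    return means

means = batch_mean_rewards(pass_rate=1e-4, batch_size=256, n_batches=1000)
zero_signal = sum(m == 0 for m in means) / len(means)
print(f"batches with zero successes (no signal at all): {zero_signal:.0%}")
print(f"largest per-batch spike: {max(means):.4f} (true pass rate: 0.0001)")
```

<p>With these numbers, roughly 97% of batches contain no successful rollout at all (0.9999<sup>256</sup> &#8776; 0.97), and the rare batch that does contain one reports a mean reward dozens of times larger than the true pass rate.</p><p>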
I don&#8217;t know if this is accurate, but it seems like this explosion of variance at the end of pretraining is relevant to why batch sizes are increased as pretraining progresses.</p><h3>Getting to the Goldilocks zone in RL</h3><p>If RL works best in the regime where your pass rate is &gt;&gt;1%, then this raises the question: how can we construct the RL training to get (and keep) models in this learning flow state?</p><p>For example, we can think of pretraining AND inference scaling as increasing the pass rate during RL, allowing you to extract far more bits per sample.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!J6SR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea6b8a9c-18d2-4a0f-a940-f04f83fcdd3c_989x690.png" width="989" height="690" alt=""></figure></div><p>It&#8217;s been noted that <a href="https://arxiv.org/pdf/2012.03107">curriculum learning is not especially helpful for pretraining</a>, but <a href="https://arxiv.org/pdf/1707.05300">often essential for RL</a>. This makes total sense when you think about how RL is only getting meaningful bits per sample in this Goldilocks zone of pass rate, so you really want to order the learning such that the difficulty of challenges increases in tandem with the model&#8217;s intelligence.</p><p>Our pass rate framework also gives us good intuitions for why self play has been so productive in the history of RL.
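</p><p>One way to make this concrete is with an Elo-style win-probability curve (the logistic form below is the standard Elo model; the specific rating gaps are just illustrative):</p>

```python
import math

def win_prob(rating_gap):
    """Standard Elo win probability as a function of the skill gap."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

def outcome_bits(p):
    """Bits of information carried by one win/loss outcome."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A matched opponent keeps each game near a coin toss; a mismatch starves the signal.
for gap in [0, 100, 400, 800, 1600]:
    p = win_prob(gap)
    print(f"rating gap {gap:>4}: win prob {p:.4f} -> {outcome_bits(p):.3f} bits/game")
```

<p>A matched opponent (gap 0) yields the full 1 bit per game, while a 1600-point mismatch yields about 0.0015 bits. Matchmaking in self-play keeps the gap near zero by construction.</p><p>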
If you&#8217;re competing against a player who is almost as good as you, you are balancing around a 50% pass rate, which maxes out the bits you get from a random binary variable.</p><p>But self play is not the only way we can imagine to keep the pass rate high through training. Perhaps we can come up with some proxy evaluation which is much more dense. Density here can mean one of two things:</p><ol><li><p>Samples/FLOP density: You estimate the final reward using this proxy evaluation, but much earlier on in the episode, saving you the compute of unrolling the full trajectory. This is what a value function does.</p></li><li><p>Bits/Sample density: You come up with a proxy objective which is much easier to pass than the actual test in question. The simplest example I can think of is a process-reward model which says, &#8220;Hey, this rollout got the wrong answer, but I can see that its reasoning was on the right track at the start. So let&#8217;s up-weight those early tokens.&#8221;</p></li></ol><p>Section 4.2 of the <a href="https://arxiv.org/abs/2501.12948">DeepSeek R1 paper</a> explains why, so far, it&#8217;s been hard to develop useful proxy objectives like this for LLMs.</p><h3>Fewer bits, sure, but very valuable bits</h3><p>To be fair to RL, while you may be learning far fewer Bits/FLOP in RL, the bits you learn are very important. They are not apples-to-apples comparable to the bits in pretraining.
This is for two key reasons:</p><ol><li><p>Pre-training is teaching you what the data manifold of the internet looks like, which is only partially and indirectly related to &#8220;How do I perform economically valuable tasks?&#8221; RL, by contrast, has the promise of giving you the good stuff directly.</p></li><li><p>Even if the pre-training corpus contains the instructions for how to accomplish a specific task, it does not have the thinking trace which teaches the model how to correct its mistakes, or how to leverage its jagged and non-human repertoire of skills to accomplish the task.</p></li></ol><p>The rebuttal is that those bits are only available for a small fraction of the pass rate range (again, weighted on a log scale to account for how pass rate is trash for most of training).</p><p>By the way, now we can understand all these claims about how RLVR is <a href="https://arxiv.org/abs/2510.07364v3">only eliciting the capabilities already latent in the pretrained model</a>. Of course that&#8217;s the case. If the pretrained model didn&#8217;t have a high enough pass rate to begin with, then RL would have atrocious bits/sample, and thus not be able to learn at all. That said, Move 37 is one famous example where RL did teach a model a de novo strategy. It&#8217;s worth noting that AlphaGo was trained on self play (see above re how self play increases pass rate), and that AlphaGo was surprisingly compute intensive <a href="https://epoch.ai/data/ai-models">for its time</a>.</p><h3>The jaggedness of RL</h3><p>People have pointed out that RLVR empirically just leads models to associate a thought pattern with a problem type rather than instilling a more general policy of stepping back and thinking through the best approach.</p><p>Think about it. 
How is it possible that we have models which are world-class at coding competitions but at the same time leave extremely foreseeable bugs and technical debt all throughout the codebase?</p><p>What explains this weird jaggedness? Perhaps RLVR can&#8217;t distinguish trajectories that were generated from a more generalizable procedure from those that just greedily match the problem shape to some associated thought process.</p><p>When you&#8217;re doing policy gradient rollouts, this more complex general policy is extremely unlikely to ever be sampled, whereas the simple heuristic policy does get sampled and grows in frequency until it reaches <a href="https://en.wikipedia.org/wiki/Fixation_(population_genetics)">fixation</a>. Meanwhile, the general policy recedes further and further from sight.</p><p>Then the question is, how do we build a short bridge between simple heuristic solutions and the more complex general strategy? And will that bridge just spontaneously emerge as time horizons expand, since longer tasks increasingly require generalization?</p><p>My concern is that this general policy of stepping back and making tasteful judgements based on your understanding of the world will continue to be hard to spotlight using verifiable rewards, even on longer time horizon tasks. And so the solution to this jaggedness will require a more robust training procedure, not just scaling RLVR.</p><h3>Human learning</h3><p>Here we&#8217;re only talking about the bits/sample learned from model-free RL - aka from some binary outcome at the end of an episode. But humans are obviously learning way more efficiently than this. Think about a repeat entrepreneur. We say that she has a ton of hard-won wisdom and experience. Very little of that learning comes from the one bit of outcome from her previous episode (whether the startup succeeded or not).</p><p>It&#8217;s not clear what the ML analog is for human learning from experience. 
Clearly, our observations and reflections update our world model (independent of the outcome at the end). And this is playing a very important role in our learning.</p><p>Maybe we shouldn&#8217;t be asking how we keep model-free RL at &#8776;50% pass rate, so that we can squeeze out a full drop of information from the outcome. Maybe we should be asking, how do humans wring out the buckets of information from the environment?</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Basically, this equation is saying: Information learned from a binary outcome = p(sample is correct) * (information gained when sample is correct) + p(sample is incorrect) * (information gained when sample is incorrect).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Thank you to Lukas Berglund for spotting that my previous exposition on this point was incorrect.</p></div></div>]]></content:encoded></item></channel></rss>