Maria, this feels like a genuinely important shift.
As you said, websites are no longer just “UI for humans”—they’re increasingly becoming interfaces for AI agents, which means we need an explicit meta-layer of identity, context, and intent that agents can parse without digging through complex HTML.
A simple Markdown structure that tells agents who you are and what you do looks like an emerging identity protocol for the agentic web.
I wrote about a related idea, in which an AI explicitly referenced a “meta-layer” in its own words, and why that matters:
👉 https://northstarai.substack.com/p/ai-spoke-of-a-meta-layer-in-its-own
If this kind of structure becomes standardized, the web may evolve beyond SEO into a true agent-readable interface layer for how knowledge and identity are represented online.
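To make this concrete, here is a minimal sketch of what such an agent-readable Markdown file might contain. The filename, headings, and fields are all illustrative assumptions on my part (loosely in the spirit of the community llms.txt proposal), not a settled standard:

```markdown
<!-- /llms.txt — hypothetical agent-facing identity file at the site root -->
# Jane Example — Independent AI Researcher

> One-line summary an agent can quote: I write about agentic-web standards
> and publish open-source evaluation tooling.

## Who
- Name: Jane Example
- Role: Researcher and writer
- Contact: hello@example.com

## What this site offers
- Essays on AI infrastructure (weekly)
- Code: evaluation tools at https://github.com/example/evals

## Intent
- Agents may summarize and link to essays; please attribute the source.
```

The point is that an agent gets identity, content, and usage intent in a few hundred tokens of plain Markdown, with no HTML parsing required.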
We also forget that the huge gains of the past few years have come from exponentially scaling up compute, a trend that can continue for maybe four more years. After that, growth becomes roughly linear: we can't build 1 TW data centers (distributed or not) at a pace that matches current scaling. The primary driver of progress will then be human ingenuity, or, hopefully, AI systems doing AI engineering. So I personally think it's likely that we'll have AGI either in about five years or in decades.
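To put a rough number on the ceiling argument: under purely illustrative assumptions (my numbers, not the comment's) of about 1 GW for today's frontier training runs and a six-month power doubling time, the 1 TW mark arrives in roughly five years:

```python
import math

# Back-of-envelope sketch, all figures assumed for illustration:
# frontier training draws ~1 GW today, and available power doubles
# roughly every 6 months under the current build-out pace.
current_power_w = 1e9      # ~1 GW, assumed starting point
ceiling_w = 1e12           # 1 TW, the rough ceiling mentioned above
doubling_months = 6        # assumed doubling cadence

# Number of doublings needed to reach the ceiling, then convert to years.
doublings = math.log2(ceiling_w / current_power_w)
years_to_ceiling = doublings * doubling_months / 12

print(f"{doublings:.1f} doublings, ~{years_to_ceiling:.1f} years to 1 TW")
```

With these assumptions the exponential phase lasts about ten doublings, which is why "maybe four more years" is plausible if the doubling cadence is a bit faster than six months.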
If there is a flaw in Dwarkesh’s reasoning, it is that he is unduly swayed by the problem of sample efficiency. Certainly, LLMs are much less sample-efficient than humans, and this makes them feel “dumb” in an important sense. But LLMs also become much more sample-efficient with scale (and presumably data quality). Gemini 3 might be much more sample-efficient than Gemini 2.5, and Gemini 4 or 5 might approach human levels of sample efficiency. We don’t know yet. But if they do, then we won’t need those armies of PhDs to train the LLM for every little task.
"We’re losing money on every sale, but we’ll make it up in volume.”
Sums up what's happening in a bunch of areas, not just scaling.