Ilya is slowly coming around to the true foundation of intelligence, Biblical morality.
I'm really glad you kept that casual behind the scenes first minute in the video.
What's happening with this slow takeoff is so epochal but we barely stop to consider it.
Fascinating to see even Ilya feeling like it's crazy that "it's happening and straight out of science fiction"
Sad that the smartest people in AI can't explicitly say out loud that this current iteration of AI is just a fancy, often clunky, knowledge-retrieval tool, which is very different from 'intelligence'. Admitting this would stop the gravy train, sure, but the alternative is worse.
Plenty of people have pointed out that LLMs won't reach AGI.
I believe the evolutionary “value function” he keeps referring to is the motivation to focus in a self-directed way, respond to feedback from our environment, and persist in taking action and making adjustments, so that we improve our likelihood of survival, or our social standing.
Yes, the danger of course being groupthink, such as "Climate Change" being hijacked for "power and money". Satan is the great deceiver, and AI or SSI both possess the potential to pursue ends that are not in alignment with Truth. Just search YouTube for plenty of low-hanging fruit: examples of individuals "recycling" by separating their trash, only for it to be dumped into the same garbage dumpsters (shortcuts).
Ilya is great. Such a deep and original thinker.
He repeatedly refuses to answer the question: How will AGI actually be built? He has no answer.
I've reviewed the details on my AI blog, here:
oscarmdavies.substack.com/p/on-the-sutskever-and-dwarkesh-interview
More cracks in the Singularity myth. This idea of infinite intelligence was needed to catalyze AI. It is now the obstacle we need to overcome. Ilya's superintelligent 15-year-old is what I've been calling the Plurality—intelligence that evolves by adapting to constraints, not by transcending them.
https://techforlife.com/p/the-plurality-a-better-myth-for-ai
One thing I feel machines are missing compared to humans is the notion of passing time. Humans coalesce memories around different timeframes, short and long, and are able to associate them, building associations between notable, surprising experiences that happen across a variety of timeframes. The value function could be seen as a kind of fuzzy associative memory based on past experiences, used to predict the outcome of a situation from weak signals. Even emotions could be described this way.
“Maybe what it suggests is that the value function of humans is connected to emotions and hard-coded by evolution” 💡 I love listening to @Ilya: so low on hype and so dense with significance.
Many of the limits discussed here — jagged generalization, unclear value structure, and the question of what exactly we are scaling — seem to point to a missing level of description.
One way to frame this is to step outside model internals and ask: what kinds of informational situations must a system be able to control, via behavior change, in order to remain viable over time?
A recent comparative analysis across 1,530 species suggests that cognition may be better understood as control over five recurrent informational task domains, acquired in a fixed order under survival constraints — rather than as a property of biological substrate. This framing has been formalized as the Five Task Model.
From this perspective, “world models” are not representations for their own sake, but tools for managing task-relevant informational change across these domains — which seems closely aligned with several of the questions raised in this conversation.
According to Ilya, the optimal way to release AGI would be to do it gradually, in stages, for the reasons he stated and for some others. What we've seen with AI so far is release after release of new, improved AI. So isn't that what a gradual release of AGI might look like? Is it possible they already have AGI? By "they" I mean one of the top players, or possibly two or more of the top players who have reached an agreement. Maybe they're slowly working on safety or some other issue before the release of AGI. Hey, just a thought 😁
Interesting discussion.
The next frontier for general intelligence and value lies in judgment and intuition.
This requires extending how models are trained and evaluated—to recognize conceptual patterns, weigh up evidence, be guided by principles and values, engage in continuous learning and feedback loops to strengthen experience and instinct.
Machines do not learn like humans. Having all the data doesn't give you all the answers. How does a machine learn discernment?
My understanding is that what he’s really pointing to is this:
humans don’t actually understand how intelligence emerges. What we mostly do is stack existing data and hope generalization appears.
As for whether the missing principle should be embedded into the current system or require an entirely new framework, I don’t see this as a contradiction.
A system only becomes practically valuable once it is independent enough to form a closed logical loop of its own. Only then does it make sense to embed it into existing systems. From an operational perspective, this is simply the most efficient path.
Especially in a world where human consciousness is already saturated by massive amounts of information, a full “reset” or paradigm overthrow is something only a small number of individuals can realistically accomplish.
For most people, what actually works is reaching critical nodes within the existing system—points that trigger awareness, adjustment, or redirection—rather than rebuilding everything from scratch.
Idk if you read these, DP, but I listened to your Lane podcast and you were giving molecular-biologist energy. I looked up your background and there was none… so wow, amazing prep made for an amazing convo…