I found his perspective a bit parsimonious. We have a sample size of one, and he's extrapolating it out as the only option; so far, that has proven somewhat naive. Anyway, I'm team Dwarkesh. Interesting video and good follow-up.
I am broadly in agreement with the part about continual training being the most probable path to AGI. I think we should see movement on this as soon as next year. The company I work for - NonBioS - is on track to start testing long-context, continually trained systems as soon as year end. My estimate is that this might result in AGI (or a clear roadmap to it) as soon as 2027.
I greatly admire the clarity of your exposition - "your writing," I'm trying to say. See, I can recognize it but just can't do it!