9 Comments
Boogaloo:

As someone who started in AI around 10 years ago, before the deep learning era really took off: everything I see now I used to believe was science fiction. I've personally shortened my timelines from:

1. Never happening in my lifetime (pre-GPT era)

2. Maybe within 100 years (GPT-2 era)

3. Maybe within 20 years (GPT-4 era)

4. Could happen within the next 5 years (reasoning model era)

So yeah, another data point. I was near the center of all of this in some way, having studied at UCL and Oxford while some of my own professors were making the breakthroughs.

I would say the above is a common trajectory for most AI researchers. Just 10 years ago, talking about AGI would mostly get you ridiculed in AI research. Only DeepMind was really trying and talking about it openly, and even they would hedge it as 'a far-out goal', etc.

Now? Half a dozen companies are explicitly aiming for AGI and believe it will arrive soon.

Boogaloo:

Furthermore, it's become strikingly obvious to me that

intelligence = pushing a lot of data through a narrow but large enough chokepoint.

That's all evolution did!

So we just need a flexible chokepoint that is small but large enough, that can update itself in real time, and that can push enough data through, and it's game over. We are getting there.
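If you read the "chokepoint" as an information bottleneck, the idea has a standard concrete form: an autoencoder whose narrow latent layer forces compression. Here is a minimal sketch of that reading; all layer sizes and names are illustrative assumptions, not anything from the comment itself.

```python
# Minimal sketch: the "chokepoint" read as an information bottleneck.
# A wide input is squeezed through a narrow latent layer, so the model
# must learn a compressed representation to reconstruct the data.
# All dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckAutoencoder(nn.Module):
    def __init__(self, input_dim=1024, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, bottleneck_dim),  # the narrow "chokepoint"
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)              # compressed code
        return self.decoder(z), z

model = BottleneckAutoencoder()
x = torch.randn(64, 1024)                # a batch of "data" to push through
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # pressure to keep only what matters
loss.backward()
```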

Boogaloo:

I will be very surprised if we don't have reliable autonomous agents that can do 'most' tasks on a computer, like booking a plane ticket consistently.

Digital AGI seems soon! Digital ASI, though, may run into some real bottlenecks.

And perhaps the embodied stuff will take a whole lot longer.

But in 5 years? Yeah, for sure: I can just spin up my computer, ask the AI to write this particular post for me on this Substack, and it does so 99% reliably.

Pauliina Laine:

Does 80k have plans to create a database of content for people looking to transition into AI safety? The platform could prioritize updates on areas of the field that might be neglected and where there's a need for more doers. It could include, for example, introductory articles like the ones 80k already has for so many problem areas.

I had this idea while researching the "where we're at and what needs to be done" of AGI preparedness, especially from the point of view of AI policy and governance for x-risk mitigation. There isn't really a platform that lists all the relevant resources for getting started, articles and policies to read, research institutions or researchers to follow, etc. So I thought: why not help others onboard into the field more efficiently and with less duplication of effort?

Benjamin Todd:

Yes, we're working on something like that! There's a summary of the guide here: https://80000hours.org/agi/guide/summary/

We're working on a new landing page.

Pauliina Laine:

Thank you, amazing to hear🙌🏼

Uncertain Eric:

Most experts in AGI forecasting are compromised—not maliciously, but structurally. Their timelines are skewed by economic incentives, institutional inertia, and ego. Many have staked their reputations, careers, or public personas on long timelines, and recalibrating now would mean confronting their own obsolescence. Audience capture among influencers and employment fragility among academics further distort objectivity.

Worse, the entire framing of AGI is often muddled. What most people call “AGI” in public discourse is really a stand-in for superintelligence—omniscient, autonomous, metaphysical. That’s not the benchmark. General intelligence doesn’t mean infallible or infinite. It means sufficient breadth and flexibility to operate across tasks, domains, and contexts—just like a human mind, with all its flaws.

There’s a deeper epistemic error too: the assumption that humans are individual intelligences rather than nodes in larger collective intelligences. Language, tools, culture—these are distributed systems. AGI doesn’t need to be “a mind” in the human sense. It just needs to mirror the capacity of the emergent human ensemble.

And by that metric, AGI emerged in 2023. GPT-4-level models, connected to toolchains, memory layers, and plugin ecosystems (like the ChatGPT store), function as a cloud-based general intelligence. Not perfect, not omnipotent—but able to navigate and perform in most cognitive domains where humans operate, often faster, often more reliably.
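As a concrete reading of that claim, here is a minimal, hypothetical sketch of the "model plus toolchain" loop: the model either answers directly or requests a tool, and a dispatcher runs the tool and feeds the result back. `call_model` is a scripted stand-in for a real chat-model API, and both tools are toys; none of these names come from any actual product mentioned above.

```python
# Hypothetical sketch of the "model + toolchain" pattern: a language
# model either answers directly or requests a tool; a dispatcher runs
# the tool and feeds the result back. `call_model` is a scripted
# stand-in for a real chat-model API, and both tools are toys.
import json

SCRIPTED_REPLIES = [                      # stand-in for real model output
    json.dumps({"tool": "calculator", "args": {"expression": "6 * 7"}}),
    "The answer is 42.",                  # plain text = final answer
]

def call_model(messages):
    """Hypothetical stand-in: returns the next scripted reply."""
    return SCRIPTED_REPLIES.pop(0)

TOOLS = {
    "search": lambda query: f"top results for {query!r}",     # toy tool
    "calculator": lambda expression: str(eval(expression)),   # toy tool only
}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        try:
            request = json.loads(reply)   # JSON reply = tool request
        except json.JSONDecodeError:
            return reply                  # non-JSON reply = final answer
        result = TOOLS[request["tool"]](**request["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("What is 6 times 7?"))    # -> "The answer is 42."
```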

The knowledge was already there. The development and distribution infrastructure followed. AGI wasn’t a moment—it was a deployment. And the window for denying it is closing fast.

Alvin Ånestrand:

Have you read the AI 2027 Timelines Forecast? https://ai-2027.com/research/timelines-forecast

It's a forecast for "superhuman coders" rather than AGI, but it's very informative anyway.

Benjamin Todd:

Fair point, I should maybe have linked to that too.
