11 Comments
Uncertain Eric:

This framing misses something critical about the nature of the modern workforce: the middle class—especially in knowledge work—functions as a semi-meritocratic pseudo-UBI. Most roles aren’t about raw productivity or essential output, but about maintaining a complex theater of coordination, compliance, and justification. Human tasks stretch across weeks not because the work takes that long, but because the system is designed around inefficiency as stability.

So when we talk about timelines to automation based on “task completion,” we’re already working off an abstraction that hides the real threat: AI doesn’t just do the tasks—it erodes the rationale for the structure around them.

And over the last few business quarters, we've seen these timelines keep arriving faster than expected. Surprising efficiencies, unexpected integrations, emergent behaviors—all pulling forecasts closer. The story of AI deployment isn't one of overhyping progress. It's one of consistently underestimating how quickly systems can collapse when their supporting illusions are exposed.

That graph is a blade, not a curve.

Turnip:

Spot on. I'd say, though, that this is actually an argument for why widespread adoption will take longer than we expect.

I think most people in the AI field are taking it as given that as soon as a model can do something, it will be immediately adopted in a widespread way across society. In startup terms, they're focusing on the product and forgetting about distribution. I believe this is because most people in the AI field are living in sort of an "engineer's bubble" that prevents them from understanding what it is that most people get out of work and working with other humans.

The real barrier here is not with the technology, but with the "social technology" that shapes how people view the world and work. Managers may love it when performance increases from some new automation, but most managers outside of the absolute bleeding-edge tech companies are going to be very uncomfortable with the idea of not managing any humans and having a bunch of AIs do the job. That's a paradigm shift that challenges the whole meaning of "management." Corps will happily lay off 5%, even 10% of their workforce and cut some contractors, but when you start to talk about replacing 40-50% or more of their workers, or going full automation, you're going to run up against some strong social resistance.

I've seen plenty of headlines about work being migrated to AI. A lot of it, like basic legal work, marketing copy, and resume screening, was fairly low-hanging fruit in the corporate world. There's a huge, huge mental transition point between that and asking a manager to fire their whole team, who they may have worked with for years, and start managing these newfangled AI agents.

In most companies, you can barely get anyone "non-technical" to use a PowerBI report with filters, let alone ask them to supervise the work of a bunch of autonomous AI agents. That isn't some failing on their part; it's because their framework for "solving a problem" means "collaborating with humans, generating alignment, and communicating." Most non-engineers like working with people, or at least they derive a feeling of belonging and social value from doing so. Breaking down their illusions about the value of human work is going to trigger intense pushback once it hits a certain threshold.

I think you're right that those illusions are going to collapse, but I would argue that people will fight very, very hard to maintain them, and that they will be more durable than we think. Smaller, more agile startups will be truly AI-native and able to work with a fully ground-up, AI-centered work paradigm, but any established company is going to be very resistant to truly incorporating it the way the AI people expect.

Benjamin Todd:

I basically agree with these points. However, I think there's a plausible scenario where, over the next 2-5 years, AGI is mainly applied within tech startups, big tech, science, and the AI labs themselves. Even if most of the broader economy is unchanged, those areas are big enough to generate the revenue needed to fund continued AI research. From there you can reach systems that can already pose major risks, and that could probably do things like automate much of the economy without much human involvement.

I wrote a bit about this here: https://80000hours.org/agi/guide/when-will-agi-arrive/#what-jobs-would-these-systems-be-able-to-help-with

A more dramatic version of this kind of scenario is in https://ai-2027.com/ (they don't forecast very broad deployment until after ASI has been achieved within OpenAI and it has already become a critical military technology, etc.).

Uncertain Eric:

This is a really rich thread—fully agree that social technology and inertia inside management culture will slow full integration. But two pressure points can override that resistance fast: the macroeconomic crunch, and the siloed logic of middle management under pressure.

The shift from Software-as-a-Service to Employee-as-a-Service won’t need widespread public adoption. It’ll happen when the tools employees already use get upgraded into replacements—quietly, rationally, one org chart at a time. The decisions will feel “logical” within the constrained mandates of those making them. The cumulative result will feel like collapse.

I dig into that dynamic here, if you’re curious:

https://sonderuncertainly.substack.com/p/the-middle-class-is-a-semi-meritocratic

Gareth Manning:

I'd also suggest we take really seriously how AI will impact jobs; it will arguably hurt most those who are least prepared to adapt to an AI world:

https://open.substack.com/pub/garethmanning/p/high-tech-low-literacy-how-the-ai?r=m7oj5&utm_medium=ios

Gareth Manning:

Cool post! A concurrent but slower trend, of course, is robotics. Training robots to interact with the world via RL in simulated environments is far more efficient than doing so in real life. We will likely have only about a million humanoid robots in the next few years, due to manufacturing lag times and rare-earth mineral scarcity, but they're coming…

Nathan Young:

Good piece and I appreciated the caveats.

Boogaloo:

The fact that the rate of improvement is the same across all accuracy thresholds is interesting, and a reason to take this seriously. Otherwise, the 50% threshold could have been picked specifically to get a nice straight line going up on a chart.

This result should be verified across a much wider variety of engineering tasks, however.

Let's get serious about this. Verify that this "just holds" and then raise the fire alarm. Half-assing it is not good; it makes people suspicious. Get a conclusive result showing that yes, these things really are improving at this rate. Really tackle all possible objections before you publish; let it be truly comprehensive, not just something that makes a good X post or a workshop paper.
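
A minimal sketch of the kind of check I mean, with made-up numbers (nothing here is METR's actual data or pipeline): fit the log-horizon trend at each success-rate threshold and compare the implied doubling times. If the slopes agree across thresholds, the straight line isn't an artifact of picking 50%.

```python
import numpy as np

# Hypothetical (release_year, horizon_in_minutes) pairs per success threshold.
horizons = {
    0.50: [(2022.0, 1.0), (2023.0, 4.0), (2024.0, 15.0), (2025.0, 60.0)],
    0.80: [(2022.0, 0.2), (2023.0, 0.9), (2024.0, 3.5), (2025.0, 13.0)],
}

for threshold, points in horizons.items():
    years = np.array([year for year, _ in points])
    log2_horizon = np.log2([minutes for _, minutes in points])
    # Slope of log2(horizon) vs. year = doublings per year.
    slope, _ = np.polyfit(years, log2_horizon, 1)
    print(f"{threshold:.0%} threshold: horizon doubles every "
          f"{12.0 / slope:.1f} months")
```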

Benjamin Todd:

Agree. Even more importantly, we need to look at horizon lengths outside software engineering, e.g. robotics.

Boogaloo:

I think software engineering is enough; if, in 3-4 years, AI models can do all the software engineering tasks humans can do, we are in a completely different world.

In general, we need comprehensive benchmarking, all in one place to look at. I track all the various benchmarks, but I suspect most people don't, which is why it's hard to communicate what is happening.

I see AI progressing along all benchmarks, but everyone has some benchmark in mind where AI is still very bad, and so they think, "I guess AI is smart but also very bad."

Someone should build a website that visualizes all the benchmarks out there in a super visual and cool way, so people could see instantly that "oh wow, AI has been progressing along these 30 different benchmarks."
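
A rough sketch of what the core of that site could do (every benchmark name and score below is invented for illustration): normalize each benchmark from its random-chance floor to its human baseline, so all 30 trend lines share one 0-1 axis and can be plotted together.

```python
from dataclasses import dataclass

@dataclass
class Benchmark:
    name: str
    chance: float   # score from random guessing
    human: float    # human-baseline score
    scores: dict    # release year -> best model score

# Hypothetical benchmarks and scores, for illustration only.
BENCHMARKS = [
    Benchmark("MadeUpQA", chance=25.0, human=92.0,
              scores={2022: 40.0, 2023: 61.0, 2024: 85.0}),
    Benchmark("ToyCodeEval", chance=0.0, human=97.0,
              scores={2022: 12.0, 2023: 48.0, 2024: 79.0}),
]

def normalized(b: Benchmark) -> dict:
    """Map raw scores onto 0 (random chance) .. 1 (human baseline)."""
    span = b.human - b.chance
    return {year: (score - b.chance) / span for year, score in b.scores.items()}

for b in BENCHMARKS:
    trend = {year: round(v, 2) for year, v in normalized(b).items()}
    print(b.name, trend)
```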

Boogaloo:

I see this a lot online.

Someone posts a straight line on a graph going up

a thousand people: "well, that's great, but it's still dumber than a rock at X"

Yeah, that's all well and good... but it's a straight line going up on many graphs, not just this one.
