Discussion about this post

Gaurav Yadav

This is a typical form of coping with something that is pretty unprecedented. I have a lot of empathy for people who say things like "it doesn't have real-world applications," but like you say, I think they're wrong. There does seem to be a continuous dynamic where AI systems fail at certain things, and such failures will continuously be pointed out as signs that we should not worry, until things get crazy. I imagine the new METR paper, for example, will be used this way.

Chris W

It’s the claims by the industry that language models can soon be reliable general autonomous agents (AI in the sci-fi sense) that are overhyped to the point of mass psychosis.

In this sense, the mania has come before significant real-world adoption, in contrast to the Internet, where the bubble mentality set in only after almost ten years of accelerating adoption.

So the situation around LLMs is extraordinary, caused by the massive Eliza effect in combination with the huge but delimited capabilities of chatbots.

