When I see people claiming genAI hasn’t found ‘real-world application’, I can’t help wondering what planet they’re on. By every metric I can find, AI looks like the most rapidly adopted technology in history. Here’s some data.
1. ChatGPT is probably the fastest growing product in history. This is a chart comparing how long it took prominent tech companies to reach 100 million users.
2. ChatGPT just became the fifth most visited website in the world, with over 5 *billion* monthly visits, more than Wikipedia or Netflix. AI doesn’t have ‘millions’ of users; it has hundreds of millions every week, under three years from launch, and it’s still growing 20% per month.
3. Collectively, AI startups are growing actual revenue roughly five times faster than previously hyped tech companies.
4. Several AI startups have already reached $100m ARR even faster than ChatGPT did.
5. Frontier labs are growing revenue 3x per year. (Interestingly, this is easily enough to continue the trend of larger and larger training runs.)
6. Surveys show genAI is probably the fastest adopted technology in history. Two years after ChatGPT launched, about 40% of working-age people in the US had used genAI, and about 10% were using it daily (and it’s higher today). That’s much faster than smartphones, the internet or PCs.
7. At Google, over 50% of approved code characters are now originally generated by an LLM. Microsoft’s CEO likewise said in April that 20-30% of the company’s internal code is AI-generated.
And I haven't even brought up how AI was used to WIN A FRICKIN NOBEL PRIZE.
Finally, it takes time to adjust to new technology, so current adoption will always lag a long way behind what’s possible. Adoption is a backwards-looking indicator.
Yes, it's true investment in AI runs ahead of its current revenues ($100s of billions vs $10s of billions), but that's a rational response by investors. Investments should be made based on the expectation of future returns, not current returns. Investors are simply betting that current trends in revenue will continue another 2-3 years.
GenAI continues to have many limitations, but saying “it’s not really useful” when hundreds of millions of people enthusiastically use it all the time seems plainly false. It’s time to get serious about what it can do, what it might be able to do in the near future, and what that’s going to mean for society.
This is a typical form of coping with something that is pretty unprecedented. I have a lot of empathy for people who say things like “it doesn’t have real-world applications”, but like you say, I think they’re wrong. There does seem to be a continuous dynamic where AI systems fail at certain things, and such failures will keep being pointed out as signs that we shouldn’t worry, right up until things get crazy. I imagine the new METR paper, for example, will be used this way.
It’s the claims by the industry that language models can soon be reliable general autonomous agents (AI in the sci-fi sense) that are overhyped to the point of mass psychosis.
In this sense the mania has come before significant real world adoption and in contrast to the Internet where the bubble mentality set in after almost ten years of accelerating adoption.
So the situation around LLMs is extraordinary, caused by the massive Eliza effect in combination with the huge but delimited capabilities of chatbots.