The most obvious reason AI stocks might crash is that stocks often crash.
Nvidia’s price fell 60% in 2022 alone, as did the stocks of other AI companies. It also fell more than 50% in 2020, at the start of the COVID outbreak, and again in 2018. So we should expect there’s a good chance it falls 50% again in the coming years.
Nvidia’s volatility is about 60%, which means – even assuming efficient markets – it has about a 15% chance of falling more than 50% in a year.1
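The post doesn’t spell out the model behind that 15% figure, but here’s a minimal sketch of one way to get a number in that ballpark, assuming lognormally distributed prices (geometric Brownian motion) and an expected annual return of roughly 8% – both assumptions are mine:

```python
from math import erf, log, sqrt

def crash_probability(volatility, drop, expected_return=0.08):
    """P(stock falls by more than `drop` over one year), assuming
    lognormally distributed prices (geometric Brownian motion).
    `expected_return` is an assumed annual expected return; the mean
    log return is then expected_return - volatility**2 / 2."""
    mean_log_return = expected_return - volatility**2 / 2
    z = (log(1 - drop) - mean_log_return) / volatility
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Nvidia-like parameters: 60% implied volatility, a >50% drop.
print(round(crash_probability(volatility=0.60, drop=0.50), 2))  # ~0.16
```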
And more speculatively, booms and busts seem more likely for stocks that have gone up a ton, and when new technologies are being introduced.
That’s what we saw with the introduction of the internet and the dot-com bubble, as well as with crypto.2
(Here are two attempts to construct economic models of why. This phenomenon also seems related to the existence of momentum in financial prices, and to bubbles in general.)
Further, as I argued, current spending on AI chips requires revenues from AI software to reach hundreds of billions of dollars within a couple of years, and (at current trends) to approach a trillion by 2030. There’s plenty of scope to miss that trajectory, which could cause a sell-off.
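To see how demanding that trajectory is, here’s a quick sketch of the implied compounding. The dollar figures are hypothetical placeholders, not numbers from the post:

```python
def implied_annual_growth(start_revenue, target_revenue, years):
    """Annual growth multiple needed to get from start to target."""
    return (target_revenue / start_revenue) ** (1 / years)

# Hypothetical figures only: going from $20B of AI software revenue
# in 2024 to $1T by 2030 would require ~1.9x growth every year.
print(round(implied_annual_growth(20e9, 1e12, 6), 2))  # ~1.92
```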
Note the question isn’t just whether the current and next generation of AI models are useful (they definitely are), but rather:
1. Are they so useful their value can be measured in the trillions?
2. Do they have a viable business model that lets them capture enough of that value?
3. Will they get there fast enough relative to market expectations?
My own take is that the market is still underpricing the long-term impact of AI (which is why about half my equity exposure is in AI companies, especially chip makers), and I also think it’s quite plausible that AI software will be generating more than a trillion dollars of revenue by 2030.
But it also seems like there’s a good chance that short-term deployment isn’t this fast, and the market gets disappointed along the way. If AI revenues merely failed to double in a year, that could be enough to prompt a sell-off.
I think this could happen even if capabilities keep advancing (e.g. because real-world deployment is slow), though a slowdown in AI capabilities and a new “AI winter” would also most likely cause a crash.
A crash could also be caused by a broader economic recession, a rise in interest rates, or anything that makes investors more risk-averse – like a crash elsewhere in the market or a geopolitical issue.
The end of a stock bubble often has no obvious trigger. At some point, the stock of buyers gets depleted, prices start to move down, that causes others to sell, and so on.
Why does this matter?
A crash in AI stocks could cause a modest lengthening of AI timelines, by reducing investment capital. For example, startups that aren’t yet generating revenue could find it hard to raise money from VCs, and some could fail.
A crash in AI stocks (depending on its cause) might also tell us that market expectations for the near-term deployment of AI have declined.
This means it’s important to take the possibility of a crash into account when forecasting AI, and in particular to be cautious about extrapolating growth rates in investment from the last year or so indefinitely forward.
Perhaps more importantly, just like the 2022 crypto crash, an AI crash could have implications for people working on AI safety.
First, the wealth of many donors to AI safety is pretty correlated with AI stocks. For instance, as far as I can tell, Good Ventures still has legacy investments in Meta, and others have stakes in Anthropic. (In some cases, people are deliberately mission hedging.)
Moreover, if AI stocks crash, it’ll most likely be at a time when other stocks (and especially other speculative investments like crypto) are also falling. Donors might see their capital halve.
That means an AI crash could easily cause a tightening in the funding landscape. This tightening probably wouldn’t be as big as in 2023, but it may still be noticeable. If you’re running an AI safety org, it’s important to have a plan for this.
Second, an AI crash could cause a shift in public sentiment. People who’ve been loudly urging caution about AI systems could get branded as alarmists, or as people who fell for another “bubble”, and could look pretty dumb for a while.
Likewise, it would likely become harder to push through policy change for some years, as some of the urgency would drop out of the issue.
I don’t think this response will necessarily be rational – I’m just saying it’s what the general public will think. A 50% decline in AI stock prices could maybe lengthen my estimates for when transformative AI will arrive by a couple of years, but it wouldn’t have a huge impact on my all-considered view about how many resources should go into AI safety.
Finally, don’t forget about second order effects. A tightening in the funding landscape means projects get cut, which hurts morale. A turn in public sentiment against AI safety means slower progress and more media attacks…which also hurts morale. Lower morale leads to further community drama…which leads to more media attacks. And so on. In this way, an economic issue can go on to cause a much wider range of problems.
One saving grace is that these problems would be happening at a time when AI timelines are lengthening, and so hopefully the risks are going down — partially offsetting the damage. (This is the whole idea of mission hedging – have more resources if AI progress is more rapid than expected, and fewer otherwise.) However, we could see some of these negative effects without much change in timelines. And either way it could be a difficult experience for those in AI safety, so it’s worth being psychologically prepared, and taking cheap steps to become more resilient.
What can we do about this, practically? I’m not sure there’s that much, but being on record that a crash in AI stocks is possible seems helpful. It also makes me want to be more cautious about hyping short-term capabilities, since that makes it sound like the case for AI safety depends on them. If you’re an advocate, it could be worth thinking about what you’d do if there were an abrupt shift in public opinion.
It’s easy to get caught up in the sentiment of the day. But sentiment can shift, quickly. And shifts in sentiment can easily turn into real differences in economic reality.
1. Volatility is the standard deviation of returns over a year. I’m using the implied volatility, which can be estimated from option prices. This is better than the historical realised volatility because it’s a forward-looking estimate: the historical record will be missing even bigger price moves that could have happened, but didn’t. I’d love to see further work estimating the base rate of crashes.
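For illustration, here’s a rough sketch of how an implied volatility can be backed out from an observed option price, using the standard Black-Scholes formula and bisection. The option parameters are hypothetical:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call_price(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + vol**2 / 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

def implied_vol(option_price, spot, strike, rate, t, lo=0.01, hi=3.0):
    """Back out the volatility that reproduces an observed option price
    (bisection works because call prices increase with volatility)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if bs_call_price(spot, strike, rate, mid, t) < option_price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: a 1-year at-the-money call on a $100 stock,
# priced at $24, with rates at 4%, implies roughly 57% volatility.
print(round(implied_vol(24.0, 100.0, 100.0, 0.04, 1.0), 2))  # ~0.57
```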
2. Similar energy: https://www.vox.com/future-perfect/367435/artificial-intelligence-openai-chatgpt-boom-bust-safety-superintelligence-google
See discussion of this post on the EA Forum:
https://forum.effectivealtruism.org/posts/LJzvCWnwnSSxrjXzi/ai-stocks-could-crash-and-that-could-have-implications-for