The market expects AI software to create trillions of dollars of value by 2027
Does that seem high or low to you?
Here’s a simple argument (which could fail in various ways but I think is suggestive):
Nvidia’s revenue was about $60bn last year, and will likely be $110bn this year.1
This implies that in total around $100bn will be spent on AI chips this year.2
Moreover, Nvidia’s PE ratio is about 75, compared to the average company at about 25,3 which roughly means the market expects its profits to rise another 3-fold compared to the average company.4 (I discuss complications at the end).
If Nvidia can maintain its current profit margin, this would roughly require Nvidia’s revenues to reach $180bn (and grow at average rates after that).
If Nvidia loses margin or market share, then if the market is right about Nvidia’s current price, the total AI chip market would need to be even larger.5
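The PE argument above can be written out as back-of-the-envelope arithmetic. This is a sketch using the post's round numbers (and last year's ~$60bn of revenue as the base for the ~$180bn figure), not precise market data:

```python
# Back-of-the-envelope version of the PE argument. All inputs are the
# post's round numbers, not precise market data.
pe_nvidia = 75
pe_average = 25

# If Nvidia's PE converges to the average, its profits must grow about
# 3x relative to an average company.
required_profit_growth = pe_nvidia / pe_average   # 3x

# Holding the profit margin fixed, revenue must grow in proportion.
# Using last year's ~$60bn of revenue as the base:
revenue_last_year_bn = 60
implied_revenue_bn = revenue_last_year_bn * required_profit_growth   # ~$180bn
```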
From this we can infer the future software revenues required for those chips to be profitable.
In a nutshell:
Nvidia’s current market cap implies the future AI chip market reaches over ~$180bn/year, then grows at average rates after that.
For a data centre to actually use these chips in servers costs another ~80% for other hardware and electricity, then the AI software company that rents the chips will typically have at least another 40% in labour costs.
This means with $200bn/year spent on AI chips, AI software revenues need to reach $500bn/year for these groups to avoid losses, or $800bn/year to make normal profit margins. That would likely require consumers to be willing to pay up to several trillion dollars for these services.
The typical lifetime of a GPU implies that revenues would need to reach these levels before 2028.
This isn’t just about Nvidia – other estimates (e.g. the price of Microsoft) seem consistent with these figures.
These revenues seem high in that they require a big scale up from today; but low if you think AI could start to automate a large fraction of jobs before 2030.
If market expectations are correct, then by 2027 the amount of money generated by AI will make it easy to fund $10bn+ training runs, potentially unlocking another wave of capabilities.
The AI chips bought today are mostly installed in clouds (e.g. Microsoft Azure) and then rented to AI software companies (e.g. Anthropic).
An Nvidia H100 SXM chip costs about $30,000.6 If it lasts for 3 years, that’s a capital cost of $1 per hour.7
Typically, at the moment, a cloud provider will rent the chip for more like $2 per hour.8 The extra goes towards:
Additional hardware required to run the server, perhaps 35% on top of the cost of the GPUs.9
The electricity required to run the data centre, which is about 30% on top of the capital costs.10
Other costs such as staffing and real estate.
A profit margin for the data centre, perhaps 15%.
In total, these add up to around 2.1x the capital spent on GPUs, lining up with the $2/hour figure.
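The $1/hour → $2/hour stack above can be sketched numerically. The hardware, electricity, and margin multipliers are the post's estimates; the "other costs" line is my own rough placeholder (the post gives no figure for staffing and real estate), chosen so the stack lands near the ~2.1x total:

```python
# Sketch of the data centre cost stack. Multipliers for hardware,
# electricity, and margin come from the post; "other_costs" is my
# placeholder assumption.
gpu_cost = 30_000                # H100 purchase price, $
lifetime_hours = 3 * 365 * 24    # 3-year life

capital_cost_per_hour = gpu_cost / lifetime_hours   # ~$1.1/hour

extra_hardware = 1.35   # +35% server hardware on top of the GPUs
electricity = 1.30      # +30% electricity on top of capital costs
other_costs = 1.05      # staffing, real estate (assumed)
margin = 1.15           # ~15% data centre profit margin

total_multiplier = extra_hardware * electricity * other_costs * margin  # ~2.1x
rental_per_hour = capital_cost_per_hour * total_multiplier
# ~$2.4/hour; in the ballpark of the ~$2/hour figure (the post rounds
# the capital cost down to $1/hour).
```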
Then if an AI software company like Anthropic rents a chip for $2/hour, they’re going to want to earn significantly more than that from the software they run on the chips.
If we assume the budget of AI software providers is about 70% compute and 30% other costs,11 then their total costs (mainly staff) must be about 40% on top of the data centre revenues. In addition, typically a software company would want to earn an operating profit margin of at least 30%.12
So this would require the AI software company to earn around $4 of revenue per $1 spent on GPUs.
Running with $4/hour, we can roughly say that if $200bn per year is being spent on AI chips,13 AI software companies will need to eventually earn about $500bn per year for them and the data centres to avoid losses, or $800bn per year for normal profit margins. (I’ll discuss over what time frame in the next section.)
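The chain from chip spending to required software revenues is just a product of the multipliers above (a sketch with the post's round figures):

```python
# Rough arithmetic for the $200bn -> $500bn / $800bn chain.
chip_spend_bn = 200

datacentre_uplift = 1.8   # +80% other hardware and electricity
software_uplift = 1.4     # +40% software-company labour costs

# Break-even: chip, data centre, and software-company costs are covered,
# but nobody makes a profit.
breakeven_revenue_bn = chip_spend_bn * datacentre_uplift * software_uplift  # ~$500bn

# With normal profit margins, the rule of thumb above is ~$4 of software
# revenue per $1 spent on GPUs.
revenue_per_gpu_dollar = 4
normal_margin_revenue_bn = chip_spend_bn * revenue_per_gpu_dollar  # $800bn
```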
And then consumers won’t be willing to spend $800bn per year on AI software unless it’s generating a lot of value for them – value they can’t easily get more cheaply elsewhere.
It’s hard to know how much of the value created for consumers will be captured by the software companies as revenues. In theory, it could range from “almost all” to “almost none”.
In the long term, in a competitive market, typically the new technology gets copied and commoditised, driving the price down to near costs, and most of the value ends up as consumer surplus.
However, the AI software market is not at equilibrium. In practice, the leading models are far more useful than the generation behind, and the companies selling these models have a significant lead, so they should be able to capture a significant fraction of the value while they maintain that lead.
I’d love better advice on how to model this, but right now my guess is that the AI software companies will capture around 25% of the “consumer value” they create in the coming years (i.e. if consumers were willing to pay up to $100 for the AI software, they in fact pay $25).
If we run with that, then the total amount of consumer value created by the AI models running on the $200bn of chips would need to be about $3.2 trillion.
Consumer surplus would be $2.4 trillion (roughly 2.4% of world GDP — though the actual impact on GDP could be less).
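Under the 25% capture assumption, the implied consumer value and surplus follow directly (the ~$100tn world GDP figure is an assumed round number):

```python
# Implied consumer value under the 25% capture assumption.
software_revenue_bn = 800
capture_rate = 0.25            # post's guess: firms capture ~25% of value

consumer_value_bn = software_revenue_bn / capture_rate          # $3,200bn
consumer_surplus_bn = consumer_value_bn - software_revenue_bn   # $2,400bn

world_gdp_bn = 100_000         # ~$100tn, assumed round figure
surplus_share = consumer_surplus_bn / world_gdp_bn              # ~2.4%
```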
Is this estimate just based on Nvidia? How else can we cross-check it?
One source is consultancy estimates of the size of the AI market in 2030. Most of these are around $1 trillion of revenues and a couple of trillion dollars of GDP, so line up pretty well.14
We can also compare this to the market prices of AI software companies. For example, Microsoft has around 23% market share of the cloud market (and has recently purchased about 30% of leading GPUs).15 If they were to use these to earn 23% of AI software revenues, then given total software revenues of $800bn (as above), that would be $180bn per year. Over the last 12 months Microsoft’s revenue was $240bn, so another $180bn would be +75%, which seems roughly consistent with Microsoft’s current PE ratio of 35 (about 40% above the average), or the +60% Microsoft has risen since the start of 2023.
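The Microsoft cross-check above is a few lines of arithmetic (using the post's rough figures):

```python
# Microsoft cross-check, using the post's rough figures.
cloud_share = 0.23                 # Microsoft's share of the cloud market
total_software_revenue_bn = 800

msft_ai_revenue_bn = cloud_share * total_software_revenue_bn   # ~$184bn
msft_current_revenue_bn = 240                                  # trailing 12 months
implied_growth = msft_ai_revenue_bn / msft_current_revenue_bn  # ~+77%, i.e. roughly +75%
```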
(Interestingly, the prices of OpenAI and Anthropic seem consistent with a somewhat smaller (though still large) AI software market. For example, if Anthropic can capture 5% of the AI software market, at normal margins it would make net income of $12bn, so at a PE ratio of 25, would be worth $300bn, which is over 10x its current market value. However, these investments are also riskier, so investors may well want several times the returns compared to a stock like Microsoft to be equally happy to hold them.)
Finally, almost all AI-exposed companies have gone up a lot since GPT-3, suggesting the market expects significant profits across the sector. I haven’t carefully calculated the aggregate gain of AI exposed stocks vs. the market, but it seems around $1 trillion. And market cap is the discounted value of future profits – revenues would need to be higher.
So I think we can say that the stock prices of Nvidia, large AI software companies, and other chip makers, as well as analyst estimates, are all consistent with AI software earning around $1 trillion a year of revenues.16
What timeframe is implicit in current spending?
Data centre GPUs probably last 3-6 years, but after two years they have to compete with the next generation of chips, so they probably earn most of their revenues in the first 3 years or so.
So, very roughly we can say revenues need to kick in within a couple of years of the GPU purchases.
A more precise method is to amortise the costs of each cohort of GPUs, and add them up.
For example, if $100bn is spent on GPUs in 2024, and we spread those costs out over the next four years, that’s $25bn per year. Additional data centre costs are +80%, and additional software company costs are +40% on top of the capital costs, so total AI software revenues would need to average $63bn per year 2024-2028 to cover costs.
But that ignores the GPUs that were bought in 2021-2023. Doing a similar analysis for those adds another $80bn, making a total of ~$140bn in revenues required in 2024.
And this is just the amount required for the data centre and software companies to avoid losses. If they also want to make profit, the amount would need to be more like $210bn. (See this worked through in a spreadsheet. The bar chart at the top is just these figures plotted.)
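The cohort method can be sketched as a small function. The pre-2024 spends below are my own rough guesses, chosen only to line up with the ~$80bn figure for earlier cohorts above; the 2.52 multiplier is the break-even 1.8 × 1.4 stack:

```python
# Cohort amortisation: spread each year's GPU spend over a ~4-year life,
# then apply the break-even cost multipliers (1.8 * 1.4 = 2.52).
# Pre-2024 spends are rough guesses, not sourced figures.
def required_revenue_bn(spend_by_year, year, lifetime=4, multiplier=2.52):
    """Break-even AI software revenue needed in `year`, in $bn."""
    return sum(
        spend / lifetime * multiplier
        for cohort, spend in spend_by_year.items()
        if cohort <= year < cohort + lifetime
    )

spend = {2021: 30, 2022: 40, 2023: 55, 2024: 100}  # $bn (pre-2024 assumed)
revenue_2024 = required_revenue_bn(spend, 2024)    # ~$140bn, matching the text
```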
Let’s look forward to 2027. Suppose total spending on GPUs exceeds $200bn by then — that’s the total increase implied by Nvidia’s stock price, and requires just 35% growth in sales per year (similar to the 5yr average).
Then, applying the same method suggests AI software revenues would need to be over $400bn per year to avoid losses and $600bn for normal profit margins. And that would likely require creating consumer value of over $2 trillion.
Another (weak) clue on timing is that the average analyst expects Nvidia to have ~25% revenue growth for the year ending Jan 2026 vs. Jan 2025. If growth is below that, the market might be disappointed.
One caveat to all the above is that large tech companies like Google and Microsoft spend tens of billions per year on R&D. They might view purchases of GPUs today effectively as the R&D cost of developing the next generation of AI models (to run on future chips), and be happy to make losses on them.
If they’re collectively willing to lose $100bn per year, then you can reduce all the figures above by $100bn.
However, that turns out not to change the basic picture very much – it just delays when revenues need to arrive by about a year.
Moreover, my impression is that the market has recently come to expect hundreds of billions of dollars of AI software profits rather than losses on GPUs. And any sign of weakness in short-term margins or profits seems to be met with significant stock price falls.
So, while I can’t prove that the market hasn’t discounted in many years of losses to be followed by much greater profits in the future, it’s not my expectation.
What does this mean?
I mainly leave this up to you.
Nvidia’s price implies spending on GPUs will rise to around $200bn, but then plateau.
However, if you think AI is going to go on to automate a large fraction of the economy, and that might become widely recognised before 2030, that would mean tens of trillions of dollars of GDP (maybe more), which might mean trillions of spending on AI chips.
Personally I don’t think we have much reason to expect the market to efficiently price something like the singularity, so I don’t see the fact that it apparently hasn’t as much evidence against the chance of one. (Which is why around half my equity exposure is in AI companies.) Though you could take the opposite position.
Setting the singularity aside, if you think that the current and next generation of AI models aren’t on track to generate trillions of dollars of consumer value before 2027, then the market could get disappointed, and the price of AI stocks could fall.
Hitting $800bn a year of revenue represents a big scale up from where we are today. OpenAI, Anthropic, and DeepMind probably only have about $5bn of revenue today – around 160 times less than the $800bn required.
The broadest definitions of the size of the AI market I’ve seen are around $100-$200bn in 2023, so with those, another 4-8x growth is needed.
At this scale, the models would need to be adding several percent to world GDP.
LLMs seem most useful for coding, marketing and customer service, and these jobs represent about 1%, 0.9% and 1.8% of the work force respectively (or about 2%, 1.8% and 0.9% of wages),17 so if these workers all become 30% more productive, that would be +0.9% to GDP, which would be about half of the way there.
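The GDP arithmetic above can be written out, assuming labour income is roughly 60% of GDP (my assumption, which the text doesn't state explicitly):

```python
# GDP arithmetic for the LLM-exposed occupations above.
# Assumes labour income is ~60% of GDP (my assumption, not the post's).
wage_shares = {"coding": 0.02, "marketing": 0.018, "customer_service": 0.009}
labour_share_of_gdp = 0.6
productivity_gain = 0.30   # each affected worker becomes 30% more productive

gdp_boost = sum(wage_shares.values()) * labour_share_of_gdp * productivity_gain
gdp_boost_pct = gdp_boost * 100   # ~0.85%, i.e. roughly +0.9% to GDP
```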
Hitting +2% GDP seems totally plausible. AI software revenues are maybe doubling each year, so that gets you to 4x in two years and over 50x in 6. And the models seem to provide a real productivity boost.
In fact, I think there’s a chance AI speeds way past these market expectations (for example if AI agents start to work soon).
However, it also seems plausible the market gets disappointed along the way. If GPT-5 isn’t as useful as hoped, or revenues fail to double one year, then stock prices for AI companies could fall.
And this could be true even if the eventual outcome is an intelligence explosion.
Maybe current AI models are overhyped, but the eventual impact of AI is still underhyped.
Why does this matter?
I think it’s helpful to keep in mind there could still be AI busts in the short term. A bust could make AI risk advocates look like alarmists and drain political will for regulation (more).
I also find the economic perspective often gets neglected in forecasting AI.
For example, Metaculus projects the largest training run for an AI system will be around 10^28 FLOP in 2032. If that’s 6% of the compute available that year, then total spending on AI chips would need to be around $250bn. That in turn would imply that total revenues for AI models would need to be on track to hit $1 trillion. You can then ask yourself whether that seems plausible or not as a way to check your forecast (it does to me).
If you expect the largest training run to use over 10^30 FLOP in 2032, however, then —holding all else equal — that would require 100x the spending on AI chips, which could require model revenues of $100 trillion, which is 100% of current GDP. Which would probably mean the singularity has already started.
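The two training-run scenarios above generalise to any training-run size, if we assume chip spending scales linearly with FLOP (a strong simplifying assumption), anchored to the 10^28 / $250bn scenario:

```python
# Scaling check for training-run forecasts. Assumes chip spending scales
# linearly with training FLOP, anchored to the post's 10^28 scenario
# ($250bn of chips), with software revenue ~4x chip spend.
def implied_software_revenue_bn(train_flop,
                                baseline_flop=1e28,
                                baseline_chip_spend_bn=250,
                                revenue_multiplier=4):
    chip_spend_bn = baseline_chip_spend_bn * (train_flop / baseline_flop)
    return chip_spend_bn * revenue_multiplier

metaculus_2032 = implied_software_revenue_bn(1e28)   # ~$1,000bn = $1tn
aggressive = implied_software_revenue_bn(1e30)       # ~$100tn, ~100% of current GDP
```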
In the other direction, if Nvidia’s price doubles again, that would imply market expectations for the size of the AI chip and software market have doubled again, which could be a useful input into expectations about AI, especially in the short term.
Another big variable in AI forecasting is how much funding will be available for training runs in the future. If the market expects AI software to be generating hundreds of billions of revenue by 2027, then it will be easy to justify $10bn+ training runs, suggesting AI scaling can continue beyond this point.
However, a further expansion of AI beyond this level doesn’t seem to be currently priced into the market.
This means that from an investing point of view, if you think AI is on track to expand beyond low trillions of GDP, and people will realise this say within 10 years, then AI chip makers still look undervalued. However, the analysis also suggests that aggressive earnings growth in the next few years is already priced in, so in the meantime there’s plenty of scope for them to crash.
A few more nerdy notes
Is the above analysis just based on a bubble in a single name, Nvidia?
Not as far as I can tell. Nvidia has gone up the most, but that’s because it’s basically the only large company that’s getting most of its revenues from AI.
In other words, as demand for AI chips went up ~5-fold, Nvidia’s stock price rose about 5x (and then another 2-3x again because its margins expanded).
Taiwan Semiconductor only earned about 7% of its revenues from AI chips last year, so if AI chip demand increased 5-fold over the last two years, that only boosted TSM’s revenues by about 5.6%, which is why TSM didn’t go up anywhere near as much as Nvidia. (That will change in the future, I’m just talking about the past.)
Overall, many stocks have responded to the increase in AI demand (Microsoft, Google, TSM, ASML, Broadcom, AMD, SK Hynix, OpenAI etc), and my rough impression is they’ve responded in a way that’s consistent with Nvidia’s price change, once you take into account how much of their profits actually come from AI.
So, the above doesn’t seem to be about the irrational behaviour of a single stock. Though it’s true the broader AI move could be irrational.
OK, but aren’t the AI stocks as a whole in a bubble?
Maybe. The market could be way wrong about the future profitability of AI, in which case I’d invite you to buy some long dated puts on Nvidia.
We shouldn’t expect the market to be efficient when it comes to the singularity, so there’s no value in the exercise.
I agree we shouldn’t expect it to fully price in the singularity. However, I still think it’s useful to work out how much has been priced in, as a way of gauging what might happen in the near term, and how surprised people are likely to be.
Can we really infer the expected value of profits from stock prices?
Here are a couple of rough thoughts on the complications, for those who are interested.
First, we also need to consider the riskiness of returns. The price of Nvidia should represent some kind of average over future scenarios, and some of these probably include crashes.18
Nvidia’s implied volatility is about 60%. This means the market expects there’s about a 15% chance its price falls more than 50% over the next year.19 This is much riskier than the typical stock.
Moreover, the typical investor is pretty risk-averse, so they want to be compensated handsomely in the positive scenarios.
Let’s suppose the typical investor wants to make 10% a year. They think Nvidia has a 40% chance of falling 50% and a 60% chance of growing profits a lot in the next three years. How much do profits need to grow to make it equally attractive as other investments?
If risk-neutral, profits would need to grow 1.9x in the positive scenario for Nvidia to be equal to other investments.20
But the typical investor isn’t risk neutral – and losing half your money is pretty bad – so they’ll want profits to grow more than 1.9x in the good scenario.
The typical investor might, say, want to receive 14% per year for investing in Nvidia (based on Nvidia’s beta), which would mean profits would need to rise 2.1x in the good scenario for it to be an attractive investment.21 Overall, the expected value of future profits would need to be about 10% higher than what a risk-neutral investor would accept.22
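The risk-adjustment arithmetic in footnotes 20-22 can be reproduced in a few lines:

```python
# Risk-adjustment arithmetic from footnotes 20-22.
p_up, p_down, downside = 0.6, 0.4, 0.5   # 60% good scenario; 40% chance of -50%
years = 3

def required_upside(annual_return):
    """Profit growth needed in the good scenario to hit the target return."""
    target = (1 + annual_return) ** years
    return (target - p_down * downside) / p_up

risk_neutral = required_upside(0.10)   # ~1.9x (10%/year hurdle)
risk_averse = required_upside(0.14)    # ~2.1x (CAPM-style 14%/year hurdle)

# Expected value of future profits under each hurdle:
ev_neutral = p_up * risk_neutral + p_down * downside   # = 1.1**3
ev_averse = p_up * risk_averse + p_down * downside     # = 1.14**3
premium = ev_averse / ev_neutral - 1                   # ~11%, i.e. roughly 10%
```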
What would happen in practice is that Nvidia’s price would be about 10% lower today, so that investors who buy it get that extra 10% in returns as compensation for the risk. That will make Nvidia’s PE ratio about 10% lower than it would be without accounting for the extra risk.
In other words, if a stock is more risky than average, that will lower its PE ratio.
That means that if we compare its PE ratio to the average, and use that to estimate future earnings, its earnings will appear lower than the expected value. So the method I’ve taken will somewhat understate Nvidia’s future earnings.
(Worked example: company A will deliver $10 of future earnings with certainty, and company B will deliver the same expected value of future earnings but with a lot of risk. A might have a PE ratio of 20; while suppose B has a PE ratio of 10 to be equally attractive. If we assume company B’s PE ratio will converge to A, to do that its earnings would need to halve. But the expected value of earnings is actually the same. So this method has underestimated the expected value of future earnings by 50%.)
Another complication is interest rates. If AI causes a growth boom, that would most likely increase interest rates (though maybe not), which decreases the value of stocks. If the amount you can earn risk-free on cash goes from 5% to 10%, then average PE ratios would need to go from, say, 23 to 18 to keep stocks competitive with cash, which would reduce prices by about 20% (the exact amount could be smaller or larger depending on average investor preferences).
That said, these corrections seem under a factor of 2, so I doubt either is big enough to significantly change the bottom line.
As of April 2024, according to Yahoo Finance, the average sell side analyst thinks Nvidia’s revenue over their financial year ending Jan 2025 is $110bn (there is significant visibility into GPU orders over the next year, so this figure has some grounding).
About 80% of this is for their data centre segment, which is mainly AI GPUs. Nvidia has about 90% market share of the total AI chip market, so the total AI chip market is roughly $100bn. (Note people often say Nvidia has 95%+ market share of the AI data centre market, but this ignores Google TPUs; and there’s currently a big ramp up of other Nvidia competitors going on.)
For example, see https://www.multpl.com/s-p-500-pe-ratio
A PE ratio of 75 means for each $75 of shares of Nvidia, the company earns $1 of net income. If we assume that in the long-term, Nvidia’s PE ratio converges to the same as the average company, then net income needs to grow about 3-fold, because if it earned $3 of net income per $75 of shares, the new PE ratio would be 25, close to the average. There are some complications I discuss at the end of the post, though I don’t think that significantly changes the bottom lines.
Nvidia currently earns about $50 of net income per $100 of revenue. If that fell to a more historically normal level of $35, then Nvidia would need to earn about 40% more revenue to have the same net income. Likewise, Nvidia’s market share of AI datacentre chips is about 90%. If that fell to 80%, then the total chip market would need to be about 12.5% larger in order for Nvidia’s revenues to remain the same.
The release price was around $17,500 in 2022, but more recently chips have been sold for $30,000 and sometimes over $40,000.
Semianalysis, “Nvidia Blackwell Perf”, April 2024 has a more detailed estimate, which comes out at around $1/hour for an H200 server for the GPUs alone (ignoring other hardware). A100s are a bit cheaper and B100s are a bit more expensive.
Note in some cases the AI software provider and the cloud provider are the same (e.g. Google, Meta, Microsoft have their own clouds and also sell their own software). Nvidia has also set up its own cloud, so it can earn the cloud provider profit margin on its own chips.
SemiAnalysis “Memory is the biggest loser”, May 2023. They estimate the total cost of an AI server with 8x H100s is about $269,000, and the GPUs are $195,000, so the total is about 35% higher than the GPUs alone.
Semianalysis “Nvidia Black Perf TCO analysis”, April 2024 estimates the electricity cost per month is about 30% on top of the amortised capital costs (or ~23% of the total costs).
I’d personally guess the compute costs are a lower fraction of the budget right now, but the fraction is climbing over time.
In “GPT-4 Architecture..”, Semianalysis estimates it costs OpenAI about $4.90 of compute for GPT-4 to generate a million output tokens, and they currently charge $30. If compute is half the costs, the operating margin would be 66%, though I guess the margin on GPT-3.5 Turbo is lower. This news article says insider sources estimate Anthropic’s margin is 50%. Trailing-edge models might be earning significantly less due to much greater competition and commoditisation.
I’m rounding up to $200bn because the method above gives the size of the AI chip market plus the annual average rate of earnings growth, which is about 5% per year.
In The Economic Potential of Generative AI, (June 2023) McKinsey estimates generative AI will add $2.6 to $4.4 trillion to GDP.
Statista’s Worldwide AI report estimates (as of April 2024) “The market size in the Artificial Intelligence market is projected to reach US$305.90bn in 2024. The market size is expected to show an annual growth rate (CAGR 2024-2030) of 15.83%, resulting in a market volume of US$738.80bn by 2030.”
Bloomberg Insight’s June 2023 report on Generative AI estimates it’s a $67bn market in 2023, and will grow to about $900bn in 2030.
Nextmsc AI market size report (Jan 2023) estimated the AI market was worth $96bn in 2021 and would be worth $1.8 trillion by 2030. However this also includes hardware, which could be half the total. A bunch of other research firms I’ve never heard of published similar figures.
For purchases in Q3 2023, Statista estimates Microsoft and Meta bought about 30% of the 500k Nvidia GPUs sold that quarter.
Another way to judge market expectations about the impact of AI would be to look at interest rates. However, an additional couple of percent of GDP due to AI spread over 10 years only has a small effect on the overall rate of economic growth, so it won’t be easy to pick up in macro variables like interest rates.
The US labour force is about 167 million according to Statista. The BLS claims 1.8m worked as software developers in 2022 with wages around twice the average, 0.9m as marketing analysts and 0.4m as marketing managers and 3m as customer service reps with wages around half the average.
Everything could also go to zero in the event of unaligned AI. It’s hard to profit from that trade, so we shouldn’t expect Nvidia’s stock price to respond very much to the chance of unaligned AI.
If Nvidia’s expected return according to CAPM is about 10%, then one standard deviation down is 10%-60% = -50%.
The investor wants to earn 1.1^3 ≈ 1.331 in expectation. With a 60% probability of the positive scenario and a 40% chance of losing half, the upside payoff U must satisfy 0.6U + 0.4×0.5 = 1.331, so U = (1.331 − 0.2)/0.6 ≈ 1.89. (Note I’m assuming Nvidia’s PE ratio stays the same, so the future price it sells for is proportional to profits.)
Nvidia’s beta today is about 1.75, so according to the CAPM it needs to be priced to deliver the risk-free rate + 1.75x the market return to compensate for its greater risk, which is about 5% + 5%×1.75 ≈ 14%, using these toy figures.
0.6×2.14 + 0.4×0.5 ≈ 1.48; and 0.6×1.89 + 0.4×0.5 ≈ 1.33; and the ratio of the two is about 1.11.
Thanks for this post — really interesting and helpful stuff.
I'm also not sure what to think about the consumer surplus issue, but here's one thought:
As argued in this post — https://www.adamsmith.org/blog/economics/why-were-very-much-richer-than-the-gdp-numbers-tell-us-we-are — I think it is plausible that consumer surplus is > GDP impact. So, even if your assumption that AI companies only capture 25% of consumer value is right, all that consumer value might not translate into GDP. This estimate is also extremely imprecise but I would lean towards thinking the revenue of the industry is a better approximation of the GDP impact — so more like $1 trillion (which I think lines up a bit better with the other sources you reference?).
Of course, the revenue=GDP approximation assumes that all the revenues created by AI companies are "new" revenues not taking away from other (non-AI) company revenues and also not adding to other company revenues. Both are unrealistic. Maybe a better way of getting at it would be trying to see how much market wide dividend futures have gone up in the past few years, and making some assumptions about a) how much of that is due to AI and b) how much of AI industry revenue will come from existing public companies. But writing that out, I guess it's also not great.
Here's a piece on looking at long-dated dividend futures if anyone's curious: https://www.federalreserve.gov/econres/notes/feds-notes/the-stock-market-real-economy-disconnect-a-closer-look-20201014.html
This is a really good way of looking at it, I wish I had thought of this.
$100B spent on AI chips is crazy, who's spending all that money and what are they using it on? I thought big AI models cost on the order of $100M. (Maybe that was the number a year ago, and by a year from now it will be much higher?)