Nvidia’s Record Earnings Show More Runway For AI Data Center Boom, Shifting Demand
Artificial intelligence chipmaker Nvidia’s data center sales have more than quintupled year-over-year. The record numbers signal an acceleration of an AI-driven data center building boom that is already underway.
Nvidia, which supplies the vast majority of the chips needed to support artificial intelligence, reported record quarterly earnings after the market closed Wednesday, with data center sales accounting for $22.6B of the company’s $26B in total revenue.
The company’s data center segment grew 427% year-over-year, pointing to an adoption cycle for AI technologies that is still in its early stages.
Data center operators and tenants are buying more chips than ever, which means the already massive wave of investment in developing new facilities to host this computing power will continue to grow. But Nvidia’s Q1 numbers also reflect an evolution of the AI demand landscape that is driving changes, both in which companies are pursuing new data center capacity and where those data centers will be built.
“The next industrial revolution has begun,” Nvidia CEO Jensen Huang said on the company’s earnings call Wednesday. “Companies and countries are partnering with Nvidia to shift the trillion-dollar installed base of traditional data centers to accelerated computing and build a new type of data center, AI factories, to produce a new commodity, artificial intelligence.”
Nvidia saw total quarterly revenues jump 262% compared to a year earlier, growth that exceeded Wall Street’s already bullish expectations as well as the company’s own projections.
Data center revenue growth accelerated from previous record levels, coming in nearly 20% faster than in the preceding quarter. The chipmaker projects total revenue as high as $28.6B in Q2, with sequential growth across all market segments.
Skyrocketing demand for Nvidia’s processors has been driven primarily by the world’s largest tech companies — firms like Microsoft, Google and Amazon — as they race to get their hands on the tens of thousands of GPUs and other IT gear needed to create the generative AI models behind products like ChatGPT.
This AI arms race has spurred record demand for the data center capacity needed to support this massive investment in computing power. While U.S. data center capacity totaled 17 gigawatts at the end of 2022, that figure is expected to reach 35 gigawatts by 2030, according to a January report from Newmark. Similarly, Synergy Research projects that global data center inventory will triple within six years. Brokerages and power utilities across the U.S. are seeing sudden increases in their pipelines of data center projects expected in the months ahead.
But beyond simply foreshadowing more large-scale data center development, Nvidia’s results also point to a shift in the kinds of companies driving demand for AI computing and data center inventory.
Cloud providers have traditionally accounted for the majority of Nvidia’s data center revenues, but that is no longer the case.
While cloud providers accounted for more than half of Nvidia’s data center sales as recently as last quarter, that share has dropped to around 45%. Instead, the record revenue growth of Nvidia’s data center segment was led by enterprise customers and social media companies.
The automotive industry has emerged as a key demand driver within Nvidia’s data center business, the firm’s leadership told analysts, as companies like Tesla build and lease data centers to develop autonomous driving systems and other AI applications.
“We expect automotive to be our largest enterprise vertical within data center this year, driving a multibillion revenue opportunity across [on-premises] and cloud consumption,” Nvidia Chief Financial Officer Colette Kress said on the earnings call.
This isn't to say that cloud providers are pumping the brakes on buying the computing power to support AI. If anything, they’re stepping on the accelerator.
Nvidia’s sales to cloud providers increased in the first quarter. At the same time, the three largest cloud providers all indicated last month they plan to spend billions more than anticipated on developing AI infrastructure, investments that Google CEO Sundar Pichai said are intended to “help us push the frontiers of AI models and enable innovation.”
Still, Wall Street has started to eye this ballooning AI infrastructure spending warily, with investors increasingly looking for it to drive short-term revenue growth for cloud providers. Patience is thinning for multibillion-dollar long-term bets on AI that may not yield a significant return for years.
Nvidia’s leadership sought to address these concerns Wednesday, pointing to what they described as a strong return on investment for cloud providers, which they said earn $5 in hosting revenue for every dollar spent on Nvidia products.
Nvidia’s earnings also reflect an accelerating shift in AI computing: from power used mainly to train generative AI models toward power deployed to help customers actually put those models to use, a stage known as inference.
Nvidia estimates that inference drives about 40% of its data center revenue, but that number is growing. And this could have a significant impact on where new data center capacity is built.
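To make that distinction concrete, here is a minimal sketch of the two workloads (our own illustration in PyTorch, not anything shown on Nvidia’s call): training repeatedly updates a model’s weights and can run in a remote cluster, while inference is a single forward pass answering a live user request.

```python
# Minimal, illustrative contrast between training and inference workloads.
# The model and data here are hypothetical stand-ins, not a real generative model.
import torch
import torch.nn as nn

model = nn.Linear(512, 10)  # stand-in for a far larger generative model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: forward pass, backward pass, weight update. Compute-hungry,
# but no end user is waiting, so it can run in a centralized cluster anywhere.
inputs = torch.randn(32, 512)
labels = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()

# Inference: forward pass only, serving a live request. A user is waiting
# on the answer, which is why latency (and thus proximity) starts to matter.
with torch.no_grad():
    prediction = model(torch.randn(1, 512)).argmax(dim=-1)
```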
Growing inference demand has been driven largely by social media companies like Meta, Nvidia’s leadership says. Unlike cloud providers that are primarily focused, at least for now, on building AI tools for corporate applications, Meta and other “consumer internet” firms are building AI for their own consumer-facing products like digital assistants or video production tools. As millions of consumers begin using those products, the need for inference computing grows.
This may be reflected in a geographic shift in data center demand.
While training can be done in highly centralized data center clusters located almost anywhere, inference computing needs to happen near the users of these products. Until now, much of the AI-driven data center development has been training-oriented. But as inference grows, experts expect increased demand for AI-capable data centers in primary markets, close to the major population centers where those users live.
“Some of the inference-based AI will be very latency sensitive,” Pat Lynch, executive managing director of CBRE Data Center Solutions, told Bisnow last week. “These aren't gigawatt deployments. But these are multiple megawatt deployments that are going to be dropped in every major city around the globe.”
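The latency sensitivity Lynch describes is ultimately a matter of physics: light in optical fiber travels at roughly two-thirds of its vacuum speed, so the distance between a user and a data center puts a hard floor under response time. A rough back-of-the-envelope sketch (our own figures, not from the article or CBRE):

```python
# Best-case round-trip network latency over fiber, ignoring routing,
# queuing and processing delays. Distances are illustrative assumptions.
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 the speed of light in a vacuum

def min_round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(min_round_trip_ms(50))     # nearby metro data center: ~0.5 ms
print(min_round_trip_ms(4000))   # cross-country data center: ~40 ms
```

Even in this best case, a cross-country round trip costs tens of milliseconds per request, which is why inference capacity gravitates toward major population centers.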
While the bulk of Nvidia’s data center sales came from its Hopper line of processors, the company launched its next-generation Blackwell line in March and expects the new chips to drive a growing share of revenue starting in Q2. Blackwell marks a significant step up in performance from Hopper.
Blackwell also produces far more heat, with top-end chips used by firms like OpenAI now requiring that data centers use liquid cooling systems. Yet the vast majority of data centers are air cooled, with retrofits for liquid cooling often prohibitively difficult and expensive.
Analysts have expressed concern that the pace of chip innovation is outpacing the supply of new data centers built to support these technologies, particularly as Nvidia plans to continue releasing new generations of processors each year.
But Huang said the data center industry will be ready. He said Nvidia has been engaging proactively with data center providers and tenants, as well as manufacturers of cooling gear and other operational equipment, telling them well in advance of new product launches what infrastructure will be required so they have adequate runway to prepare.
“We have been priming the pump, if you will, with the entire ecosystem, getting them ready for liquid cooling,” Huang said. “No one is going to be surprised.”
UPDATE, MAY 23, 5:20 P.M. ET: This story has been updated with additional context and commentary from Nvidia's earnings call.