
DeepSeek May Have Been More Smoke Than Fire, But Panic Shows Data Centers Are 'Too Big To Fail'


Nearly two months after DeepSeek roiled markets and triggered alarm about the future of the data center sector, greater clarity is emerging as to what the Chinese artificial intelligence model actually means for the industry.


Despite the initial panic triggered by DeepSeek, there was a broad consensus among industry leaders speaking March 5 at Bisnow’s DICE Southeast event that the company’s groundbreaking ability to train AI models with less computing power and at lower cost will not be a data center demand killer. Nor will it be the paradigm-shifting “Sputnik moment” that some initially claimed it to be.

At the same time, DeepSeek does represent a meaningful step forward in making AI computing more efficient and less expensive — innovation that some industry insiders say is already driving a shift in the geography of the data center development landscape.

For some, the significance of January’s DeepSeek hysteria is primarily that it served as a reminder of the volatility and uncertainty of a data center sector increasingly viewed as a sure bet, and of the sudden centrality of a previously obscure segment of the CRE landscape to the global economy.

“It wasn't just Nvidia stock plunging or a bunch of data center company stocks plunging … the entire stock market took a nosedive just based on one piece of breaking news about one AI model in China,” Ryan Hughes, managing partner at Florida-based developer Sailfish Investors, said at the DICE event, held at the Crowne Plaza Atlanta Midtown.

“That shows how this entire sector is really getting almost too big to fail.”

When DeepSeek triggered a Wall Street sell-off of data center, tech and energy stocks in late January, investors were concerned that the cheaper, more efficient AI training the firm pioneered could reduce the demand for data center capacity. 

DeepSeek claimed that training its R1 AI model cost just $5.6M, whereas American AI firm Anthropic spent nearly $1B training one of its AI models, with even its cheapest models costing close to $100M. DeepSeek reportedly trained its model on just 2,000 older Nvidia chips, while U.S. tech firms routinely use tens of thousands of the latest processors.

The model's low price tag and development on fewer, more primitive chips suddenly cast doubt on a fundamental assumption underpinning Big Tech’s AI spending spree and skyrocketing demand for data center capacity: that massive amounts of energy-intensive chips are needed to develop more advanced AI models that would lead to commercially viable products.

DeepSeek’s success seemed to suggest that this thesis is flawed, and that it will take far fewer chips, and potentially far fewer data centers and far less power, to achieve these aims.

While the best models from U.S. tech firms outperformed R1, the efficiency of the Chinese model generated amazement from tech leaders like venture capitalist Marc Andreessen, who wrote on X that it was “one of the most amazing and impressive breakthroughs I’ve ever seen.”

Yet almost immediately, there was skepticism from some prominent voices about DeepSeek’s claims and speculation that the company was dramatically understating the cost and compute power that went into its model. U.S. AI giant OpenAI quickly claimed that DeepSeek had built its model on the back of its existing technology. 

This doubt has only deepened in the six weeks since.

DC Blox CEO Jeff Uphues, speaking at Bisnow's DICE event, referred to DeepSeek as “nothing more than a big deep fake” — echoing widespread skepticism among industry leaders. Others emphasized that the legitimacy of the firm’s claims is still being determined, even if it’s likely that the efficiency improvements and cost reductions were significantly exaggerated.  

“The jury's still out,” said Adam Krupp, managing director at Wharton Equity Partners. “The Chinese haven't been too forthcoming with what all those costs were, and it's probably not accurate as far as how cheaply they say they developed that model.”

DeepSeek may not be the “game changer” it seemed to be at first glance, said Stonebridge Development Strategy Partner Tracy Vargo, but it does mark an inflection point in what he calls the “natural progress” of AI innovation toward being able to get more results with less computing. Whether or not DeepSeek changed that paradigm overnight, AI computing is going to get cheaper and more efficient, and that has implications for data centers that the industry needs to be ready for.

The investor panic that followed the announcement of DeepSeek’s R1 was driven in part by fears that greater computing efficiency would mean less demand for the data center capacity hosting that computing power. But the emerging consensus in the data center sector is that, for the industry at large, the exact opposite is true.


“Cheaper compute means more, not less,” Daniel English, president and co-founder of Legacy Investing, said at the DICE event.

English joins a chorus of prominent voices from across the digital infrastructure and tech landscape — from Microsoft CEO Satya Nadella to Equinix CEO Adaire Fox-Martin — who have suggested that if AI training becomes significantly cheaper, it will subsequently drive down the cost of adoption by enterprises and consumers and produce a more vibrant AI economic ecosystem.

This would mean more AI startups, more companies incorporating AI into their existing businesses and more people using AI integrated into consumer products. Ultimately, this would translate into more demand for data centers. 

But how that demand manifests across the data center landscape could look very different.

Should this phenomenon, known as the “Jevons paradox,” emerge, executives like DC Blox Senior Vice President of Design and Engineering John Dumler said demand for computing capacity will shift from AI training toward inference, the computing through which end users interact with an AI model.

Such a shift could have a dramatic impact on where and how new data centers are built. Industry insiders, like Harrison Street’s Michael Hochanadel, tell Bisnow that demand could fade for data centers specifically used for AI training, particularly facilities like former bitcoin mines with massive amounts of power but not the network connectivity or proximity to users to be viable for other use cases.

Conversely, a surge in demand for inference computing requires more “edge” data center capacity — smaller facilities located close to large population centers where the people and companies using AI products and services are located, Dumler said. He said DC Blox has seen demand for edge deployments double in the weeks since DeepSeek’s announcement.

“Our business has seen an explosion of activity in the last 45 days,” Dumler said. “We build 5- and 10-megawatt edge facilities for the hyperscalers, and we've seen a two-times uptick in their interest in accelerating those builds.”

Beyond the direct implications for the long-term data center demand picture, the jolt DeepSeek delivered to the data center sector was a critical reminder of the volatility and uncertainty that inevitably lie ahead for an industry tied to a rapidly evolving technology like artificial intelligence, said Wharton’s Krupp. The global data center gold rush may give the illusion of stability, but DeepSeek will not be the last unforeseen technology or innovation with the potential to transform the industry in ways few anticipate. 

“It’s a very dynamic and evolving space,” Krupp said. “You’re going to see a lot of changes, and this industry — especially from the capital perspective and the investment perspective — is not for the faint of heart.”