3 Startups Raise Nearly A Half-Billion Dollars For Emerging Data Center Technology
Tech giants and venture capital firms are driving a sudden wave of investment in an emerging technology that may make artificial intelligence data centers significantly faster and more energy efficient.
This month alone has seen three major investments in data center technology startups focused on photonics computing infrastructure — the use of light, rather than electricity, to transmit information between millions of AI chips housed in a single data center.
The nearly half-billion dollars directed toward these early-stage firms in a matter of weeks, some of it coming from the Big Tech behemoths driving the global AI data center boom, reflects a growing conviction among data center operators that photonics is critical to addressing the power and capacity bottlenecks that threaten to slow AI development.
“AI is evolving faster than anyone could have predicted, pushing the limits of data center technology,” said Erik Nordlander, general partner at Google Ventures, Alphabet’s venture capital arm, in a press release last week announcing the fund’s investment in photonics firm Lightmatter. “Photonics isn’t just a breakthrough; it’s the future of [high-performance computing] data centers for AI.”
Google Ventures’ investment in Lightmatter was part of a $400M fundraising round that quadrupled the Massachusetts-based firm’s valuation to $4.4B. T. Rowe Price led the capital raise, which brought total investment in the company to $850M.
The announcement came just days after competitor Xscape Photonics completed a $44M Series A round led by venture capital firm IAG Capital Partners and a consortium of investors that includes AI computing giant Nvidia.
This week, British data center photonics startup Oriole Networks announced that it too has closed a Series A funding round, raising $22M from UK-based VC Plural.
The surge in funding for data center photonics startups comes as AI is fundamentally changing how data needs to be transmitted within a data center.
Locked in an AI arms race, the world’s largest tech companies — particularly Amazon, Microsoft, Google and Meta — are spending tens of billions of dollars to secure high-performance processors, like those produced by Nvidia, and to build the data centers that will house them.
But it is not enough to simply cluster all that computing power together in one place. Training large-scale AI models requires that all the chips or processing nodes within a data center be able to exchange data quickly, necessitating a far more complex and dynamic network than would exist in a traditional data center.
This means that today, the amount of networking infrastructure like switches and copper cabling within an AI data center is far greater than in older facilities. The AI data center boom is expected to increase the global market for such network gear from around $25B last year to over $42B in 2028.
These massive network infrastructure requirements worsen two of the most significant operational challenges for AI data centers: the amount of time it takes to train large generative AI models and the enormous amount of energy consumed in the process.
Latency from network gear, the time it takes to transport data from one processing unit to another within a data center, accounts for as much as 90% of the total time needed to train an AI model, with a given packet of data having to pass through nine different switches en route from one processor to another, according to a report in Fortune. At the same time, switches and copper network cabling produce a significant amount of heat, creating the need for additional cooling and thus greater energy use. Network equipment typically accounts for between 10% and 40% of a data center's energy consumption.
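As a rough illustration of how those nine switch traversals compound, the short Python sketch below walks through the arithmetic. The nine-hop figure comes from the Fortune report; every other number is an assumption chosen for illustration, not a measurement from any of the companies mentioned:

    # Back-of-envelope look at multi-hop switch latency in an AI training run.
    # The nine-hop figure is from the Fortune report cited above; all other
    # numbers are assumptions for illustration only.
    SWITCH_HOPS = 9              # switches a packet crosses between processors
    PER_HOP_DELAY_US = 1.0       # assumed delay per switch, in microseconds
    EXCHANGES_PER_STEP = 10_000  # assumed GPU-to-GPU messages per training step
    TRAINING_STEPS = 100_000     # assumed number of training steps

    switched_s = SWITCH_HOPS * PER_HOP_DELAY_US * 1e-6 * EXCHANGES_PER_STEP * TRAINING_STEPS
    direct_s = switched_s / SWITCH_HOPS  # a single direct optical hop instead of nine

    print(f"Nine-hop network time: {switched_s / 3600:.1f} hours")
    print(f"Single-hop network time: {direct_s / 3600:.1f} hours")

Even with these placeholder numbers, the mechanism is clear: because per-hop delay multiplies across every message in every training step, removing hops compresses total training time far more than any single per-packet figure suggests.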
Proponents of photonics say the technology can dramatically improve data center performance on both of these fronts, even as firms like Xscape and Oriole take significantly different approaches to design and engineering.
Because photonic systems transmit data as light rather than electricity, information can flow more directly from one processor to another, and to multiple processors at once, without traveling through a complex network of cables and switches with limited capacity. This has the potential to dramatically lower latency within the data center, photonics firms say, with Oriole claiming that its system allows processing that is 100 times faster than in current data centers.
Transmitting data as light also produces far less heat than traditional network gear, thereby requiring less power for cooling, studies show. On top of that, the more efficient data transfer enabled by photonics-based systems reduces or eliminates the need for more energy-intensive traditional networking infrastructure and the cooling it requires. Estimates of the potential energy savings from photonic networks vary, but Xscape Photonics says its technology reduces power consumption by a factor of 10 compared to existing systems.
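To put that claimed tenfold reduction in facility-level terms, here is a minimal sketch combining it with the 10% to 40% networking share cited above; the total facility draw is an assumed round number, not a figure from any of these companies:

    # What a 10x cut in networking power could mean for a whole facility.
    # The 10%-40% share and the 10x factor come from the figures above; the
    # 100 MW facility size is an assumed round number for illustration.
    FACILITY_POWER_MW = 100.0

    for network_share in (0.10, 0.40):
        network_mw = FACILITY_POWER_MW * network_share
        photonic_mw = network_mw / 10  # Xscape's claimed 10x reduction
        saved_mw = network_mw - photonic_mw
        print(f"Network at {network_share:.0%} of load: "
              f"saves {saved_mw:.1f} MW, or {saved_mw / FACILITY_POWER_MW:.0%} of the facility")

Note that this counts only the networking gear itself; any reduction in the cooling load that gear generates would come on top of these figures.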
As constraints on power and available land slow the build-out of the new data centers needed to support the AI ambitions of the world’s largest companies, photonics firms argue their technology is critical to maintaining the pace of AI advancement. It may be increasingly hard to build new data centers, said Xscape CEO Vivek Raghunathan, but photonics allows data center users to squeeze more computing out of each facility.
“Historically, performance and scalability challenges have been addressed by building bigger data centers to train large language models,” Raghunathan said in a statement. “This approach is not sustainable and unlocks a myriad of additional issues around energy consumption and cost.”