Siemens and nVent develop blueprint for NVIDIA AI data centres

Siemens and nVent are collaborating on a combined liquid cooling and power reference architecture designed specifically for hyperscale AI workloads, including deployments based on NVIDIA DGX SuperPOD with DGX GB200 systems.

The architecture is described as a Tier III-capable, modular blueprint that brings together Siemens’ industrial-grade electrical and automation systems, NVIDIA DGX SuperPOD reference designs and nVent liquid cooling technology. The goal is to help operators deploy AI infrastructure more quickly, while maintaining resilience and improving energy efficiency at very high rack densities.

“We have decades of expertise supporting customers’ next-generation computing infrastructure needs,” said Sara Zawoyski, President of nVent Systems Protection. 

“This collaboration with Siemens underscores that commitment. The joint reference architecture will help data center managers deploy our cutting-edge cooling infrastructure to support the AI buildout.”

According to Siemens, the approach is not just about accommodating higher power use, but also about maximising useful compute output from each watt consumed.

“This reference architecture accelerates time-to-compute and maximizes tokens-per-watt, which is the measure of AI output per unit of energy,” said Ciaran Flanagan, Global Head of Data Center Solutions at Siemens. 

“It’s a blueprint for scale: modular, fault-tolerant, and energy-efficient. Together with nVent and our broader ecosystem of partners, we’re connecting the dots across the value chain to drive innovation, interoperability, and sustainability, helping operators build future-ready data centers that unlock AI’s full potential.”
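Tokens-per-watt, as described in the quote above, is simply inference throughput divided by power draw. The sketch below illustrates the arithmetic; all figures are hypothetical and are not published performance data for DGX GB200 systems or the joint architecture.

```python
# Illustrative tokens-per-watt calculation (all figures hypothetical,
# not published performance data for any specific system).

def tokens_per_watt(tokens_per_second: float, power_draw_watts: float) -> float:
    """AI output per unit of electrical power: tokens/s divided by watts."""
    return tokens_per_second / power_draw_watts

# Hypothetical rack: 1,000,000 tokens/s inference throughput at 120 kW draw.
rack_tpw = tokens_per_watt(1_000_000, 120_000)
print(f"{rack_tpw:.2f} tokens per second per watt")  # 8.33
```

On this definition, efficiency gains can come from either side of the ratio: more useful compute per system, or less power lost in distribution and cooling, which is where the electrical and liquid-cooling elements of the blueprint come in.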

Blueprint for 100 MW hyperscale AI sites

The reference design targets 100 MW-class hyperscale AI facilities, where operators are increasingly turning to direct liquid cooling to handle rising rack-level power densities and to keep energy efficiency within acceptable limits.

By defining how electrical distribution, automation, liquid cooling and compute platforms fit together, Siemens and nVent argue that operators can shorten design cycles, standardise interfaces and reduce deployment risk. Reference architectures of this kind are already widely used in other parts of the data centre stack as a way to replicate proven designs at speed.

Data centres running AI workloads are seeing a convergence of challenges: higher compute intensity, tighter resilience requirements and growing pressure to design for modular expansion. The partners position the joint blueprint as one answer to those pressures, with fault-tolerant electrical topologies and liquid cooling integrated from the outset rather than added as a retrofit.

While detailed technical information has yet to be published, the architecture is intended to align with NVIDIA’s DGX SuperPOD reference designs, which define how large clusters of AI systems are deployed at scale. nVent’s liquid cooling technology is integrated into that framework, while Siemens’ role spans power distribution, automation and energy management.

On the Siemens side, the company is bringing its experience in medium and low voltage power distribution, automation and energy management software from mission-critical environments into the AI data centre space. The architecture is expected to draw on IoT-enabled hardware, software and digital services that can monitor and optimise energy usage across the site.

nVent, meanwhile, is contributing its liquid cooling portfolio and experience delivering high-density cooling solutions for global cloud service providers and hyperscalers. Its technology is designed to manage the thermal load of tightly packed AI hardware, where traditional air-based approaches struggle to keep up with escalating chip power.

By packaging these elements into a single reference architecture, Siemens and nVent are betting that operators will be able to move faster on new AI builds, while still meeting Tier III-style resilience expectations and keeping a close eye on metrics such as energy efficiency and ‘tokens-per-watt’ as AI workloads continue to scale.
