Can data centres keep up with AI?

Gary Tinkler
MD of Data Centres at Northern Data Group

Forget cooling – for data centres, it’s now an issue of power, says Gary Tinkler, MD of Data Centres at Northern Data Group

When we talk about High-Performance Computing (HPC), the fusion of AI and computational power is driving incredible innovations. In the past, we focused mainly on cooling solutions to keep systems running smoothly. But now, with AI-driven HPC systems requiring so much more power, the real challenge isn’t just about keeping hardware cool; it’s about managing an enormous demand for electricity. This pivotal shift in the industry is telling us something important: it’s no longer a cooling problem – it’s a power problem.

Where are we now?

Let’s take a closer look at NVIDIA, a giant in the HPC world. They’ve created popular air-cooled systems that have served us well. However, as AI models get more complex, the power requirements are skyrocketing. Reports show that AI training tasks use 10-15 times more power than traditional data centres were designed to handle. Facilities that once operated at 5-8 kW per rack are quickly becoming outdated. Recently, NVIDIA announced a major rollout of new GPUs, highlighting the urgent need for advanced technology to meet these growing power demands.

To put this into perspective, data centre operators are now reevaluating their power strategies because their existing setups can't keep up. A facility that once ran comfortably at 8 kW per rack, for example, is finding that this is no longer enough. As AI continues to advance, we're looking at power needs soaring to between 50-80 kW per rack. This isn't just a small tweak; it's a major change in how data centres need to be designed.
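To see the scale of that shift, consider a simple back-of-the-envelope sketch in Python. The 100-rack hall and per-rack figures below are illustrative assumptions drawn from the ranges above, not measurements from any particular facility:

```python
# Illustrative comparison of hall-level power draw at legacy vs AI-era
# rack densities. The rack count and kW figures are assumptions for
# the sake of the example, not data from a real facility.

RACKS = 100

legacy_kw_per_rack = 8      # typical legacy design point
ai_kw_per_rack = (50, 80)   # range quoted for AI/HPC deployments

legacy_total_mw = RACKS * legacy_kw_per_rack / 1000
ai_total_mw = tuple(RACKS * kw / 1000 for kw in ai_kw_per_rack)

print(f"Legacy hall: {legacy_total_mw:.1f} MW")
print(f"AI-era hall: {ai_total_mw[0]:.1f}-{ai_total_mw[1]:.1f} MW")
# Legacy hall: 0.8 MW
# AI-era hall: 5.0-8.0 MW -- roughly a 6-10x jump in supply, switchgear
# and distribution capacity for the same floor area
```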

A recent study from the International Data Corporation (IDC) found that global data centre electricity consumption is expected to more than double between 2023 and 2028, reaching an astounding 857 terawatt-hours (TWh). This underlines how important it is for operators to have facilities that can support higher power loads if they want to stay competitive in the fast-paced AI world. This isn't just a theory – it's a reality that data centre operators must face head-on.
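It's worth pausing on what "more than double in five years" actually implies. A quick sketch puts the growth rate in perspective; the 2023 baseline here is simply half the 2028 projection, an assumption for illustration only:

```python
# What 'more than doubling by 2028' implies as an annual growth rate.
# 857 TWh is the IDC projection cited above; the 2023 baseline implied
# by "more than double" is an assumption for this sketch.

target_twh = 857          # IDC projection for 2028
baseline_twh = 857 / 2    # upper bound on 2023 if consumption doubles
years = 5                 # 2023 -> 2028

cagr = (target_twh / baseline_twh) ** (1 / years) - 1
print(f"Implied growth: at least {cagr:.1%} per year")
# Implied growth: at least 14.9% per year
```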

Steps data centres can take

One of the biggest challenges in this transition is updating power supply systems. Traditional Power Distribution Units (PDUs) aren't built to handle the demands of these new AI-driven systems. To meet the required power levels, data centres need to invest in more advanced PDUs that can manage heavier loads while boosting overall efficiency. For many setups today, that means installing six units that can each supply 63 amps of power. This shift not only changes how data centres are built but also adds complexity to how everything is arranged inside the racks.
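To unpack that figure: what a 63-amp feed delivers depends on the supply voltage. The sketch below assumes a three-phase 400 V (line-to-line) supply, common in European facilities but not specified above, and an A/B redundant feed arrangement, so treat the numbers as illustrative rather than a design specification:

```python
import math

# Rough rack power budget from PDU ratings. Assumes a three-phase
# 400 V (line-to-line) supply; the article does not specify voltage,
# so these numbers are illustrative only.

V_LINE = 400        # volts, line-to-line (assumption)
AMPS_PER_PDU = 63   # per the article
NUM_PDUS = 6        # per the article

kva_per_pdu = math.sqrt(3) * V_LINE * AMPS_PER_PDU / 1000
total_kva = NUM_PDUS * kva_per_pdu
usable_kva = total_kva / 2   # assuming A/B (2N) redundant feeds

print(f"Per PDU: {kva_per_pdu:.1f} kVA")              # ~43.6 kVA
print(f"All six: {total_kva:.1f} kVA")                # ~261.9 kVA
print(f"Usable with 2N feeds: {usable_kva:.1f} kVA")  # ~130.9 kVA
# Even after halving for redundancy, that covers the 50-80 kW per rack
# range discussed above with headroom for conversion losses.
```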

Of course, as facilities rush to meet these new power needs, we're seeing innovative solutions come to light. Ultrascale Digital Infrastructure, for example, has partnered with Cargill so that its data centres can run on 99% plant-based cooling fluids. This eliminates the billions of gallons of water used in cooling each year and opens new opportunities for water conservation, particularly for data centres designed to rely on water in their operations.

Evolving infrastructure for power demands

As power demands rise, the standard 1200mm deep racks are becoming outdated. To meet this increase, we're likely to see a shift to 1400mm deep racks. This isn't just about making things bigger; it's about maximising flexibility and capacity. Recent reports indicate that wider rack options – ranging from 800mm to 1000mm – are becoming more popular, providing standardised 52 Rack Units (RU) that help facilities scale more effectively.
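To connect rack height to power density, here is a rough sketch of how a 52 RU rack might fill up. The node height, per-node draw, and overhead allowance are assumptions broadly in line with current dense GPU servers, not figures from this article:

```python
# How rack height translates into power density: a sketch with assumed
# node sizes and power draws (roughly in line with current dense GPU
# servers, but not figures from the article).

RACK_UNITS = 52     # per the standardised 52 RU racks above
NODE_RU = 8         # assumed height of one dense GPU node
NODE_KW = 10.0      # assumed draw per node
OVERHEAD_RU = 4     # assumed: switches, patch panels, cable management

nodes = (RACK_UNITS - OVERHEAD_RU) // NODE_RU
rack_kw = nodes * NODE_KW

print(f"{nodes} nodes per rack -> ~{rack_kw:.0f} kW")
# 6 nodes per rack -> ~60 kW: a fully populated 52 RU rack lands
# squarely in the 50-80 kW range described above.
```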

This change in rack design is crucial because it directly affects how data centres can support the evolving demands of AI and HPC. By optimising the size of racks, facilities can improve airflow, streamline power distribution, and ultimately boost operational efficiency.

Another big challenge is the issue of 'stranded space' in data centres. As facilities designed for traditional workloads try to adapt to new HPC infrastructure, they often find themselves with wasted space. Older data centres weren't built to handle the density and power needs of modern AI workloads. Even those with upgraded setups, like indirect cooling solutions that can support 30 kW per rack, are proving inadequate as requests frequently exceed 60 kW. Facility operators are rethinking not just their cooling methods but also how to make the best use of their available space while preparing for increasing power demands.
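A simple model shows how stranded space arises: with a fixed hall power budget, every jump in per-rack density cuts the number of rack positions that can actually be energised. The hall budget and rack count below are illustrative assumptions:

```python
# 'Stranded space' in one picture: with a fixed hall power budget,
# higher per-rack densities mean fewer racks can be energised, leaving
# built floor space unused. All numbers are illustrative assumptions.

HALL_POWER_KW = 2400   # assumed total power budget for the hall
RACK_POSITIONS = 300   # assumed physical rack positions on the floor

for kw_per_rack in (8, 30, 60):
    powered = min(RACK_POSITIONS, HALL_POWER_KW // kw_per_rack)
    stranded = RACK_POSITIONS - powered
    print(f"{kw_per_rack:>2} kW/rack: {powered} racks powered, "
          f"{stranded} positions stranded")
#  8 kW/rack: 300 racks powered, 0 positions stranded
# 30 kW/rack: 80 racks powered, 220 positions stranded
# 60 kW/rack: 40 racks powered, 260 positions stranded
```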

Traditional data centres were built with certain assumptions about power needs – typically around 5-8 kW per rack. This led to innovations like aisle containment, designed to improve cooling in response to growing demands. However, as AI keeps pushing the limits, these outdated assumptions are no longer enough. HPC deployments now require facilities that can handle power outputs of up to 80 kW per rack or even more.

We’re beginning to see a new wave of advanced data centres emerge that look very different – facilities designed from the ground up to meet these heightened demands and that can handle diverse power requirements while ensuring flexibility for future growth.

What’s next?

As AI continues to reshape what's possible in HPC, the industry faces a significant challenge at its core: the power problem. The traditional focus on cooling just isn't enough anymore. With exciting new technologies being developed at a faster pace than ever, attention is shifting to building a robust power infrastructure that can support this new frontier. Data centres that evolve in their design, layout, and operational strategies to turn this power challenge from a roadblock into an opportunity can unlock the full potential of AI in high-performance computing. The future of HPC looks bright, but it all depends on our ability to adapt to these new demands.
