
Cooling a hyperscale

With power densities among the hyperscalers continuing to snowball, how do operators meet the cooling challenges of these ever-growing and increasingly varied deployments? Phil Smith, construction director UK at Vantage, tells us more.

Power densities in the hyperscale data centre era are rising. Some racks are now pulling 60 kW or more, and this trend will only continue with the growing demand for high performance computing (HPC) and for new technologies such as artificial intelligence.

In parallel with these challenges, there is the ongoing debate over the environmental impact of data centres, which brings with it considerable environmental, compliance and CSR responsibilities. Putting super-efficient cooling and energy management systems in place is therefore a top priority.

However, modern fit-for-purpose facilities are ‘greener’ and increasingly efficient despite the rise in compute demands. Best practice now calls for real-time analysis and monitoring to optimise cooling systems and maintain appropriate operating temperatures for IT assets, without compromising performance or uptime.

Central to this, and to maximising overall data centre energy efficiency, are integrated energy monitoring and management platforms. An advanced system can save tens of thousands of pounds through reduced power costs while minimising environmental impact.
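As a rough illustration of the scale of those savings, the sketch below estimates the annual cost difference between two PUE figures. The hall size, electricity price and PUE values are assumptions chosen purely for the example, not Vantage figures.

```python
# Illustrative annual saving from a PUE improvement.
# All inputs are assumptions for the example, not Vantage data.

IT_LOAD_KW = 250          # assumed IT load of a single data hall
PRICE_GBP_PER_KWH = 0.15  # assumed electricity price
HOURS_PER_YEAR = 8_760

def annual_energy_cost(pue: float) -> float:
    """Total facility energy cost per year at the given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_GBP_PER_KWH

saving = annual_energy_cost(1.30) - annual_energy_cost(1.15)
print(f"Estimated annual saving: £{saving:,.0f}")   # roughly £49,000 on these assumptions
```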

For cooling there are various options, for example installing the very latest predictive systems or utilising nano-cooling technologies. However, these may only be viable for new, purpose-designed data centres rather than as retrofits in older ones.

Harnessing climatically cooler locations which favour direct-air and evaporative techniques is another logical step, assuming such sites are viable in terms of accessibility, available power and connectivity.

Taking Vantage’s 750,000 sq ft hyperscale facility in South Wales as a working example, it must satisfy and future-proof highly varied customer requirements, from delivering standard 4kW rack solutions up to 60kW per rack and beyond, with resilience at a minimum of N+20%.

The cooling solutions deployed intelligently determine the optimal mode of operation according to the external ambient conditions and individual data hall requirements. This enables operation in free-cooling mode for most of the year, with supplementary cooling provided only during periods of elevated external ambient temperatures.
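For completeness, the N+20% resilience figure mentioned above simply means installed cooling capacity at least 20% greater than the design heat load (N). A one-line check, with the example load assumed for illustration:

```python
def required_cooling_mw(design_load_mw: float, margin: float = 0.20) -> float:
    """Installed cooling capacity needed for an N+20% resilience target."""
    return design_load_mw * (1 + margin)

# Example: an assumed 32 MW floor load would call for at least 38.4 MW of cooling plant.
print(required_cooling_mw(32.0))
```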

Cooling in close-up

On the 250,000 sq ft ground floor, comprising 31 separate data halls drawing a total of 32 MW, a Stulz GE system is installed. The indoor unit has two cooling components: a direct expansion (DX) cooling coil and a free cooling coil.

It utilises outdoor air for free-cooling in cooler months when the outside ambient air temperature is below 20°C, with indirect transfer via a glycol-water solution maintaining the vapour seal integrity of the data centre.

The system automatically switches to free-cooling mode, where dry cooler fans run and cool the water to approximately 5°C above ambient temperature before it is pumped through the free cooling coil. In these cooler months, depending on water temperature and/or heat load demands, the water can also be used in ‘Mixed Mode’.

In this mode the water is directed through both proportionally controlled valves, enabling proportional free cooling and water-cooled DX cooling to work together. Crucially, 25% ethylene glycol is added to the water purely as an antifreeze, preventing the dry cooler from freezing when the outdoor ambient temperature drops below zero.

In warmer months, when the external ambient temperature is above 20°C, the system operates as a water-cooled DX system and the refrigeration compressor rejects heat into the water via a plate heat exchanger (PHX) condenser. The water is pumped to the Transtherm air blast cooler, where it is cooled and the heat is rejected to air.
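Putting the three regimes together, the switchover logic might be sketched as follows. This is a minimal outline only, using the 20°C threshold and ~5°C approach temperature quoted above; the water setpoint and the mixed-mode band are chosen purely for illustration, not taken from the Stulz controls.

```python
from enum import Enum

class CoolingMode(Enum):
    FREE_COOLING = "free cooling"   # dry coolers only, DX compressors off
    MIXED = "mixed mode"            # free cooling coil and water-cooled DX together
    DX = "water-cooled DX"          # compressor rejects heat via the PHX condenser

FREE_COOLING_LIMIT_C = 20.0   # external ambient threshold described above
APPROACH_C = 5.0              # dry coolers cool the water to ~5°C above ambient

def select_mode(ambient_c: float, supply_water_setpoint_c: float = 15.0) -> CoolingMode:
    """Pick an operating mode from the external ambient temperature.

    The real controller also weighs water temperature and hall heat load;
    the setpoint and mixed-mode band here are illustrative assumptions.
    """
    if ambient_c >= FREE_COOLING_LIMIT_C:
        return CoolingMode.DX
    # Water leaving the dry coolers sits roughly APPROACH_C above ambient.
    water_c = ambient_c + APPROACH_C
    if water_c <= supply_water_setpoint_c:
        return CoolingMode.FREE_COOLING
    return CoolingMode.MIXED
```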

On the 250,000 sq ft top floor, Vertiv EFC 450 units provide indirect free cooling, evaporative cooling and DX backup. There are 67 units providing 28.5 MW of cooling on an N+1 basis.

These allow us to control the ingress of contaminants and humidity, ensuring sealed white space environments. Using this solution, real-life PUEs of 1.13 are being achieved during integrated systems testing (IST) at maximum load.
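For reference, PUE is simply total facility power divided by IT power, and the N+1 figure above implies 66 duty units sharing the 28.5 MW duty. A quick sketch of both calculations, with the 1,000 kW IT load assumed purely to show the arithmetic:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

# At a PUE of 1.13, every 1,000 kW of IT load implies roughly 1,130 kW at the utility meter.
print(pue(1_130, 1_000))                       # -> 1.13

# N+1 sizing check: 67 units with one redundant share the 28.5 MW duty.
units, duty_mw = 67, 28.5
print(round(duty_mw * 1_000 / (units - 1)))    # ~432 kW of cooling per duty unit
```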

The system works in three modes. In winter operation, return air from the data centre is cooled via heat exchange with cold external air. There is no need to run the evaporative system, and fan speed is controlled by the external air temperature.

In summer, the evaporative system runs to saturate the air, reducing its dry bulb temperature and enabling the unit to cool the data centre air even when external temperatures are high.
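The temperature drop gained by saturating the external air stream can be approximated with the standard evaporative effectiveness relation, T_out = T_db - eff * (T_db - T_wb). A minimal sketch, with the effectiveness and example temperatures assumed for illustration rather than taken from the Vertiv data sheet:

```python
def evaporative_supply_temp(dry_bulb_c: float, wet_bulb_c: float,
                            effectiveness: float = 0.85) -> float:
    """Approximate temperature of the air stream leaving an evaporative stage.

    Standard relation: T_out = T_db - eff * (T_db - T_wb).
    The 0.85 effectiveness is an assumed, typical value.
    """
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Example: a 32°C dry bulb / 20°C wet bulb summer day.
print(round(evaporative_supply_temp(32.0, 20.0), 1))   # -> 21.8°C
```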

In the case of extreme external conditions, the DX system is available to provide additional cooling. The DX systems are sized to provide partial backup for the overall cooling load and are designed for maximum efficiency with minimum energy consumption.

HPC environments

However, cooling highly dense and complex HPC platforms demands bespoke build and engineering skills to ensure highly targeted delivery. Simple computer room air conditioning (CRAC) or free-air cooling systems (such as swamp or adiabatic coolers) typically lack the capabilities required.

Furthermore, hot and cold aisle cooling systems are becoming inadequate for addressing the heat created by larger HPC environments.

This places increased emphasis on having on-site engineering personnel with the knowledge to design, build and install bespoke cooling systems, utilising, for example, direct liquid cooling.

This allows highly efficient heat removal and avoids on-board hot spots, solving the problem of high temperatures without the excessive air circulation that is both expensive and noisy.
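To illustrate why liquid is so much more effective than air, the sketch below estimates the water flow needed to remove a given rack load, using the basic relation Q = m_dot * c_p * delta_T. The 60 kW load and 10 K temperature rise are assumed example values.

```python
WATER_CP_J_PER_KG_K = 4_186      # specific heat capacity of water
WATER_DENSITY_KG_PER_L = 1.0

def coolant_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
    """Litres per minute of water needed to carry heat_load_kw at a given temperature rise."""
    kg_per_s = heat_load_kw * 1_000 / (WATER_CP_J_PER_KG_K * delta_t_k)
    return kg_per_s / WATER_DENSITY_KG_PER_L * 60

# Example: a 60 kW rack with a 10 K rise across the cold plates needs only ~86 L/min of water.
print(round(coolant_flow_lpm(60, 10), 1))
```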

In summary, cooling efficiency has always been critical to data centre resilience and uptime, as well as to energy cost optimisation. But it now matters more than ever, even though next-generation servers are capable of operating at higher temperatures than previous solutions.

Looking to the future, with many conventional data centres consuming thousands of gallons of water a day, operators will be striving to optimise Water Usage Effectiveness (WUE) – not just PUE. For example, one such initiative will be rainwater harvesting to lower the WUE on adiabatic-evaporative systems.
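WUE is expressed as litres of site water consumed per kWh of IT energy. A minimal sketch of the metric, treating harvested rainwater as offsetting the metered supply as the initiative above implies, with all figures assumed purely for illustration:

```python
def wue(site_water_litres: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of site water per kWh of IT energy."""
    return site_water_litres / it_energy_kwh

# Illustrative annual figures (assumed, not Vantage data).
annual_it_kwh = 50_000_000
mains_water_l = 60_000_000
harvested_rain_l = 10_000_000

print(round(wue(mains_water_l, annual_it_kwh), 2))                      # 1.2 L/kWh
print(round(wue(mains_water_l - harvested_rain_l, annual_it_kwh), 2))   # 1.0 L/kWh with harvesting
```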

There will also be growing innovation around on-site renewable energy generation for power not dedicated to cooling and operating servers, along with the use of process heat for all office and back-of-house environmental control, helping drive down facility PUE.
