
Kao Data: More than a feeling

A trillion-dollar industry must be driven by more than feelings, says Paul Finch, COO, Kao Data, as he discusses why following the proper standards when it comes to cooling your data centre will yield a far more efficient and effective facility.

‘Feelings, nothing more than feelings…’ The words of the old song could sum up many data centre operators, especially when it comes to decisions on cooling these multi-million-pound construction projects.

Traditionally, and even today, there are facility managers and data centre directors who believe that a cold IT space is a good IT space. It turns out that their instincts are wrong: over-cooling is not only ineffective, but also inefficient.

More and more organisations are taking heed of the latest research into technology reliability and energy-efficiency performance that underpins the evolving global guidelines, standards, environmental policy and simple economics. These changes signal a departure from the embedded data centre mindset that mechanical cooling is the default response for colocation data centre design and operations.

Arguably, as far back as 2004, the case for what were then known as “close-control” environments started to be eroded, driven by the creation of the Thermal Guidelines and Environmental Classes, which have continued to evolve over the last 15 years.

Change came again in 2011, when ASHRAE widened the Environmental Classes, introducing the Allowable range. A key consideration was the impact that increased server inlet temperatures would have, not only on energy efficiency but, more importantly, on server reliability, and in turn on data centre availability.

The ASHRAE TC9.9 guidelines, Green Grid initiatives such as PUE (and other metrics), and other energy-reduction programmes like the EU Code of Conduct (EuCoC) have created the opportunity to operate data centres in a way that better meets the needs of the servers, storage and networking equipment they are designed to support, and increases their effectiveness.

The correct application of the ASHRAE TC9.9 Environmental Classes, for example, coupled with the appropriate cooling technologies, provides a real opportunity to deliver data centre operations that require no mechanical cooling across many geographic locations around the world.
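
To illustrate how these envelopes translate into day-to-day operational logic, the short sketch below checks a measured server inlet temperature against indicative ASHRAE recommended and allowable dry-bulb ranges. The figures used are the commonly cited 2011 values and are included for illustration only; humidity limits are omitted and the current TC9.9 guidance should always be consulted.

```python
# Illustrative check of a server inlet temperature against the ASHRAE TC9.9
# envelopes. The ranges below are the commonly cited 2011 dry-bulb values
# (degrees C) and are an assumption -- confirm against current guidance.

RECOMMENDED = (18.0, 27.0)           # recommended range across the classes
ALLOWABLE = {                        # allowable dry-bulb ranges per class
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}

def classify_inlet(temp_c: float, env_class: str = "A2") -> str:
    """Return whether an inlet temperature sits in the recommended range,
    only in the allowable range, or outside the chosen envelope."""
    lo_r, hi_r = RECOMMENDED
    lo_a, hi_a = ALLOWABLE[env_class]
    if lo_r <= temp_c <= hi_r:
        return "recommended"
    if lo_a <= temp_c <= hi_a:
        return "allowable (excursion)"
    return "outside envelope"

if __name__ == "__main__":
    for t in (21.0, 30.0, 38.0):
        print(f"{t:.1f} C -> {classify_inlet(t, 'A2')}")
```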

 

Image 1. Green Grid Free Cooling Map – EU – Courtesy of Green Grid

This industry shift continues to confound many, yet it allows operators to reduce short-term capital expenditure and longer-term operational expenditure, increase reliability, cut maintenance and servicing costs, and benefit from ongoing operational savings from an optimised data centre operating environment that uses less energy. It is an approach that is effective for many situations and locations in temperate climates.

Data centres live or die based on their up-time and availability, therefore equipment reliability is paramount. That is part of the conundrum: how, as an industry, do we increase reliability whilst reducing engineering complexity?

Working with the ASHRAE TC9.9 guidelines, server, storage and networking manufacturers have for some time been engineering their equipment not just to perform across the full ‘Recommended’ range, but also to operate in the wider ‘Allowable’ environmental range for much longer periods. This allows IT equipment to operate more efficiently, and we gain two outcomes:

  1. The data centre uses less absorbed power for cooling, which reduces energy costs.
  2. As the air inlet temperature increases, so does the free-cooling opportunity, and when this is exploited with appropriate cooling technologies it can eliminate the need for any mechanical refrigeration (see the sketch below).
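
To make the second point concrete, here is a minimal sketch that counts how many hours in a set of hourly ambient dry-bulb readings could be served without mechanical refrigeration, given a supply-air setpoint and an assumed heat-exchanger approach temperature. Both figures are assumptions for illustration, not design values.

```python
from typing import Iterable

def free_cooling_hours(ambient_c: Iterable[float],
                       supply_setpoint_c: float = 24.0,
                       approach_c: float = 3.0) -> int:
    """Count hours where outside air can meet the supply setpoint.

    A simple rule of thumb: free cooling is available whenever the ambient
    dry-bulb plus an assumed approach (temperature rise across the heat
    exchanger) stays at or below the supply-air setpoint. Real designs
    would also consider humidity and wet-bulb conditions.
    """
    return sum(1 for t in ambient_c if t + approach_c <= supply_setpoint_c)

if __name__ == "__main__":
    # Hypothetical hourly readings for a temperate climate (one day shown).
    sample_day = [8, 7, 7, 6, 6, 7, 9, 11, 13, 15, 17, 18,
                  19, 20, 20, 19, 18, 16, 14, 12, 11, 10, 9, 8]
    hours = free_cooling_hours(sample_day, supply_setpoint_c=24.0)
    print(f"{hours} of {len(sample_day)} hours available for free cooling")
```

Raising the supply setpoint within the recommended or allowable range directly increases that count, which is precisely the free-cooling opportunity described above.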

For specific applications, some IT equipment manufacturers even allow specified time-limited excursions to environmental temperatures of up to 45°C without affecting the manufacturer’s warranty.

In real-life environments, the primary factor determining system failure rate is component temperature. Equipment improvements now in place provide high reliability and a reduction in the risk of device thermal shutdown, which has caused major data centre outages over the past few years.

Graph 1. ASHRAE TC9.9 Environmental Classes – Courtesy of ASHRAE

Delivering lower server inlet temperatures can require large, complex and expensive plant and cooling infrastructure. The more equipment on site, the greater the overall complexity and the lower the likely reliability. All equipment requires maintenance and servicing, and it is sensible to assume that at some point during the life-cycle it will fail.

Furthermore, energy is the biggest operational expenditure for a data centre, and beyond the IT load itself, mechanical cooling represents the largest proportion of energy use. It therefore represents the greatest opportunity for energy and cost savings. Correspondingly, reducing the energy used by the data centre infrastructure effectively releases that capacity for more IT utilisation.
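
As a back-of-envelope illustration of that opportunity (all figures hypothetical), the sketch below shows how a PUE improvement translates into annual facility energy and cost.

```python
def facility_energy_mwh(it_load_kw: float, pue: float, hours: float = 8760.0) -> float:
    """Total annual facility energy (MWh) for a given IT load and PUE.

    PUE = total facility power / IT power, so total power = IT power * PUE.
    """
    return it_load_kw * pue * hours / 1000.0

if __name__ == "__main__":
    it_load_kw = 1_000.0      # hypothetical 1 MW IT load
    price_per_mwh = 120.0     # hypothetical energy price, GBP per MWh

    before = facility_energy_mwh(it_load_kw, pue=1.8)
    after = facility_energy_mwh(it_load_kw, pue=1.2)
    saving_mwh = before - after
    print(f"Annual energy: {before:,.0f} MWh at PUE 1.8 vs {after:,.0f} MWh at PUE 1.2")
    print(f"Saving: {saving_mwh:,.0f} MWh (~GBP {saving_mwh * price_per_mwh:,.0f} per year)")
```

For a hypothetical 1MW IT load, moving from a PUE of 1.8 to 1.2 saves in the order of 5,000 MWh a year at these assumed figures.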

Reducing complexity is critical to efficiency and sustainability. Achieving a ‘flat PUE response’ from 25% to 100% load drives up availability and uptime, and demonstrates that a low PUE, down to 1.2 or even 1.0, is achievable; it is not simply a marketing tool but a fiscal responsibility for data centre operators.

In comparison to traditional chilled-water or refrigerant-based systems, indirect evaporative cooling (IEC) is relatively uncomplicated, although it still requires mechanical ventilation in the form of fans, and heat exchangers with few moving parts.

Heat rejection occurs when the warm return air is passively cooled through contact with a plate that has been evaporatively cooled on the adjacent atmospheric side. A benefit is that no moisture is added to the supply-side air stream as it returns to the data hall, maintaining the humidity within the hall.
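
A common way to reason about this process is wet-bulb effectiveness: the heat exchanger cools the warm return air part of the way towards the ambient wet-bulb temperature. The sketch below applies that relationship; the effectiveness figure is an assumption for illustration rather than a value for any particular product.

```python
def iec_supply_temp_c(return_air_c: float,
                      ambient_wet_bulb_c: float,
                      effectiveness: float = 0.75) -> float:
    """Estimate IEC supply-air temperature using wet-bulb effectiveness.

    T_supply = T_return - eff * (T_return - T_wb_ambient)
    The unit cools the return air towards, but never below, the ambient
    wet-bulb temperature; effectiveness expresses how close it gets.
    """
    return return_air_c - effectiveness * (return_air_c - ambient_wet_bulb_c)

if __name__ == "__main__":
    # Hypothetical summer condition: 35 C return air, 19 C ambient wet bulb.
    supply = iec_supply_temp_c(return_air_c=35.0, ambient_wet_bulb_c=19.0,
                               effectiveness=0.75)
    print(f"Estimated supply air: {supply:.1f} C")  # -> 23.0 C
```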

This approach ensures that precise supply-air inlet conditions can be delivered to support the IT load. However, air movement, even within a confined space, can be chaotic. Our data centre design principles therefore incorporated computational fluid dynamics (CFD) to model the air movement around the IT space. The modelling is complex, but it greatly reduces risk by providing a detailed theoretical model of how the technical space will perform and react to dynamic loads.
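
Full CFD is well beyond a short example, but the toy sketch below illustrates the underlying idea of that modelling step: discretise a slice of the hall into a grid, fix temperatures at a cool supply boundary and at a rack row acting as a heat source, and iterate towards a steady-state temperature field. It ignores airflow momentum and turbulence entirely, so it is an illustration of the approach rather than anything a design would rely on.

```python
import numpy as np

def toy_temperature_field(nx: int = 40, ny: int = 20,
                          supply_c: float = 24.0,
                          rack_exhaust_c: float = 38.0,
                          iterations: int = 2000) -> np.ndarray:
    """Very simplified steady-state temperature field for a slice of an IT hall.

    Solves a Laplace-style diffusion problem by Jacobi iteration: the left
    boundary is held at the supply temperature and a block in the middle of
    the grid represents a rack row held at its exhaust temperature. Real CFD
    also models velocity, pressure and turbulence; this does not.
    """
    t = np.full((ny, nx), supply_c)
    rack = (slice(ny // 3, 2 * ny // 3), slice(nx // 2, nx // 2 + 2))
    for _ in range(iterations):
        # Jacobi update: each interior cell becomes the mean of its neighbours
        t[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1] +
                                t[1:-1, :-2] + t[1:-1, 2:])
        t[:, 0] = supply_c        # cool supply boundary on the left
        t[rack] = rack_exhaust_c  # rack row held at its exhaust temperature
    return t

if __name__ == "__main__":
    field = toy_temperature_field()
    print(f"Hottest cell: {field.max():.1f} C, coolest: {field.min():.1f} C")
```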

Diagram 1 – IEC Layout – Courtesy of FlaktGroup

Modelling also assists with rack layout, and our results demonstrated that hot aisle containment (HAC) systems, because they stop hot and cold air mixing in and around the cabinets, offered the most efficient design for controlled air-flow circulation around the IT hall.

HAC draws cool air into the front of the cabinets and through the IT equipment, then expels the hot air up into the ceiling space, where it returns to the IEC system and the heat is rejected.

IEC, when used effectively, allows data centre designers to match the environmental conditions in the data halls to the free-cooling opportunity available within their specific geography. This ensures that evaporative cooling is effective for far longer throughout the year.

The latest IT equipment technology (servers, storage and networking equipment) is developed to operate within the environmental parameters characteristic of non-mechanical cooling systems such as IEC, even with the maximum predicted annual temperature excursions. For many locations and business strategies, the capital cost of installing a traditional chilled-water and refrigerant-based system can be avoided.

Conclusion

The data centre market has become increasingly competitive, and the industry continues to expand into untouched regions whilst its growth consumes increasing amounts of energy. Economic, political and social pressures demand more efficiency, and energy is a large component of the data centre cost structure.

Developments in cooling technology and the correct application of techniques offer a transparent path to designing and implementing non-mechanical cooling strategies, which reduce complexity, increase reliability and maximise the operational hours of minimal-cost cooling.

The correct application of the ASHRAE Environmental Classes and broader thermal guidelines will drive PUE lower, not only at peak load but consistently, delivering a sub-1.2 PUE and a far smaller impact from a sustainability perspective.

Our industry is no longer reliant on feeling our way forward to reduced energy use. We have standard processes that provide scientific support to more efficient and effective data centre businesses.
