
Hot aisle containment – keeping data centres cool

Image: Adobe Stock / sdecoret

Data centre designs continue to evolve, and more facilities are now being built with slab floors and overhead cabling, often in line with Open Compute Project (OCP) recommendations.

Rather than installing expensive, inflexible ducting to supply cooling from overhead diffuser vents, engineers are seeing efficiency and sustainability gains by flooding the room with cold supply air from perimeter cooling units, CRAC/CRAH galleries, or other cooling sources (rooftop cooling units, fan walls, etc.).

Hot aisle containment (HAC) then separates the cold supply air from the hot exhaust air, and a plenum ceiling returns the exhaust air to the cooling units. This design is also gaining popularity due to its simplicity and flexibility.

Containment options

An optimised containment system is designed to provide a complete cooling solution with a sleek supporting structure that serves as the infrastructure carrier for the busway, cable tray, and fibre. Such a system should be completely ground-supported and for that, a simple flat slab floor is all that is needed.

The goal of any containment system is to improve the intake air temperatures and deliver cooling efficiently to the IT equipment, thereby creating an environment where changes can be made that will lower operating costs and increase cooling capacity. Ideally, the containment system should easily accomplish this while allowing both existing and new facilities, including large hyperscale data centres, to build and scale their infrastructure quickly and efficiently.

Traditional methods for supporting data centre infrastructures such as containment, power distribution, and cable routing can be costly and time-consuming. They require multiple trades working on top of each other to accomplish their work. An optimised containment structure provides a simple platform for rapid deployment of infrastructure support and aisle containment. For example, all cable pathways and the busways can be installed at the same time as the containment, allowing the electricians to energise the busway when needed, such as when the IT equipment is installed or the IT footprint expands.

The containment system should also give the end-user the ability to deploy small, standardised, and replicable pods. This limits the upfront capital spent compared with building out entire data halls, while providing all the necessary infrastructure and allowing near-limitless scaling should the situation require it.

When selecting a containment solution, the sealing or leakage performance of the system (typically expressed as a percentage) is essential; it's often said that leakage is the nemesis of all containment systems. Users should reasonably expect a containment solution to have no more than approximately 2% leakage. This all but eliminates both bypass air and hot recirculation air, which raise server inlet temperatures on IT equipment, resulting in superior efficiency of the cooling system.
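To put that 2% figure in context, the minimal sketch below shows how the leakage percentage translates into cold supply air that never reaches the IT intakes. The supply airflow and leakage fractions are hypothetical values for illustration only, not figures from the article.

```python
# Rough illustration of containment leakage (hypothetical figures only).
# "Leakage" here is the fraction of cold supply air that escapes the
# containment boundary as bypass or recirculation instead of cooling IT load.

def wasted_airflow(supply_cfm: float, leakage_fraction: float) -> float:
    """Cold supply air (CFM) lost to bypass/recirculation at a given leakage fraction."""
    return supply_cfm * leakage_fraction

supply_cfm = 100_000  # total cold supply airflow in CFM (example value)

for leakage in (0.02, 0.10, 0.25):  # the ~2% target vs. poorer-sealing systems
    print(f"{leakage:.0%} leakage -> {wasted_airflow(supply_cfm, leakage):,.0f} CFM "
          "of supply air never reaches the IT intakes")
```

The point of the comparison is simply that every percentage point of leakage is cold air the cooling units must produce without it doing any useful work.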

There's another important element to this design: the plenum ceiling return. The ceiling and grid system chosen should have minimal leakage, to reduce and ideally eliminate bypass air, where cold supply air enters the plenum ceiling return instead of contributing to the cooling of the IT equipment.

Maximise energy efficiency and sustainability

We’ve mentioned the importance of maximising energy efficiency and sustainability. Flooding the data centre with cold supply air for the IT equipment and containing the hot aisles so that hot exhaust air returns to the cooling units (or is rejected by some other method) is a simple, easy, and flexible design. All new data centres should consider this for future deployments.

Another benefit of this (and most HAC designs) is that it’s easier to achieve airflow and cooling optimisation. In a perfect world, we would simply match our total cooling capacity (supply airflow) to our IT load (demand airflow) and increase cooling unit set points as high as possible. However, there’s inherent leakage in any design, including within the IT racks. The goal is to minimise the leakage as much as possible which is why the containment and ceiling structure is crucial.

The lower the overall leakage, the less cold supply air is needed. To maximise energy efficiency, therefore, we want to use as little cold supply air as possible while still maintaining positive pressure from the cold aisle(s) to the hot aisle(s). When this is achieved, supply temperatures will be consistent across the server inlets on all racks throughout the data centre.

Because HAC is used, the data centre is essentially one large cold aisle, so the total cold supply airflow should be only slightly higher than the total demand airflow (10%-15% higher should be the goal). This percentage is easily attainable if leakage is kept to a minimum by using a quality containment and ceiling solution, along with good airflow management practices such as installing blanking panels and sealing the rack rails.
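As a rough illustration of the 10%-15% margin, the sketch below estimates demand airflow from IT load using the standard sensible-heat relationship (airflow ≈ heat load / (ρ·cp·ΔT)) and then adds the supply margin. The IT load, server delta-T, and margin values are assumptions chosen for the example, not figures from the article.

```python
# Sizing cold supply airflow against IT demand (illustrative assumptions only).
# Demand airflow comes from the sensible-heat relationship:
#   Q (m^3/s) = P (kW) / (rho * cp * dT), with rho*cp ~= 1.2 kJ/(m^3*K) for air
#   near sea level and typical data centre temperatures.

RHO_CP = 1.2  # approximate volumetric heat capacity of air, kJ/(m^3*K)

def demand_airflow_m3s(it_load_kw: float, delta_t_c: float) -> float:
    """Airflow the IT equipment must draw to reject its heat at a given delta-T."""
    return it_load_kw / (RHO_CP * delta_t_c)

def supply_airflow_m3s(it_load_kw: float, delta_t_c: float, margin: float) -> float:
    """Total cold supply airflow with a margin over demand (10-15% per the article's goal)."""
    return demand_airflow_m3s(it_load_kw, delta_t_c) * (1.0 + margin)

it_load_kw = 1_000   # example IT load
delta_t_c = 12.0     # example temperature rise across the servers

print(f"Demand airflow:        {demand_airflow_m3s(it_load_kw, delta_t_c):6.1f} m^3/s")
for margin in (0.10, 0.15):
    print(f"Supply @ {margin:.0%} margin: {supply_airflow_m3s(it_load_kw, delta_t_c, margin):6.1f} m^3/s")
```

In a well-sealed HAC deployment, the margin mostly covers residual leakage through the containment, ceiling grid, and rack openings; the worse the sealing, the larger that margin has to grow.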

To drive further efficiencies, operators can raise the cooling set points while maintaining server inlet temperatures at or below the ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended specification for cooling IT equipment (80.6°F/27°C). This also results in higher equipment reliability and a higher MTBF (Mean Time Between Failures).
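A simple check like the sketch below can verify that raised set points still keep server inlets within the ASHRAE recommended upper limit of 27°C (80.6°F). The inlet readings used here are hypothetical sample values for illustration.

```python
# Check measured server inlet temperatures against the ASHRAE recommended
# upper limit of 27 C (80.6 F). The readings below are illustrative only.

ASHRAE_RECOMMENDED_MAX_C = 27.0  # 80.6 F

def c_to_f(temp_c: float) -> float:
    """Convert Celsius to Fahrenheit for reporting."""
    return temp_c * 9.0 / 5.0 + 32.0

inlet_readings_c = [24.5, 25.8, 26.4, 27.6]  # hypothetical rack inlet temperatures

for rack, temp_c in enumerate(inlet_readings_c, start=1):
    status = "OK" if temp_c <= ASHRAE_RECOMMENDED_MAX_C else "ABOVE RECOMMENDED"
    print(f"Rack {rack}: {temp_c:.1f} C ({c_to_f(temp_c):.1f} F) -> {status}")
```

With low-leakage containment and consistent inlet temperatures across the racks, set points can typically be pushed closer to that limit with less risk of localised hot spots.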

It’s been said that the best energy saved is the energy we don’t consume, and that’s especially true in the data centre industry – even more so as we continue to progress towards our goal to become more sustainable and lower our carbon footprint.

Conclusion

The data centre industry is constantly evolving, and so should our designs. Energy efficiency should continue to be a top concern for data centre operators, both now and in the future. Data centre designers and owners should carefully evaluate all options rather than simply relying on or copying old projects. The attitude of 'that's just the way it is because it's always been that way and there's no reason to question it' has no place in the industry.

Further, flooding the data centre with cold supply air and using a containment system, regardless of the cooling method, results in a simple, flexible design that is both highly energy efficient and sustainable. It will make both new and legacy data centres greener, and a greener data centre is also a more cost-effective one.

Gordon Johnson, Senior CFD Manager, Subzero Engineering
