Cooling data centres: treat the cause

Short of submerging it in icy waters, relocating it to colder climes such as Norway, or feeding its waste heat into district heating, a data centre typically requires extensive cooling systems to extract waste heat. It’s no surprise, then, that the data centre cooling market is projected to reach a value of $15.7 billion by 2025.

These cooling systems account for anywhere between 30% and 50% of a data centre’s total electricity use and, according to the IEA, data centres and data transmission networks make up 1-1.5% of global electricity use. Given that Europe’s energy crisis is far from over, with many analysts predicting that it could last until 2024, reducing heat output may hold the answer to lowering costs and emissions.

Heat and power

The relationship between power consumption and heat output is linear: nearly all of the power consumed by an IT device is converted into heat. This means that if you can reduce power consumption, you directly reduce heat output, and consequently the amount of cooling capacity required.

For example, take a common piece of hardware used in nearly all data centres: an optical transceiver module, such as the 100 Gigabit Ethernet QSFP28 100G CWDM4. This device consumes around 3.5 W, and we have customers that purchase more than 10,000 units every year. We recently developed a third generation of this device that reduces power consumption to 2.5–2.7 W. If we multiply this 0.8 W power saving by 10,000 units running 24 hours a day, 365 days a year, annual energy consumption is reduced by at least 70 MWh.
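As a rough back-of-the-envelope check, the arithmetic can be sketched in a few lines of Python (the 0.8 W per-unit saving, the 10,000-unit fleet and continuous operation are the figures assumed above):

# Annual energy saving from upgrading 10,000 transceivers (illustrative figures from the text)
units = 10_000                    # transceivers purchased per year
saving_per_unit_w = 3.5 - 2.7     # watts saved per transceiver (worst case of the 2.5-2.7 W range)
hours_per_year = 24 * 365

annual_saving_mwh = units * saving_per_unit_w * hours_per_year / 1_000_000
print(f"Annual energy saving: {annual_saving_mwh:.0f} MWh")   # ~70 MWh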

Scaling sustainably

However, there are even bigger heat savings to be made. As well as reducing the power consumption per device, data centre managers and optical network engineers should look at ways of reducing the total number of devices on the network altogether.

As bandwidth requirements increase, so does the amount of rack space used. The problem is that, because most data centres are still air cooled, racks are intentionally underutilised to prevent them from overheating, so every increase in bandwidth drives an unsustainable expansion of floor space in the coming years.

According to Research and Markets, the global data centre market is expected to grow by 73% over the next four years. Key drivers of this include development of hyperscale facilities of over 200 MW, as well as growth in developing markets, and the introduction of advanced liquid cooling technologies.

Trends like immersion liquid cooling and making data centres ‘sustainable by design’ will become more cost effective over time, but they are currently capital-intensive undertakings: smaller hyperscale facilities can cost $200 million, while the largest sites incur costs of around $1 billion or more. In the short term, operators can focus on improving power efficiency and reducing the complexity of their network architectures.

For example, given that a standard switch has 32 QSFP28 ports, installing 10,000 100 G transceivers would require 313 switches. We can reduce the number of switches drastically by upgrading from 100 G to 400 G transceivers: the same total bandwidth needs only 2,500 transceivers, and therefore just 79 switches.
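Sketching that consolidation the same way (the 32-port switch and the four-to-one port consolidation are the assumptions stated above):

import math

transceivers_100g = 10_000
ports_per_switch = 32

switches_100g = math.ceil(transceivers_100g / ports_per_switch)    # 313 switches for 100 G
transceivers_400g = transceivers_100g // 4                         # same bandwidth with 2,500 x 400 G units
switches_400g = math.ceil(transceivers_400g / ports_per_switch)    # 79 switches for 400 G
print(switches_100g, switches_400g)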

In terms of energy, a 100 G switch has a power consumption of around 600 W, so replacing four of them (2.4 kW) with one 400 G switch (1.3 kW) saves 1.1 kW. Across the whole estate, 313 switches at 600 W draw around 188 kW, while 79 switches at 1.3 kW draw around 103 kW, a saving of roughly 85 kW. Multiplied by 24 hours a day and 365 days a year, that reduces energy consumption by roughly 750 MWh. Add the earlier saving per transceiver and we’re looking at a total annual saving of more than 800 MWh from these two small changes.
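Continuing the same sketch with the per-switch power figures quoted above (the 600 W and 1.3 kW values are taken from this article; the totals are simply recomputed from them):

switches_100g, switches_400g = 313, 79       # switch counts from the previous sketch
switch_100g_kw, switch_400g_kw = 0.6, 1.3    # per-switch power draw quoted above
hours_per_year = 24 * 365

saving_kw = switches_100g * switch_100g_kw - switches_400g * switch_400g_kw   # ~85 kW
annual_saving_mwh = saving_kw * hours_per_year / 1_000
print(f"Switch consolidation saves roughly {annual_saving_mwh:.0f} MWh per year")   # ~745 MWh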

Not only would this significantly reduce the cooling capacity required, it would also cut power consumption considerably, which is especially important given the current energy crisis in Europe.

Another benefit of switching to next-generation transceivers is scalability and futureproofing. Switching from 100 G CWDM4 technology to Single Lambda technology such as 100 G FR1 allows you to connect a 100 G device directly to a 400 G transceiver like a QSFP-DD 400G 4FR1.

Because this Single Lambda technology is also capable of supporting the 800 G and 1.6 T speeds we expect to see in the future, the hardware will have a longer usable life and reduce e-waste, while remaining backwards compatible with current 100 G devices. This allows data centres to scale rack space efficiently in the coming years, keeping up with higher bandwidth requirements while keeping cooling demand to a minimum.

Marcin Bala
CEO at Salumanus Ltd
