
Are underwater data centres sinking?


With Microsoft recently calling it quits on its undersea data centre project, Mark Seymour, Distinguished Engineer at Cadence, explores the key benefits of liquid cooling on dry land.

Almost a decade ago, Microsoft deployed the first undersea data centre prototype. It was a groundbreaking move, followed by others seeking new methods for keeping technological infrastructure cool amid rising cooling costs, growing rack densities, and heightened environmental pressures. Fast-forward nine years to this June, and Microsoft returned to dry land, calling it a day on its undersea experiment.

Undersea data centres may not be the golden answer to cooling – they come with their fair share of logistical hurdles. However, the challenges they were created to tackle haven’t gone away, and the central idea of using liquid rather than air to prevent facilities from overheating is a good one. Liquid cooling is a powerful mechanism for enhancing energy efficiency, minimising operational costs, and empowering facilities to repurpose excess heat. This is even more so when applied within land-based data centres and implemented and managed with a digital twin that virtually replicates the physical facility. Let’s dive into the benefits of liquid cooling in more detail.

You can trust liquid

Central processing units (CPUs), graphics processing units (GPUs), and other high-power components are the beating heart of data centres – fail to keep them cool, and you’ll have an outage on your hands. However, keeping them at a safe temperature is easier said than done: they produce significant amounts of heat, and air cooling is increasingly struggling with the task.

Air cooling can manage heat loads up to about 20 kW per rack, but beyond this, a blend of direct liquid cooling and precision air cooling is more efficient and economical.

Liquid has far greater heat capacity than air – water’s specific heat alone is approximately 4.2 times that of air per kilogram. Because water is also roughly 800 times denser than air, a given volume of water can absorb about 3,500 times as much heat as the same volume of air. This means a small amount of liquid, placed in close proximity to IT equipment, can extract heat far more effectively than air.
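The per-kilogram and per-volume figures above can be checked with a quick back-of-the-envelope calculation, using textbook properties of water and air at roughly room temperature (the property values here are standard reference figures, not taken from the article):

```python
# Back-of-the-envelope comparison of how much heat a unit mass and a
# unit volume of water can absorb versus air, using standard
# room-temperature properties (assumed reference values).

cp_water = 4186.0   # specific heat of water, J/(kg*K)
cp_air = 1005.0     # specific heat of air, J/(kg*K)
rho_water = 998.0   # density of water, kg/m^3
rho_air = 1.20      # density of air at ~20 C, kg/m^3

# Per-kilogram ratio: the "approximately 4.2 times" figure
per_kg = cp_water / cp_air

# Per-unit-volume ratio: volumetric heat capacity (density * specific heat)
per_volume = (rho_water * cp_water) / (rho_air * cp_air)

print(f"per kg:     {per_kg:.1f}x")      # ~4.2x
print(f"per volume: {per_volume:.0f}x")  # ~3,500x
```

The ~3,500x volumetric figure is simply the ~4.2x specific-heat ratio multiplied by the ~800x density ratio, which is why such a small flow of liquid can carry away so much heat.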

Numerous benefits are unlocked by being able to remove heat more easily from components:

  • It allows for high-density server racks, demanding workloads, and steadily rising power densities that have pushed air cooling to practical limits.
  • Increased liquid temperatures in the data centre cooling loop open up the possibility of free cooling (although, as power densities rise, this may be negated).
  • Greater return temperatures, due to the liquid being separated from the occupied environment, mean more heat recovery and reuse potential.
  • Liquid cooling requires less pump energy than air cooling requires fan energy to dissipate the equivalent heat load.
  • Key technology, including CPUs and GPUs, can work at optimal temperatures, avoiding overheating and performance challenges – an essential requirement as data centre heat loads continue to increase.

In short, through liquid cooling, data centre managers can ensure their powerful equipment is running optimally while contributing to sustainability goals. However, there are barriers to implementation, not least psychological. 

Barriers to implementation

We all know that electrically conductive liquids and electronics don’t mix; if they do, the consequences can be severe. In the past, when every connection was physically plumbed in, liquid systems posed greater leak risks. Now the risk has been significantly reduced, thanks to dripless quick connectors and negative-pressure systems, which prevent fluid from escaping into the data centre even if a leak occurs. However, people remain wary of the idea of using liquid cooling. This hurdle is beginning to be overcome, but it’s not the only challenge.

When introducing liquid cooling to an existing air-cooled facility, it’s vital to carefully coordinate the two systems for efficiency. This is logistically complex and usually requires significant financial investment. Implementing liquid cooling in a brand-new data centre is simpler, but still requires more installation work than air. For instance, this work can include creating fluid distribution systems and connections as well as electrical ones, which can lead to hidden costs.

Further, resiliency in the case of failure (e.g., power failure) may be even more challenging than with air, resulting in a fast rate of temperature rise and risk to the IT. What’s more, there’s more than one type of liquid cooling, each with its own benefits and challenges.

Cold plate and immersion cooling

At present, the most common form of liquid cooling is cold plate technology. This ‘direct to chip’ or ‘hybrid’ method sees a coolant passed through a cold plate that’s physically attached to hot, power-hungry components in IT equipment. It’s highly effective at removing heat from CPUs and GPUs, meaning they can run faster at lower temperatures, resulting in more compute per watt and greater energy efficiency. However, it can’t capture heat from IT components that don’t have cold plates, so approximately 10-20% of heat still needs to be removed by air. As power densities rise, this residual heat can still present a cooling load for air systems of several, even tens of, kilowatts per rack.
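The arithmetic behind that residual air load is simple. The rack powers below are hypothetical examples chosen for illustration, not figures from the article; the 10-20% air fraction is the range quoted above:

```python
# Residual air-cooling load for a cold-plate rack: the fraction of
# rack power not captured by cold plates must still be removed by air.
# Rack power figures here are hypothetical illustrations.

def residual_air_load_kw(rack_power_kw, air_fraction):
    """Heat (kW) a cold-plate-cooled rack still rejects to air."""
    return rack_power_kw * air_fraction

for rack_kw in (50, 100, 150):  # hypothetical rack densities
    low = residual_air_load_kw(rack_kw, 0.10)
    high = residual_air_load_kw(rack_kw, 0.20)
    print(f"{rack_kw} kW rack -> {low:.0f}-{high:.0f} kW to air")
```

Even at a modest 10% air fraction, a dense rack leaves an air-cooling load comparable to an entire legacy rack – which is why the two systems must be coordinated rather than treated in isolation.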

By contrast, in an immersion cooling system – where IT equipment is submerged in dielectric liquid – less heat dissipates into the surrounding air, as all the components are in contact with the fluid that’s removing heat. But it has its challenges: the coolant interacting with the electronics can shorten their lifespan. Moreover, as immersion cooling generally depends on buoyancy-driven flow, its efficiency may decline as power densities rise.

Cold plate and immersion cooling can both use a single- or two-phase cooling approach. The latter uses a liquid that boils at operational temperatures and pressures, capitalising on the latent heat of evaporation. This is great for high-density applications, but the working fluids typically carry a global warming potential, which can be at odds with the sustainability directives data centres are working towards.

Utilising a digital twin

Deciding which kind of liquid cooling is suitable for a given data centre can be complicated. Thankfully, digital twins can assist in the decision-making process.

Digital twins empower operators to trial how and where to implement liquid cooling while giving oversight of metrics they can’t usually see or measure, such as cooling efficiency. This means they can be used to assess the possible advantages and disadvantages of each cooling method and help tailor the selected solution before implementation.

When liquid cooling is in place, digital twins can also proactively highlight possible improvements and prevent systems from being overwhelmed. For instance, they can assess the impact of hardware changes and heightened server density on cooling infrastructure to prevent IT from slowing down and capacity and resilience from being lost.

Embracing the future

Liquid cooling is becoming a data centre must-have as both power densities and environmental pressures rise. Capitalising on ‘free’ cooling with undersea data centres was an enticing option; however, the logistical challenges have highlighted the comparative benefits of above-ground liquid cooling. To safely unlock these advantages, operators should consider implementing a digital twin that can give them confidence in their liquid cooling transition.

Mark Seymour
Distinguished Engineer at Cadence
