
Blowing the roof off the conventional data centre model


“A gale of creative economic destruction is blowing the roof off the conventional data centre economic model, revealing something altogether different”, says Alan Beresford, MD of data centre cooling experts EcoCooling. We asked Alan to elaborate upon this rather mysterious statement.

A whole raft of new applications is taking up an ever larger share of the data centre sector: AI, IoT, cloud-based hosting and blockchain processing for more than 1,300 different digital currencies are all increasing the need for High Performance Computing (HPC) equipment.

You will no doubt have heard about tech giants Facebook, Microsoft, Google and Amazon Web Services building hyper-scale data facilities. These are a far cry from conventional data centres – a new breed based around efficient compute technologies, built specifically for the service each operator is providing.

These hyper-scale facilities have smashed conventional metrics. They achieve very high server utilisations, with PUEs (power usage effectiveness) as low as 1.05 to 1.07 – a million miles from the average of 1.8-2.0 across conventional centres. To achieve this, refrigeration-based cooling is avoided at every opportunity.
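For readers less familiar with the metric, PUE is simply the total energy drawn by the facility divided by the energy delivered to the IT equipment. The short sketch below illustrates the arithmetic; the load and overhead figures are assumed for illustration, not measurements from any particular site.

```python
def pue(it_load_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness = total facility power / IT equipment power."""
    return (it_load_kw + overhead_kw) / it_load_kw

# Illustrative figures only: a 1 MW IT load with different cooling overheads.
print(round(pue(1000, 60), 2))    # fresh-air cooled hyper-scale style  -> 1.06
print(round(pue(1000, 900), 2))   # conventional refrigeration-based    -> 1.9
```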

Built on the back of new HPC applications (Bitcoin mining and the like), smaller entrepreneurial setups have adopted these high-efficiency, extreme engineering practices. They are no longer the preserve of the hyper-scale operators, and that is turning the economics of data centre construction and operation on its head.

Intensive computing

CPU-based servers are highly flexible, able to run applications on top of a general-purpose operating system such as Unix or Windows. But being a relatively slow ‘jack of all trades’ makes them ‘masters of none’ – unsuitable for these HPC applications.

The hyper-scale centres use a variety of server technologies:

GPU: Graphics Processing Unit servers, based on the graphics cards originally designed for rendering images.

ASIC: Application Specific Integrated Circuits are super-efficient, with hardware optimised to do one specific job, but cannot normally be reconfigured. The photo (pic 1) shows an AntMiner S9 ASIC, which packs 1.5kW of compute power into a small ‘brick’.

FPGA: Field-Programmable Gate Arrays. Unlike ASICs, they can be reconfigured by the end user after manufacturing.

Extreme engineering

In the conventional enterprise or co-location data centre, you’ll see racks with power feeds of 16A or 32A (roughly 4kW and 8kW capacities respectively), although the typical load is more like 2-3kW.
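As a rough check on those figures, the capacity of a feed follows from its current rating and the supply voltage. The sketch below assumes a nominal 230V single-phase supply and ignores power factor, purely for illustration.

```python
def feed_capacity_kw(current_a: float, voltage_v: float = 230.0) -> float:
    """Approximate single-phase feed capacity in kW (power factor ignored)."""
    return current_a * voltage_v / 1000.0

print(round(feed_capacity_kw(16), 1))   # ~3.7 kW, commonly rounded to 4 kW
print(round(feed_capacity_kw(32), 1))   # ~7.4 kW, commonly rounded to 8 kW
```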

Conventional data centres are built with lots of resilience: A+B power, A+B comms, and N+1, 2N or even 2N+1 systems for refrigeration-based cooling – Tier III and Tier IV on the Uptime Institute scale.

What we’re seeing with these new hyper-scale centres, however, is that HPC servers with densities of 75kW per rack are regularly deployed – crazy levels on a massive scale. And there’s no Tier III or Tier IV; in fact, there’s usually no redundancy at all except maybe a little on comms. The cooling is just fresh air.

Standard racks are not appropriate for this level of extreme engineering. Instead, there are walls of equipment up to 3.5m high – stretching as far as the eye can see.
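To put a 75kW rack in context, the fresh-air flow needed to carry that heat away can be estimated from the standard sensible-heat relation. The temperature rise used below is an assumed figure for illustration, not a design value from any of the facilities mentioned.

```python
def cooling_airflow_m3s(heat_kw: float, delta_t_c: float) -> float:
    """Approximate airflow (m3/s) to remove heat_kw with a delta_t_c air temperature
    rise, using air density ~1.2 kg/m3 and specific heat ~1.005 kJ/(kg*K)."""
    return heat_kw / (1.2 * 1.005 * delta_t_c)

# Assumed 12°C rise across the equipment (illustrative only):
print(round(cooling_airflow_m3s(75, 12), 1))   # ~5.2 m3/s for a 75 kW HPC rack
print(round(cooling_airflow_m3s(8, 12), 2))    # ~0.55 m3/s for a conventional 8 kW rack
```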

The economics of hyper-scale

Whereas we all have our own set of metrics for conventional data centres, the crypto guys have only one: TCO (total cost of ownership). This is a single measure that encompasses the build cost and depreciation, and the costs of energy, infrastructure, staff and so on.

They express TCO in ‘cents (€) per kilowatt hour of installed equipment’. In the Nordics, they’re looking at just six to seven cents per kWh, down to around five cents (€) in China.
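To see how such a figure is built up, the hypothetical breakdown below spreads capital over a depreciation period and adds energy and operating costs per kWh delivered to the installed equipment. Every input is an assumption chosen purely for illustration.

```python
def tco_cents_per_kwh(capex_eur: float, depreciation_years: float,
                      annual_opex_eur: float, energy_cents_per_kwh: float,
                      installed_kw: float, utilisation: float) -> float:
    """Rough TCO in euro cents per kWh consumed by the installed IT equipment."""
    kwh_per_year = installed_kw * 8760 * utilisation
    annual_cost_eur = capex_eur / depreciation_years + annual_opex_eur
    return annual_cost_eur * 100 / kwh_per_year + energy_cents_per_kwh

# Hypothetical 1 MW facility: EUR 1m build, 10-year depreciation,
# EUR 100k/year staff and maintenance, 4 c/kWh power, 95% utilisation.
print(round(tco_cents_per_kwh(1_000_000, 10, 100_000, 4.0, 1000, 0.95), 1))  # ~6.4
```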

However, all is not lost for operators in the UK and Europe. These servers and their data are very valuable – tens or hundreds of millions of pounds worth of equipment in each hyper-scale data centre.

As a result, we are already seeing facilities being built in higher-cost countries where equipment is more secure. But they still need to follow the same extreme engineering and TCO principles.

Keep it simple

You can’t build anything complicated for low cost. In this new hyper-scale data world, it’s all about simplicity. Brownfield buildings are a great starting point, particularly sites like former paper and textile mills, where there tends to be plenty of spare power and space.

Those of you who operate data centres will know that only about half the available power is actually used. But worse still, when we get down to the individual server level, some are down to single-digit utilisation percentages.

These guys squeeze all their assets as close to 100% as they can. Almost no capital is spent on any form of redundancy, and direct fresh air is used for cooling.

A new dawn: Prototyping to benefit you

EcoCooling has supplied cooling solutions to one of the most ambitious and potentially significant hyper-scale developments in Europe. The aim of the H2020-funded ‘Boden Type DC One’ project was to build a prototype of the most energy- and cost-efficient data centre in the world.

This created achievable standards, so that people new to the market can put together a project that is as efficient as, if not more efficient than, those of the aforementioned giants such as Amazon, Facebook and Google.

We aim to build data centres at one tenth of the capital cost of a conventional data centre. Yes, one tenth. That will be a massive breakthrough. A true gale of creative economic destruction will hit the sector.

One of the key components is a modular fresh-air cooling system. And we’re trying to break some cost and performance records here too.

Pioneers

One of the early leaders in Arctic data centres, Hydro66, uses extreme engineering. In its buildings, the air is completely changed every five to 10 seconds. It’s like a -40°C wind tunnel with the air moving through at 30 miles an hour.
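Those two figures hang together: the sketch below converts the quoted speed to metres per second and shows the length of flow path a complete air change in five to 10 seconds would imply. No building dimensions here come from the source; they simply fall out of the stated numbers, as an illustration.

```python
MPH_TO_MS = 0.44704

speed_ms = 30 * MPH_TO_MS                       # ~13.4 m/s air velocity
for change_time_s in (5, 10):
    # Distance the air travels during one complete change of the hall's air
    print(change_time_s, "s ->", round(speed_ms * change_time_s, 1), "m of flow path")
# 5 s -> ~67 m, 10 s -> ~134 m (implied flow-path length, illustrative only)
```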

Importantly, you’ve got to look after all this highly expensive equipment. An AntMiner ASIC might cost €3,000. With 144 AntMiners in each of the racking units, that’s almost half a million euros of hardware!

So, we need to create ‘compliant’ environmental operating conditions to protect your investment. You’re probably familiar with the ASHRAE standards for temperature, humidity and so on. In many instances we have achieved 100% ASHRAE compliance with just two mixing dampers, a low-energy fan and a filter. There is some very clever control software, vast amounts of experience and a few patents behind this too, of course.
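To give a flavour of what that control layer has to do (and this is emphatically not EcoCooling's software, which is proprietary), a minimal sketch of a fresh-air and recirculation mixing strategy might look like the following, assuming a supply target inside the ASHRAE recommended envelope of roughly 18-27°C.

```python
def mixing_damper_fraction(outdoor_c: float, return_c: float,
                           supply_target_c: float = 20.0) -> float:
    """Fraction of fresh (outdoor) air to blend with recirculated return air so the
    mixed supply temperature approaches the target. A minimal sketch only: a real
    system also manages humidity, filtration, fan speed and fault conditions."""
    if return_c == outdoor_c:
        return 1.0  # mixing makes no difference; take full fresh air
    fresh = (return_c - supply_target_c) / (return_c - outdoor_c)
    return max(0.0, min(1.0, fresh))  # clamp to the physically possible range

# -25°C outside, 35°C warm return air, aiming for a 20°C supply:
print(round(mixing_damper_fraction(-25.0, 35.0), 2))   # -> 0.25 (25% fresh air)
# A mild 15°C day needs mostly fresh air:
print(round(mixing_damper_fraction(15.0, 35.0), 2))    # -> 0.75
```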

Hang onto your hats

So, to conclude: the winds of change are blowing a gale of creative economic destruction through the conventional approach to data centres. Driven by blockchain and Bitcoin mining, automation, AI and cloud hosting, HPC equipment in the form of GPUs and ASICs will be required to drive the data-led economies of the future.

Hyper-scale compute factories are on the way in. TCOs of five to seven euro cents per kWh of installed equipment are achievable targets.

This needs a radically different approach: extreme engineering, absolute simplicity, modularity and low-skill installation and maintenance. Hang on to your hats; it’s getting very windy in the data centre world!
