
Leaving the legacy: What can we learn from the hyperscalers?



Hyperscale data centres are on the rise. With 500 expected globally by 2020, these massive facilities are shaking up the ‘legacy era’ design practices of old. So, what can we learn from the data centre giants?

The world’s largest cloud platforms – the likes of Google, Facebook, and Amazon – need a lot of space and power, and the last few years have seen an unprecedented amount of data centre capacity built to support their growth.

As more and more users and businesses sign up for software services and content delivered over the internet or private networks, the number of massive (hyperscale) data centres designed specifically to deliver services at that scale is ballooning.

There are now close to 400 hyperscale data centres in the world, according to the latest estimate by Synergy Research Group. The US hosts by far the largest share of those facilities, 44% in fact, with China a distant second at 8%, followed by Japan and the UK, each home to 6% of the world’s hyperscale data centres.

Chart: Synergy Research Group count of hyperscale data centres by country, 2017

In order to operate a viable business model, the hyperscale facility has had to change the design, construction and fit-out practices that existed during the aforementioned ‘legacy’ era. This is not surprising, however, considering the sheer volume of data the ‘hyperscalers’ deal with on a daily basis: four billion searches a day on Google, 500 million tweets on Twitter and 350 million new photos uploaded to Facebook, to highlight just a few.

The business model of these companies is based on the capability of digital infrastructure to disrupt legacy business models. Achieving this requires facilities that can deliver enormous economies in operation; that are scalable and able to cope with peaks and troughs in demand; and that are highly networked, sitting at the centre of huge webs of connected users and devices. They need to be able to analyse and model the massive amounts of data flowing through their systems, to create insights about their users that can be sold on. But what innovations have hyperscale facilities brought to data centre design?

A rare look behind a server aisle in a Google data centre

Eco-warriors

In terms of location, many of these companies choose sites close to sources of renewable power and where the climate is cool enough to reduce the need for electrically-powered cooling. Such locations include Northern Europe, Ireland and remote parts of North America. Google, Facebook, Amazon and Yandex have all adopted this strategy.

Google, as one example, has made equity investments in clean energy projects as a means of offsetting its carbon footprint. It has also entered into long-term power purchase agreements (PPAs) for renewable energy, in order to guarantee a long-term source of clean energy at stable prices, increase the amount of clean energy it uses and the amount available in the grid, and help enable the construction of new renewable energy facilities.

The process of constructing these huge data centres is highly commoditised: the IT fit-out is treated as a standard product, split out from the building of the facility itself. Construction is modularised, with common commercial modules and parts deployed in a standardised, flexible configuration that also supports online expansion. Most components can be manufactured on site and can be assembled, disassembled, renovated, moved and replaced easily, with hot-pluggable components.

The Open Compute Project

While information on the global scale of these organisations is hard to come by, some are more than willing to share details of their design and build practices. In 2011, Facebook launched its ‘Open Compute’ project.

The project was initially intended to share information on hardware performance, increase data centre efficiency and sustainability, and ‘demystify’ data centres. It is claimed that the initiatives generated by the project have delivered 38% better energy efficiency and 24% cost savings at Facebook’s Prineville site. In fact, Facebook estimates that it saved $2 billion over the course of three years with OCP-designed hardware. The best thing about it is that the hardware designs are all open sourced, so they are readily available to anybody looking to try the approach.

One component of the Open Compute data centre design is a high-efficiency electrical system. Facebook’s Prineville, Oregon, location is the first implementation of these elements in a data centre. The facility utilises an electrical system with a 48VDC UPS system integrated with a 277VAC server power supply. Power in a typical data centre passes through transformers that lose 11-17% of it before it reaches the server; the configuration Facebook uses is reported to lose only 2%.
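
To put those loss figures in context, here is a minimal back-of-the-envelope sketch in Python. The 10MW IT load and electricity price are illustrative assumptions, not figures from the article; only the 11-17% and 2% loss rates come from the text above.

    # Rough comparison of energy lost in the power chain, using the loss
    # figures quoted above. The IT load and price are assumptions.
    IT_LOAD_MW = 10.0        # assumed critical IT load
    HOURS_PER_YEAR = 8760
    PRICE_PER_MWH = 70.0     # assumed electricity price in $/MWh

    def annual_loss(loss_fraction):
        # Power drawn from the grid so the full IT load reaches the servers.
        input_mw = IT_LOAD_MW / (1 - loss_fraction)
        lost_mwh = (input_mw - IT_LOAD_MW) * HOURS_PER_YEAR
        return lost_mwh, lost_mwh * PRICE_PER_MWH

    for label, loss in [("legacy chain, 11% loss", 0.11),
                        ("legacy chain, 17% loss", 0.17),
                        ("48VDC/277VAC design, 2% loss", 0.02)]:
        mwh, cost = annual_loss(loss)
        print(f"{label}: {mwh:,.0f} MWh lost, roughly ${cost:,.0f} a year")

On those assumptions, the gap between the worst legacy case and the Open Compute figure is over ten thousand megawatt-hours a year, which is why the electrical chain is worth redesigning.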

The battery cabinet is a standalone, independent cabinet that provides backup power at 48VDC nominal to a pair of triplet racks in the event of an AC outage in the data centre. The batteries are sealed 12.5VDC nominal, high-rate discharge units with a ten-year lifespan, of the type commonly used in UPS systems. They are connected four in series per group (called a string), for a nominal string voltage of 48VDC, and there are five strings in parallel in the cabinet.
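
The arithmetic behind that arrangement is simple enough to sketch. The per-block amp-hour capacity below is an assumed figure for illustration; the voltages and string counts are those described above.

    # Battery cabinet arithmetic: four 12.5VDC blocks in series per string,
    # five strings in parallel. Block capacity is an illustrative assumption.
    BLOCK_VOLTAGE = 12.5       # volts DC nominal per sealed block
    BLOCKS_PER_STRING = 4      # series elements per string
    STRINGS_PER_CABINET = 5    # parallel strings in the cabinet
    BLOCK_CAPACITY_AH = 100    # assumed amp-hours per block

    string_voltage = BLOCK_VOLTAGE * BLOCKS_PER_STRING            # 50V, i.e. 48VDC nominal
    cabinet_capacity_ah = BLOCK_CAPACITY_AH * STRINGS_PER_CABINET # parallel strings add capacity
    cabinet_energy_kwh = string_voltage * cabinet_capacity_ah / 1000

    print(f"Nominal string voltage: {string_voltage:.0f}V")
    print(f"Cabinet capacity: {cabinet_capacity_ah}Ah, about {cabinet_energy_kwh:.1f}kWh")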

Traditional data centre components such as UPS devices may not be required at all if, for example, servers are fed directly from grid power with HVDC backup, which increases energy efficiency. In-row cooling, which shortens the distance air needs to travel, also improves cooling efficiency.

Inside a Facebook data centre

Scalability

With technology today advancing at such a pace, the ability to future-proof a facility, with the freedom to scale up or down as needed, is crucial. To achieve scalability, the hyperscale data centre has moved away from the legacy hierarchical network topology to fabric network designs. These offer greater capacity to deal with peak traffic; lower latency; higher redundancy in the event of a switch breakdown; and greater efficiency through, for example, allocating lower-traffic items to less efficient areas of the data centre in order to reduce rotation.

Facebook’s data centre fabric, used in its Altoona facility, is a ‘pod and core’ configuration based around 48-rack clusters (pods) arranged in a leaf-spine topology, connected through 40G downlinks and uplinks. The number of switches between fabric and spine creates a 3D structure, in which the extra dimension adds layers of connectivity, redundancy and efficiency to the traffic process. Capacity is easy to add, and edge pods allow scalability outside the core fabric.
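
The appeal of a leaf-spine fabric is easy to see in miniature. The Python sketch below builds a toy fabric; the spine count is an assumption chosen for illustration, and the 48 leaves follow the pod description above, so none of the dimensions should be read as Facebook’s actual fabric.

    # Toy leaf-spine fabric: every leaf (top-of-rack) switch links to every
    # spine switch, so any two racks are two hops apart and traffic spreads
    # across all spines. Dimensions are illustrative.
    from itertools import product

    SPINES = 4    # assumed spine switch count
    LEAVES = 48   # one leaf per rack in the 48-rack pod described above

    links = set(product(range(LEAVES), range(SPINES)))  # one uplink per (leaf, spine) pair

    def equal_cost_paths(leaf_a, leaf_b):
        # One leaf -> spine -> leaf path exists via every spine both leaves reach.
        return sum(1 for s in range(SPINES)
                   if (leaf_a, s) in links and (leaf_b, s) in links)

    print(f"Uplinks in the pod: {len(links)}")
    print(f"Equal-cost paths between any two racks: {equal_cost_paths(0, 47)}")
    # Losing one spine removes one path and leaves the rest intact, which is
    # the redundancy benefit the fabric design is after.

Adding capacity means adding spines or pods rather than swapping in ever-larger core switches, which is where the scalability comes from.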

What can the design and build of data centres take more generally from these hyperscale facilities?

There is no technological reason why smaller data centres cannot adopt the configurations and design principles used by the major cloud providers (which are, after all, based on the concept of scalability), and many have adopted, for example, convergence, software definition, open source and data centre fabrics, and have migrated to faster network speeds.

The issue is not one of technology but one of ROI: for smaller data centres, the cost of refreshing an on-prem environment would probably not be justified, particularly if such environments can be accessed without significant capital cost through outsourcing.

However, changing the industry standard after decades of doing things the same way is no easy task, and as word of the inner workings of hyperscale facilities continues to spread, we will perhaps gradually start to see the industry replace legacy data centres that are outdated not only in technology, but in design as well.

 
