
The changing times of the data centre

Image: Adobe Stock / Connect world

Despite its modern image and future-facing technologies, the data centre is much older than it lets on.

Data centres date back to the 1940s, when the first computer-designated rooms became home to large military machines set to work on specific data tasks.

The 1960s brought the mainframe computer. Remember the episode of Mad Men in which the ad agency loses its lunchroom to a colossal computer? The same thing happened all over the world, with IBM leading the charge as dedicated mainframe rooms were fitted out in large organisations and government agencies. Indeed, in some cases, these increasingly powerful and expansive machines needed their own free-standing buildings, which became the first data centres.

Then, in the 1970s, as Unix and enterprise IT grew more prominent, dedicated rooms of equipment and networking popularised the name “data centre”.

The 1980s introduced PCs, which were typically connected to remote servers so they could access large data files. By the time the internet became ubiquitous in the 1990s, internet exchange (IX) buildings had sprung up in key cities to serve the needs of the World Wide Web. These IX buildings were the most important data centres of their time, serving most people’s needs.

Since then, the need for data storage has grown in lockstep with storage innovation. Storage devices have been manufactured in many form factors to fit the needs of the data centre, and that versatility ultimately helped power its remarkable growth over the decades.

The start of storage

To understand the role of storage in the data centre, a brief glance back in time is the place to start. After their inception in 1956, hard disk drives became the preferred non-volatile storage device for computing, and they still were when AOL created the first modern data centre in 1997, at the start of the dotcom bubble. This kickstarted a boom in data centres, with companies using remote servers to get their websites online quickly.

However, as more data was created and captured, CPU speeds went through the roof, churning through information faster than storage could supply it. With this, the industry was galvanised into action to accelerate storage to the speed of compute.

With no blueprint for how to address this industry challenge, storage took on a variety of new forms. Over the years, experiments with semiconductors led to the adoption of SSDs in the enterprise sector. Then came the evolution from SATA connections to PCIe, and the emergence of M.2 slots. Today, there are five major form factors used in data centres, a marked expansion from the one that started it all.

Looking back through the decades of data centre evolution, some common themes run through storage development: demands for speed, constraints on capacity, and a willingness to try anything to limit bottlenecks. While CPU development remained iterative, storage had to forge its own path.

Much like clay in the hands of a sculptor, storage has over the years been moulded into every conceivable shape and size. From the spinning disk to slotted memory and beyond, these innovations have been the cornerstone of flexibility in the data centre.

That flexibility is also the data centre’s primary strength. HDDs and SSDs coexist, serving different purposes, and with access to both, data hubs can strike a balance between cost and speed.
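The cost-versus-speed balance described above is essentially a tiering decision. A minimal sketch of that logic is below; the prices, IOPS figures, and the `place_tier` helper are illustrative assumptions for this article, not real vendor data or any specific provider’s policy.

```python
# Illustrative tiering sketch. All figures are made-up ballpark numbers,
# chosen only to show the HDD/SSD trade-off, not actual market pricing.
HDD = {"cost_per_tb": 15.0, "iops": 200}       # spinning disk: cheap, slow
SSD = {"cost_per_tb": 60.0, "iops": 500_000}   # flash: fast, pricier

def place_tier(accesses_per_day: int, hot_threshold: int = 1000) -> str:
    """Hypothetical policy: frequently accessed ('hot') data goes to SSD,
    rarely accessed ('cold') data goes to HDD."""
    return "SSD" if accesses_per_day >= hot_threshold else "HDD"

def fleet_cost(hot_tb: float, cold_tb: float) -> float:
    """Cost of a mixed fleet under the assumed per-terabyte prices."""
    return hot_tb * SSD["cost_per_tb"] + cold_tb * HDD["cost_per_tb"]

print(place_tier(50_000))        # busy database -> SSD
print(place_tier(3))             # cold archive  -> HDD
print(fleet_cost(100, 900))      # 100 TB hot + 900 TB cold -> 19500.0
```

Keeping only the hot fraction on flash is what lets a data hub pay HDD prices for the bulk of its capacity while still serving latency-sensitive workloads at SSD speed.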

Today, the major players in the enterprise sector, the behemoths of cloud service providers, are looking to build data centres using custom components, and engineers are working hard to meet these use-case specific requests. Combined with the unrelenting rate of data creation in the world, this means that storage continues to evolve in step with the digital world. 

The pace of change

Change is moving faster than ever. To keep up, engineers are currently experimenting with 13 new form factors, more than double the five currently in use. How many will make it to mass production?

To avoid bottlenecks, it’s essential that storage speeds keep pace with the speed of computation. The new E1.S and E1.L drives are the front runners, suited to hyperscale data centres and high-capacity use cases respectively. But emerging use cases may make one of the other 11 form factors a better contender for mass production. It’s anyone’s guess.

While technology enables new solutions, the key driver is the exponential growth of data. Our increasingly automated, digitised lives leave us entrusting all our files to the ether. Yet maintaining the data centres that hold them is anything but simple.

Along with building, running, and cooling these massive data centres, the physical media on which the data is stored requires constant upkeep. And with extra storage capacity being added all the time, tending to it all is a growing burden for cloud computing providers.

Demand for cloud applications surged during the coronavirus pandemic. According to the property company Knight Frank, take-up of data centre capacity almost doubled in cities such as Madrid, Warsaw and Milan compared with 2019. Data hub mergers and acquisitions totalled almost $35bn globally in 2020, more than five times the 2019 deal volume and $10bn ahead of the previous annual record, set in 2017.

Data storage needs to continue to shapeshift safely, intuitively, and cost-effectively to best support the data it serves. Without the versatility of the storage components, the data centre today would look radically different, proving the importance, and persistence, of storage.

In the end, data must be stored. In this industry, it’s the one element that’s here forever.

Davide Villa
Business Development Director EMEAI at Western Digital
