
The data centre dilemmas of 2024


As we inch ever closer to the midpoint of 2024, what have been the biggest challenges for the sector this year – and what lies ahead? Cindy Ryborz, Marketing Manager DC EMEA, Corning Optical Communications, weighs in.

To no one’s surprise, in 2023, data consumption took another significant stride forward. According to research from JLL, the first half of the year saw the most data centre uptake on record across tier-one European markets – a jump of 65% compared to the same point in 2022. 

In many ways, the drivers for this sharp rise in data centre demand are the same as they have been for the last decade: a need for ever more bandwidth as new, data-intensive technologies and applications mature and are adopted more widely.

In recent years, a few factors have sent data consumption into overdrive. First came the pandemic and the surge it created in streaming services and virtual conferencing. Bandwidth-hungry technologies like machine and deep learning continue to grow in adoption, and now a breakout year for AI looks set to take this to even greater heights.

Building new data centres to meet this demand is costly, and factors such as local planning permission and power availability can add further hurdles. While colocation data centres provide something of a middle ground for securing more capacity, for many data centre operators and businesses the best option is to find ways to upgrade and ‘future-proof’ existing facilities – but how?

The drive for more density

While 400G Ethernet optical transceivers are used predominantly in hyperscale data centres, and many enterprises are currently operating on 40G or 100G, data centre connectivity is already moving towards 800G and beyond. We expect this to accelerate as IoT and AI really take off. 

As a result, there’s a growing list of considerations when it comes to upgrading infrastructure, not least whether to base it on a Base-8 or Base-16 solution: the former is currently more flexible, while the latter offers greater port density and an alternative path to 1.6T. Seeking cabling solutions that can handle the extensive GPU clusters needed to support generative AI – whether comprising 16,000 or 24,000 GPUs – will also be key for some operators.
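As a rough illustration of the density trade-off, the Python sketch below compares MPO connector counts for Base-8 versus Base-16 trunks. It assumes parallel-optic links that each use 16 fibres (as with SR8-style interfaces); the fabric size is a hypothetical figure for illustration, not drawn from any particular deployment.

```python
# Back-of-envelope sketch: Base-8 vs Base-16 connector counts.
# Assumes parallel-optic links using 16 fibres each (8 transmit +
# 8 receive lanes, as in SR8-style interfaces). The 1,024-link
# fabric is a hypothetical figure for illustration only.

LINKS = 1024
FIBRES_PER_LINK = 16

total_fibres = LINKS * FIBRES_PER_LINK  # 16,384 fibres

for fibres_per_connector in (8, 16):
    connectors = total_fibres // fibres_per_connector
    print(f"Base-{fibres_per_connector}: {connectors:,} MPO connectors "
          f"to terminate {total_fibres:,} fibres")
```

For a fabric built on 16-fibre links, Base-16 halves the number of connectors to install and manage, which is where its port-density advantage comes from.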

Perhaps the most universal consideration for DC operators, however, is how to maximise space. Requirements for increasing bandwidth in network expansions often conflict with a lack of space for additional racks and frames, and simply adding more fibre optic interconnects is an unsustainable strategy given land and power constraints.

Usefully, the latest network switches used to interconnect AI servers are well equipped to support 800G interconnects. Often, the transceiver ports on these switches operate in breakout mode, where the 800G circuit is broken into two 400G or multiple 100G circuits. This enables DC operators to increase the connectivity capability of the switch and interconnect more servers.
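To make the arithmetic concrete, here is a minimal sketch of how breakout mode multiplies the number of servers a single switch can attach. The 32-port figure is an assumption for illustration, not any specific switch’s specification.

```python
# Minimal sketch: server connections gained by running 800G switch
# ports in breakout mode. The 32-port count is an illustrative
# assumption, not any specific product's specification.

SWITCH_PORTS = 32  # hypothetical switch with 32 x 800G ports

breakout_modes = {
    "800G (no breakout)": 1,
    "2 x 400G": 2,
    "8 x 100G": 8,
}

for mode, circuits_per_port in breakout_modes.items():
    connections = SWITCH_PORTS * circuits_per_port
    print(f"{mode:18s} -> {connections:3d} server connections")
```

The same 32 physical ports yield 32, 64 or 256 server-facing circuits depending on the breakout chosen, without adding a single rack unit of switching hardware.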

Optical technology is also continuing to advance, allowing more data to be carried per fibre and per wavelength, which will go a long way toward helping data centres meet rising data demands.

Cooling will be key 

In addition to massive bandwidth demands, AI also creates an even greater need for power and cooling efficiency in the data centre. For an industry that’s already notoriously energy-hungry – and with many businesses now holding ambitious sustainability targets – this is a growing challenge.

For those with the resources, clever choice of location can be one solution to cooling challenges – Meta (Facebook) even has multiple data centres in Luleå, Sweden, that utilise the region’s sub-zero air and sea temperatures.

There are, of course, a number of smaller, more accessible approaches that DC operators can take, like smart cabling choices. With the huge demands of AI, however, it’s likely that such incremental changes won’t scratch the surface.

Set to make a greater impact are a variety of cooling techniques, including air cooling, which utilises raised floor plenums and overhead ducts to distribute cool air to equipment, and in-row cooling, where multiple cooling units are placed directly in a row of server racks or above the cold aisles.
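For a rough sense of why air-based approaches come under strain at AI-class densities, the sketch below applies the common airflow rule of thumb (CFM ≈ 3.16 × watts ÷ temperature rise in °F); the rack power figures are illustrative assumptions, not measurements.

```python
# Rough airflow requirement per rack, using the common rule of thumb
# CFM ≈ 3.16 × watts / ΔT(°F). Rack power figures are illustrative
# assumptions spanning enterprise to AI-class densities.

DELTA_T_F = 20  # assumed air temperature rise across the rack, in °F

for rack_kw in (5, 10, 30, 50):
    cfm = 3.16 * rack_kw * 1000 / DELTA_T_F
    print(f"{rack_kw:2d} kW rack -> ~{cfm:,.0f} CFM of cool air")
```

At tens of kilowatts per rack, the airflow required climbs into the thousands of CFM, beyond what raised-floor delivery can comfortably supply – one reason liquid techniques are gaining ground.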

More emerging techniques include liquid immersion cooling, which involves submerging IT equipment (directly to the chip in some cases) in a dielectric fluid – avoiding the risk of consuming too much water. This method provides efficient direct cooling and minimises the need for air circulation, but it brings the additional challenge that connectivity components must be resilient to the coolant.

Applications at the edge

2024 will see many companies build networks to support the development of their own large language models (LLMs). This requires new inference networks, where predictions are made by analysing new data sets. These networks can demand higher throughput and lower latency, and many operators will be looking to expand their infrastructure to support edge computing capabilities, bringing computation closer to the source of data.

Beyond this specific use case, edge computing is particularly valuable in scenarios where local analytics and rapid response times are needed, such as in a manufacturing environment that relies on AI, and it also helps reduce networking costs. Looking forward, 5G will play a major role in maximising the capabilities of edge data centres, delivering the incredibly low latency required for the most demanding applications and use cases.

Colocation providers and hyperscalers are working together to enable edge computing, offering services that support rapid response times. Certainly, colocation is key, as these data centres can be positioned closer to users and offer adaptive infrastructure that provides much-needed flexibility and scalability in the face of unexpected events. It also alleviates the need for skilled labour on the end-user’s side.

Configuring and optimising edge data centres, again, means a drive for ever greater fibre density, as well as modularity to allow for easier moves, adds and changes as data requirements grow. 

The road ahead

For enterprises looking to deploy or grow their AI capabilities, there are some key decisions to make in 2024. Much like the initial transition to the cloud, a primary consideration will be what proportion of their AI workload will be managed on-premises and what will be offloaded to an external cloud environment.

Regardless of these choices, for the wider data centre industry, there’s a lot of work to be done to build and maintain resilient infrastructure that can support AI and other technologies not even conceived of yet.

These developments will continue to outpace data centre capacity, and the gap will only widen as AI becomes more widely adopted. The priority for DC operators will be to make the necessary adaptations to their infrastructure to stay agile and ready for whatever the future brings.

Cindy Ryborz
Marketing Manager Data Centre EMEA at Corning Optical Communications

