Carlos Mora and Cindy Ryborz from Corning Optical Communications believe that as AI adoption accelerates, data centres must evolve – fast – to meet soaring fibre and power demands.
2024 saw many organisations take their first steps towards AI adoption, whether through pilots, full-scale implementation, or simply the development of AI strategies.
Research last year from McKinsey noted that 65% of organisations were now regularly using generative AI, nearly double the share reported in its survey 10 months earlier. While this is a meaningful increase on the previous year, separate research from Boston Consulting Group suggests only 22% have fully implemented their AI strategy and are seeing substantial gains.
It’s clear that progress is being made, but the current expectation is that 2025 could be a true tipping point for the technology. Discussion around the role of the data centre industry in supporting this vast increase in power demands – and doing so efficiently – is intensifying.
What’s clear is that for enterprises looking to deploy or grow their AI capabilities – and for data centre operators – there will be some important decisions to make next year. These range from the choice of design to the specialised components used to maximise space.
Let’s talk through some of these considerations in more detail.
Fibre, lots of fibre
To put it frankly, the AI ecosystem doesn’t work without fibre. Only fibre can meet the massive bandwidth, density and adaptability demands that AI creates, and it is needed in greater volumes to support the architectures that are emerging.
The typical spine-and-leaf network design, favoured for high-speed performance in data centres with heavy east-west traffic, is undergoing a significant evolution – including a huge increase in fibre.
Unlike traditional data centres, AI data centres need fibre to be taken all the way to the GPU NIC itself, via an additional network layer called the back-end network. This increases device-to-device connectivity by roughly 10x per rack.
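To put a rough shape on that increase, the sketch below estimates back-end fibre counts for a hypothetical rack. The GPU count per server, servers per rack and fibres per GPU link are illustrative assumptions rather than figures from any specific design, so the resulting multiplier will vary with the architecture chosen.

```python
# Rough, illustrative estimate of back-end fibre demand per AI rack.
# Every value here is an assumption for the worked example, not a
# vendor figure; real designs vary widely.

GPUS_PER_SERVER = 8        # assumed GPUs (and GPU NICs) per AI server
SERVERS_PER_RACK = 4       # assumed AI servers per rack
FIBRES_PER_GPU_LINK = 8    # assumed 8-fibre MPO link per GPU NIC

# Back-end network: one fibre link per GPU NIC, all the way to the GPU.
backend_fibres = GPUS_PER_SERVER * SERVERS_PER_RACK * FIBRES_PER_GPU_LINK

# Traditional front-end comparison: a couple of duplex uplinks per server.
frontend_fibres = SERVERS_PER_RACK * 2 * 2

print(f"Back-end fibres per rack:  {backend_fibres}")   # 256
print(f"Front-end fibres per rack: {frontend_fibres}")  # 16
print(f"Increase: ~{backend_fibres / frontend_fibres:.0f}x")
```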
We may also see more operators adopt different connectivity methods to expand network capacity, including long-haul connectivity between data centre campuses. Again, this means a lot more fibre.
Space optimisation becomes more critical
With this need for more fibre comes an imperative to maximise space and also leave capacity for longer-term growth – this will likely be one of the most universal considerations for data centre operators in 2025.
This has, fortunately, long been a focus for the data centre industry, and there are myriad innovations across cabling and connector technology to enable greater density.
Usefully, the latest network switches used to interconnect AI servers are well equipped to support 800G interconnects. Often, the transceiver ports on these switches operate in breakout mode, where the 800G circuit is split into two 400G or multiple 100G circuits. This lets data centre operators increase the connectivity capacity of the switch and interconnect more servers.
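As a simple illustration of why breakout matters, the sketch below counts server-facing links for a hypothetical 64-port 800G leaf switch under different breakout ratios; the port count and ratios are assumptions chosen for the example, not a specific product configuration.

```python
# Illustrative breakout arithmetic for a hypothetical 64-port 800G switch.
# The port count and breakout ratios are assumptions for the example.

SWITCH_PORTS_800G = 64

def server_links(breakout_ratio: int) -> int:
    """Server-facing links when each 800G port is split 'breakout_ratio' ways."""
    return SWITCH_PORTS_800G * breakout_ratio

print(server_links(1))  # 64 links at 800G (no breakout)
print(server_links(2))  # 128 links at 400G (2x400G breakout)
print(server_links(8))  # 512 links at 100G (8x100G breakout)
```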
Even though AI/ML optical interfaces are expected to use MPO connectivity, miniaturisation is a must for AI data centres, and it will be enabled by multifibre very small form factor connectivity (VSFFC) such as MMC or SN-MT. These connectors reduce the patching footprint of structured cabling implementations and, paired with low-attenuation, high-bend-performance fibre, help manage cable congestion.
Early adopters move to 1.6T
AI – along with other data-intensive applications like streaming services – brings bandwidth demands that will eventually take us beyond the 40G or 100G speeds where many enterprises currently operate.
Exactly when that tipping point will arrive remains to be seen, but the number of data centres transitioning to 800G and beyond may well accelerate in 2025. Sales of 400G and 800G transceivers are already rising, and we can expect early deployments of 1.6T network speeds using 2x800G combo transceivers in 2025.
Again, VSFFC will be important here. Although the connectors for 1.6T transceivers may remain belly-to-belly installed LC duplex for FR4 and MPO8 for DR4 technologies, VSFFC such as MMC16 or SN-MT will be necessary, because backbone cabling must aggregate the 864 or more fibres per AI server rack at the spine and core network racks – racks that can accommodate over 9,000 fibre strands.
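To make that aggregation concrete, here is a minimal sketch of how per-rack fibre counts stack up at a spine or core location. The 864-fibre-per-rack figure is from the text above; the number of racks aggregated at one location is an assumption for illustration.

```python
# How per-rack fibre counts add up at spine/core racks.
# 864 fibres per AI server rack is the figure quoted above; the number
# of racks aggregated at one location is an assumption for illustration.

FIBRES_PER_AI_RACK = 864
RACKS_AGGREGATED = 11      # assumed racks feeding one spine/core location

total_strands = FIBRES_PER_AI_RACK * RACKS_AGGREGATED
print(f"Fibre strands to terminate: {total_strands:,}")  # 9,504 – over 9,000
```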
Sustainability becomes a hotter topic
AI is expected to require the development of many new data centres, which this year were classified as critical national infrastructure for the first time in the UK. With the heightened energy demands of AI data centres comes increased scrutiny of the impact this will have on emissions at a global level.
Approaches such as co-packaged optics, essentially placing optics and electronics closer together in a switching or processing system, are emerging as a solution to enhance energy efficiency.
In 2025, we can also expect cooling to be a key consideration. For years, air has been the primary medium for transferring heat in the data centre, but traditional air-based cooling solutions are increasingly ill-equipped to handle demanding AI data centres.
Liquid-based heat transfer agents are fast emerging as a viable solution across the industry. The most widely deployed method circulates water through insulated pipes, drawing heat from central processing unit (CPU) and graphics processing unit (GPU) components via heat-conducting plates.
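A back-of-envelope comparison helps explain why liquid is taking over from air. Using the standard heat-transport relation Q = ṁ·c·ΔT, the sketch below compares the coolant flow needed to remove an assumed 80 kW rack load at an assumed 10 °C temperature rise; both figures are illustrative assumptions, not values from the article.

```python
# Back-of-envelope comparison of water vs air as a heat-transfer medium,
# using Q = m_dot * c_p * delta_T. The 80 kW rack load and 10 K coolant
# temperature rise are assumptions chosen purely for illustration.

RACK_HEAT_W = 80_000   # assumed rack heat load, watts
DELTA_T = 10.0         # assumed coolant temperature rise, kelvin

CP_WATER, RHO_WATER = 4186.0, 998.0  # J/(kg*K), kg/m^3
CP_AIR, RHO_AIR = 1005.0, 1.2        # J/(kg*K), kg/m^3

def volumetric_flow(cp: float, rho: float) -> float:
    """Volumetric flow (m^3/s) needed to carry RACK_HEAT_W at DELTA_T."""
    mass_flow = RACK_HEAT_W / (cp * DELTA_T)  # kg/s
    return mass_flow / rho

water_flow = volumetric_flow(CP_WATER, RHO_WATER)
air_flow = volumetric_flow(CP_AIR, RHO_AIR)

print(f"Water: {water_flow * 1000:.1f} L/s")  # roughly 1.9 L/s
print(f"Air:   {air_flow:.1f} m^3/s")         # roughly 6.6 m^3/s
print(f"Air needs ~{air_flow / water_flow:,.0f}x the volumetric flow")
```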
The road ahead
There will be a number of key decisions to make within the next year – for enterprises, data centre operators and even leaders at an international level.
For the many stakeholders that keep the data centre industry moving, there’s a lot of work to be done to continue to conceive, build and maintain resilient infrastructure that can support AI and tackle the new challenges that the technology poses.
The industry is well prepared and has been anticipating these developments, but for data centre operators, the priority in 2025 will be to make the necessary adaptations to their infrastructure to stay agile and ready for whatever the future brings. It’s better to be prepared now than to have to upgrade later under even greater time pressure.