Alan Farrimond, Vice President of Worldwide Accounts at Wesco, argues that as GPU clusters scale and fibre counts explode, structured cabling can no longer be treated as an afterthought.
Today’s building boom of AI-ready data centres is notable both for the speed of construction and for the rapidly evolving technology being deployed. Power densities are increasing. Cooling methods are evolving. And GPU clusters are growing explosively. But another key element of AI data centres shouldn’t be overlooked: their network cabling infrastructure.
The demand for high-speed, low-latency data transfer between GPUs is driving fibre optic connections to be deployed in data centres at a previously unseen scale. New topologies are needed to manage the more complex cabling requirements among these GPUs.
AI data centre owners and operators that want to deploy an effective cabling infrastructure – one that’s efficient, manageable and able to incorporate new hardware releases – need a well-planned and standardised cabling strategy.
The evolving network architecture
From the internet boom at the dawn of this century to the more recent rise of cloud computing, data centres have operated relatively consistently for decades. They’ve relied on CPUs with a small number of powerful cores that excel at serial processing: operations performed one step at a time, in sequence.
However, this traditional approach isn’t sufficient for data centres that power AI workloads. These data centres rely on GPUs with thousands of smaller, more specialised cores, which allow a workload to be broken down into many tasks that run independently and in parallel.
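As a purely illustrative sketch of that contrast (the workload, chunk size and worker count below are arbitrary assumptions, not figures tied to any particular GPU), the same job can be handled one step at a time or split into independent chunks processed side by side:

```python
# Illustration only: contrasts serial processing with splitting a workload
# into independent chunks - the pattern GPU cores exploit at massive scale.
from concurrent.futures import ProcessPoolExecutor


def process_chunk(chunk):
    # Stand-in for any per-element computation.
    return [x * x for x in chunk]


def serial(data):
    # One core, one step at a time.
    return process_chunk(data)


def parallel(data, workers=4, chunk_size=250_000):
    # Split into independent chunks; each chunk can be processed in parallel.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(process_chunk, chunks)
    return [x for chunk in results for x in chunk]


if __name__ == "__main__":
    data = list(range(1_000_000))
    assert serial(data) == parallel(data)  # same result, different execution model
```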
To keep up with demand for AI workloads, data centres are deploying thousands of GPUs. These GPUs must be able to process massive amounts of data, perform complex calculations simultaneously, and communicate and synchronise tasks with each other.
The traditional copper interconnects used in enterprise and early generation cloud data centres can’t keep up in many AI scenarios. Instead, fibre optic cabling is increasingly used for higher bandwidths, lower latency and the ability to cover longer distances.
Additionally, AI data centres are driving higher-density fibre optic cabling. A single GPU can require multiple optical transceivers, each of which may terminate multiple fibre strands. Each new generation of GPU can also increase fibre counts significantly – rising from hundreds to tens of thousands of fibres in a single cluster.
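As a back-of-the-envelope illustration of that scaling (the GPU, transceiver and fibre-strand counts below are hypothetical assumptions, not vendor figures), the arithmetic looks like this:

```python
# Back-of-the-envelope fibre count estimate for a GPU cluster.
# All inputs are hypothetical assumptions, used for illustration only.
def cluster_fibre_count(gpus, transceivers_per_gpu, fibres_per_transceiver):
    """Total fibre strands terminated across the cluster."""
    return gpus * transceivers_per_gpu * fibres_per_transceiver


# Assumed: 1,024 GPUs, 4 optical transceivers per GPU, 8 fibres per transceiver.
print(cluster_fibre_count(1_024, 4, 8))   # 32,768 fibres

# Doubling GPUs and fibres per transceiver in a later generation pushes the
# total well past one hundred thousand strands.
print(cluster_fibre_count(2_048, 4, 16))  # 131,072 fibres
```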
A traditional point-to-point cabling topology may not be well suited to managing this increase in fibre count, because it can require extensive inter-rack cabling. In response, many operators are looking at more structured approaches to reduce complexity as fibre counts scale.
Structured cabling systems are designed to help manage the routing of numerous high-speed connections. Technologies like high-density fibre solutions and multi-fibre push-on (MPO) connectors can also be used to maximise fibre count in tight spaces.
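To show how multi-fibre trunks condense that cabling (the per-rack fibre count and trunk sizes below are assumptions for illustration, not recommendations), a simple trunk-count estimate might look like this:

```python
import math


# Illustration only: how higher-density MPO/ribbon trunks reduce the number of
# individual cables to route. All inputs are hypothetical assumptions.
def trunks_needed(fibres_per_rack, fibres_per_trunk):
    """Number of trunk cables required to carry a rack's fibre count."""
    return math.ceil(fibres_per_rack / fibres_per_trunk)


fibres_per_rack = 1_536                   # assumed fibre strands leaving one GPU rack
for fibres_per_trunk in (12, 24, 144):    # assumed trunk densities
    print(fibres_per_trunk, trunks_needed(fibres_per_rack, fibres_per_trunk))
# 12-fibre trunks -> 128 cables, 24-fibre -> 64, 144-fibre -> 11:
# higher-density trunks mean far fewer cables to pull, label and trace.
```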
A structured cabling approach can also support other data centre needs, including:
Fast deployments: Data centres for AI workloads are being built with accelerated timelines, and structured cabling can support rapid buildouts. A standardised approach can also simplify installation and reduce rework compared with highly customised direct-connect designs.
Scalability: AI data centres are moving from 400Gbps to 800Gbps per port, and 1.6Tbps is on the horizon (a rough sketch of what those speeds can mean for fibre counts follows this list). A standardised framework can make it easier to adopt new technology and scale capacity and bandwidth without a full rip-and-replace overhaul.
Risk mitigation: AI data centres require the highest levels of availability. Clear labelling, traceability and more consistent documentation can support network administration and troubleshooting.
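To illustrate the scalability point above, here is a rough sketch of how port speed can translate into fibre counts. The lane rates and fibres-per-lane value are simplified assumptions loosely modelled on parallel-optic designs (e.g. DR4/DR8-style optics), not a statement of any specific standard:

```python
# Rough sketch: translating port speed and lane rate into fibres per port.
# Assumes a parallel-optic design with one transmit and one receive fibre
# per lane; real deployments vary by optic type and standard.
def fibres_per_port(port_gbps, lane_gbps, fibres_per_lane=2):
    lanes = port_gbps // lane_gbps
    return lanes * fibres_per_lane


scenarios = [
    (400, 100),    # assumed: 400G over 4 x 100G lanes
    (800, 100),    # assumed: 800G over 8 x 100G lanes
    (1600, 200),   # assumed: 1.6T over 8 x 200G lanes
]
for port, lane in scenarios:
    print(f"{port}G port: {fibres_per_port(port, lane)} fibres")
# Prints 8, 16 and 16 fibres respectively - a standardised fibre plant can
# absorb each step up in speed by swapping optics rather than re-cabling.
```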
Priming a strategy for success
Choosing a cabling strategy is only the first step. Data centre owners and operators also need to craft an approach that positions them for the long-term demands of AI workloads.
This is why it’s important to plan ahead. For example, for data centres with rapid construction timelines, owners and operators may want to confirm early how delivery schedules align with build milestones.
In the same way that operators can standardise the technology used in their cabling infrastructure, they can also standardise deployment processes to meet aggressive timelines. Kitting and labelling practices can help ensure materials are easy to locate and prepare for installation as they arrive on site. Some projects also use preconfigured racks to reduce on-site assembly time and speed commissioning.
Building a foundation for the future
Cabling can’t be an afterthought in AI data centres. With complexity growing and technology cycles shrinking, owners and operators need a structured and well-planned cabling strategy that gives them the capacity and resiliency today and the agility tomorrow to adapt to technology changes without starting over.

