With AI workloads rising, hybrid models maturing, and CNI expectations tightening, Stephen Kingdom, CTO at Xantaro, explains why legacy converged architectures are becoming a growing risk for modern data centre operators.
The UK’s data centre landscape is undergoing significant transformation. While much attention is paid to the development of vast hyperscale sites, a quieter but no less important evolution is taking place within the private data centres that support much of the UK economy.
The combined pressures of AI adoption, the recent designation of data centres as Critical National Infrastructure (CNI), and a strategic enterprise shift towards hybrid cloud models have created a complex new reality. For many operators, existing network infrastructure is no longer fit for purpose. Decisions made now about the network fabric are not just technical upgrades; they are long-term strategic commitments that will shape a business’s direction for years to come.
Hybrid cloud repatriation
These internal infrastructure pressures are being compounded by a major external market shift back towards enterprise-owned facilities. Major retailers, financial institutions, media companies, and others are reassessing all-in public cloud strategies.
Driven by the need for greater control over costs, performance, and data sovereignty, many are embracing hybrid models, meaning their on-premises and colocation data centres are becoming more important than ever. This repatriation demands an on-premises network that can deliver cloud-like agility and automation, which many legacy networks struggle to provide.
The cost of converged legacy architectures
To build for this future, we must first address why legacy networks come under strain in modern environments. Historically, they were often built on converged multi-service architectures, where security, billing, and ops traffic shared the same pipe.
While efficient during earlier phases of growth, this shared infrastructure can lead to complex troubleshooting. Consider the mandatory requirements for high-definition security monitoring and access control. In a converged network environment, it is not uncommon for CCTV feeds to suffer intermittent frame drops or ghosting, degrading high-definition, on-demand video performance.
These issues are sporadic and non-deterministic, making root-cause identification particularly difficult. With separate teams managing the network infrastructure and the CCTV and access control systems, correlating errors and distinguishing network-performance issues from application-level degradation becomes increasingly challenging.
Advanced debugging efforts for one service also carry the risk of affecting other services within a shared environment. As a result, a significant amount of network team capacity can be consumed by troubleshooting cross-domain network and application issues, as well as incident resolution.
Challenges like this arise across many elements of sprawling legacy converged networks, which are likely to be stretched further by the demands of the AI era.
Mitigating the ‘rip and replace’ risk amid a skills gap
A network upgrade isn’t modular; it sits at the centre of the environment. Because it is deeply integrated with compute and storage resources, a change often requires a disruptive ‘rip and replace’ approach in a live setting.
The stakes for these architectural decisions are high. If you get it wrong, you are likely to live with the consequences for four to six years. Rushing a decision based on a single vendor proposal can create operational and security challenges. Reliance on vendor-locked hardware and software can also restrict technology choices in production, limiting flexibility and adaptability.
Crucially, this transformation is taking place alongside a widening skills shortage, as top talent gravitates towards roles in software development, DevOps, and cloud platforms. Specialised skills have become harder to find. This leaves many in-house IT teams, who are experts in their own right, without the deep niche expertise needed to design, deploy, and manage a next-generation, multi-vendor network fabric. Many teams also lack the scale of resources needed to manage everything effectively.
A framework for engineering validation
Given the stakes and leaner teams, how do you validate a design before committing CapEx? This requires a different level of engineering validation. Enterprises should consider a deep discovery audit, followed by multi-vendor lab testing, to build robust proofs of concept.
When taking a proposed design into a lab environment, operators should test against the following core criteria:
- Segregation vs Convergence: Testing the physical isolation of critical functions, such as CNI-mandated security feeds or billing sensors, from general traffic. For a colocation provider, precise metering and real-time data delivery are central to customer billing and operational transparency. To meet new security standards, operators may be required to isolate these metering and energy systems into a standalone, physically separate network. Lab testing helps ensure data can be delivered to billing applications without loss.
- Intent-Based Networking (IBN): Validating whether the network can detect anomalies when it deviates from an intended state. Testing must confirm that custom intent-based analytics can use proactive probes to monitor the health of billing-critical traffic flows, and that the system can stream anomalies as alerts to external systems, potentially catching failures before the NOC is notified (a minimal probe of this kind is sketched after this list).
- Zero-Touch Capabilities: Ensuring the architecture supports automated deployment, allowing non-specialist contractors to fit endpoints without needing deep network expertise.
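To make this kind of validation concrete, the minimal sketch below shows one way a proactive lab probe for a billing-critical flow might look. It is illustrative only: the target host, port, and intent thresholds are hypothetical placeholders, and it assumes a simple UDP echo responder on the path under test rather than any particular vendor's tooling.

```python
import socket
import time

# Hypothetical lab parameters: the probe target, port, and intent
# thresholds are placeholders, not values from any specific design.
PROBE_TARGET = ("metering-gw.lab.example", 5005)  # billing-telemetry path under test
PROBE_COUNT = 100
TIMEOUT_S = 0.5           # per-probe timeout
INTENT_MAX_LOSS = 0.001   # intended state: under 0.1% probe loss
INTENT_MAX_RTT_MS = 10.0  # intended state: under 10 ms round trip

def run_probes():
    """Send sequenced UDP probes and measure loss and round-trip time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    rtts, lost = [], 0
    for seq in range(PROBE_COUNT):
        sent = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), PROBE_TARGET)
        try:
            sock.recvfrom(64)  # assumes a simple echo responder in the lab
            rtts.append((time.monotonic() - sent) * 1000.0)
        except socket.timeout:
            lost += 1
    sock.close()
    return rtts, lost

if __name__ == "__main__":
    rtts, lost = run_probes()
    loss_ratio = lost / PROBE_COUNT
    worst_rtt = max(rtts) if rtts else float("inf")
    print(f"loss={loss_ratio:.2%} worst_rtt={worst_rtt:.1f}ms")
    # Fail the lab test if the observed state deviates from the intent.
    if loss_ratio > INTENT_MAX_LOSS or worst_rtt > INTENT_MAX_RTT_MS:
        raise SystemExit("FAIL: billing path deviates from intended state")
    print("PASS: billing path meets intent")
```

In a real proof of concept, the same measure-and-compare logic would typically sit inside the chosen platform's analytics engine or a test framework; the principle of comparing observed state against an explicit intent is what matters.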
Shifting the operating model and enhancing customer experience
Because specialised skills are scarce, the operating model must change. Manual CLI configuration for every switch is becoming harder to sustain at scale.
Instead, operators need a more centralised management approach. This gives end-to-end visibility and allows a smaller core team to manage a large, complex estate by using custom intent-based analytics to spot anomalies. These analytics can help detect when the network deviates from an intended state and stream alerts to external systems, giving the network infrastructure team more proactive notification rather than relying solely on application teams to raise issues.
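As a rough illustration of this alerting pattern, the sketch below polls an observed metric, compares it against an explicit intent, and streams any deviation to an external system as a JSON webhook. The endpoints, metric names, and thresholds are hypothetical, and a production platform would typically consume streaming telemetry (for example gNMI) rather than a simple polling loop.

```python
import json
import time
import urllib.request

# Hypothetical endpoints: the telemetry source and the external
# alerting webhook are placeholders, not a specific vendor's API.
ALERT_WEBHOOK = "https://alerts.noc.example/hooks/network-intent"
INTENT = {"billing_feed_loss_pct": 0.1, "billing_feed_latency_ms": 10.0}

def get_observed_state():
    """Placeholder: in practice this would read from the centralised
    management platform's telemetry feed."""
    return {"billing_feed_loss_pct": 0.0, "billing_feed_latency_ms": 4.2}

def stream_alert(metric, observed, intended_max):
    """Push an intent-deviation alert to an external system as JSON."""
    payload = json.dumps({
        "metric": metric,
        "observed": observed,
        "intended_max": intended_max,
        "ts": time.time(),
    }).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    while True:
        observed = get_observed_state()
        for metric, limit in INTENT.items():
            if observed[metric] > limit:  # observed state deviates from intent
                stream_alert(metric, observed[metric], limit)
        time.sleep(60)  # polling interval for this simple sketch
```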
Beyond operational efficiency, a modernised internal network can also influence customer experience. A robust internal network can help move the facility beyond being seen purely as a utility. This includes IoT solutions that help an engineer quickly locate a specific rack, saving time and reducing human error. It also means providing seamless, high-speed wireless for guests and staff.
Securing long-term performance
The decisions made today are long-term strategic commitments. This is not simply a refresh, but the architectural blueprint for the IT environment over the next half-decade. Whether the driver is repatriation of cloud workloads or CNI compliance, the network fabric should be treated as a strategic foundation, validated through engineering, rather than selected on promise alone.