Future-proofing for AI

Surging AI demand is colliding with long data centre build cycles. That’s a topic we recently explored at Data Centres in the AI Era, where Tilly Gilbert, Director, Consulting & Edge Practice Lead at STL Partners, moderated a discussion with Kao Data’s Richard Collar, Equinix’s Matt George, and Opengear’s Alan Stewart-Brown on how operators can future-proof capacity while grids, silicon and customer requirements keep shifting:

  • Public vs private AI footprints – contrasting training/inference density profiles, and what each model means for rack-level power, cooling and interconnect.
  • Designing ‘AI-ready’ space – hybrid liquid/air cooling topologies, CFD-driven room layouts and flexible MEP allowances that keep 25-year buildings aligned with 6-month GPU roadmaps.
  • Power as the gating factor – navigating grid-connection queues, campus master-planning in capacity-constrained metros, and choosing sites where renewables, transmission headroom and fibre intersect.
  • Network resilience at scale – why an independent management plane, out-of-band automation and segmentation are now essential for bursty, latency-sensitive AI traffic and ransomware containment.
  • Data-and-cloud adjacency – locating AI clusters next to data gravity, leveraging global fabrics to avoid replication drag, and balancing sovereignty, cost and ESG targets.
  • Heat reuse & sustainability metrics – integrating liquid-cooled waste heat into municipal schemes and aligning high-density GPU footprints with Scope 2/3 commitments.
  • Enterprise adoption realities – breaking organisational silos, satisfying sector-specific governance, and timing capex so customers don’t pay today for capacity they’ll need tomorrow.
