Ian Shearer, Managing Director, APAC & EMEA at Park Place Technologies, explores why expert support on power, cooling and operations is now essential as AI pushes legacy data centre designs to their limits.
Artificial intelligence is moving from pilot to production, and data centre demand is rising fast. Independent forecasts project that global data centre power demand could rise by as much as 165% between 2023 and 2030.
At the same time, engineering guidance shows many AI racks will outgrow traditional air cooling, pushing operators to adopt liquid-cooling approaches and rework electrical and mechanical systems. This is where IT professional services come in, helping organisations upgrade safely and without service impact – from design through install/move/add/change (IMAC) to relocation and IT Asset Disposition (ITAD).
Fit-for-purpose design: Power and cooling for AI
Cooling choices
For many AI systems, air alone becomes limiting above roughly 50 kW per rack. Rear-door heat exchangers (RDHX) generally handle ~40–60 kW, direct-to-chip (DTC) cold plates cover ~60–120 kW, and immersion approaches can exceed 100 kW per rack, with dual-phase reported higher. Liquid cooling can also stabilise component temperatures and has been shown to improve power usage effectiveness (PUE) versus air; some sites report a ~10% PUE improvement.
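As an illustrative sketch only, those quoted ranges can be read as a simple lookup that flags which cooling approaches are plausible for a given rack density; the thresholds come from the figures above, and the function name and boundaries are assumptions for illustration, not vendor guidance.

```python
# Illustrative only: maps a target rack power to the cooling approaches
# whose commonly quoted ranges (as cited above) can plausibly serve it.
def plausible_cooling_options(rack_kw: float) -> list[str]:
    options = []
    if rack_kw <= 50:
        options.append("traditional air (often limiting above ~50 kW)")
    if 40 <= rack_kw <= 60:
        options.append("rear-door heat exchanger (RDHX, ~40-60 kW)")
    if 60 <= rack_kw <= 120:
        options.append("direct-to-chip cold plates (DTC, ~60-120 kW)")
    if rack_kw >= 100:
        options.append("immersion (can exceed 100 kW; dual-phase reported higher)")
    return options

# Example: a hypothetical 90 kW AI rack falls squarely in direct-to-chip territory.
print(plausible_cooling_options(90.0))
```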
Environmental considerations
Recent life-cycle assessment (LCA) work indicates liquid-cooled facilities can reduce operational and embodied carbon versus conventional air systems – potentially up to ~50% in the modelled scenarios – while lowering water use, though outcomes depend on local energy mix and design choices.
Liquid is also a far more efficient heat-transfer medium than air, which is why it is increasingly favoured for dense racks. Because some dual-phase fluids contain PFAS, teams weigh potential health and environmental impacts alongside performance.
Why services matter
Moving to liquid or hybrid cooling is not a simple swap. It touches electrical loading, heat-rejection pathways, leak detection, rack layout, and controls. Professional services teams use site surveys, computational fluid dynamics (CFD) modelling, and methodical development of Methods of Procedure (MOPs) and Standard Operating Procedures (SOPs) to plan and implement the change while keeping live workloads available.
Electrical architecture and capacity planning
AI loads tend to require larger switchgear and power distribution units (PDUs). Some operators are also evaluating 48-volt server power trains to reduce conversion losses, with some trials reporting reductions of 25% or more. Backup strategies also vary: training clusters can sometimes run with lower redundancy than transactional systems operating under strict Service Level Agreements (SLAs).
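Part of the motivation is plain Ohm's-law arithmetic: for the same delivered power, quadrupling the distribution voltage cuts current to a quarter and resistive (I²R) losses to a sixteenth. The sketch below illustrates that distribution-side relationship with made-up load and resistance figures; it is not the conversion-loss measurement cited above, just an assumption-laden illustration of why higher bus voltages help.

```python
# Illustrative I^2*R comparison for in-rack distribution at 12 V vs 48 V.
# The load and resistance values are made-up examples, not measurements.
def distribution_loss_watts(load_w: float, bus_voltage: float, path_resistance_ohm: float) -> float:
    current = load_w / bus_voltage              # I = P / V
    return current ** 2 * path_resistance_ohm   # loss = I^2 * R

load_w = 1_000        # hypothetical 1 kW shelf
resistance = 0.01     # hypothetical 10 milliohm distribution path

loss_12v = distribution_loss_watts(load_w, 12.0, resistance)
loss_48v = distribution_loss_watts(load_w, 48.0, resistance)
print(f"12 V loss: {loss_12v:.1f} W, 48 V loss: {loss_48v:.1f} W")
# Same power, 4x the voltage -> 1/16th the resistive loss in this toy example.
```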
Most operators are not starting from scratch, either: the industry-wide average PUE still hovers just above 1.5, and many sites still follow traditional designs. Specialist planners help set realistic upgrade paths that align utility feeds, on-site distribution, and thermal strategy – before equipment arrives.
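For context, PUE is simply total facility energy divided by IT energy, so even a ~10% improvement from the ~1.5 average is material at AI scale. A back-of-the-envelope sketch, using assumed figures rather than measured data:

```python
# Back-of-the-envelope PUE arithmetic with assumed figures.
it_load_mw = 10.0                   # hypothetical IT load
pue_before, pue_after = 1.5, 1.35   # ~10% improvement on the ~1.5 average

facility_before = it_load_mw * pue_before   # total facility power, MW
facility_after = it_load_mw * pue_after
print(f"Overhead saved: {facility_before - facility_after:.1f} MW "
      f"({facility_before:.1f} MW -> {facility_after:.1f} MW)")
# For a 10 MW IT load, that is roughly 1.5 MW of facility power avoided.
```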
IMAC at AI pace
As hardware evolves quickly, IMAC work must be controlled, well-documented, and reversible. That discipline matters: the latest global survey finds power issues are the leading cause of impactful outages, and about one in ten outages is rated serious or severe – exactly the kind of event to avoid during change windows.
Professional services teams can reduce errors by standardising MOPs and SOPs, coordinating change windows against SLAs, and validating each change (power, network, cooling) with pre-checks and rollback plans.
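As a minimal sketch of that discipline (the step names and checks are hypothetical placeholders, not a prescribed procedure), each MOP step can be modelled as a pre-check, an action, a verification, and a rollback that runs only if verification fails:

```python
# Minimal sketch of a MOP-style change step: pre-check, act, verify, roll back.
# Step names and checks are hypothetical placeholders, not a real procedure.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChangeStep:
    name: str
    pre_check: Callable[[], bool]   # e.g. confirm the redundant feed is healthy
    action: Callable[[], None]      # e.g. swing load to the new PDU
    verify: Callable[[], bool]      # e.g. confirm voltage and thermals are in range
    rollback: Callable[[], None]    # e.g. restore the original feed

def execute(step: ChangeStep) -> bool:
    if not step.pre_check():
        print(f"{step.name}: pre-check failed, step not attempted")
        return False
    step.action()
    if step.verify():
        print(f"{step.name}: verified")
        return True
    step.rollback()
    print(f"{step.name}: verification failed, rolled back")
    return False
```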
Relocation and consolidation without service impact
Enterprises are consolidating footprints and mixing colocation, cloud, and on-premises resources. Even with widespread remote management, a substantial share of workloads still runs on-premises; recent Uptime Institute survey responses put roughly 45% of workloads in on-premises or enterprise facilities versus about 55% in colocation or cloud. Relocation services coordinate inventory, chain-of-custody, cabling, airflow/containment, and post-move verification so application owners see no performance dip.
ITAD and sustainability during refresh
Modernisation programmes inevitably retire legacy hardware. IT Asset Disposition (ITAD) services close that loop by sanitising or destroying data in line with recognised standards (e.g., NIST SP 800-88 and IEEE 2883) and providing a certificate of destruction or erasure that records who did what, when, how, and with which verification. That documentation is central to audit and risk reduction.
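A certificate of that kind is ultimately a structured record. As an illustrative sketch only (the field names are assumptions for illustration, not a formal NIST or IEEE schema), it might capture:

```python
# Illustrative record for a certificate of erasure/destruction.
# Field names are assumptions for illustration, not a formal NIST/IEEE schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ErasureCertificate:
    asset_tag: str        # which device
    serial_number: str
    method: str           # how, e.g. a purge method per NIST SP 800-88 / IEEE 2883
    performed_by: str     # who
    performed_at: str     # when (ISO 8601 timestamp)
    verification: str     # how the result was verified, e.g. sampled read-back
    outcome: str          # "erased" or "destroyed"

cert = ErasureCertificate(
    asset_tag="SRV-0142", serial_number="ABC123",
    method="NIST SP 800-88 purge (cryptographic erase)",
    performed_by="technician-07", performed_at="2025-03-14T10:22:00Z",
    verification="sampled read-back verification", outcome="erased",
)
```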
Beyond data protection, many buyers look for environmental stewardship. Certifications such as R2 (administered by SERI) and e-Stewards (Basel Action Network) attest to practices around responsible recycling, worker safety, and export controls – useful signals when selecting an ITAD partner.
Execution playbook: What to expect from professional services
AI facility work succeeds when it’s planned and verified, not improvised. The phases below show how a professional services team keeps production running, sets clear checks and rollbacks, and leaves a reliable audit trail at each step.
1. Assessment & design
Confirm utility capacity, room for switchgear/PDUs, and thermal envelope; model options (air, RDHX, DTC, immersion) against target rack powers and PUE/TCO goals.
2. Preparation
Update MOP/SOPs, implement monitoring, and stage gear to minimise live-site disruption.
3. Implementation
Sequence IMAC tasks with witness testing at each stage; enforce electrical and mechanical safety gates. Given outage statistics, treat power work with extra scrutiny.
4. Validation
Prove thermals, efficiency and resilience under load; capture a baseline for ongoing operations.
5. Decommissioning & ITAD
Sanitise media to NIST/IEEE standards, then repurpose, resell, or recycle with complete documentation.
Why now?
AI demand is accelerating, and industry forecasts expect capacity and power needs to grow sharply this decade. Designing, executing, and documenting upgrades with experienced IT professional services reduces errors, shortens maintenance windows, and keeps compliance on track as facilities adopt higher-density cooling and new electrical designs.

