AI is exposing the limits of how we measure success in data centres

Paul Quigley, Airsys USA President, argues that data centres can look ‘efficient’ on paper, yet still fail to unlock meaningful AI capacity when the real constraint is thermal effectiveness.

For much of the past two decades, the data centre industry has treated efficiency as evidence of progress.

That assumption was reasonable. As facilities grew in scale and complexity, metrics like Power Usage Effectiveness (PUE) brought discipline to infrastructure design and operations. They helped reduce waste, improved mechanical systems, and gave operators a shared language for optimisation. In an era of predictable workloads and relatively abundant power, efficiency and advancement largely moved together.

AI workloads have begun to separate the two.

Today, it is increasingly common to find data centres that operate efficiently by every traditional measure, yet struggle to move forward when high-density AI workloads are introduced. Power is available. Sites are permitted. PUE looks respectable. And still, meaningful expansion stalls.

This is not because efficiency has stopped mattering. It is because efficiency no longer tells the full story.

Where the measurement breaks down

Traditional efficiency metrics are excellent at describing how cleanly energy is delivered. They are far less informative about what that energy ultimately produces.

AI has made this distinction impossible to ignore.

A large portion of the energy consumed by AI infrastructure supports work that is inherently transient. Intermediate calculations, discarded states, and short-lived outputs are fundamental to how AI systems operate. Only a small fraction of what is processed becomes durable intelligence that creates long-term value.

When power is plentiful, this distinction is academic. When power is constrained, it becomes strategic.

Two facilities can now consume the same amount of power, report nearly identical PUEs, and yet deliver vastly different amounts of usable compute. On paper, they appear equivalent. In practice, they are not.
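A purely illustrative set of figures (hypothetical, not drawn from any particular facility) shows how. PUE is total facility power divided by the power delivered to IT equipment, so two sites each drawing 10 MW at a PUE of 1.25 both deliver roughly 8 MW to their IT load. If thermal limits force one of them to throttle accelerators or leave racks partially populated while the other sustains full-density AI racks, a meaningful share of the first site's 8 MW is effectively stranded. The PUE figures stay identical; the usable compute does not.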

Efficiency versus effectiveness

This is where the industry’s conversation begins to shift from efficiency to effectiveness. Efficiency asks how well energy is delivered. Effectiveness asks what that energy enables.

A simple physical comparison helps illustrate the difference. Measuring the energy burned by someone swimming is very different from measuring the energy burned by someone treading water. Both can be efficient. Only one produces forward motion.

Many data centres today are expending energy efficiently. The challenge is that, in AI environments, efficiency alone does not guarantee progress.

From PUE to PCE

This gap between energy delivery and usable output is why concepts like Power Compute Effectiveness (PCE) are gaining attention. PCE does not replace PUE. It builds upon it by shifting the focus from how power arrives at IT equipment to how much sustained compute emerges once it gets there.
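One way to picture that shift, offered here as an illustrative sketch rather than a settled definition: where PUE divides total facility power by the power that reaches IT equipment, a PCE-style measure would divide the sustained compute a facility actually delivers by the power it consumes to deliver it. Both ratios involve power; only the second says anything about output.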

PCE brings cooling architecture, thermal transport, and workload density into the same conversation as power availability. It reflects a reality operators are already encountering: two data centres with equal power and equal efficiency can produce radically different results depending on how effectively heat is managed at the source.

Proof in the economics: ROIP

When effectiveness is viewed through an economic lens, the divergence becomes even clearer.

Return on Invested Power (ROIP) captures what traditional metrics cannot. It reflects how much value is created from each unit of power consumed, not simply how efficiently that power is delivered. Facilities with higher PCE consistently produce higher ROIP, even when headline efficiency metrics look the same.
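A hypothetical illustration of the arithmetic, with invented figures: if ROIP is read as value created per unit of power consumed, a 10 MW facility whose higher PCE lets it deliver, say, 30% more sustained compute than an otherwise identical neighbour generates roughly 30% more value per megawatt, assuming value tracks the compute actually delivered. Both facilities can report the same PUE throughout.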

This is no longer theoretical. It is showing up in real portfolios, real retrofit decisions, and real financial outcomes.

Powered, permitted… and still constrained

Many operators now find themselves in a familiar position. Their sites are powered. Their facilities are permitted. Capital is available. And yet expansion remains constrained. The limiting factor is no longer the grid. The constraint is thermal.

Legacy architectural choices – raised floors, multi-storey layouts, indirect airflow paths – introduce turbulence into systems that now demand precision. 

Air, once sufficient as the primary transport medium, becomes unpredictable at scale. Mixing, recirculation, and localised hot spots quietly cap what can be achieved, even in facilities that appear healthy by traditional measures.

A meaningful portion of the industry’s next phase of growth is not waiting on new power. It is stranded inside existing footprints, constrained by how heat is moved.

Finding footing again

Liquid cooling represents the point at which many facilities stop fighting the current and begin to move again.

By bringing cooling closer to the heat source and reducing reliance on air as the primary transport medium, liquid-based architectures replace turbulent workarounds with predictable flow. The same power envelope begins to support far more usable compute. PCE improves. ROIP follows. Capacity that once appeared unreachable inside powered and permitted sites becomes productive again. This is why liquid cooling is not simply a density upgrade. It is an effectiveness upgrade.

The industry will continue to pursue new gigawatts. It has to. But the next phase of progress will increasingly belong to those who stop measuring motion alone and start measuring distance travelled.

In an AI-driven world, the difference between success and stagnation is no longer how efficiently power is consumed, but whether it produces forward motion.

Efficiency still matters. But progress now belongs to systems that can move. 
