We’re fighting over GPUs and memory – but power manufacturing may decide who scales first

Matt Coffel, Chief Commercial and Innovation Officer at Mission Critical Group, argues that while data centres contend with tight silicon supply and rising costs, a quieter constraint is electrical manufacturing capacity and skilled trades – and that may ultimately determine how fast new AI capacity comes online.

Whether it’s GPU supply constraints, allocation battles or which hyperscaler will secure the next generation of chips, the spotlight rarely moves away from compute. But if you walk into any factory that builds electrical gear for data centre power systems, another constraint becomes clear – and it’s not silicon or rare earth metals.

Electrical manufacturing capacity and the availability of skilled trades are becoming significant factors in the rate at which data infrastructure can scale. The industry has spent decades optimising compute performance, but now it’ll need to optimise everything around it. Estimates vary, but power demand from data centres is expected to rise sharply by 2035 – for example, from about 33 GW to 176 GW – which means we’re entering a phase where the ability to build, test and deliver power systems efficiently will help determine who brings capacity online fastest.
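To put that estimate in perspective, the implied growth rate can be sketched with a quick calculation. The 33 GW and 176 GW figures are the article's illustrative estimates, and the ten-year horizon (a 2025 baseline) is an assumption for the sake of the arithmetic:

```python
# Hedged sketch: the compound annual growth rate implied by the cited
# data-centre power-demand estimate (~33 GW today to ~176 GW by 2035).
# The 10-year horizon is an assumption; the GW figures are illustrative.

def implied_cagr(start_gw: float, end_gw: float, years: int) -> float:
    """Annual growth rate implied by a start/end demand figure."""
    return (end_gw / start_gw) ** (1 / years) - 1

growth = implied_cagr(33.0, 176.0, 10)
print(f"Implied growth: {growth:.1%} per year")  # roughly 18% per year
```

Sustained growth on that order of magnitude is what puts year-on-year pressure on factories that were sized for a much flatter demand curve.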

AI-dense workloads are rewriting power requirements

AI-intensive data centres have different needs from traditional facilities, and those differences make the electrical infrastructure critical. Rising power densities mean loads run harder and for longer durations, which raises redundancy expectations. Switchgear, relay panels, power distribution units and modular power and cooling systems must all support 24/7/365 continuity for workloads that need reliable and resilient power.

This shift goes beyond scale. AI adoption is expanding so rapidly that operators are requesting equipment and turnkey builds that typically take 18–24 months in roughly half the time. That expectation is at odds with a manufacturing landscape that wasn’t designed for this kind of acceleration.

What’s happening on factory floors

From hyperscale to colocation to enterprise, telecoms and utilities, demand for electrical gear is rising across nearly every customer segment. At the same time, manufacturers are running into several simultaneous pressures, including:

  • Component lead times are widening for everything from switchgear to relays.
  • Workforce shortages are constraining how quickly assembly and testing lines can scale.
  • Engineering overload from custom builds slows down production – and those delays can cascade into downstream projects.

AI loads also raise the stakes because the GPUs used in data centres require stable power quality. The cost of failure isn’t just downtime – it’s efficiency loss, accuracy issues and delayed model completion.

Electrical systems aren’t assembled like consumer electronics: precision industrial equipment requires specialised technicians, careful quality control and assurance, and field or field-simulated testing. Speed and reliability matter, but you can’t rush safety.

To move more quickly yet safely, the industry has an opportunity to embrace modularisation, prefabricated power systems, digital twins and in-factory testing, as well as standardised assemblies. However, these can only go so far when upstream components, skilled labour and testing capacity continue to bottleneck the downstream supply chain.

How data centre operators can get ahead of the power gap

To address electrical manufacturing bottlenecks, operators can rethink how they plan and build power systems. There are a few ways to reduce risk and lead times:

  • Bring manufacturers into the design phase early. Many of today’s fastest projects are those where engineering teams collaborate from day one. This reduces waste and prevents late-stage surprises.
  • Reduce over-customisation. Every deviation from a standard design adds engineering hours, manufacturing spec changes, QA effort and testing complexities. Standardisation is one of the key levers to speed deployment.
  • Plan around power system lead times. Many projects treat electrical gear as a downstream dependency, but it’s one of the first things you should factor into timelines.
  • Use modularised, prefabricated solutions. This approach reduces on-site labour constraints and delivery risk, while also enabling operators to get what they need more quickly, with the option to scale in the future – without an entirely new design.
  • Design for future power density. GPU generation changes are outpacing electrical redesign cycles. Flexibility at the outset can be the difference between being able to grow or starting over from scratch.

Organisations moving fastest on AI deployments are treating power as a strategic planning input, not an afterthought.

The next constraint: power

In many cases, the constraint on AI growth is shifting from algorithms to infrastructure.

The limiting factor for AI expansion may not be who can build the biggest data centres, or the most of them – but who can get reliable, resilient electrical infrastructure into the field quickly, safely and at scale. Compute innovation will continue to accelerate, but the limits of the grid and of power manufacturing capacity will influence who can keep up.

We should acknowledge that accelerating electrical infrastructure is just as crucial as chip production. If we get this right, AI’s next chapter can unfold at a pace that meets current expectations. If not, we’ll have the GPUs and not enough power systems to turn them on.
