Europe’s data centres can no longer treat sovereignty as abstract

Jad Jebara
President & CEO at Hyperview

As geopolitical pressures, AI growth, and supply chain constraints collide, sovereignty is becoming a more immediate concern for data centre operators across the UK and Europe. Jad Jebara, President & CEO at Hyperview, explains why.

Data centres have always mattered, but the reasons why are changing. For years, the focus has been on capacity, cost, and uptime. In 2026, the conversation is increasingly shaped by sovereignty, resilience, and trust, with AI accelerating that shift. Today, around 51% of global data centre capacity sits in the United States, a concentration that is shaping how other regions think about control, dependency, and long-term resilience.

For UK and European operators, this backdrop matters as capacity expansion and modernisation take place in an environment shaped by export controls, tariffs, and servicing restrictions on critical equipment. Decisions about hardware, maintenance, and long-term support are increasingly influenced by geopolitical risk.

Sovereignty is often framed as a policy issue, but in practice it is an operational concern that shapes workload placement, infrastructure sourcing, and how organisations manage continuity, compliance, and security at scale. It reflects whether operators can reliably source, operate, and support the infrastructure their digital services depend on as geopolitical conditions and supply chains continue to shift.

Sovereignty starts with physical reality

The industry often discusses sovereignty as a data issue, but the foundation is physical. Control over data and models depends on control over the infrastructure they run on, including access to power, access to hardware, and the ability to operate and maintain systems reliably through changing market conditions.

Two forces are shaping planning decisions. The first is the ongoing investment wave into AI infrastructure, particularly training environments. The second is geopolitics, including tariffs, export restrictions, and industrial policy aimed at keeping strategic capabilities closer to home. As hardware, servicing, and supply chains become more strategic considerations, the data centre becomes a strategic asset by default.

For European operators, these pressures are often felt without dramatic policy shifts, as lead times lengthen, sourcing options narrow, and servicing constraints emerge in parts of the supply chain that were previously stable. Even when restrictions do not apply directly, knock-on effects still arrive through global supply chains that reroute slowly and unpredictably. Resilience planning therefore extends beyond redundant power and cooling to include procurement, lifecycle support, and operational continuity.

AI is changing the footprint, not just the demand

Much of the recent infrastructure build-out has focused on training large models, driven by increasing volumes of data applied to ever more powerful systems. While this has delivered rapid progress, returns from scale are beginning to taper, and attention is moving towards inference and more domain-specific systems.

These systems create value through specialised knowledge, workflow automation, and trusted applications in areas such as healthcare, finance, and public services. This shift changes not only how AI is used, but also how infrastructure is planned and deployed. Training environments tend to concentrate in a small number of very large facilities, while inference environments are more distributed, particularly where sovereignty and regulatory requirements apply. As a result, workloads become more regional, more governed, and more sensitive to locality and operational continuity.

This combination makes 2026 a transition year. Capacity will continue to expand, but planning decisions increasingly need to reflect where data is processed, where models run, and how quickly new capacity can be deployed without introducing operational fragility.

Density is surfacing visibility gaps faster

As AI adoption grows, data centre environments are becoming denser and more complex. GPU environments behave very differently from traditional IT estates, with sustained high utilisation changing thermal behaviour, power delivery patterns, and operational tolerances across facilities.

The impact extends beyond individual racks into switching gear, UPS capacity, cooling systems, and maintenance planning. Operators need to understand not just power draw, but what that power supports, how assets depend on one another, and where risk emerges as conditions change.

Security, compliance, and resilience depend on clear, up-to-date visibility into the infrastructure state. Many organisations still rely on fragmented systems and incomplete data, supported by manual processes that struggle to keep pace as environments grow denser and more distributed. AI and automation both depend on complete, accurate, and timely data, and when that foundation is weak, operational risk increases.

Data centre management is moving beyond monitoring

Traditional data centre infrastructure management (DCIM) has typically been positioned as a monitoring function, focused largely on power and cooling and often disconnected from what is happening inside the racks. This approach struggles in modern environments because operators are no longer managing static facilities. They are managing systems that change frequently across physical, logical, and financial layers, sometimes within the same day.

As a result, the industry is shifting towards richer, more operationally useful models of the environment, continuously updated through discovery and telemetry and presented with context. Context turns raw data into usable insight. A PDU reading becomes meaningful when it is connected to a specific rack, that rack is linked to a cage, the cage is associated with a customer or workload, and the full dependency chain is visible enough to support informed decisions.
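The dependency chain described above can be illustrated with a minimal sketch. The asset names and the parent-link structure below are hypothetical examples, not a description of any particular DCIM product:

```python
# Minimal sketch of a dependency chain: each asset records its parent,
# so a raw PDU reading can be traced up to the customer it supports.
# All asset names and the hierarchy itself are illustrative assumptions.

parents = {
    "pdu-07": "rack-12",
    "rack-12": "cage-3",
    "cage-3": "customer-acme",
}

def dependency_chain(asset: str) -> list[str]:
    """Walk parent links from an asset to the top of the hierarchy."""
    chain = [asset]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

# A PDU power reading, placed in context:
reading_kw = 6.2
print(f"{reading_kw} kW on " + " -> ".join(dependency_chain("pdu-07")))
```

In practice the same walk would run over a continuously discovered asset inventory rather than a hard-coded dictionary, but the principle is the same: a reading only becomes decision-grade once the full chain above it is resolvable.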

With this level of context, operators can move beyond simple status reporting towards diagnostic analysis and, in some cases, more predictive or prescriptive forms of decision-making. This is also where AI may begin to offer practical value, helping teams query their environments, surface issues more quickly, and reduce the time between detection and response.

Many of the resulting improvements are straightforward and operational. Missing blanking panels can undermine airflow and cooling efficiency. Network documentation can drift from physical reality, leaving cabling diagrams that no longer reflect actual connections. Firmware and configuration standards can quietly fall behind, allowing lifecycle risks to build over time. Used carefully, AI-assisted tools may help teams identify these gaps earlier and reduce some of the manual effort involved, while more automated processes can support monitoring, workflows, and policy enforcement within clearly defined controls.
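The documentation-drift problem in particular reduces to a simple comparison between what the diagrams say and what discovery observes. The connection pairs below are invented for illustration; real discovery data would typically come from a protocol such as LLDP:

```python
# Sketch of detecting documentation drift: compare documented cabling
# against connections discovered from the live network.
# The connection pairs are made-up examples.

documented = {("switch-1:p4", "server-a"), ("switch-1:p5", "server-b")}
discovered = {("switch-1:p4", "server-a"), ("switch-1:p6", "server-b")}

undocumented = discovered - documented  # live, but missing from the diagrams
stale = documented - discovered         # documented, but no longer present

print("undocumented:", sorted(undocumented))
print("stale:", sorted(stale))
```

Set difference keeps the check cheap enough to run on every discovery cycle, so drift is flagged as it happens rather than found during an incident.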

As operations evolve, the metrics used to measure performance are also changing. Power utilisation percentage is becoming increasingly important as operators assess IT energy consumption against designed capacity in AI-driven environments. New AI-specific metrics, such as IT energy per token, are also emerging, but these are only meaningful when viewed alongside utilisation and financial measures such as free cash flow and revenue efficiency, which together provide a clearer picture of performance and profitability.
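The arithmetic behind these two metrics is straightforward. The figures below are illustrative inputs, not benchmarks or real facility data:

```python
# Illustrative calculation of the two metrics mentioned above.
# All input figures are made-up examples, not real measurements.

design_capacity_kw = 1000.0  # designed IT capacity of the facility
it_load_kw = 620.0           # actual IT power draw

# Power utilisation percentage: actual IT draw vs designed capacity.
utilisation_pct = 100.0 * it_load_kw / design_capacity_kw

# IT energy per token: energy consumed by an inference cluster
# divided by the tokens served over the same window.
it_energy_kwh = 450.0
tokens_served = 90_000_000
energy_per_token_j = it_energy_kwh * 3.6e6 / tokens_served  # kWh -> joules

print(f"utilisation: {utilisation_pct:.1f}%")
print(f"energy per token: {energy_per_token_j:.1f} J")
```

Neither number is meaningful in isolation, which is the article's point: a high utilisation figure with poor revenue efficiency, or a low energy-per-token figure on an idle cluster, tells a very different story than the metrics alone suggest.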

These changes carry important implications for sovereignty-driven deployment. As infrastructure becomes more distributed across edge locations and regional sites, the cost and complexity of relying on people to manage every environment increases. Greater operational autonomy may therefore become more important for scaling operations while maintaining security, resilience, and consistent control.

What this means for operators

For UK and European operators, sovereignty shows up in everyday choices around sourcing, servicing, compliance, and security, particularly as regulated inference workloads become more common. The most effective response is not to chase scale for its own sake, but to focus on visibility, flexibility, and control. That means understanding infrastructure in detail, using automation to reduce day-to-day operational pressure, and designing environments that can adapt as external conditions change, rather than locking long-term assets to short-term assumptions.
