DCR Predicts: Is data sovereignty about to trigger a cloud rethink?

Fred Lherault
Field CTO EMEA/Emerging Markets at Pure Storage

With regulators and boards paying closer attention to where sensitive data sits, Fred Lherault, Field CTO EMEA/Emerging Markets at Pure Storage, outlines why hybrid strategies and selective cloud repatriation are likely to accelerate as AI scales.

After two years of accelerated AI experimentation, rising expectations, and rapid vendor expansion, I believe 2026 will mark an important inflection point for organisations building modern data infrastructure. Many enterprises are now moving past the initial hype cycle and focusing on what is required to operationalise AI reliably and at scale.

That shift is already visible across customers evaluating how AI will integrate into production workflows. If we extrapolate from these trends, several themes are likely to influence how organisations design their data pipelines, storage architectures, and cloud strategies in the year ahead. The following reflects my perspective on how these dynamics may unfold.

From hype to production: data readiness and inference become the priority

While some organisations are still convincing themselves of how essential AI is, most are now realistic about what they do, and, crucially, do not, deploy. The shift in focus from training to inference means that, without a robust inference platform and the ability to get data ready for AI pipelines, organisations are set to fail.

As AI inference workloads become part of the production workflow, organisations will have to ensure their infrastructure supports not just fast access, but also high availability, security, and non-disruptive operations. Failing to do so will be costly, both from a results perspective and an operational one.

However, most organisations are still struggling with the data readiness challenge. Getting data AI-ready requires going through many phases, such as data ingestion, curation, transformation, vectorisation, indexing, and serving. Each of these phases can typically take days or weeks, and delay the point when the AI project’s results can be evaluated by the business.

Organisations that care about using AI with their own data will focus on streamlining and automating the whole AI data pipeline – not just for faster initial evaluation of results, but also for continuous ingestion of newly created data, and for iteration.

This remains one of the most significant barriers to AI adoption. Enterprise data is often dispersed across legacy systems, cloud environments, and archives, which makes it difficult to access and prepare at the speed AI workflows require. In 2026, we can expect this challenge to become more pronounced as organisations look to extract value from all of their data, regardless of location. Manual preparation will not scale to meet these requirements. Automated pipelines, richer metadata, and integrated data platforms will become essential foundations for organisations aiming to use AI with continuous, repeatable outcomes.
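The phases named above can be sketched end to end. The following is a toy illustration only: the function names are mine, and the bag-of-words vectoriser is a stand-in for a real embedding model — a production pipeline would call a model service at that step and use a proper vector database for indexing and serving.

```python
import math

# Toy sketch of the pipeline phases: ingest -> curate -> vectorise -> index -> serve.
# All names are illustrative; the vectoriser is a bag-of-words stand-in
# for a real embedding model.

def ingest(sources):
    """Ingestion: pull raw records from wherever they live."""
    return [doc for src in sources for doc in src]

def curate(docs):
    """Curation: drop empty and duplicate records."""
    seen, kept = set(), []
    for d in docs:
        text = d.strip()
        if text and text not in seen:
            seen.add(text)
            kept.append(text)
    return kept

def vectorise(text, vocab):
    """Vectorisation: bag-of-words counts (a real pipeline calls an embedding model)."""
    tokens = text.lower().split()
    return [float(tokens.count(w)) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def build_index(docs):
    """Indexing: store (vector, text) pairs for retrieval."""
    vocab = sorted({t for d in docs for t in d.lower().split()})
    return vocab, [(vectorise(d, vocab), d) for d in docs]

def serve(vocab, index, query, top_k=1):
    """Serving: nearest-neighbour lookup by cosine similarity."""
    q = vectorise(query, vocab)
    ranked = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)
    return [text for _, text in ranked[:top_k]]

sources = [
    ["storage arrays ship with flash", ""],
    ["storage arrays ship with flash", "kubernetes schedules containers"],
]
docs = curate(ingest(sources))
vocab, index = build_index(docs)
print(serve(vocab, index, "flash storage"))
# -> ['storage arrays ship with flash']
```

The point of automating the chain is visible even in the toy: re-running `curate` and `build_index` on newly ingested data refreshes what `serve` returns, which is the continuous-ingestion loop described above.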

AI and data sovereignty will reshape cloud strategy, and accelerate selective repatriation

The dual issues of AI and data sovereignty are driving concerns about where data is stored, how organisations can maintain trust, and how they can guarantee access if problems arise. To extract value from AI, organisations must know where their most important data is, and ensure it is ready for use.

Concerns about data sovereignty are also driving more organisations to reconsider their cloud strategy. In response, rising geopolitical tensions and regulatory pressure will shape nations’ data centre strategies in 2026. Governments, in particular, want to minimise the risk that access to data could be used as a threat or negotiating tactic. Organisations should be similarly wary, and prepare accordingly.

We are already seeing early indicators of this shift. Boards and regulators are paying closer attention to where sensitive and strategically important data resides, driven, in part, by evolving regulatory frameworks such as GDPR, DORA, and guidance emerging from the EU AI Act. This scrutiny is prompting many organisations to reassess cloud strategies that once prioritised cost or convenience over sovereignty and resilience.

As a result, hybrid models are likely to expand, with more AI-critical datasets and workloads positioned closer to where they can be governed, audited, and controlled. This is not a retreat from the cloud, but a more deliberate, workload-specific leveraging of it.

KubeVirt will scale into mainstream production

The recent changes to VMware licensing that followed Broadcom’s acquisition have kickstarted a conversation around alternative approaches to virtualised workloads. KubeVirt, which allows management of virtual machines through Kubernetes, provides one such alternative—a platform that encompasses both virtualisation and containerisation needs—and I expect it will take off in 2026.

The KubeVirt offering has matured to the point where it is suitable for enterprise needs. For many, moving to another virtualisation provider is a huge upheaval, and, while it may eventually save money, it always comes with a set of limitations and constraints, especially when it comes to everything that surrounds the virtualisation platform (data protection, security, networking, and so on).

KubeVirt enables organisations to leverage the growing Kubernetes ecosystem and realise value more quickly from a platform that can manage, orchestrate, and monitor not just VMs but also containers, regardless of how the balance between the two evolves over time.
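Concretely, KubeVirt represents a virtual machine as just another Kubernetes API object. The sketch below builds the rough shape of a KubeVirt `VirtualMachine` resource as a Python dict; the VM name, memory size, and disk image are placeholder values, and in practice the resulting manifest is applied to a KubeVirt-enabled cluster with `kubectl` or `virtctl`.

```python
import json

# Illustrative shape of a KubeVirt VirtualMachine object.
# Name, memory, and image are placeholders chosen for this sketch.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,  # start the VM as soon as the object is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # containerDisk boots the VM from an image pulled
                        # from a registry (placeholder image reference)
                        "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                    }
                ],
            }
        },
    },
}

print(json.dumps(vm, indent=2))
```

Because the VM is a declarative object like a Deployment or a Pod, the same tooling — GitOps, RBAC, monitoring — applies to both workload types, which is the unified operational model described above.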

KubeVirt’s momentum reflects a broader shift in how organisations want to operate their infrastructure. As containerisation becomes standard and AI workloads scale, many teams are looking for a unified operational model that reduces complexity, and avoids long-term platform lock-in. Consolidating virtual machines and containers under a single control plane aligns with this direction.

If adoption increases as predicted, storage and data services will evolve in parallel, with greater demand for persistent, low-latency, Kubernetes-native storage that can support mixed-workload environments.
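One concrete example of that demand: KubeVirt relies on shared, ReadWriteMany (RWX) volumes for VM live migration, so the PersistentVolumeClaims backing VM disks look somewhat different from typical container storage. The sketch below shows the rough shape of such a claim as a Python dict; the claim name, size, and storage class are placeholders I have chosen for illustration.

```python
import json

# Illustrative PersistentVolumeClaim for a VM disk in a mixed
# VM/container environment. Name, size, and storageClassName are
# placeholders for this sketch.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "vm-rootdisk"},
    "spec": {
        # RWX lets source and destination nodes attach the volume
        # simultaneously, which live migration depends on
        "accessModes": ["ReadWriteMany"],
        "volumeMode": "Block",  # raw block device, common for VM disks
        "resources": {"requests": {"storage": "20Gi"}},
        "storageClassName": "fast-nvme",  # placeholder class name
    },
}

print(json.dumps(pvc, indent=2))
```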

2026 will be about discipline, not disruption

If the past two years have been defined by rapid disruption, driven largely by AI, 2026 is likely to be a year where organisations prioritise the operational foundation required for long-term success. Enterprises will:

  • Move from AI experimentation to consistent, production-grade inference models
  • Modernise data pipelines to support continuous data readiness
  • Reassess cloud strategies with a sharper focus on sovereignty, governance, and resilience
  • Evaluate VMware alternatives, such as KubeVirt, which support a unified approach to virtual machines and containers

The organisations able to take these shifts in their stride will be best placed for success in 2026.

This article is part of our DCR Predicts 2026 series. The series will officially end on Monday, February 2 with a special bonus prediction.
