Is enterprise AI deserting the cloud – and racing to the edge?

Lee Larter
Pre-sales Director at Dell Technologies

Lee Larter, Pre-sales Director at Dell Technologies, explains why the winners of the AI era will be those that build a distributed, data-centric infrastructure able to place compute and intelligence wherever the data lives.

AI is rapidly reshaping the business world, enabling breakthroughs from real-time fraud detection in financial services to predictive maintenance in manufacturing. The UK AI market is valued at over £21 billion and is projected to reach £1 trillion by 2035, according to the US International Trade Administration. For UK enterprises to unlock AI’s potential, they need more than just advanced algorithms and top-tier data scientists. They require a robust, adaptable infrastructure that can flex and scale to keep pace with evolving demands.

The shift to distributed data centres 

For years, massive cloud-based models, trained on enormous datasets and running in centralised data centres, have dominated AI discourse. But a fundamental shift is underway. Now, building a truly future-ready infrastructure is about supporting AI initiatives – wherever data lives. As AI deployment scales, the next frontier isn’t (just) in the cloud – it’s at the edge, where immediate, data-driven decision-making is critical. AI is increasingly embedded in factories, hospitals, energy grids and countless other real-world environments. We are witnessing an infrastructure revolution, and with it, a distributed, seamless future is emerging.

A distributed data centre can be defined as an architecture in which compute and storage resources are spread across multiple geographic locations but centrally managed. The driver of such a shift? The need to support the next era of AI, especially as we evolve from generative AI to agentic AI.

Onboarding agentic systems – autonomous, self-organising and distributable – will bring adaptability and goal-oriented intelligence to business operations and the future of work. And it will have huge implications for infrastructure. 

Deploying agentic AI at scale requires a robust, scalable infrastructure and integration with existing tools that support both cloud-based and edge computing environments. Businesses need to ensure that their AI workloads have access to all their data in a consistent format – regardless of where it’s located.
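
To make that idea concrete, here is a minimal Python sketch of one access layer spanning many locations. It assumes the open-source fsspec and pandas libraries, and the URIs are hypothetical; any comparable storage-abstraction layer would illustrate the same point.

```python
# Minimal sketch: reading the same dataset through one interface,
# whether it sits on a local edge node or in cloud object storage.
# fsspec/pandas are assumptions; reading from s3:// also needs s3fs.
import fsspec
import pandas as pd

# Hypothetical locations; the calling code does not care which is which.
LOCATIONS = [
    "file:///mnt/edge-node/telemetry/2024-06.parquet",   # edge site
    "s3://analytics-lake/telemetry/2024-06.parquet",     # cloud bucket
]

def load_telemetry(uri: str) -> pd.DataFrame:
    """Open any supported URI and return data in one consistent format."""
    with fsspec.open(uri, "rb") as f:
        return pd.read_parquet(f)
```

The point is that the calling code never branches on where the data physically sits – that is the property a distributed architecture has to preserve.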

Four pillars for optimised AI success 

Successfully scaling AI means making careful choices about every layer of infrastructure. This is particularly true for those looking to balance traditional workloads, such as virtual machines and databases, with AI, edge applications and containerised jobs. Here are four essential areas for building scalable, future-ready operations:

1. Scaling computing power and networking for AI anywhere

To drive enterprise AI, performance is essential. Training large models, parsing immense datasets and generating real-time insights all require powerful accelerated computing precisely where the data lives. This is not just about stacking GPUs; it involves deliberate choices across the entire technology stack. AI-specific hardware – including GPUs, NPUs and dedicated accelerators – is now indispensable for enterprises pushing towards the edge of AI capability.
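
As a simple illustration of “AI anywhere” in practice, the sketch below picks the best accelerator available on whatever node the workload lands on. It assumes PyTorch, and the model is a placeholder; the same pattern applies to any framework.

```python
# Minimal sketch: picking the best available accelerator at runtime,
# so the same workload runs on a GPU-dense core site or a smaller edge box.
import torch

def pick_device() -> torch.device:
    """Prefer CUDA GPUs, fall back to Apple MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(128, 8).to(device)   # placeholder model
print(f"Running inference on: {device}")
```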

Seamless, high-speed data movement is another imperative. High-performance GPU farms, generative AI applications and enterprise-scale AI deployments demand connectivity solutions capable of handling massive data flows with precision and speed. High-bandwidth, low-latency networks are necessary to interconnect clouds, sites and ultra-dense server racks. Solutions like software-defined networking (SDN) and advanced network optimisation enable consistent, uninterrupted AI operations regardless of data location.
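
Network choices ultimately show up as measurable latency. The rough Python probe below illustrates the principle of measuring before placing a latency-sensitive workload; the hostnames are hypothetical, and production environments would rely on SDN controllers and network telemetry rather than ad-hoc scripts.

```python
# Minimal sketch: a crude latency probe between sites.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP connect time to a host, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += time.perf_counter() - start
    return (total / samples) * 1000

# Hypothetical endpoints: one edge gateway, one cloud region.
for site in ["edge-gw.example.internal", "eu-west-cloud.example.com"]:
    print(site, f"{tcp_rtt_ms(site):.1f} ms")  # raises if unreachable
```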

2. Data management driving seamless AI workflows

AI thrives on high-quality data that is secure, accessible and well-governed. However, orchestrating this data across multiple clouds is an immense technical challenge, magnified in heavily regulated markets like the UK. Because AI is only as powerful as the data that fuels it, organisations need a platform designed for performance and scalability. Key capabilities of an effective AI data platform include:

  • Data placement: Efficiently ingesting and placing vast data volumes from varied sources, using scalable file, structured and object storage to support high-performance workloads.
  • Data processing: Enhancing data discoverability using curation, metadata enrichment, tagging and dynamic indexing. This streamlines retrieval and paves the way for seamless integration with business applications (a minimal sketch follows this list).
  • Data protection: Protecting data with robust access controls, masking, encryption and intelligent threat detection to assure comprehensive compliance and maintain trust.
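
As promised above, here is a minimal Python sketch of the tagging-and-indexing idea behind the data processing capability. The tags, URIs and field names are illustrative assumptions, not a product schema.

```python
# Minimal sketch: enriching ingested datasets with metadata tags and
# keeping a simple inverted index so AI pipelines can find data fast.
from collections import defaultdict

index: dict[str, set[str]] = defaultdict(set)  # tag -> dataset URIs

def register(uri: str, tags: list[str]) -> None:
    """Tag a dataset at ingest time and update the index."""
    for tag in tags:
        index[tag].add(uri)

def find(*tags: str) -> set[str]:
    """Return datasets carrying every requested tag."""
    return set.intersection(*(index[t] for t in tags))

register("s3://lake/claims/2024-06.parquet", ["pii", "finance", "uk-region"])
register("file:///edge/cam01/frames/", ["video", "uk-region"])
print(find("uk-region", "pii"))  # -> {'s3://lake/claims/2024-06.parquet'}
```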

An AI data platform architecture needs to be able to adapt to the evolving needs of AI and data teams. As such, it should be open, flexible and secure to avoid vendor lock-in and support an extensive ecosystem of tools and standards. This helps UK enterprises maintain compliance with regulations such as GDPR and CCPA, while addressing concerns like data bias and privacy in AI models.
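
On the privacy point, here is a small sketch of identifier masking, one narrow slice of the protection capabilities listed above. The column names are assumptions for illustration.

```python
# Minimal sketch: masking direct identifiers before data feeds a model.
# Field names and masking rules are illustrative assumptions.
import hashlib

def mask_record(record: dict) -> dict:
    """Pseudonymise identifiers; keep analytical fields intact."""
    masked = dict(record)
    if "email" in masked:
        masked["email"] = hashlib.sha256(masked["email"].encode()).hexdigest()[:12]
    if "name" in masked:
        masked["name"] = "REDACTED"
    return masked

print(mask_record({"name": "A. Smith",
                   "email": "a.smith@example.com",
                   "claim_value": 1250.00}))
```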

3. Storage underpinning hyper-scalable AI

Enterprises must next focus on secure storage that supports exponential data growth while controlling costs and minimising bottlenecks. A tiered storage architecture is vital. High-speed flash delivers instant access for active datasets, while cost-effective archives handle long-term storage, maintaining performance and budget discipline. Distributed storage and hybrid cloud object solutions are particularly suited to managing the vast, unstructured data typical of AI workloads.
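
A tiering decision can be as simple as an age-based rule, as the Python sketch below shows. It is illustrative only; the tier names and thresholds are assumptions, and real platforms would also weigh access frequency, cost and SLAs.

```python
# Minimal sketch: routing a dataset to a storage tier by access recency.
from datetime import datetime, timedelta, timezone

TIERS = {"hot": "nvme-flash", "warm": "object-store", "cold": "archive"}

def choose_tier(last_access: datetime) -> str:
    """Hot flash for active data, cheaper tiers as data goes quiet."""
    age = datetime.now(timezone.utc) - last_access
    if age < timedelta(days=7):
        return TIERS["hot"]
    if age < timedelta(days=90):
        return TIERS["warm"]
    return TIERS["cold"]

print(choose_tier(datetime.now(timezone.utc) - timedelta(days=30)))
# -> object-store
```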

On-demand storage models are also growing in popularity, aligning with unpredictable data growth patterns and reducing upfront costs. Automation for archiving, deletion and migration boosts storage efficiency and compliance with data retention policies. It also ensures that AI models are always fed with the freshest, most relevant data.
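
For illustration, here is a minimal sketch of the automated retention sweep that such policies imply. The catalogue entries and the seven-year policy are assumptions; a real implementation would call the storage platform’s lifecycle APIs.

```python
# Minimal sketch: a scheduled retention sweep over a dataset catalogue.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365 * 7)   # e.g. a seven-year policy

catalog = [
    {"uri": "s3://lake/claims/2016-01.parquet",
     "created": datetime(2016, 1, 31, tzinfo=timezone.utc)},
    {"uri": "s3://lake/claims/2024-06.parquet",
     "created": datetime(2024, 6, 30, tzinfo=timezone.utc)},
]

def sweep(now: datetime) -> None:
    for item in catalog:
        if now - item["created"] > RETENTION:
            print("DELETE", item["uri"])   # real code would call storage APIs
        else:
            print("KEEP  ", item["uri"])

sweep(datetime.now(timezone.utc))
```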

4. Operational efficiency and sustainability at scale

The environmental impact of large-scale AI adoption is an emerging challenge. Thankfully, recent advancements include more energy-efficient AI infrastructure, innovative cooling and advanced management software that together cut power usage and extend hardware lifespan. Plus, real-time telemetry can provide the insights needed to optimise power and thermal management while pre-empting hardware issues. These optimisations bring the added benefits of reduced latency and lower costs.
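
Here is a minimal sketch of the telemetry idea: poll readings, compare against thresholds, act. The metric names, thresholds and sample values are assumptions; production systems would pull readings from BMC or rack-level telemetry APIs rather than a hard-coded sample.

```python
# Minimal sketch: acting on real-time power and thermal telemetry.
THRESHOLDS = {"inlet_temp_c": 35.0, "power_draw_w": 750.0}

def check(node: str, reading: dict[str, float]) -> list[str]:
    """Return alerts for any metric breaching its threshold."""
    return [
        f"{node}: {metric}={value} exceeds {THRESHOLDS[metric]}"
        for metric, value in reading.items()
        if value > THRESHOLDS.get(metric, float("inf"))
    ]

sample = {"inlet_temp_c": 38.2, "power_draw_w": 640.0}  # simulated reading
for alert in check("rack07-node3", sample):
    print(alert)   # trigger fan ramp-up, workload migration, etc.
```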

Intelligence at the edge and beyond

Moving AI from proof of concept to pervasive reality demands both strategic vision and robust infrastructure engineered for innovation. By focusing on computing power, efficient data management, adaptable storage and operational sustainability, enterprises can shift from pilot projects to truly intelligent, scalable operations. AI’s future does not pivot around central data lakes; it will follow the data, demanding distributed, low-latency processing wherever information resides.
