What is cloud repatriation, and why does it matter?

Kiran Ghodgaonkar, Head of Marketing at NetScaler, answers some key questions about cloud repatriation and how to get the best of both worlds while minimising the drawbacks of each.

Over the last decade, you’d have been hard-pressed to find a business that hasn’t developed some level of cloud mania. With so much promise, and so many organisations demonstrating great returns – across almost any technology metric you care to mention – cloud became almost the presumed destination for workloads. As such, we’ve seen all kinds of deployments and migrations, taking applications which have been running on-premises (sometimes for decades!) and plugging them into the scalability, efficiency, and security advantages of cloud platforms.

Except that, in some cases, those advantages never quite materialised. When cloud investments, whether in new capabilities or upgrading existing workflows, don’t live up to their promise, we often see a kind of elastic snapback where businesses rush back to the known quantity of on-premises operations. We might call it repatriation, or reverse migration, or simply U-turning – but generally, we’re talking about strong reactions against cloud ‘failures’ and concerted efforts to return to privately owned data centres.

That matters, in short, for the same reason that cloud migrations matter: it’s a major change to organisational strategy, deeply impacting both the labour and investment needed in the short term and the competitive footing a business will find itself on in the longer term.

What do you see as the major triggers for cloud repatriation decisions?

There are three main tripwires that tend to trigger cloud repatriation initiatives: finances, practicalities, and regulation.

On the financial front, the economics of cloud follow a very different dynamic to those of on-premises infrastructure, and that transition can result in unpleasant surprises. Usage-based fees can look very attractive on paper. Once they have more than a few processes on the go, however, spiralling compute and storage fees can start to put businesses off their cloud-first strategy. This is only growing more acute as cloud fees rise along with costs in other sectors: it’s easy to see how a sense of nostalgia would develop for the more predictable, up-front expense of just buying the technology and using it as you will.
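To put rough numbers on that dynamic, the sketch below compares cumulative usage-based cloud spend against the up-front-plus-running cost of owned kit. Every figure here – the monthly fee, the capex, the opex – is a hypothetical placeholder, not real pricing; the point is only how the two cost curves cross.

```python
# Hypothetical break-even comparison between usage-based cloud fees and
# owned on-premises hardware. All figures are illustrative assumptions.

CLOUD_MONTHLY_FEE = 12_000    # assumed compute + storage bill per month
ONPREM_CAPEX = 250_000        # assumed up-front hardware purchase
ONPREM_MONTHLY_OPEX = 4_000   # assumed power, space, and support per month

def cumulative_cloud(months: int) -> int:
    """Total cloud spend after a given number of months."""
    return CLOUD_MONTHLY_FEE * months

def cumulative_onprem(months: int) -> int:
    """Total on-premises spend: capex paid once, opex accruing monthly."""
    return ONPREM_CAPEX + ONPREM_MONTHLY_OPEX * months

# Find the first month at which owning the kit becomes cheaper overall.
for month in range(1, 121):
    if cumulative_onprem(month) < cumulative_cloud(month):
        print(f"Break-even in month {month}: "
              f"cloud £{cumulative_cloud(month):,} vs "
              f"on-prem £{cumulative_onprem(month):,}")
        break
```

With these placeholder figures the owned hardware pays for itself in month 32; in practice the crossover, if there is one, depends entirely on a business’s own workload profile.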

In terms of practicalities, we’re thinking about differences in the tools you use and the kinds of data visibility you can obtain. To state the obvious, cloud migration takes away your ability to physically go and prod a server when an application is misbehaving. It abstracts IT teams further away from the systems they are overseeing, and can make it more difficult to spot things like misuse and vulnerabilities. Tools exist, of course, to create observability for cloud-based infrastructure, but they work differently to those that teams are used to with private infrastructure, and the lag involved in reskilling can be unacceptable – especially for more sensitive data or more critical processes.

Finally, the regulatory net is tightening in markets around the world as governments try to get to grips with new business models arising from new ways of working with data. Generally, that makes auditability and control a bigger imperative for businesses that need to demonstrate compliance to regulators whose demands might change quickly in the coming years – especially in sectors like healthcare and finance, which handle highly consequential personal data. In this context, owning the whole system can become very attractive.

There are unique combinations and versions of these factors for different businesses, of course, but I suspect that every business that is repatriating would recognise at least one of these situations.

Are there risks associated with repatriation that businesses considering it might not be aware of?

Needless to say, there’s no big reset button that businesses can hit which will roll them back to on-prem operations overnight. Any major data transfer needs to be very carefully managed, regardless of the source and destination: planning to ensure both security and continuous uptime during the transfer is a complicated task.

Businesses will also need to accurately forecast the costs that repatriation will reintroduce to their capital and operating expenditures. This is not just the up-front impact of standing up an on-prem data centre, but also the additional staffing costs that managing the infrastructure will require, along with new processes and tools.

The less-foreseen factors, however, are likely to be around whether it is possible to move workloads on-prem without major refactoring or, in some cases, at all. Cloud-native applications are likely to have proliferated quickly, even where a business isn’t satisfied with their overall cloud performance, and these often rely on capabilities which can’t be replicated in on-prem environments.
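To make the refactoring point concrete, here’s a minimal sketch of the difference between an application wired directly to a provider-managed service and one written against a thin interface that either environment can satisfy. The MessageQueue interface and both backends are hypothetical stand-ins, not any real SDK.

```python
from abc import ABC, abstractmethod

class MessageQueue(ABC):
    """Thin abstraction so workloads aren't wired to one provider's service."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class ManagedCloudQueue(MessageQueue):
    """Stand-in for a provider-managed queue with no on-prem equivalent."""
    def send(self, message: str) -> None:
        print(f"[cloud] enqueued: {message}")    # would call the provider SDK

class OnPremQueue(MessageQueue):
    """Stand-in for a self-hosted broker running in your own data centre."""
    def send(self, message: str) -> None:
        print(f"[on-prem] enqueued: {message}")  # would call the local broker

def process_order(queue: MessageQueue, order_id: str) -> None:
    # Business logic depends only on the interface, so repatriating the
    # workload becomes a configuration change rather than a rewrite.
    queue.send(f"order:{order_id}")

process_order(ManagedCloudQueue(), "A1001")  # today: running in the cloud
process_order(OnPremQueue(), "A1001")        # tomorrow: repatriated unchanged
```

An application built like the first call can move; one with the provider SDK threaded through its business logic may face a rewrite, or may not be movable at all.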

Finally, there is a more uncertain question around the impact that turning away from the cloud will have on the organisation’s competitive stance – if not on day one, then in two or five years’ time. Predicting exactly what a business will need over that kind of timescale is, of course, no easy matter, but leaders should be very mindful of the ways in which they might be losing potential capabilities as well as gaining some.

What’s the problem with treating the path from on-prem to cloud as a two-way street?

There’s nothing at all wrong with anticipating a fairly fluid balance between cloud usage and on-premises infrastructure – if, and only if, the business is set up to deploy consistently and securely in a landing-point-agnostic way. In fact, I think that the final destination for businesses currently working through this pendulum swing between cloud and on-prem will be a more flexible philosophy that can land applications and data wherever they need to be, whenever they need to be there.

You’ll notice that, when talking about the reasons why businesses repatriate, I didn’t suggest that there’s anything untrue or mistaken about those factors: they’re really valid, and businesses should feel empowered to, for example, keep high-risk data on-site where they can give it the level of protection it deserves.

At the same time, businesses should, and I think often do, want to hold on to the capabilities that only cloud can deliver. Being able to dynamically burst capacity in response to demand, for instance, or access high-performance computing resources on an intermittent basis, or take a test-and-learn approach to discovering the true resource demands of a new application are all approaches that only the cloud enables.
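As a toy illustration of what bursting means in practice, the sketch below scales a worker count with demand and spills anything beyond a fixed on-prem ceiling into the cloud. The thresholds, the ceiling, and the scale_to stub are all hypothetical assumptions, not a real provider API.

```python
# Toy illustration of demand-driven bursting past a fixed on-prem footprint.

ONPREM_CEILING = 20      # assumed fixed capacity of owned hardware (workers)
TARGET_PER_WORKER = 50   # assumed requests/second each worker handles

def workers_needed(requests_per_second: int) -> int:
    """How many workers the current load calls for (ceiling division)."""
    return -(-requests_per_second // TARGET_PER_WORKER)

def scale_to(workers: int) -> None:
    """Stub for whatever actually provisions capacity."""
    burst = max(0, workers - ONPREM_CEILING)
    print(f"{workers} workers needed: "
          f"{workers - burst} on-prem, {burst} burst to cloud")

for load in (400, 900, 1600):   # sampled requests/second over a demand spike
    scale_to(workers_needed(load))
```

Below the ceiling, owned hardware absorbs everything; once the spike at 1,600 requests/second arrives, the extra 12 workers can only come from somewhere elastic.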

The problem comes when businesses think in terms of a binary choice between going all-in on cloud and repatriating everything to on-prem: there are synergistic benefits to running many workloads in a shared cloud platform, but a smartly engineered deployment environment will empower teams to put data and applications wherever works best for each specific use case.

How can businesses get repatriation strategies right?

Take your time to start with discovery and understanding: map out your assets, their different needs, and how certain you are of their future requirements. Consider implementing new discoverability and observability tools to gain a more accurate, more granular, and more up-to-date view of your infrastructure.
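As one way of structuring that discovery exercise, the sketch below records workloads as inventory rows and derives a suggested landing point from their attributes. The fields, example workloads, and placement rules are all hypothetical illustrations of the kind of mapping meant here, not a prescribed policy.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """One row of a hypothetical asset inventory built during discovery."""
    name: str
    data_sensitivity: str   # e.g. "high" for regulated personal data
    demand_pattern: str     # "steady" or "bursty"
    future_certainty: str   # how sure we are of its requirements

def suggest_placement(w: Workload) -> str:
    """Illustrative placement rules; real policy would be far richer."""
    if w.data_sensitivity == "high":
        return "on-prem"   # keep regulated data where control is total
    if w.demand_pattern == "bursty" or w.future_certainty == "low":
        return "cloud"     # elasticity and test-and-learn favour cloud
    return "either"        # steady and well understood: decide on cost

inventory = [
    Workload("patient-records", "high", "steady", "high"),
    Workload("campaign-site", "low", "bursty", "low"),
    Workload("payroll-batch", "medium", "steady", "high"),
]
for w in inventory:
    print(f"{w.name}: {suggest_placement(w)}")
```

Even a crude mapping like this surfaces the point that different workloads will land in different places, which is exactly what a flexible architecture needs to accommodate.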

From there, you will be in a much stronger position to architect an environment which both meets your needs today and – vitally – can adapt more easily to future changes in priority. Ultimately, making sure that IT teams can shift between different types of deployment without refactoring applications or reskilling the team to manage them will both make any immediate repatriation process smoother and make decisions around cloud or on-prem less vexing in the future.
