
Busting cloud migration myths

Image: Adobe Stock/Gorodenkoff

Until now, Tier 1 business-critical applications have remained firmly on-premises. While Tier 2 and 3 applications have moved from local data centres to the cloud (and to cloud-native applications at that), high-performance databases like Oracle have stayed put.

But a change is coming, with organisations starting to reconsider the decision to keep these applications on-premises. The catalysts for this about-turn, supply chain shortages and the cost-cutting imposed by the economic climate, combined with bottom-line benefits ranging from agility and flexibility to efficiency, could lead you to assume that cloud migration is inevitable. Yet some organisations are still choosing not to migrate. Why?

When I talk to cloud or infrastructure architects, they tell me they would have to make compromises: that if they migrate high-performance applications to the cloud they won’t get the same capabilities their on-premises SANs offer, or that the cost will be the same or even higher. That can be true, especially if you pick an off-the-shelf native option. But there are alternatives.

In fact, many concerns about migrating these workloads to the cloud are either no longer an issue or could be easily mitigated. The technologies available today to address these challenges mean these ideas are now myths, hangovers from another time, rather than valid concerns.

Let me share the facts, so that you can be confident you’re making the right decision for your business-critical workloads.

Myth: The cloud can’t provide the SAN capabilities that data workloads require

In reality, there are software-defined storage solutions delivering the rich data services you’re used to with your on-premises SAN; you just need to know where to look for them.

SAN capabilities are crucial, bringing added benefits to the business such as data security and disaster recovery. But although cloud infrastructure has much of what’s needed to migrate and run high-performance databases, native offerings can lack basic SAN capabilities.

Some specialist software-defined storage solutions on the market today, however, offer features such as automation, centralised storage management, snapshots, clones, thin provisioning, compression, and more. Different vendors focus on specific capabilities, but there is bound to be a product that fits your requirements.

Not all platforms are configured alike. Some of the more popular ones provide a la carte services that require administrative oversight, which can be time-consuming and costly in the long run. Instead, look for options with hassle-free data services that are built in and do not require expensive additional licensing.

Myth: The cloud can’t provide the high performance and low latency needed for IO-sensitive database workloads

Look beyond the native public cloud solutions and you will find non-native solutions offering the high performance/high IOPS and consistently low latency needed for IO-sensitive applications. For an organisation, this can lead to accelerated data analysis, business intelligence, product innovation, and a better customer digital experience. It is a case of finding the right fit for the workload.

Some software-defined storage solutions available today, for example, can deliver performance equivalent to local flash, with consistently low latency, when provisioned on storage-optimised cloud instances. In fact, some solutions can deliver up to 1M sustained IOPS per volume with sub-millisecond (<1 ms) tail latency. Compare this with native public cloud storage solutions that top out at around 260K IOPS. In practice, it’s highly unlikely that your application will need more than 1M IOPS, but there’s peace of mind in knowing that this level of performance is available, and that provisioning it won’t break your budget.
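To sanity-check claims like these against your own workload, it helps to look at tail latency rather than averages. The snippet below is a minimal, illustrative sketch; the latency samples and the 1 ms target are assumptions for the example, not vendor figures:

```python
# Hypothetical sketch: checking measured I/O latencies against a tail-latency target.
# The sample data and thresholds below are illustrative, not figures from any vendor.
import statistics

def meets_tail_latency(latencies_ms, target_ms=1.0, percentile=99):
    """Return True if the given percentile of measured latencies is at or below target_ms."""
    # statistics.quantiles with n=100 yields the 1st..99th percentile cut points.
    cuts = statistics.quantiles(latencies_ms, n=100)
    return cuts[percentile - 1] <= target_ms

# Example: latencies (in milliseconds) captured from a benchmark run on a candidate volume.
sample = [0.42, 0.55, 0.61, 0.48, 0.70, 0.95, 0.88, 0.52, 0.47, 0.66] * 100
print(meets_tail_latency(sample))  # True if the p99 latency is at or below 1 ms
```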

Myth: It costs more to run high-performance workloads on the cloud

The truth is that there are software-defined solutions out there that scale compute independently of storage to keep costs lower and more predictable.

However, the cost myth is probably one of the biggest reasons that large-scale, high-performance database workloads are not migrating to the cloud. Public cloud cost structures can be expensive for unpredictable and IO-intensive workloads. On most clouds, the more IOPS you provision for, the more money you pay.

It doesn’t have to be that way, however. There are solutions that keep costs consistently low even when more IOPS are needed, allowing you to predict your spend. Additionally, some of these technologies are disaggregated, so compute scales independently of storage. Dynamically scaling infrastructure in any direction (up, out, or in) can have a dramatic impact on cost-efficiency and on the ability to meet SLAs while keeping pace with unpredictable business demands.

And of course, there are costs beyond storage to take into account. To determine whether the cloud delivers on price and performance, it’s important to consider the full cost of running the on-premises solution: hardware, software, networking, data centre overhead, administrative overhead, the time to provision systems, and so on.
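As a rough illustration of what that comparison can look like, here is a back-of-the-envelope sketch; every figure in it is a hypothetical assumption, not a quote from any vendor or cloud provider:

```python
# Hypothetical back-of-the-envelope TCO comparison. Every figure below is an
# illustrative assumption, not real pricing from any vendor or cloud provider.

onprem_annual = {
    "hardware_amortisation": 120_000,   # SAN and servers spread over their lifetime
    "software_licences": 45_000,
    "networking": 15_000,
    "datacentre_overhead": 30_000,      # power, cooling, floor space
    "admin_overhead": 60_000,           # staff time to provision and manage
}

cloud_annual = {
    "compute_instances": 90_000,
    "storage_and_iops": 70_000,
    "software_licences": 40_000,
    "admin_overhead": 25_000,           # less hands-on management
}

onprem_total = sum(onprem_annual.values())
cloud_total = sum(cloud_annual.values())

print(f"On-premises: £{onprem_total:,} per year")
print(f"Cloud:       £{cloud_total:,} per year")
print(f"Difference:  £{onprem_total - cloud_total:,} per year")
```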

Myth: The cloud is expensive for unpredictable workloads

The good news here is that it is possible to pay for the capacity you use, not what you provision for, if you choose a cloud solution with built-in auto-scaling capabilities.

IT organisations often overprovision compute and/or storage to ensure business continuity when unpredictable workloads arise, increasing costs.

To mitigate this, some organisations burst their workloads to the cloud: a hybrid implementation in which cloud resources are provisioned to accommodate spikes in demand and then decommissioned when no longer required. Others migrate their entire workload to the cloud to take advantage of its dynamic and automated provisioning capabilities. But if they’re using native public cloud storage, they may still be paying for what they provision rather than what they use. Solutions with built-in auto-scaling capabilities are available, but you do have to look beyond the native offering.

The bottom line is that you no longer have to anticipate how much capacity or compute you’re going to need. By configuring your cloud infrastructure to monitor demand and auto-scale, you’ll guard against overprovisioning while still providing the resources your workloads need, making this a very cost-efficient option.

This is a game changer for many organisations that provision for significantly higher volumes than they end up using. The savings here suddenly make migrating to the cloud an easy choice that allows them to divert budget to other projects.
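To see why, consider a rough sketch of the arithmetic. The demand profile and the per-unit price below are hypothetical assumptions, purely to illustrate the gap between provisioning for peak and paying for what you actually use:

```python
# Hypothetical sketch comparing "provision for peak" with "auto-scale to demand".
# The demand figures and per-unit price are illustrative assumptions only.

hourly_demand = [40, 35, 30, 30, 45, 80, 120, 150, 160, 140, 110, 90,
                 85, 90, 100, 130, 170, 180, 150, 110, 80, 60, 50, 45]  # capacity units per hour
price_per_unit_hour = 0.12  # assumed cost of one capacity unit for one hour

# Static provisioning: pay for peak capacity around the clock.
peak = max(hourly_demand)
static_cost = peak * len(hourly_demand) * price_per_unit_hour

# Auto-scaling: pay only for what each hour actually needs, plus a small headroom buffer.
headroom = 1.10
autoscale_cost = sum(demand * headroom for demand in hourly_demand) * price_per_unit_hour

print(f"Provisioned for peak: ${static_cost:.2f} per day")
print(f"Auto-scaled:          ${autoscale_cost:.2f} per day")
print(f"Saving:               {100 * (1 - autoscale_cost / static_cost):.0f}%")
```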

Myth: The cloud does not protect against data loss

In fact, hyperscale public clouds have higher durability and availability than on-premises data centres. They have built-in support for effective disaster recovery (DR) architectures with multiple availability zones (AZs) and regions, safeguarding against data loss and providing business continuity reassurance. Data services such as snapshots, clones, and built-in incremental backup and restore offer added peace of mind.
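As a simple illustration of how such data services are typically driven, here is a minimal sketch that takes a volume snapshot and copies it to a second region for DR. It assumes AWS EBS and the boto3 SDK purely as an example; the volume ID and region names are hypothetical placeholders:

```python
# Minimal sketch of a snapshot-based DR step, assuming AWS EBS and boto3 as an example.
# The volume ID and regions below are hypothetical placeholders.
import boto3

VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical volume backing the database
SOURCE_REGION = "eu-west-2"
DR_REGION = "eu-west-1"               # second region for disaster recovery

source_ec2 = boto3.client("ec2", region_name=SOURCE_REGION)
dr_ec2 = boto3.client("ec2", region_name=DR_REGION)

# Take an incremental, point-in-time snapshot of the database volume.
snapshot = source_ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Nightly DR snapshot of database volume",
)
snapshot_id = snapshot["SnapshotId"]

# Wait for the snapshot to complete before copying it.
source_ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# Copy the snapshot to a second region so it survives a regional outage.
dr_ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=snapshot_id,
    Description=f"DR copy of {snapshot_id} from {SOURCE_REGION}",
)
```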

Indeed, some organisations choose a hybrid implementation when they initially migrate to the cloud. Running workloads on-premises and syncing data to the cloud gives these users an extra layer of protection. And to migrate to the cloud at your pace and on your terms, some newer storage solutions allow you to port software licences between on-premises data centres and the cloud, so you can run your workloads wherever it makes the most sense.

The pricing structure for these services varies; look for providers that include data services in the licence and offer backups and restores at no extra cost.

Don’t forget, technology evolves. What was true a few years ago may no longer be the case. Looking for new developments that fit your data centre or overall data strategy can have knock-on effects: they can make a significant difference to your organisation’s ability to bring products to market more quickly, remain competitive, and increase profits.

With these myths busted, my final piece of advice is to understand your workloads. Public cloud infrastructure now provides everything you need to migrate your high-performance databases away from your premises. While compute and storage offerings differ between cloud service providers (think AWS, Azure, and GCP), it’s easy to educate yourself on the nuances of each platform.

But it’s only by knowing what you need that you can choose the right platform provider. And it’s only by really understanding your workloads that you can know whether it’s time to migrate to the cloud, and which provider offers what you need to make the migration a success.
