
Moving Big Data to the cloud: A big problem?


Mark Gibbs, senior product manager at SnapLogic, outlines the challenges of moving big data to the cloud and how we can hope to break down the barriers preventing migration.

Digital transformation is overhauling the IT approach of many organisations and data is at the centre of it all. As a result, organisations are going through a significant shift in where and how they manage, store and process this data.

In the not-so-distant past, enterprises managed big data by building an on-premises Hadoop cluster using a commercial distribution such as Cloudera, Hortonworks, or MapR to process large volumes of data.

The data analysed was mostly structured, and the approach required a large upfront capital expenditure to purchase the necessary hardware. Adding to this, Hadoop is a complex infrastructure to manage and monitor, requiring organisations to employ people with a specialised skill set, and those skills are hard to come by.

To tackle these issues, many organisations have been looking towards the cloud. Yet the benefits promised from moving big data projects to the cloud haven’t been realised for most organisations, and as a result, data lakes are still being left on-premises.

Heading for the clouds

By creating or migrating their big data architecture to the cloud, organisations can take advantage of significant operational cost savings, nearly limitless data processing power and the instant scaling the cloud provides. Additionally, they avoid large upfront capital expenditure and don’t need intimate knowledge of Hadoop.

Many enterprises are going through this ‘lift and shift’, moving their on-premises data cluster to the cloud. But historically this has come with its own issues, and many of the challenges of moving big data projects to the cloud have centred on simply getting the right data in the right place.

It comes down to skills and cost

Moving big data to the cloud sounds simple enough. But migrating on-premises data lakes to the cloud and then connecting cloud-based big data environments with diverse data sources, while also creating Apache Spark pipelines to transform that data, requires highly technical knowledge and continuous coding resources from data engineers and core IT groups.
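To give a flavour of the hand-coded transformation work involved, the sketch below shows a minimal PySpark pipeline that reads raw files from cloud storage, cleans them and writes them to a cloud data lake. The bucket paths, column names and schema are hypothetical examples, not a reference to any particular product.

# Minimal PySpark sketch: read raw order data from cloud storage,
# apply basic cleansing, and write it to a cloud data lake as Parquet.
# Paths and columns (order_id, order_ts, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-to-lake").getOrCreate()

# Read raw CSV files landed by an upstream application
raw = spark.read.option("header", True).csv("s3://example-landing/orders/")

# Cleansing and typing that would otherwise be repeated for every source
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Write the curated data set to the lake, partitioned by date
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-lake/curated/orders/"))

Multiply this by every source system and every schema change, and the maintenance burden described below becomes clear.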

Developers must write code to integrate with each application’s programming interface (API) and authentication mechanisms so that data can move freely between the applications and the data lake. Not only is this an incredibly time-consuming process, but it is also error-prone, and both realities are magnified during the maintenance stage of cloud-based big data projects.
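As an illustration of that per-application integration code, the sketch below authenticates against a hypothetical application’s REST API, pages through its records and stages them as newline-delimited JSON ready for loading into a data lake. The endpoints, credentials and JSON fields are assumptions made purely for illustration.

# Hedged sketch of hand-coded application integration: obtain an OAuth
# token, page through a REST API and stage records for the data lake.
# The URLs, parameters and field names are hypothetical.
import json
import requests

TOKEN_URL = "https://api.example-app.com/oauth/token"     # hypothetical
RECORDS_URL = "https://api.example-app.com/v1/invoices"   # hypothetical

def get_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def export_records(token: str, out_path: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    page = 1
    with open(out_path, "w") as f:
        while True:
            resp = requests.get(RECORDS_URL, headers=headers, params={"page": page})
            resp.raise_for_status()
            batch = resp.json().get("items", [])
            if not batch:
                break
            # Stage each record as a line of JSON, ready to copy into the lake
            for record in batch:
                f.write(json.dumps(record) + "\n")
            page += 1

Code like this has to be written, tested and maintained for each application an organisation wants to connect.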

As with any other software project, code decays over time and must be updated. If the developer who wrote the code leaves the company, the IT organisation’s ability to understand the pipeline at the code level often leaves with them.

This time drain on critical IT staff is one of the biggest issues organisations have had to overcome in moving to cloud-based big data projects. The intensive management and monitoring required ultimately results in prohibitive operational costs, longer time-to-value and a strategy that does nothing to address the steadily emerging OpEx and skill-set gap.

Finding individuals with the necessary skills and experience to build big data and cloud pipelines is a tough process. Unsurprisingly, this is made harder by the current skills gap in the IT landscape.

Research from Experis has shown that demand for big data skills and professionals has grown 78% in the last year, whilst demand for cloud skills and professionals has grown 30% over the same period.

With these individuals in such short supply, if you do manage to have them in your IT team, having them focused solely on managing and maintaining the big data environment before, during and after migration to the cloud is frankly a waste of resources. It also has a big impact on the second big issue in moving to the cloud: cost.

If you employ highly skilled people, you want them to deliver significant, strategic benefit to the business and to focus on higher-value tasks and projects that help the organisation innovate. The flexibility and scalability of the cloud can be a huge benefit in a drive towards innovation. But the time-to-innovation identified at the start of the cloud migration will never be realised if teams are focused solely on infrastructure management to keep the big data project running.

Buy vs build

The solution to this problem is relatively simple, and it all comes down to buy vs build. Unless you’re Google, chances are you’re not going to self-build every aspect of your IT estate. So why self-build all the connections you need?

For big data projects to flourish in the cloud sooner, organisations should look towards implementing a completely managed data architecture, including data integration (iPaaS), processing (BDaaS) and storage (SaaS).

By doing this, organisations should be able to effortlessly deliver large data sets to and from their cloud-based data lakes, regardless of where the data is coming from. This approach can also increase productivity by eliminating cumbersome manual tasks around ingesting and transforming data, allowing teams to focus on value-delivering activities instead.

By supporting this managed data architecture with self-service, organisations can free up even more time within the IT team. Self-service integration is making it fast and easy for organisations to create automated data pipelines with no coding, and self-service analytics is making it easy for analysts and business users to manipulate data without IT intervention.

And it’s not just organisations with fully fleshed-out IT teams that can benefit from self-service tools like these: businesses struggling to recruit coding talent can also develop their own cloud big data pipelines as part of a managed data architecture in the cloud.

Removing the complexities

Running big data projects in the cloud should be simple. All organisations, regardless of size, should be able to realise all the benefits cloud provides as soon as they get up and running, not years down the line. It’s only in taking a step back at the planning stage and removing the complexities surrounding cloud migration and integration that businesses will finally be able to use their big data projects for innovation and to deliver business value.
