During the 2010s, the use of data catapulted the global business world into a new era. The 2020s will likely be the same.
IDC research revealed that the volume of data captured, created, and copied worldwide grew by roughly 5,000% between 2010 and 2020. Faced with this unstoppable trend, businesses often struggle to cope with the data overload because they lack the talent to handle it. Demand for data experts with academic backgrounds in areas such as machine learning and computer science has significantly outstripped supply: the DCMS Digital Skills Report shows that nearly half of businesses struggle to hire workers with strong data skills. That needs to change, and fast.
One in 10 job adverts now mentions the need for data skills, according to research by the Royal Society. This sheer demand for data experts has forced businesses to rethink how they structure their internal teams and source new talent, for example by recruiting students from bootcamps or developing existing staff through internal training. These approaches are gaining momentum.
Thankfully, the way businesses deal with the data available to them is changing. New job roles, new technologies and new organisational cultures around data offer new ways for business leaders to extract value from it. Crucially, business leaders can now derive insights from data without relying on a single overworked central data team to deliver them in the form of reports and visualisations. Training internal talent in basic data skills will be key to this transformation.
The concept of a ‘data expert’
There is a growing trend for the artificial divisions between data experts and business users to break down, with data experts becoming more business-minded and business users learning to ‘self-serve’ with data. One aspect of this is the rise of roles such as the ‘analytics engineer’, which bridge the gap between IT and the data consumers within an organisation. Analytics engineers work alongside business teams to model and prepare data, ensuring the business can act on the high-quality insights their work generates. Together with the wider teams, these engineers help to set up and activate a truly modern data stack.
Internal upskilling for all professionals
Rather than relying solely on hiring qualified data experts, business leaders should aim to train their existing workers with data skills: this can help to keep costs and overheads down. Data literacy courses are already becoming common in many companies, and large organisations such as Bloomberg and Adobe are going further, with in-house digital academies dedicated to training workers in how to use data.
Training analysts to use low-code or no-code data management tools costs far less than hiring a data scientist. It also removes bottlenecks from daily data operations: teams that need analytics dashboards to steer their campaigns no longer have to wait on specialists, and can spend more of their time on revenue-generating activities.
Training existing employees is particularly powerful because they combine newly acquired data skills with their existing domain expertise to extract maximum value from the data. These ‘data citizens’ will be able to extract value from data without waiting for a separate team of data experts or scientists to do it for them.
Leveraging a new suite of tools
Democratising access to data within your organisation and unlocking the business value of data requires the right tools. Data management is critical to ensure data is delivered to the right team within your business, in a condition where it can be used, without the bottlenecks and delays that can come from relying on a central data team.
A data management deployment automates these procedures within a single framework, making it simpler for business users to extract value. Alongside data quality management tools, data validation ensures that data meets the standards business users require.
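As a simple illustration, a validation step might check incoming records against agreed business rules before they reach downstream users. This is a minimal sketch rather than any specific product’s API; the field names and rules are hypothetical:

```python
# Minimal data-validation sketch: check records against simple business
# rules before handing them to business users. Field names are hypothetical.

REQUIRED_FIELDS = {"customer_id", "email", "signup_date"}

def validate_record(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    email = record.get("email", "")
    if email and "@" not in email:
        errors.append("email is not a valid address")
    return errors

def split_valid(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate records that meet the standard from those needing review."""
    valid, rejected = [], []
    for rec in records:
        (valid if not validate_record(rec) else rejected).append(rec)
    return valid, rejected
```

In practice these rules would live inside the data management framework itself, so every team consumes data that has already passed the same checks.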
Perhaps even more important is Reverse ETL, which turns the normal job of a data warehouse on its head by directing a stream of valuable data straight to the teams that need it most. Instead of extracting data from operational systems and loading it into the warehouse, Reverse ETL extracts data from the warehouse and loads it back into your operational systems.
In Reverse ETL, the data is loaded from the data warehouse and then fed directly into business software such as ERP (Enterprise Resource Planning) or CRM (Customer Relationship Management). Sales or marketing teams have data delivered directly into the applications they use in their daily work, meaning there’s less training required to understand it.
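The three steps can be sketched in a few lines: extract a segment from the warehouse, transform it into the shape the operational tool expects, then load it. In this sketch, sqlite3 stands in for the warehouse and `sync_to_crm` is a hypothetical stand-in for a real CRM API client; table and field names are illustrative assumptions:

```python
# Reverse ETL sketch: extract from the warehouse, reshape, and load into an
# operational tool. sqlite3 stands in for the warehouse; sync_to_crm is a
# hypothetical placeholder for a CRM API client.
import sqlite3

def extract_segment(conn: sqlite3.Connection) -> list[tuple]:
    """Extract: pull high-value customers out of the warehouse."""
    return conn.execute(
        "SELECT customer_id, email, lifetime_value FROM customers "
        "WHERE lifetime_value > 1000"
    ).fetchall()

def transform(rows: list[tuple]) -> list[dict]:
    """Transform: reshape warehouse rows into the payload the CRM expects."""
    return [
        {"external_id": cid, "email": email, "segment": "high_value"}
        for cid, email, _ in rows
    ]

def sync_to_crm(payloads: list[dict]) -> int:
    """Load: a real pipeline would call the CRM's API here; this stub
    just reports how many records would be sent."""
    return len(payloads)

# Demo with an in-memory stand-in for the warehouse
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (customer_id INT, email TEXT, lifetime_value REAL)"
)
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "ana@example.com", 2500.0), (2, "bo@example.com", 300.0)],
)
payloads = transform(extract_segment(conn))
sent = sync_to_crm(payloads)
```

The end result is that the sales or marketing team sees a ‘high-value customers’ segment appear inside their CRM, without ever querying the warehouse themselves.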
For example, this can be used to deliver personalised offers based on purchase history or more precisely targeted marketing campaigns. It’s key to breaking down the barriers between data and the data consumers within a company, removing the burden from overworked specialist data teams.
Enter data mesh
Along with these technological changes and evolving job roles, there is also a new organisational approach to how data works within companies: the data mesh. In short, a data mesh offers a decentralised, ‘self-serve’ approach to delivering data throughout an organisation. Rather than relying on a centralised data team, where the warehouse is controlled by hyper-specialised experts, data is organised via shared protocols in order to serve the business users who need it most.
The significance of this is that distributing data ownership across the organisation empowers teams to access the data they need, right when they need it. While the concept of a data mesh is not new, the key to operationalising it effectively is a platform that acts as a universal interoperability layer: one that connects the domains and the data assets within them, and helps the company manage the entire operation.
Being aware of the business value of data is no longer enough. Companies need to adopt a ‘data as a product’ approach, and a data mesh is core to applying the product life cycle to data deliverables. By applying product thinking to datasets, a data mesh ensures they remain discoverable, secure and explorable, leaving teams better prepared to swiftly derive the most important insights from their data.
Self-serving data is the future of business
Getting the right access to the right people is vital for businesses to make timely decisions. Over time, with the appropriate tools and technologies, businesses should evolve to have data citizens throughout the company who can self-serve data as a product without the need for hyper-specialisation. With data-skilled employees across most functions, companies can make full sense of their data and are empowered to make insightful decisions. The 2010s saw a Cambrian explosion of data use cases; the 2020s will see that trend continue. Businesses that cannot adapt will be left far behind.