
The importance of a solid back-up strategy


Jon Fielding, Managing Director, EMEA at Apricorn, explains how failing to implement a data back-up strategy can compromise recovery.

A good back-up strategy is fundamental to ensuring business continuity. Whether the cause is a cyberattack, employee error or technical failure, being able to rely on back-ups can mean the difference between an organisation resuming business as usual (BAU) with minimal disruption and struggling to recover. But a recent survey of current practices revealed that standards are slipping, to devastating effect.

The Apricorn Annual Survey 2023 found that of the 90% of companies that needed to recover their data, only 27% were fully able to do so. That means nearly three in four businesses that attempted a recovery lost at least some data to back-up failure and, considering that most back-ups are triggered by the loss of a single application, that does not bode well for bigger incidents. Even more alarmingly, that figure has fallen sharply from 2022, when 45% were able to fully recover their data, indicating that practices have significantly worsened.

Around a third of security decision makers who had experienced back-up failure admitted this was down to a lack of robust back-up processes, compared with just 2% the previous year, which suggests awareness of the problem is growing. Furthermore, nearly a quarter of those surveyed recognised they didn't have sufficiently robust processes in place to enable rapid recovery from an attack, up from 15% in 2022. So, if teams are all too conscious that their back-up strategies aren't up to scratch, why is this happening?

Poor testing or user error?

One possible reason is that back-up processes simply aren't being tested enough. The partial data recovery we've seen shows the processes are there, but they're not being routinely put through their paces, and even a small change in the business can then lead to data not being saved. Routine testing also provides an opportunity to assess the performance of the back-up, keeping downtime to a minimum, and to evaluate and make improvements by updating, adding to or swapping out back-up mechanisms.
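
To make this concrete, below is a minimal sketch of a restore test, assuming the back-up is a simple tar archive whose paths are stored relative to the source directory; all paths shown are hypothetical. It restores the archive to a scratch location and flags any file whose checksum no longer matches the live copy.

import hashlib
import tarfile
from pathlib import Path

def sha256(path: Path) -> str:
    # Hash in chunks so large files don't exhaust memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(archive: Path, source_dir: Path, scratch: Path) -> list[str]:
    # Test-restore into a scratch directory, never over the live data.
    scratch.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(scratch)
    # Compare every live file against its restored counterpart
    # (assumes the archive stores paths relative to source_dir).
    mismatches = []
    for live in source_dir.rglob("*"):
        if live.is_file():
            restored = scratch / live.relative_to(source_dir)
            if not restored.exists() or sha256(restored) != sha256(live):
                mismatches.append(str(live))
    return mismatches

# Hypothetical locations - substitute real back-up and data paths.
failed = verify_backup(Path("/backups/daily.tar.gz"),
                       Path("/srv/data"), Path("/tmp/restore-test"))
print("back-up verified" if not failed else f"{len(failed)} files failed verification")

One caveat: comparing against live data only proves the back-up matches the current state, and files changed since the back-up ran will legitimately differ, so in practice such a test would run against a known snapshot.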

However, the survey also uncovered a shift in how companies go about backing up their data, which could shine a light on the worsening recovery rate. Only half of back-ups are now automated, compared with 93% previously, with a dramatic increase in the number performed manually: manual back-ups went from 6% in 2022 to 48% in 2023, and the use of personal back-up devices rose from 1% to 16%. This means individuals are now much more involved in the back-up process, and since humans are fallible, the risk of error increases.

The switch from automated to manual is almost certainly due to the mobile workforce, with more people working from home, on the move or in a hybrid arrangement. Users are being given much more autonomy, and the uptick in personal storage shows they're aware of the importance of backing up data and have no doubt been instructed to do so by their IT team. But while such flexibility is to be welcomed, relying so heavily on the user to carry out back-ups is clearly not working: it depends on the user remembering to execute the back-up in the first place, and then executing it correctly, potentially numerous times a day.
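
By contrast, a scheduled job removes the need for the user to remember anything. As a minimal sketch of the idea, the script below (with placeholder paths) creates a timestamped archive and is designed to be triggered several times a day by a scheduler such as cron or Windows Task Scheduler rather than by a person.

import logging
import shutil
from datetime import datetime, timezone
from pathlib import Path

# Placeholder locations - point these at real storage in practice.
SOURCE = Path("/srv/data")
DESTINATION = Path("/mnt/backup")

logging.basicConfig(level=logging.INFO)

def run_backup() -> Path:
    # Create a timestamped gzipped tar archive of SOURCE under DESTINATION.
    DESTINATION.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = shutil.make_archive(str(DESTINATION / f"data-{stamp}"),
                                  "gztar", root_dir=SOURCE)
    logging.info("back-up written to %s", archive)
    return Path(archive)

if __name__ == "__main__":
    run_backup()  # invoked by the scheduler, not the user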

Multiple back-ups

Moreover, the number of businesses backing up to both a central and a personal repository was low across the board: only 38% of respondents did so, regardless of whether they used an automated or manual approach. Being overly reliant on one form of back-up creates a single point of failure, so it always pays to adopt a belt-and-braces approach with multiple forms of back-up.

The 3-2-1 rule is well known, the concept having been made famous by photographer Peter Krogh, who wrote about using it to safeguard digital imagery some twenty years ago, although he confesses it originated in IT. It calls for the retention of at least three copies of the data, stored on at least two different types of media, with at least one copy held offsite, and preferably offline. For example, an encrypted removable hard drive or USB stick that can be disconnected from the network can serve as the offline copy, effectively creating an air gap.
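
The rule itself is simple enough to express as a check. The snippet below is a toy illustration: it counts copies, media types and offsite locations against the 3-2-1 thresholds, using an invented inventory in which the third copy is an encrypted USB drive held off-premises and off the network.

from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str
    medium: str      # e.g. "disk", "tape", "usb", "cloud"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    # At least 3 copies, on at least 2 media types, at least 1 offsite.
    return (len(copies) >= 3
            and len({c.medium for c in copies}) >= 2
            and any(c.offsite for c in copies))

# Invented inventory: live data plus two back-ups, one of them an
# encrypted USB drive kept disconnected from the network (air-gapped).
inventory = [
    BackupCopy("primary NAS", "disk", offsite=False),
    BackupCopy("office file server", "disk", offsite=False),
    BackupCopy("encrypted USB drive, stored off-premises", "usb", offsite=True),
]
print(satisfies_3_2_1(inventory))  # True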

These days, there are numerous options available for following the 3-2-1 rule, with businesses spoilt for choice: disk-based storage (e.g. DAS, SAN or NAS), object or cold archive storage in the public cloud, hosted back-ups via a cloud service provider (CSP), and replication of cloud data, for instance. Having multiple copies of the data ensures the business can recover, because in the event that one is compromised, it's possible to draw down data from one of the other sources.

An effective recovery plan requires more than multiple back-ups, however. Consideration needs to be given to the frequency of back-ups, which should happen several times a day (technologies can ensure that only the changed data is refreshed during the back-up), and to a clear process for accessing and recovering the data, some of which may need to be converted or recreated, which adds to recovery time. Knowing what needs to be done, and by whom, and practising it can ensure that, when the time comes, all data is recovered, downtime is kept to a minimum and BAU is resumed.
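
The "only the changed data" point above typically works by comparing the current state of each file against a manifest recorded on the previous run. A minimal sketch of that idea, with hypothetical paths and checksums as the change detector:

import hashlib
import json
import shutil
from pathlib import Path

SOURCE = Path("/srv/data")          # hypothetical live data
TARGET = Path("/mnt/backup/incr")   # hypothetical back-up target
MANIFEST = TARGET / "manifest.json"

def file_hash(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup() -> int:
    # Copy only files whose checksum changed since the last run.
    # (A sketch: deletions and file metadata are not handled.)
    TARGET.mkdir(parents=True, exist_ok=True)
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    current, copied = {}, 0
    for f in SOURCE.rglob("*"):
        if f.is_file():
            rel = str(f.relative_to(SOURCE))
            current[rel] = file_hash(f)
            if previous.get(rel) != current[rel]:
                dest = TARGET / rel
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, dest)
                copied += 1
    MANIFEST.write_text(json.dumps(current))
    return copied

print(f"{incremental_backup()} changed files backed up")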

Jon Fielding
Managing Director, EMEA at Apricorn
