
Data loss: An unfortunate reality

Philip Bridge, president at Ontrack, explores the causes of data loss and what can be done when organisations can no longer rely on backups alone to restore vital data post-breach.

Data loss is an unfortunate reality for anyone who manages virtual systems. According to Europol, the frequency of cyberattacks has never been higher. In particular, there has been a significant rise in the number of phishing emails containing the words 'corona' or 'Covid-19'.

One such example is a spoof email purporting to come from the National Institute for Public Health, which claims to contain important information about the virus. If a user opens the attachment, however, the computer becomes infected with ransomware and access to their files is compromised.

Lately, we have seen an increasing number of data recovery cases in which backup applications have also been erased by cyberattacks. Often, these are virtual machine (VM) backup files.

We are living in an increasingly virtualised world, so this is hardly surprising. Modern hypervisors make configuring and maintaining servers far less complicated than ever before.

However, whilst data from backup files and storage systems can sometimes be saved post-breach, it is often unclear how long the cybercriminal has had access to the system. As a result, organisations cannot rely on backups alone to restore their vital data. So, what can be done?

Our own data shows that – in addition to ransomware – human error, hardware malfunction and RAID issues are the most common causes of data loss on VMs.

Human error includes everything from patches with programming errors and updates applied without an offline backup, to poorly planned implementation of new company-wide software, accidental overwriting of a storage medium, damage to the core database and integration problems between disparate systems.

The hardware problems faced by virtual systems are almost the same as with physical ones: faulty drives, faulty controllers, faulty server components and power problems. However, RAID damage is a much bigger challenge for VMs because of the very nature of virtualisation.

RAID controllers are responsible for distributing data across the many disks available. When a RAID configuration becomes corrupt, files cannot simply be rebuilt. Instead, the interconnectedness of multiple systems may lead to significant data loss and long downtime.
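As a rough illustration of why corrupted RAID configuration is so damaging, here is a minimal Python sketch of RAID 5-style XOR parity. The array layout, stripe size and data are hypothetical and not any vendor's implementation: rebuilding a failed disk is simple arithmetic, but reassembling files afterwards depends on the controller's configuration metadata, which is exactly what is lost when the configuration corrupts.

```python
# Minimal sketch: RAID 5-style XOR parity on a hypothetical three-data-disk array.
# Illustrative only; stripe size, disk count and data are invented for the example.

STRIPE_SIZE = 4  # hypothetical stripe unit, in bytes

def xor_blocks(*blocks):
    """XOR equal-length byte blocks together (the basis of RAID 5 parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Stripe a small "file" across three data disks, round-robin.
data = b"virtual-machine-flat-file-demo-data!"
chunks = [data[i:i + STRIPE_SIZE].ljust(STRIPE_SIZE, b"\0")
          for i in range(0, len(data), STRIPE_SIZE)]
disks = [chunks[i::3] for i in range(3)]
parity = [xor_blocks(*stripe) for stripe in zip(*disks)]

# Disk 1 fails: each of its blocks is recoverable from the survivors plus parity.
rebuilt = [xor_blocks(disks[0][i], disks[2][i], parity[i]) for i in range(len(parity))]
assert rebuilt == disks[1]

# Reassembling the original file only works because the stripe size and disk order
# are known; with corrupted controller metadata the blocks cannot be put back in order.
recovered = b"".join(b for stripe in zip(disks[0], rebuilt, disks[2]) for b in stripe)
assert recovered == data
```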

It is important that organisations recognise that virtualisation and VMs are not flawless. The reality is that they can become defective just as quickly as other legacy storage options. Therefore, before creating a virtual environment for sensitive applications, think about which solution best fits the specific needs of your organisation.

Further, using multiple virtualisation solutions in the same environment can significantly increase the risk of data loss. Adding too many layers of complexity is risky and makes the data recovery process time-consuming, even for a seasoned pro. It is therefore better to keep your virtualisation simple and stay with one solution within one environment.

Finally, always back up and take snapshots of your changes. No exceptions. Since advanced persistent threats (APTs) are growing in prevalence, a good backup rotation scheme is vital. I would advise making multiple backups, and don't forget to save them to another physical location (whether that be a local server, hard drive or tape) or to the cloud.
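By way of example, one widely used rotation guideline is the '3-2-1' rule: at least three copies of your data, on two different types of media, with one copy kept off-site or in the cloud. The Python sketch below shows how such a policy might be checked; the backup inventory is hypothetical, and a real check would read this information from your backup software's own reporting.

```python
# Minimal sketch of a 3-2-1-style rotation check. The inventory below is hypothetical;
# in practice these records would come from your backup software's reporting.

from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
backups = [  # hypothetical inventory of existing backup copies
    {"medium": "disk",  "location": "onsite",  "taken": now - timedelta(days=1)},
    {"medium": "tape",  "location": "offsite", "taken": now - timedelta(days=3)},
    {"medium": "cloud", "location": "offsite", "taken": now - timedelta(days=10)},
]

def check_321(backups, max_age_days=7):
    """Return a list of policy problems; an empty list means the rotation looks healthy."""
    problems = []
    current = datetime.now(timezone.utc)
    if len(backups) < 3:
        problems.append("fewer than three backup copies exist")
    if len({b["medium"] for b in backups}) < 2:
        problems.append("all copies are on the same type of medium")
    if not any(b["location"] == "offsite" for b in backups):
        problems.append("no copy is stored off-site or in the cloud")
    stale = [b for b in backups
             if current - b["taken"] > timedelta(days=max_age_days)]
    if stale:
        problems.append(f"{len(stale)} of the copies are older than {max_age_days} days")
    return problems

for issue in check_321(backups):
    print("WARNING:", issue)
```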

Also, think carefully about the right backup software for your virtual environment so that it can support you in your endeavours. There are several backup software solutions on the market, some of which can be used with both VMware and Hyper-V. Probably the most important factor to consider is how much time it will take to recover your VMs from the backup.

Although virtualisation can undoubtedly save time and eliminate complexity, data loss is a reality for anyone who manages virtual systems. It is therefore essential that the IT department is fully aware of the ins and outs of its systems and has a specific plan for how to respond to a breach. A breach should never be considered an if, but a when.

