The ‘it won’t happen to me’ attitude is detrimental in most aspects of life, and by now most businesses know that data protection and recovery is not an area to be scrimped on. Yet when the consequences of a data breach are so potentially dire, why do so many still cut corners? Here Steven Wood, EMEA director at Carbonite, explains how businesses can ensure their strategies are up to the task.
For as long as we have been creating data, there has been a need to record it, share it, distribute it and keep it safe. That is what makes data the most important asset of a modern organisation – yet few have a proven process in place for protecting it.
Recent high-profile incidents, such as the Microsoft Exchange server hack, and destructive data centre fires in France, have highlighted the vulnerability of data as well as the increasing risks from malicious cyber-attackers and unforeseen and unmitigated disasters.
While the idea of states engaging in cyber warfare seems novel, there is evidence to suggest the Microsoft attack was the work of a state-sponsored entity specifically seeking data, demonstrating how cyberattacks now go beyond the work of criminal groups simply looking for financial gain.
This comes at a time when many public and private sector organisations continue to reduce spending across their businesses, with IT often identified as an area where savings can be made.
Data protection is often viewed as a ‘sunk cost’ – it doesn’t benefit the business until a breach or incident occurs, so may be viewed as an easy cost to cut. After all, “we’ve never lost data before, so why would it happen now?”
Rising and evolving cybercriminal activity is only one cause for concern – irregular and unseasonal weather events, mergers and acquisitions, staff turnover and a deepening reliance on IT systems all mean that data protection is one area where businesses simply cannot afford to cut investment.
With that in mind, there are a few important steps businesses can take to ensure they have adequate data recovery and protection strategies in place.
Know and classify your data
This is the single most important thing for businesses to understand – data can’t be secured if IT teams don’t know where it lives or how important it is. They must know where data resides, how up to date it is, what protection it needs, as well as how it’s currently protected and who has access to it.
If an organisation suffers an attack or experiences physical damage, like hardware failure or a natural disaster, it is important to know the answers to these questions to restore data effectively – even if backups are in place.
This is where classifying data comes in, as organisations need to know which data is mission-critical to get through the next day, and which is historical data that won’t make or break the business if access is temporarily lost.
Once an understanding is developed of which systems and data need to be available this second and which can wait a few days or weeks, businesses can plan their disaster recovery strategy and choose the right backup solutions and schedules.
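In practice, that classification exercise can start as something as simple as tagging each dataset with a recovery tier and deriving a backup schedule from it. The following is a minimal Python sketch of the idea – the tier names, RPO/RTO targets, schedules and inventory entries are illustrative assumptions, not any particular product’s terminology:

```python
# Illustrative sketch: map datasets to recovery tiers and backup schedules.
# Tier names, RPO/RTO targets, schedules and inventory are assumptions.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rpo_hours: int        # max tolerable data loss (recovery point objective)
    rto_hours: int        # max tolerable downtime (recovery time objective)
    backup_schedule: str

TIERS = {
    "mission-critical": Tier("mission-critical", rpo_hours=1, rto_hours=4,
                             backup_schedule="continuous replication"),
    "important":        Tier("important", rpo_hours=24, rto_hours=48,
                             backup_schedule="nightly incremental"),
    "historical":       Tier("historical", rpo_hours=168, rto_hours=336,
                             backup_schedule="weekly full"),
}

# Hypothetical inventory: each dataset tagged with its tier.
INVENTORY = {
    "customer-orders-db": "mission-critical",
    "hr-records": "important",
    "2015-project-archive": "historical",
}

def plan_for(dataset: str) -> str:
    """Derive the backup plan for a dataset from its assigned tier."""
    tier = TIERS[INVENTORY[dataset]]
    return (f"{dataset}: back up via {tier.backup_schedule}, "
            f"RPO {tier.rpo_hours}h, RTO {tier.rto_hours}h")

if __name__ == "__main__":
    for ds in INVENTORY:
        print(plan_for(ds))
```

The point of the structure is that the schedule is a consequence of the tier, not a per-dataset decision – reclassifying a dataset automatically changes how it is protected.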
As with classifying data, businesses also need to consider their infrastructures, systems, and sites.
If an employee loses a laptop, can the recovery process replace the hardware and the data in a timely fashion? Similarly, if an application server or hypervisor fails, how can these platforms be recovered as well as the data?
Is there a system fail-over plan using a dark site, partner resources or public cloud? If the worst happens, and an entire site is lost in a flood, what business continuity steps would be taken to get the business operational?
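A fail-over plan ultimately needs a trigger: a decision about when the primary site is considered down. As a toy Python sketch of that decision logic – where the endpoint URLs and failure threshold are assumptions for illustration, and a real deployment would use proper monitoring tooling:

```python
# Toy sketch: choose primary or secondary site after repeated failed
# health checks. Endpoint URLs and threshold are illustrative assumptions.

import urllib.request
import urllib.error

PRIMARY = "https://primary.example.com/health"     # hypothetical endpoints
SECONDARY = "https://dr-site.example.com/health"
FAILURE_THRESHOLD = 3  # consecutive failures before failing over

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Probe an HTTP health endpoint; treat any error as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def choose_site(check=is_healthy) -> str:
    """Stay on the primary unless it fails several consecutive probes."""
    failures = 0
    for _ in range(FAILURE_THRESHOLD):
        if check(PRIMARY):
            return PRIMARY
        failures += 1
    return SECONDARY if failures >= FAILURE_THRESHOLD else PRIMARY
```

Requiring several consecutive failures before cutting over avoids flapping between sites on a single transient network error.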
Business owners should also evaluate whether a large-scale recovery is within the capability of the IT team. If not, a trusted partner should be included in the recovery plan.
Test, test, test
With a clear understanding of their systems and data, IT teams can start to put together a test plan for how to make sure they are prepared for worst-case scenarios.
They should have a working knowledge of how to perform different types of recovery, from a single file, a group of users, an application, or the entire environment, and this only comes with practice.
If the solution being used has an instant recovery option for virtual environments, this also needs to be tested – as this will save time and simplify the process of recovering infrastructure.
Organisations can never be too careful or methodical in how they deploy, monitor, and test their backup solution.
It’s important to test the recovery process as if an event is happening, and when testing the restoration of the most critical data, it’s helpful to have a recovery environment that can verify functionality of the environment once it is restored. The recovery environment can be at scale or pared down to fit what is being tested.
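One simple, automatable form of that verification is checking that restored files match the originals byte for byte. The following Python sketch compares SHA-256 checksums between a source tree and a restore target – the directory paths are placeholders, and real restore testing would also exercise applications, not just files:

```python
# Illustrative sketch: verify a restored directory against its source by
# comparing SHA-256 checksums. Paths are placeholders for illustration.

import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def manifest(root: Path) -> dict[str, str]:
    """Map each file's relative path under root to its checksum."""
    return {str(p.relative_to(root)): checksum(p)
            for p in root.rglob("*") if p.is_file()}

def verify_restore(source: Path, restored: Path) -> list[str]:
    """Return a list of files that are missing or differ after restore."""
    src, dst = manifest(source), manifest(restored)
    problems = [f"missing: {p}" for p in src if p not in dst]
    problems += [f"differs: {p}" for p in src
                 if p in dst and src[p] != dst[p]]
    return problems

if __name__ == "__main__":
    issues = verify_restore(Path("/data/live"), Path("/mnt/restore-test"))
    print("restore OK" if not issues else "\n".join(issues))
```

Run against a scratch restore target, a script like this turns “we think the backups work” into a repeatable pass/fail check that can be scheduled alongside the backups themselves.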
Finally, as part of the disaster recovery plan, IT personnel should define alternate paths of communication should email systems be compromised, as was the case in the Microsoft Exchange attack.
Live disaster recovery will become significantly more complicated without the critical ability to communicate between IT infrastructure staff, help desk staff and executives.
The implications for businesses of not regularly testing their backup and disaster recovery strategy can be wide-ranging and devastating: from losing critically important data, to the heavy cost of replacing hardware, rebuilding software or reconstituting data.
However, these costs can pale in comparison to the potential lost revenue or reputation if a business has to close its doors due to an outage.
Cyber and physical threats are constantly evolving and becoming more common. Businesses need to go beyond implementing conventional tactics to prevent and mitigate these risks, and put equal focus on having proper data security and data protection tools in place to achieve true cyber resilience.