Recent tornadoes, hurricanes, earthquakes, fires and tsunamis have refocused attention on disaster recovery among business leaders and IT managers. While the broader scope of disaster recovery planning includes facilities, power, cooling, communications and people, data recovery remains key to business continuity, Gary Watson explains.
When people think of disaster recovery (DR), they typically picture ransomware, cyberattacks or user errors. However, there are many more possibilities and circumstances to consider when it comes to DR.
Data is collected non-stop in almost every corner of the world – including unlikely locations such as research centres in the desert, the back of an icebreaker in Antarctica, and gas wells. In these types of remote areas, it's not possible to have ideal data centre conditions, such as clean power and steady air conditioning. Instead, conditions can be unpredictable, swinging from extremely hot to extremely cold, with the potential to lose power at any time.
Nonetheless, DR is just as critical, whether you are an enterprise in the centre of town, a submarine 800 feet underwater or a research lab in the Antarctic. So, what factors need to be considered when planning DR for these types of challenging locations?
It’s important to invest in the right strategy, architecture and data storage solution. And this may vary depending on the organisation’s needs and location.
One of the main challenges for businesses in remote locations is the need for storage to withstand extreme weather environments. As part of this, any organisation operating in an isolated location needs to ensure storage is not only highly reliable but also very durable. These types of locations will also most likely have limited or no on-site IT staff to help in the event of a disaster.
A key factor to consider when building a DR plan in these environments is ensuring that any chosen system has data redundancy and replication built in, so there is no single point of failure in collecting data. Having these capabilities in place can help protect all your crucial data in even the most difficult of environments.
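One simple form of that redundancy is a synchronous dual write: every record is appended to the same file on two (or more) independent storage paths, and ingestion only fails if every copy fails. The sketch below is illustrative, not a description of any particular product; the function name and record format are assumptions for the example.

```python
import os


def replicated_write(record: bytes, targets: list[str], filename: str) -> int:
    """Append the record to the same file on every target directory.

    Returns the number of copies successfully written. The write is
    considered safe as long as at least one copy lands, so no single
    disk or path is a point of failure for data collection.
    """
    written = 0
    for root in targets:
        try:
            os.makedirs(root, exist_ok=True)
            with open(os.path.join(root, filename), "ab") as f:
                f.write(record)
                f.flush()
                os.fsync(f.fileno())  # force the bytes to stable storage
            written += 1
        except OSError:
            continue  # one target failing must not stop the others
    if written == 0:
        raise OSError("all replicas failed")
    return written
```

In a real deployment the targets would be separate physical devices (or a local array plus a replica at another site), but the principle is the same: a write counts only once it exists in more than one place.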
It is also important to implement a system that is scalable and can store hundreds of terabytes of data without taking up too much physical space. In these odd and difficult conditions, systems often need to be able to take in data from multiple streams and a variety of instruments – while being able to fit in a small space. If your data centre is in a shipping container on the back of a boat or in a small closet, you only have so much space to work with, but the data load doesn’t decrease.
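Taking in data from multiple streams at once usually means funnelling every instrument into one shared write path, so a single compact store can serve all of them. A minimal sketch of that pattern, with one reader thread per stream feeding a common queue (the stream names and record values are hypothetical):

```python
import queue
import threading


def collect(streams: dict[str, list[bytes]]) -> list[tuple[str, bytes]]:
    """Merge several instrument streams into one ordered write log.

    One reader thread per stream pushes records onto a shared queue;
    a single consumer drains it, so all sources land in one store.
    """
    q: "queue.Queue[tuple[str, bytes] | None]" = queue.Queue()

    def reader(name: str, records: list[bytes]) -> None:
        for r in records:
            q.put((name, r))
        q.put(None)  # sentinel: this stream is finished

    threads = [threading.Thread(target=reader, args=item) for item in streams.items()]
    for t in threads:
        t.start()

    log, done = [], 0
    while done < len(streams):
        item = q.get()
        if item is None:
            done += 1
        else:
            log.append(item)  # a durable write would go here
    for t in threads:
        t.join()
    return log
```

Records from different streams may interleave in any order, but each stream's own records stay in sequence, which is what matters when the log is replayed later.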
But perhaps most importantly, it's imperative that the system can restart at the exact point where power was lost. This is where DR most obviously comes into play. If your data centre is housed in any sort of extreme environment, it is likely to be forced offline at some point by the complications that come with that environment. But if the system can resume at the exact point where it dropped off, no data is lost.
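In practice, resuming at the exact point of failure usually means checkpointing: after each batch of data is safely stored, the collector durably records how far it has got, and on restart it reads that checkpoint and continues from the saved offset. A minimal sketch of the idea follows; the file names are hypothetical, and it assumes the instrument stream can be re-read from a given offset.

```python
import os

CHECKPOINT = "ingest.offset"  # hypothetical checkpoint file name


def load_checkpoint(path: str = CHECKPOINT) -> int:
    """Return the last durably recorded offset, or 0 on first run."""
    try:
        with open(path) as f:
            return int(f.read())
    except (FileNotFoundError, ValueError):
        return 0


def save_checkpoint(offset: int, path: str = CHECKPOINT) -> None:
    """Record the offset atomically, so a power cut mid-write can
    never leave a corrupt checkpoint behind."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(offset))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)  # atomic rename on POSIX and Windows


def ingest(source: bytes, store: list[bytes], batch: int = 4) -> None:
    """Consume the source from the saved offset onwards, one batch
    at a time, checkpointing only after each batch reaches the store."""
    offset = load_checkpoint()
    while offset < len(source):
        chunk = source[offset:offset + batch]
        store.append(chunk)       # a durable write would go here
        offset += len(chunk)
        save_checkpoint(offset)   # only now is the batch "done"
```

If power drops mid-batch, the checkpoint still points at the last completed batch, so the restart re-reads only that unfinished chunk and nothing is lost or duplicated beyond it.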
Data is being collected 24/7, in every part of the world. In fact, the global datasphere is expected to grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. All that data needs to be stored somewhere – somewhere that can stay online in a place where humans rarely go, let alone run a data centre.
Disaster recovery in today's world means a lot more than recovering data after an end-user hits the wrong button, or a company is hit by ransomware. DR means being able to recover data – however crucial – after any sort of downtime or power loss, no matter the origin, and no matter how long. It means being able to rest easy knowing that if – and, let's be honest, probably when – your data centre goes down, you've got a plan in place that will handle whatever is thrown its way.
Gary Watson is CTO at StorCentric and Founder of Nexsan