Did you know that 140,000 hard drives in the US fail each week? If you’re in charge of protecting valuable business data, this number should scare you – especially if you aren’t certain about the reliability of your backups.
Disaster can come in all shapes and sizes, from end-of-life hardware failure to human error to the ever-increasing threat of ransomware. And many businesses are totally unprepared.
The high cost of data loss
According to a report from Dell EMC, the average cost of data loss for businesses around the world is $913,958. And that number goes up as businesses increase the number of vendors they work with to deliver their data protection, likely because their respective technologies don’t play nice with each other.
Losing data can result in a whole host of other headaches for a business, beyond just the obvious financial impact. It can cause significant downtime (which results in even more money lost), breaches of industry-specific data storage requirements, and a future loss of trust – and business – from clients.
Hardware failure: The number-one culprit
The age, time spent in use, and brand of your hard drives are three critical factors in determining how long they will last.
And since hardware failure is the number one cause of data loss and/or systems downtime, it is imperative that you not only know how stable your environment is, but that you also have a plan in place for if – and when – your hardware fails.
Personal and business backup provider Backblaze examined the failure rates of the many thousands of disk drives it manages, and discovered that drives fail at an average rate of 5.1% per year for the first 1.5 years, 1.4% for the next 1.5 years, and a huge 11.8% after three years.
In other words, drives that have been in service for more than three years fail at a rate of nearly 12% per year.
These numbers will vary by brand, but you shouldn’t put too much stock in what the manufacturers tell you about the reliability of their devices either – a study from Carnegie Mellon University found that annual disk replacement rates were commonly in the 2 to 4 percent range, with some more complex systems going up to 13 percent.
In most cases, these numbers track to a much higher rate of failure than the manufacturer’s claimed mean-time-to-failure (MTTF).
Ultimately, anywhere from 20 to 63 percent of drives will experience at least one uncorrectable error during the first four years in operation.
Ransomware: The newest threat
In a 2017 survey of over 1,000 small businesses around the world, Malwarebytes found that more than one-third of businesses had experienced a ransomware attack in the past year. And more than one-fifth (22%) of these had to cease operations immediately in order to address the attack.
Unfortunately for many businesses afflicted with ransomware, disconnecting and restoring from backup may not be enough to protect against an attack. While 81 percent of IT professionals believe these actions are enough to stop an attack, only 42 percent of victims were actually able to recover all of their data using backups.
Some of the more notorious, public instances of ransomware include CryptoLocker in 2013, which extorted at least $3 million USD before its shutdown, and the May 2017 WannaCry attack which affected several large companies in Spain, parts of the British National Health Service, FedEx, Honda and others.
These and the many hundreds of other attacks are not only costly, but they are constantly adapting to, and discovering ways around, the latest data protection software.
Backup solutions are not as solid as they should be
The scary thing is, even if you think you have a proper backup solution in place, you could still be vulnerable to data loss. A survey of IT managers showed that while 57 percent of them said they had a backup solution in place, 75 percent could not actually restore everything when they needed to move data.
Couple this with the fact that most businesses test their disaster recovery environments less than once per year (and many never test them at all!), and the picture is clear: Backup solutions are often neglected until it’s too late.
Best practices for disaster recovery
Here are some key takeaways that you can apply to your backups today to prevent disaster from wreaking havoc on your data in the future:
- Test your backups regularly
Too often, companies invest huge sums of money into state-of-the-art backup systems, only to discover that they fail when things go south. Regular testing will ensure that, should something go wrong with your storage environment, you will have immediate access to your essential data with little-to-no downtime.
Test your disaster recovery plan at least once per year, and every time you make a major hardware or software change. Simpler backups should be tested even more frequently; ioFABRIC recommends verifying that your backups are clean and bootable every day.
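A daily verification pass doesn’t have to be elaborate. As a minimal sketch (the manifest format and file names here are made up for illustration, not part of any particular backup product), you can record a checksum for each file at backup time and confirm the stored copies still match:

```python
# Minimal daily backup-verification sketch: compare each backed-up file
# against the SHA-256 checksum recorded when the backup was taken.
# The manifest format and paths are hypothetical -- adapt to your tooling.
import hashlib
import pathlib


def sha256_of(path: pathlib.Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_backups(manifest: dict, backup_dir: pathlib.Path) -> list:
    """Check every file in the manifest; return a list of problems.

    An empty list means the backup set verified cleanly; otherwise each
    entry names a missing or corrupted file.
    """
    problems = []
    for name, expected in manifest.items():
        path = backup_dir / name
        if not path.exists():
            problems.append(f"MISSING: {name}")
        elif sha256_of(path) != expected:
            problems.append(f"CORRUPT: {name}")
    return problems
```

Run from a daily scheduled job and alert whenever the returned list is non-empty; a check like this catches silent corruption long before you need the backup in an emergency.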
- Simplify your backup environment by streamlining access, technology, and vendors
The more users and vendors you have, the more chances there are for ransomware to enter your system, or for incompatible technologies to cause something to break.
Give users access to only the data they actually need, so that a single successful phishing scam can’t spread ransomware across your entire system. And get to know your vendors well.
Make sure that they are compatible with one another, and that they actually provide the level of service you’re looking for.
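Least-privilege access is easier to enforce when it’s written down as an explicit policy rather than granted ad hoc. As a toy sketch (the role names and dataset names are invented for illustration), the idea boils down to: a role grants access only to the datasets that role needs, and everything else is denied by default:

```python
# Toy least-privilege policy: each role maps to the minimal set of
# datasets it needs. Role and dataset names are illustrative only.
ROLE_ACCESS = {
    "accounting": {"invoices", "payroll"},
    "engineering": {"source_code", "build_artifacts"},
    "hr": {"payroll", "employee_records"},
}


def can_access(role: str, dataset: str) -> bool:
    """Deny by default: unknown roles and unlisted datasets get no access."""
    return dataset in ROLE_ACCESS.get(role, set())
```

The deny-by-default shape matters: if a phished account belongs to the "engineering" role, it simply has no path to payroll or client records, which limits how far ransomware can spread.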
- Understand how the different layers of your system coexist
Backups aren’t just about your files. Even a small system has multiple layers, including operating systems, data centers, servers and more. Each one of these layers needs to be protected and able to be restored should an issue arise.
Think about your data holistically, and make sure that every layer of your infrastructure is protected.
- Update your data protection methods regularly to avoid ransomware
It’s a sad truth that the creators of ransomware are often ahead of the creators of data protection software. To avoid your system falling victim to an attack, make sure you have the latest software backing up your most important files.