Losing access to data when you’re trying to use it is annoying for anyone. For businesses, though, the damage goes beyond just inconvenience. The downtime can turn into an even bigger disaster than whatever caused it in the first place.
A CA Technologies study published in InformationWeek found that companies in North America and Europe lost more than $26.5 billion in revenue to IT downtime – and since the study is a few years old at this point, it's safe to assume those figures have only grown.
These numbers factor in a variety of things, including lost productivity as employees are unable to work (and later as they work through the backlog once service has been restored), lost business as orders can't be placed or processed, and reputational damage as frustrated customers vent their displeasure or leave for a competitor.
Beyond the cost of the downtime itself, the effects tend to linger. In an interview with TechRadar, an IT specialist firm notes that "a major outage will take months if not years to fully recover from." If the downtime or data loss is severe or prolonged enough, there's a chance the business might not recover at all (RIP Code Spaces).
Getting back in business
The biggest takeaway, obviously, is that businesses should do their best to avoid downtime and losing data. There are a number of mitigating actions organizations can take to limit their risk of disaster, and there are numerous best practice guides out there on how to go about it. But at some point, every business is bound to draw the short straw.
For that reason, it's vital to have a disaster recovery plan developed, tested and in place. If and when there IS a disaster, a good plan limits the damage to minimal downtime while you recover data from backups. Since recovery speed is key to minimizing the fallout of downtime, a big part of the DR plan will be the organization's recovery time objective, or RTO – in other words, how long you can afford to be without your data.
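To make the RTO idea concrete, here's a minimal illustrative sketch. All of the function names and dollar figures are hypothetical – the point is simply that a DR plan works only if the expected recovery time fits inside the RTO, and that every extra hour of downtime has a price tag.

```python
def downtime_cost(hours_down, revenue_per_hour, productivity_loss_per_hour):
    """Rough cost of an outage: lost revenue plus lost employee productivity."""
    return hours_down * (revenue_per_hour + productivity_loss_per_hour)

def meets_rto(estimated_recovery_hours, rto_hours):
    """A DR plan only holds up if expected recovery time fits within the RTO."""
    return estimated_recovery_hours <= rto_hours

# Hypothetical example: $10k/hour in lost revenue, $2k/hour in lost productivity
print(downtime_cost(8, 10_000, 2_000))  # an 8-hour outage -> 96000
print(meets_rto(6, 8))                  # recovering in 6h against an 8h RTO -> True
```

Real-world cost models are messier (reputational damage is hard to price), but even a back-of-the-envelope calculation like this helps justify investing in faster recovery.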
This is when having a faster DR solution pays off – especially if you’re not tied to any specific hardware for recovering your data.
While appliances come with their own set of inherent limitations, a big concern people tend to have about using the cloud for DR is how long it will take them to get their data back. It’s a valid one. A lot of no-appliance cloud services started life as consumer backup, and can’t really handle the volume of data a disaster-stricken business would need to recover in a short time. That’s where WAN optimization comes in.
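The concern boils down to simple arithmetic: recovery time is roughly data volume divided by effective throughput, and WAN optimization techniques like compression and deduplication raise that effective throughput by shrinking the bytes that actually cross the wire. A rough sketch (the link speed and optimization factor below are purely illustrative assumptions, not measured figures for any product):

```python
def recovery_hours(data_gb, bandwidth_mbps, optimization_factor=1.0):
    """Estimate hours needed to restore data over a WAN link.

    optimization_factor models the combined effect of techniques such as
    compression and deduplication that reduce bytes sent (illustrative only).
    """
    effective_gb = data_gb / optimization_factor
    seconds = (effective_gb * 8 * 1000) / bandwidth_mbps  # GB -> megabits -> seconds
    return seconds / 3600

# Restoring 2 TB over a 100 Mbps link:
print(round(recovery_hours(2000, 100), 1))       # ~44.4 hours with no optimization
print(round(recovery_hours(2000, 100, 4.0), 1))  # ~11.1 hours at 4x effective throughput
```

The difference between roughly two days and half a day of recovery is exactly the gap between blowing through an RTO and meeting it.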
We talk about built-in WAN optimization a lot – and that's because it's an indispensable part of our service. As our VP of Products Chris Schin recently discussed with Storage Switzerland, Zetta.net focused its early development resources on WAN optimization technology to address the existing problems with cloud backup & DR. This allows us to get around the need for an appliance while offering the kind of recovery speed that enterprise clients need to get back on their feet quickly.
A quick look at the numbers makes it clear that business downtime is an expensive – and potentially game-ending – proposition. The occasional bout of downtime or data loss is bound to happen, and when it does, a speedy DR solution can make all the difference. The faster you get your data back, the faster you get back in business.