How can enterprises reduce downtime costs?

Organisations must now focus on reducing downtime in order to avoid financial and reputational damage

Downtime caused by data protection and recovery operations almost always carries a financial cost. It is therefore more important than ever that enterprises actively reduce downtime in order to save costs.

The need to reduce downtime

Gartner senior director analyst Ron Blair estimates the average cost of downtime at $5,600 per minute across industries. In a report published in October 2018, Blair highlights that this amounts to $336,000 per hour.

It is evident that the unavailability of data has the potential to cost organisations millions of dollars per hour. In fact, a major United States airline reported $100 million of impact following a six-hour outage at its data centre.

Moreover, a recent survey found that 83% of organisations have experienced a DDoS attack in the last two years. DDoS attacks cause 12 hours of downtime on average, but 8% of respondents reported over 20 hours of downtime.
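Combining the Gartner per-minute average with the survey's downtime figures gives a rough sense of scale. The sketch below is illustrative only; real costs vary widely by industry and incident.

```python
# Rough downtime cost estimator using the Gartner cross-industry
# average cited above ($5,600 per minute of downtime).
COST_PER_MINUTE = 5_600  # USD

def downtime_cost(hours: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Estimate the cost of an outage lasting `hours` hours."""
    return hours * 60 * cost_per_minute

# One hour matches the $336,000/hour figure from the report;
# twelve hours is the average DDoS-related downtime in the survey.
print(f"${downtime_cost(1):,.0f}")   # → $336,000
print(f"${downtime_cost(12):,.0f}")  # → $4,032,000
```

At the survey's average of 12 hours of DDoS-related downtime, the Gartner figure implies an impact of roughly $4 million per incident.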

Measuring an effective solution

According to a Hitachi Vantara whitepaper, there are four primary attributes for measuring the effectiveness of a data protection and recovery solution. The first, the backup window, is the amount of time allotted for performing a backup of a particular system or dataset.

The recovery point objective (RPO) measures the frequency of backup operations and, by extension, the amount of new data a company is willing to lose. The recovery time objective (RTO) is the target for how long it should take to restore a system after a failure.

Finally, Hitachi notes that cost is the fourth measure of an effective data protection infrastructure. In effect, a solution should provide the minimal levels of service that the business requires at the lowest possible cost.
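The four measures above can be expressed as a simple check of a backup plan against business objectives. This is a minimal sketch; the class, field names, and thresholds are hypothetical examples, not part of any Hitachi product.

```python
# Illustrative check of a backup plan against the four measures:
# backup window, RPO, RTO, and cost. All values are assumed examples.
from dataclasses import dataclass

@dataclass
class BackupPlan:
    backup_window_min: int    # time allotted per backup run
    backup_interval_min: int  # time between backups (determines the RPO)
    restore_time_min: int     # measured time to restore (determines the RTO)
    annual_cost_usd: float    # the fourth measure: cost

def meets_objectives(plan: BackupPlan, rpo_min: int, rto_min: int) -> bool:
    """True if backups run often enough to satisfy the RPO and
    restores complete quickly enough to satisfy the RTO."""
    return plan.backup_interval_min <= rpo_min and plan.restore_time_min <= rto_min

plan = BackupPlan(backup_window_min=30, backup_interval_min=60,
                  restore_time_min=45, annual_cost_usd=120_000)
print(meets_objectives(plan, rpo_min=60, rto_min=60))  # → True
```

In practice, the business would pick the cheapest plan for which `meets_objectives` holds, matching Hitachi's principle of meeting required service levels at the lowest possible cost.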

Implementing a solution

In order to limit the time and money it takes a company to recover from failure, Hitachi recommends three modes of action. First of all, it is essential to significantly reduce or eliminate the backup window that restricts the frequency of protection operations (the RPO).

Next, enterprises must increase the frequency of protection operations so that far less data is at risk of loss. Finally, it is necessary to speed up recovery operations whether locally (operational recovery) or remotely (disaster recovery).
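The second recommendation, increasing protection frequency, shrinks the worst-case data loss in direct proportion. The sketch below illustrates that relationship; the 2 GB/hour change rate is an assumed figure, not from the whitepaper.

```python
# Illustrative only: worst-case data at risk shrinks linearly as the
# interval between protection operations shortens. The change rate
# below is an assumption for demonstration purposes.
CHANGE_RATE_GB_PER_HOUR = 2.0

def data_at_risk_gb(backup_interval_hours: float) -> float:
    """Worst-case new data lost if a failure occurs just before
    the next protection operation runs."""
    return CHANGE_RATE_GB_PER_HOUR * backup_interval_hours

for interval in (24, 1, 0.25):  # daily, hourly, every 15 minutes
    print(f"{interval:>5} h interval -> {data_at_risk_gb(interval):.1f} GB at risk")
```

Moving from a daily backup to one every 15 minutes cuts the worst-case exposure from 48 GB to 0.5 GB under these assumptions.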

To address these needs, Hitachi developed a single software solution: Hitachi Data Instance Director provides enterprise copy data management for organisations looking to modernise and simplify their data protection, retention, and recovery operations.
