6 Surprising Surveys About Causes And Effects Of System Downtime

Survey Says: Way Too Much System Downtime From Storage Issues

System downtime can mean a real disaster for a business, yet it happens with surprising frequency. And more often than not, the root cause lies in the storage infrastructure.

For solution providers, the possibility of system downtime is an opportunity to work closely with customers: comb through their IT infrastructures for problems waiting to happen, then put in place new technology, disaster-recovery plans or business-continuity plans to reduce the chance of a system going down and to mitigate the impact if one does.

To help open the door to such opportunities, CRN has gleaned some interesting statistics sure to get attention from customers. Turn the page, get the stats, and start calling those customers.

Storage Upgrades And Disruption

For midsize businesses, disruptions caused by data storage equipment upgrades are the rule rather than the exception, according to a new survey conducted by Gridstore, a Mountain View, Calif.-based developer of storage that can expand nondisruptively with what the company called "no limits."

In the survey, Gridstore found that 55 percent of midsize businesses have experienced significant business and end-user disruptions as a result of major storage upgrades.

Gridstore also found that 32 percent of midsize businesses experienced failed upgrade processes and 9 percent experienced data loss as a result of upgrades.

Shhh, Don't Say Anything And Maybe No One Will Know

EVault, a Seagate company, found in a 2011 survey that 17 percent of IT decision-makers would rather have their teeth pulled without using painkillers than have to inform their bosses of a critical data loss.

Things in 2012 were not much better, the San Francisco-based cloud storage vendor found in its late-2012 survey of 650 IT decision-makers at companies ranging in size from 100 to more than 3,000 employees in the U.S., the U.K., France, Germany and the Netherlands.

About 24 percent of respondents admitted they had not told their CEOs that not all files, especially those on mobile devices, are being backed up. About 38 percent admitted they worry about whether their data is being saved securely, or whether any work has been backed up at all.

They were right to worry: 53 percent of the IT decision-makers said their companies had experienced data loss within the last 12 months, up significantly from the 31 percent who reported that experience in the 2011 survey.

Storage The Primary Source For Downtime, Data Loss Risks

Of all the things that can go wrong, storage carries the biggest risk for downtime and data loss, according to Continuity Software, a New York-based provider of service availability risk management solutions.

The Continuity Risk Benchmark, based on real-world customer data collected by Continuity Software's RecoverGuard automated vulnerability-monitoring and detection software, found storage the root cause of downtime and data loss risks in 58 percent of cases, followed by servers at 17 percent, clusters at 11 percent, virtualization and the cloud at 9 percent, and databases at 5 percent.

Data loss was the primary potential impact to a business in 41 percent of instances, followed by downtime and RTO (recovery time objective) violations at 25 percent, performance at 17 percent, and other impacts at 17 percent.

Storage issues also took the longest to resolve, at an average of 32 days, followed by server issues at 19 days, cluster issues at 17 days, virtualization issues at nine days, and database issues at seven days.

Hard To Analyze Data Needs

Oracle, working with audit and advisory firm KPMG, surveyed more than 370 executives at the most recent Oracle OpenWorld and found that 60 percent said their organization has a defined data and analytics strategy.

However, only 39 percent of respondents agreed or strongly agreed that senior management had access to increasing volumes of unstructured marketplace data needed to predict customer needs.

That gap made it difficult for senior management to analyze current data and act on the results, according to 45 percent of respondents. Furthermore, more than one-fourth of respondents cited managing the volume of data streaming in from multiple sources as their biggest challenge.

Where Uptime Is Mission-Critical, Failures Happen Way Too Often

Emergency dispatch operations have to respond quickly to emergencies, yet outages happen way too frequently, according to a survey of 390 public safety answering point (PSAP) professionals conducted in October by Stratus Technologies, a Maynard, Mass.-based provider of fault-tolerant systems.

In the survey, 72 percent of PSAPs serving populations of more than 80,000 citizens experienced downtime in the past 12 months, with 50 percent reporting two to four outages and 11 percent reporting more than five. About 60 percent of PSAPs in smaller communities suffered downtime at least once in the past year.

Fifty-seven percent of outages lasted at least 15 minutes, while 26 percent lasted over an hour, according to respondents. Stratus estimated that one hour of downtime could potentially affect six 9-1-1 calls at a PSAP handling 50,000 calls annually.
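As a quick sanity check on that estimate, the figure follows from simple rate arithmetic. The sketch below is our own back-of-envelope illustration, not part of the Stratus survey; only the 50,000-call volume and the one-hour outage come from the report:

    # Back-of-envelope check of Stratus' one-hour outage estimate
    annual_calls = 50_000                 # PSAP call volume cited in the survey
    hours_per_year = 365 * 24             # 8,760 hours in a year
    calls_per_hour = annual_calls / hours_per_year   # about 5.7 calls per hour
    print(round(calls_per_hour))          # prints 6, matching Stratus' estimate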

About 29 percent of respondents said their organizations had no formal disaster-recovery or contingency plan, or did not know if a plan existed.

Downtime Very Costly, And Way Too Common

System downtime can cost a company an average of $366,363 a year, according to the 2012 Acronis Disaster Recovery Index, a survey conducted late last year by the Ponemon Institute and Woburn, Mass.-based data protection software developer Acronis.

In the survey, Ponemon also found that 86 percent of companies suffered downtime in the past year, losing an average of 2.2 days annually. Sixty percent of respondents said human error was the most common cause of downtime.
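Dividing the survey's own figures gives a rough sense of the daily stakes. This is our back-of-envelope illustration, not a number from the Acronis report:

    # Rough cost per day of downtime implied by the Acronis/Ponemon figures
    annual_cost = 366_363                 # average yearly downtime cost, per the survey
    days_down = 2.2                       # average days of downtime per year, per the survey
    cost_per_day = annual_cost / days_down
    print(f"${cost_per_day:,.0f} per day")   # roughly $166,529 for each day of downtime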

Among specific IT operations, moving data between different physical, virtual and cloud environments was the biggest source of downtime, cited by 70 percent of respondents.