Storage Automation: Friend or Foe?

Several years ago, Veritas Software, the Mountain View, Calif.-based company, developed an automatic-failover function for its server-clustering solution. How did customers take to the new feature? Simple: about 70 percent turned it off. The tool would automatically bring an application back online when a server experienced a problem. But storage managers, ever the cautious types, wanted to ensure that no false alarms were triggering the failover. Some would simply slow down the failover tool when an alert sounded, stopping the process in midstream, so they could manually check whether the application really was in trouble.

"They were just not ready to hand over control of the server to software," explains Robin Purohit, vice president of the Product Management Availability Products Group at Veritas. "It took them a couple of years to build some trust [with the tool] before they started to turn it on."

That's an important point for VARs to know when setting out to sell storage-management software. Right now, automating management through software is one of the key developments going on in the storage industry. Storage vendors -- both small and large -- are marching into the marketplace with software that automates some of the most tedious, labor-intensive tasks that occupy many storage administrators' time.

For example, storage leader Veritas Software's SANPoint Control can detect trends in a database's file system and initiate more capacity based on thresholds, automatically allocating and configuring storage online. Another product, OPForce, can detect low CPU power and automatically serve up more horsepower. Moreover, Hewlett-Packard, another storage leader, is working on umbrella policy-based management tools that automate the placement of data on storage resources based on its age and importance to the business. HP is also developing tools that automate jobs such as storage provisioning, storage diagnostics and reporting, as well as self-repair.
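To make the threshold idea concrete, here is a minimal sketch in Python of what a policy-driven capacity monitor might look like. The names, threshold and growth step are invented for illustration; this is not the SANPoint Control or HP interface, just the general pattern of watching utilization and triggering a provisioning step when a policy threshold is crossed.

```python
# Hypothetical sketch of threshold-driven capacity automation -- not any
# vendor's actual API. It watches file-system utilization and calls a
# provisioning hook when usage crosses an assumed policy threshold.

import shutil
import time

THRESHOLD = 0.80        # assumed policy: act when a volume is 80 percent full
GROWTH_STEP_GB = 50     # assumed policy: grow in 50 GB increments


def utilization(mount_point):
    """Fraction of the file system at mount_point currently in use."""
    usage = shutil.disk_usage(mount_point)
    return usage.used / usage.total


def provision_capacity(mount_point, extra_gb):
    """Stand-in for the vendor's allocate-and-configure call."""
    print("provisioning %d GB for %s" % (extra_gb, mount_point))


def capacity_policy_loop(mount_points, interval_s=300):
    """Poll each volume and ask for more storage when the threshold is crossed."""
    while True:
        for mp in mount_points:
            if utilization(mp) >= THRESHOLD:
                provision_capacity(mp, GROWTH_STEP_GB)
        time.sleep(interval_s)
```

The point of the loop is simply that the administrator sets the policy once, and the software does the repetitive checking and provisioning.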

"IT budgets are pretty flat," says Mark Sorenson, vice president and general manager of the NSS Storage Software Division at HP. "[Companies] are not hiring too many people, but information continues to grow. So what do you do if each storage administrator has to manage 50 percent more storage than last year? The answer is through automated tools."

And that is what a lot of storage vendors are developing. But based on past experience, executives at Veritas recognize that customers don't hand over the task to the software right away. Veritas has experienced instances where it tries to sell a new automation feature, only to see customers throw up their hands, shrink back and call it "black art." Veritas has found that it usually takes about one to two years for IT managers to trust the software wholeheartedly. It's a gradual process. Veritas already is seeing a similar situation with its SRM and SANPoint Control products: Currently, only 20 percent of its customers use the automation tools within those products.

"My belief is, every time you try to automate something new, you are going to see this process of IT building a trust with that capability," Purohit says.

Sorenson agrees with Purohit, to a certain extent. It's true that customers are going to be wary of solutions that automate certain jobs, especially those that entail disaster recovery and replication. For instance, a bank headquartered in New York certainly wants to make sure it is dealing with a genuine crisis before its system fails over to its secondary site in New Jersey.

"We have technologies that do that automatically," Sorenson says. "But, yes, we have people who want to press the button to make sure that it really needs to be done. I don't want to disrupt everything...just because someone tripped over a plug."

But automating some functions merits more skepticism than automating others. Sorenson says there is a difference between automating a failover mechanism and automating tedious tasks like storage provisioning and configuration.

"I think the risk factor is significantly lower," he says. "It's one thing to failover an entire bank to another site in New Jersey. It's another thing to say, 'Oh, that provisioning step did not work right. I have to go back and fix it.' That has no overall impact on anyone."

Still, skepticism of software that automates is not unusual, especially when dealing with failover technologies, says Darren McBride, president of Sierra Computers & Training. As a solution provider, McBride has spent many hours with clients dealing with data-mirroring solutions in which both drives are in an active/active configuration.

Vendors, McBride explains, can never replicate the kinds of failures that solution providers experience in the field. He describes one client, a mining-engineering firm that tests rock samples, where a server that had not been opened for several months turned out to be coated with white dust inside. McBride's conclusion is that hardware rarely fails catastrophically; more often, it degrades over time. Customers then spend hours trying to determine which drive has failed, holding their breath in the hope that they have restored the data from the healthy drive instead of the faulty one.
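A simplified sketch of the kind of check McBride's point suggests, with invented structures rather than any vendor's tooling: watch error counters on both members of a mirror so the degrading drive is flagged before anyone has to guess which side still holds good data.

```python
# Hypothetical mirror-health check: flag the member whose error counters
# suggest gradual degradation, so the restore comes from the healthy drive.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MirrorMember:
    device: str
    read_errors: int            # e.g., pulled from SMART or controller logs
    reallocated_sectors: int


def suspect_member(a: MirrorMember, b: MirrorMember,
                   error_limit: int = 10) -> Optional[MirrorMember]:
    """Return the mirror member that looks like it is degrading, if any."""
    for member in (a, b):
        if member.read_errors > error_limit or member.reallocated_sectors > 0:
            return member
    return None


# Usage: /dev/sdb is flagged as the suspect, so the restore comes from /dev/sda.
bad = suspect_member(MirrorMember("/dev/sda", 0, 0),
                     MirrorMember("/dev/sdb", 42, 3))
```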

"That paranoia [of software that automates] is not unfounded," McBride says. "The problem with those [software] solutions is, customers may not have used them often enough. So you are sitting around toying around with this software that you are uncomfortable with. You haven't worked with it often enough to be comfortable with what it does."