Primary Storage Not Ready For The Cloud: GigaOM Panelists

Changes in storage technology and the way it is priced are required before the cloud can be ready for primary storage, according to a panel of storage vendors presenting at the GigaOM conference.

The panelists, including executives from NetApp and three startups, agreed that the cloud for now is more suited for backup and recovery, but that a number of new companies doing well in specific areas of cloud storage offer clues about business models that could help pave the way for more primary storage in the cloud.

Vanessa Alvarez, an analyst at Forrester Research who moderated the panel, said the good news about cloud storage is that customer interest is increasing: about 28 percent of respondents to a 2010 Forrester survey said they were interested in cloud storage, up from 18 percent the year before.

The bad news, Alvarez said, is that those respondents are only looking at backup and recovery in the cloud, not at primary storage.


Andres Rodriguez, founder and CEO of Nasuni, a Natick, Mass.-based developer of virtual appliances for backing up and archiving files to third-party clouds, said it is important for the storage and compute layers to be close together.

For now, customer concerns about performance and security are holding back investment in primary cloud storage, Rodriguez said. As a result, he said, it is better for many customers to move the cloud into the data center than to move the data center into the cloud.

IT departments are still reluctant to move primary storage to the cloud, said Val Bercovici, senior director in the Office of the CTO at NetApp.

However, Bercovici said, moving backup data to the cloud offers a chance for IT departments to add value to such data while cutting the costs of storing it. "Backups and business continuity are easy stepping stones to the cloud," he said.

One problem with running primary data in the cloud is how applications work, said Dheeraj Pandey, CEO of Nutanix, a San Jose, Calif.-based startup developing technology for virtualizing compute and storage resources together. Nutanix plans to come out of stealth mode at the VMworld conference late next month.

It is very difficult to run applications like Exchange or SharePoint in the cloud because of the problem of accessing the necessary metadata, Pandey said. "If metadata is sitting in the cloud, 200msec access is too slow," he said.

Dave Wright, CEO of SolidFire, a Boulder, Colo.-based startup developer of SSD-based primary storage systems for cloud computing that came out of stealth mode this week, said that breaking the tie between physical servers and their storage does not work because of the physics of data access speed.

There are millions of virtual servers running in clouds, all with their own storage, but still without the performance needed to run applications like SharePoint and Exchange in the cloud, Wright said.

The industry needs to get away from thinking in traditional storage terms like block and file and move on to new storage technologies that have an awareness of cloud topologies, Bercovici said.

He said that no one is running Oracle in the cloud, but customers are running primary applications from companies such as Box.net, a provider of cloud-based file sharing and management, which have developed models specific to providing storage in the cloud.

New Technologies Needed To Guarantee Service

To run such applications in the cloud will require that service providers start adopting technologies like SSDs to provide the needed performance, Wright said. And that, he said, will require service providers to stop charging for storage based only on the number of gigabytes stored and instead charge on a combination of capacity and performance. "That will allow service providers to apply SLAs (service level agreements) to storage in the cloud," he said.
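As a purely illustrative sketch (not something the panel described), billing on a combination of capacity and performance could be computed along these lines; the per-gigabyte and per-IOPS rates and the SLA-credit policy here are hypothetical assumptions:

# Hypothetical sketch of capacity-plus-performance storage billing.
# The rates and the SLA credit are illustrative assumptions, not figures
# cited by the panelists or any vendor.
RATE_PER_GB_MONTH = 0.10     # dollars per gigabyte stored per month (assumed)
RATE_PER_IOPS_MONTH = 0.02   # dollars per provisioned IOPS per month (assumed)
SLA_CREDIT = 0.25            # fraction credited back if the performance SLA is missed (assumed)

def monthly_bill(gb_stored, provisioned_iops, delivered_iops):
    """Charge for a volume billed on both capacity and promised performance."""
    charge = gb_stored * RATE_PER_GB_MONTH + provisioned_iops * RATE_PER_IOPS_MONTH
    if delivered_iops < provisioned_iops:
        # The provider missed its performance guarantee, so apply the SLA credit.
        charge *= 1.0 - SLA_CREDIT
    return charge

# Example: a 500-GB volume with 1,000 provisioned IOPS, SLA met -> $70.00 for the month.
print(monthly_bill(500, 1000, 1200))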

SSDs are still seen as very expensive compared to hard drives in the enterprise, where storage systems were built around hard drives or, more recently, costly SLC (single-level cell)-based flash, Wright said. However, by building storage systems from the ground up around less expensive MLC (multi-level cell) SSDs, the cost per gigabyte approaches that of hard disk technology while making it possible to guarantee a promised level of performance, he said.

A focus on a guaranteed level of performance will help drive primary storage in the cloud, Wright said. "That will make (customers) comfortable with the technology," he said. "That's something they can't even get from their own infrastructures."

Service providers also need to be concerned about the economics of running storage in the cloud from the start, something many have not done, Pandey said.

Pandey cited the example of Iron Mountain, which recently exited the public cloud storage business less than one year after entering it because of competition from Amazon's S3 offering. He also said that large storage providers like Amazon started out working with technologies from established vendors like NetApp, only to later decide they needed different technologies to meet their price points.

Rodriguez said another issue is companies' ability to sell cloud storage and the need to make sure their sales staffs are compensated not just on capacity sold but on the full range of services involved in a complete solution. "You need to sell storage services, complete solutions, where you can have 60 points of margin and pass 30 points to the channel," he said.