Scale-Out Storage Provides Performance Edge As Capacity Increases

Scale-out storage, or the ability to increase storage performance even as capacity is increased, has in the last couple of years taken its place on the list of "must consider" if not "must have" technologies for managing storage.

It is being embraced both by vendors who have scrambled to acquire the necessary intellectual property, and by customers concerned about the performance bottlenecks that come from their ever-increasing data stores.

Scale-out storage is a way of non-disruptively increasing the performance throughput of networked storage even as capacity is increased. It uses clusters of storage nodes, each with its own processing power, storage capacity, and I/O bandwidth so that as capacity is added, the processing power and bandwidth increase at the same time.
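The architecture described above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's implementation: each node bundles its own processing power, capacity, and I/O bandwidth, so the cluster's totals grow together as nodes are added. All names and figures are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class StorageNode:
    capacity_tb: float      # raw storage capacity this node contributes
    cpu_cores: int          # controller processing power on the node
    bandwidth_gbps: float   # the node's own network I/O bandwidth

class ScaleOutCluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: StorageNode):
        """Adding capacity also adds compute and bandwidth."""
        self.nodes.append(node)

    @property
    def capacity_tb(self):
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def bandwidth_gbps(self):
        return sum(n.bandwidth_gbps for n in self.nodes)

cluster = ScaleOutCluster()
for _ in range(4):
    cluster.add_node(StorageNode(capacity_tb=100.0, cpu_cores=16,
                                 bandwidth_gbps=25.0))

print(cluster.capacity_tb, cluster.bandwidth_gbps)  # 400.0 100.0
```

The point of the model is the pairing: there is no way to add capacity to the cluster without also adding a controller and network links, which is what distinguishes it from a traditional array behind a single controller.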

However, when presenting the technology to customers, just remember not to call it "scale-out storage," warn solution providers.

Customers are increasingly asking for the capability, but they don't yet know the term "scale-out storage," said Greg Knieriemen, vice president of marketing at Chi, a Cleveland, Ohio-based solution provider focusing on the storage industry.

"They call it high-performance storage, or high-performance file storage," Knieriemen said.

It is a technology which customers ask for every day, even if they don't know the term "scale-out storage," said Jamie Shepard, executive vice president of technology solutions at ICI, a Marlborough, Mass.-based solution provider.

"Customers don't know they're asking for it," Shepard said. "But every time they speak, they're asking for it. They're asking for storage that can grow and expand without limitations."

Scale-Out Vs. Scale-Up Storage

The interest in scale-out storage stems from the limited performance and network capabilities of traditional "scale-up" storage architectures as customers look to manage ever-larger data volumes as a single system.

As the amount of data stored within a scale-up storage system increases, performance of that storage eventually starts to decrease. This is because increasing the capacity of a typical storage array means pushing and pulling more data through a storage controller which eventually reaches limits in terms of its processing capabilities and network bandwidth.

As the processing and bandwidth performance become constrained, the overall performance of storage becomes a bottleneck. This can impact not only the speed at which data is read or written, but also the performance of several common storage services.

For instance, constrained processor performance could limit the number and frequency of snapshots used to set recovery points, and adversely impact replication performance. And as network bandwidth reaches its limits, server access to storage across the storage network starts to slow.

With traditional scale-up storage architectures, most arrays can be upgraded either with more powerful storage controllers or with multiple controllers within the same system, which provides performance and bandwidth relief until capacity grows to a certain point. Customers can also connect multiple arrays in a cluster configuration to increase performance and bandwidth. However, in both cases the boost reaches a plateau, based either on the relatively few controllers that can be added to an array or on the limits on the number of arrays that can be clustered.

With scale-out storage, however, performance and network bandwidth both increase as new capacity is added to a storage system. This is because the additional capacity comes from installing more storage nodes, each of which has its own controller with one or more processors and its own network connections, in addition to the hard drive or SSD capacity.

As a result, increasing the capacity of a storage system by adding new nodes actually reduces, rather than worsens, performance and bandwidth bottlenecks.
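The scale-up plateau and the scale-out gain can be seen in a toy comparison. All numbers here are made up for illustration: the scale-up array is assumed to have a single controller with a fixed bandwidth ceiling, while the scale-out cluster is assumed to add bandwidth with every node.

```python
def scale_up_bandwidth(capacity_tb, controller_limit_gbps=40.0):
    # One controller: total bandwidth stays flat no matter
    # how much capacity sits behind it.
    return controller_limit_gbps

def scale_out_bandwidth(capacity_tb, node_capacity_tb=100.0,
                        node_bandwidth_gbps=25.0):
    # Each (hypothetical) 100 TB node contributes its own 25 Gbps,
    # so total bandwidth grows in step with capacity.
    nodes = capacity_tb / node_capacity_tb
    return nodes * node_bandwidth_gbps

for tb in (100, 400, 800):
    up, out = scale_up_bandwidth(tb), scale_out_bandwidth(tb)
    # Bandwidth per TB shrinks as the scale-up array grows,
    # but stays constant for the scale-out cluster.
    print(f"{tb} TB: scale-up {up / tb:.3f} Gbps/TB, "
          f"scale-out {out / tb:.3f} Gbps/TB")
```

Under these assumptions, bandwidth per terabyte on the scale-up side falls from 0.4 to 0.05 Gbps as capacity grows from 100 TB to 800 TB, while the scale-out cluster holds steady at 0.25 Gbps per terabyte, which is the bottleneck behavior the article describes.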

Next: Storage Capacity: Cutting Back On The Bottlenecks