High availability doesn't have to mean high cost. For mission-critical enterprise applications, high availability is everything. Washington, D.C.-based storage solution provider IceWeb on Tuesday unveiled the 6500 Cluster, a beefier version of its IceWeb 6000 series product that's designed specifically for such needs, at a price that's not completely in the clouds.
Starting at $45,000 list, the 6500 Cluster combines two multiprocessor storage nodes in a single 3U server chassis and includes software that directs storage traffic to the node that's most available. Each eight-core node has its own networking and power supplies, and can share access to as many as 16 3.5-inch SSD and SAS drives set up in any of myriad RAID configurations.
With a background in geospatial computing, IceWeb CEO John Signorello is no stranger to systems for effectively processing extremely large, non-structured data files. "One hundred percent of our federal customers are doing those sorts of things," he said.
The goal now, he added, is to attract strong integrators and expand horizontally into health care and other markets with high-performance processing requirements. Signorello said that to do this, the next 12 months will involve a new technical roadmap. "The key is to build integration, virtualization and VDI opportunities," he said, and to continue building relationships with managed service providers and the VAR channel.
As for system performance, IceWeb has the technical side fairly well wrapped up. The CRN Test Center was given an exclusive preview of IceWeb's new high-performance, high-availability cluster, and found that it handled all of the traffic we could throw at it and barely seemed to flinch.
To test IceWeb's latest array (which we set up as iSCSI), we fired up a Dell Optiplex 990 test workstation equipped with an Intel Core i7 3.4 GHz dual-core processor running 32-bit Windows 7 Professional on 4 GB of DDR3 memory. Using IOmeter as a benchmark, we employed our standard methodology for achieving maximum performance: first determining the optimal number of outstanding IOs per target (IO/t) to test with.
The IO/t determines how many operations are sent to the transaction queue at one time. The default setting is one, but most hardware will turn in better IO performance when the IO/t is somewhere between 12 and 48, depending on storage hardware and other infrastructure variables.
To find the optimal number, we repeated tests while gradually incrementing the IO/t until performance stopped improving. Once we arrived at the optimal IO/t setting for the 6500 Cluster, which turned out to be 48, we used it for all subsequent testing.
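The sweep described above can be sketched in a few lines of code. IOmeter itself is driven through its GUI or configuration files, so the sketch below is only an illustration of the logic, not a real benchmark: `measure_iops` is a hypothetical stand-in for running a test at a given IO/t setting, and the candidate depths and 2 percent improvement threshold are assumptions, not values from the review.

```python
def find_optimal_io_t(measure_iops,
                      depths=(1, 4, 8, 12, 16, 24, 32, 48, 64),
                      min_gain=0.02):
    """Step through candidate IO/t (outstanding I/Os per target) settings,
    keeping the last one that improved throughput by at least min_gain.

    measure_iops is a caller-supplied function (hypothetical here) that
    runs a benchmark at the given queue depth and returns measured IOPS.
    """
    best_depth, best_iops = depths[0], measure_iops(depths[0])
    for depth in depths[1:]:
        iops = measure_iops(depth)
        if iops < best_iops * (1 + min_gain):
            break  # gains have flattened; keep the previous setting
        best_depth, best_iops = depth, iops
    return best_depth, best_iops


# Illustration with a synthetic workload whose throughput plateaus at 48:
def fake_measure(depth):
    return min(depth, 48) * 1000

depth, iops = find_optimal_io_t(fake_measure)
print(depth)  # 48
```

The same stop-when-flat rule is what the methodology describes: once a higher queue depth no longer buys meaningfully more throughput, the previous setting is locked in for the remaining tests.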