SAS Switch: Boost For Direct-Attached Storage; Bane Of iSCSI SAN
12:02 PM EST Fri. Feb. 25, 2011
When it began shipping in October 2010, LSI Corp.'s SAS6160 SAS switch was the only device of its kind: a 6-Gbps interconnect that lets multiple servers share one or more JBODs (just a bunch of disks), RBODs (RAID bunch of disks) or other SAS resources at greater distances than direct-attached storage allows, at lower cost and with far less complexity than a storage area network.
Today, it's still the only one, and the CRN Test Center has had an opportunity to test one in its labs. The first thing that surprised us about the SAS6160 was its small size. For a SAS expander device that's capable of connecting as many as 16 servers, storage units and other switches using the relatively bulky external SAS 4x connectors, the box took up less space than we expected.
At just 8.5 inches wide, the SAS6160 can be mounted two-across on an equipment rack using an optional 1U rack mounting kit.
An infrastructure that incorporates SAS switches also can be used to deploy a tiered-storage solution using a mixture of 3-Gbps and 6-Gbps, SATA and SAS storage devices, including spinning disks and/or SSDs. As many as 1,000 devices can be connected in each storage system. What's more, LSI SAS switches can be placed as far as 75 feet apart, about four times the reach of traditional direct-attached storage systems using passive copper cabling.
As many as four SAS6160s can be cascaded or connected redundantly to boost bandwidth or to provide fault tolerance and high availability in data center, managed hosting or cloud computing scenarios. Each of the SAS6160's 16 ports provides four 6-Gbps lanes, for a total of 24 Gbps per port and an aggregate bandwidth of 384 Gbps for the entire switch.
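Those bandwidth figures follow directly from the lane arithmetic; here is a quick sketch of the math (our own back-of-envelope check, using the lane and port counts above):

```python
# Back-of-envelope bandwidth check for the SAS6160.
LANES_PER_PORT = 4    # each SAS 4x port carries four lanes
GBPS_PER_LANE = 6     # SAS-2 signaling rate per lane, in Gbps
PORTS = 16            # total external ports on the switch

per_port_gbps = LANES_PER_PORT * GBPS_PER_LANE   # 24 Gbps per port
aggregate_gbps = per_port_gbps * PORTS           # 384 Gbps switch-wide

print(per_port_gbps, aggregate_gbps)  # 24 384
```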
The specified throughput of the SAS6160 exceeds our ability to test. Instead, we sought to verify the SAS6160's per-port transfer rate and to measure its transaction capabilities. To do so, testers prepared a server-class machine with 4 GB of DDR3 memory running the 64-bit version of Windows Server 2008 Datacenter and a MegaRAID SAS 9280-8e RAID controller from LSI.
We configured three SSD drives of a 24-drive JBOD as a RAID 0 array and used Intel's IOmeter to test throughput (measured in MBps) and transaction processing (in IO/s) with the server connected directly to the array. These results would become the performance baseline. Then we connected the server to the SAS6160, ran another cable from the SAS switch to the array, and repeated the same IOmeter test.
After some back and forth with LSI engineers to make sure that our IOmeter settings were in tune with the LSI MegaRAID card, we observed sustained throughput rates of around 815 MBps and a sustained transaction rate of 3273 IOps. When connected through the SAS switch, throughput dropped by about three percent to 789 MBps, and the transaction rate fell to 3156 IOps, an additional latency of about 12 microseconds per IO.
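The overhead figures can be derived from the two runs; this small script (our own arithmetic on the numbers reported above) shows how the percentage drops and the added per-IO service time work out:

```python
# Derive switch overhead from the direct-attached and switched IOmeter runs.
direct_mbps, switched_mbps = 815, 789     # sustained throughput, MBps
direct_iops, switched_iops = 3273, 3156   # sustained transaction rates, IOps

throughput_drop_pct = 100 * (direct_mbps - switched_mbps) / direct_mbps
iops_drop_pct = 100 * (direct_iops - switched_iops) / direct_iops

# Added service time per IO: difference of the reciprocal transaction rates,
# converted from seconds to microseconds.
extra_latency_us = (1 / switched_iops - 1 / direct_iops) * 1_000_000

print(f"{throughput_drop_pct:.1f}%  {iops_drop_pct:.1f}%  {extra_latency_us:.1f} us")
```

The reciprocal-rate calculation yields a bit over 11 microseconds per IO, in the same ballpark as the roughly 12-microsecond figure reported above (IOmeter measures latency directly, so the two need not match exactly).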
Setting up and configuring the SAS6160 was not all rainbows and lollipops. Since SAS is a connection-oriented, point-to-point technology, its default state is to allow any attached initiator (server or client device) to access any target (storage device). A fair amount of administration will be necessary before unleashing even the simplest segregated deployment. So before getting started, we recommend hammering out user rights and privileges in advance and documenting it all on paper, as you would with any server deployment.
Once you're ready to configure the switch, you'll need to connect a Java-enabled PC to its front-panel Ethernet interface and browse to the SAS6160's default IP address. The unit's embedded management interface -- SAS Domain Manager -- will download and attempt to launch (after it verifies that Java is present and up to date). The first step in creating SAS zones is to specify the zone groups, which will include any hosts and storage devices that will be sharing the same access privileges. For example, Zone group 8 (see diagram) might represent an accounting department that should have access only to its own server and storage system (JBOD A). Similarly, Zone group 9 has access only to its server and storage (JBOD B).
Notice in the diagram that JBODs A and B and the SAS6160 itself are part of a larger shaded area marked ZPSDS. This "Zoned Portion of a Service Delivery System" is created by all linked switch/expander devices that are compliant with SAS 2.x spec and have the zoning function enabled. All zoning-enabled switch/expanders in this zone maintain identical permission tables, which establishes access control throughout the zone. Notice also that JBOD C, which is not zoning-enabled, is not in the ZPSDS.
The CRN Test Center sees great potential in the SAS6160, recommends the product with confidence, and expects to see many more like it in the future. This technological advance opens plenty of reseller opportunities, not least its connection flexibility, scalability, improved storage efficiency through the elimination of storage silos, and a vastly simplified interconnect methodology compared with iSCSI. There will be a learning curve, however, as technicians tackle the subtleties of SAS zoning and provisioning.
The SAS6160 lists for $2495. It began shipping on Oct. 12 and is available through CDW, NewEgg and high-tech distributors. A 1U rack mounting kit holds two SAS6160s and lists for $450.