Review: Xsigo's Breakthrough - I/O Virtualization

Virtualization is rapidly changing the data center landscape. While server density has increased dramatically thanks to hypervisor technologies, the same cannot be said for the I/O devices that surround those servers. Those devices have to squeeze more data through the same bandwidth, and they are not keeping up. Physical ports, for instance, are usually mapped to virtualized adapters in a one-to-one ratio to maximize performance, with switches sometimes creating VLANs to segment the resulting networks. The upshot of virtual environments is that servers get network cards with ever more ports, while bandwidth management is pushed out to managed switches.

While this approach created an exciting new way to squeeze performance out of servers, it also created a nightmare for system engineers. New servers with greater network capacity, and the cabling to match, had to be put in place. New I/O bottlenecks cropped up, requiring new methods for identifying heavy traffic across network nodes. Products such as Akorri BalancePoint sprang up to solve these cross-domain problems in virtualized data centers.

With I/O virtualization, much of that cabling, the costly power consumption of dedicated network devices and many other bottleneck headaches simply disappear.

San Jose, Calif.-based startup Xsigo has the right solution. Two and a half years in development, Xsigo's Virtualization Platform 780 is going to radically change the VMware landscape. The Xsigo box manages traffic across virtual network adapters (vNICs) and SANs with a proprietary virtual host bus adapter (vHBA) architecture.

In what can best be described as a virtual LAN in a box, the VP 780 eliminates the need for multiple physical connections to and from virtualized infrastructures. With a 780-Gbps backbone, hence the VP 780 name, the Xsigo box provides a massive I/O gateway for dozens of servers. And the box is stackable: an expansion slot for 24 ports is available, so in theory data centers can run a backbone of more than 1 Tbps serving hundreds of servers. The VP 780's chassis says it all. It contains hot-swappable slots for 10-Gbps network switches and Fibre Channel switches, with two long rows of InfiniBand connectors fixed on top of the slots. Xsigo sells specialized host channel adapter (HCA) cards that must be installed in each server it manages.

The Xsigo box comes with a plug-in for VMware ESX Server and Web-based controller software for Red Hat Linux. The Test Center tested the chassis with three Dell servers, a Brocade 200E storage switch and a Clariion AX150 SAN array. Connected to the Xsigo chassis through the InfiniBand HCA cards, which are flashed with Xsigo's software, the Dell servers can reach a maximum of 16 Gbps.

Through the HCA cards, the Xsigo box can generate vNICs and vHBAs and can provide PXE booting from any SAN disk; Xsigo says it is the only vendor on the market that can boot servers this way. Right now Xsigo sells DDR cards, which top out at 16 Gbps, delivered bidirectionally on each connection. The company plans to offer dual QDR HCA cards, which will double the throughput of the host bus adapters. With the higher bandwidth, only one card will be needed per server, making deployments greener.
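The 16-Gbps figure lines up with standard 4x DDR InfiniBand link math, and QDR doubles it. The quick calculation below is a sketch assuming the usual per-lane signaling rates and 8b/10b encoding; it is not drawn from Xsigo's published specifications.

```python
# Back-of-the-envelope InfiniBand bandwidth math (assumed standard rates,
# not Xsigo's specs): a 4x link with 8b/10b encoding turns 5-Gbps-per-lane
# DDR signaling into 16 Gbps of usable data, and QDR doubles that.

LANES = 4           # 4x HCA link width
ENCODING = 8 / 10   # 8b/10b line encoding: 10 signal bits carry 8 data bits

per_lane_signaling_gbps = {"DDR": 5.0, "QDR": 10.0}

for generation, rate in per_lane_signaling_gbps.items():
    data_gbps = LANES * rate * ENCODING
    print(f"{generation}: {LANES} lanes x {rate} Gbps -> {data_gbps:.0f} Gbps usable")

# DDR: 4 lanes x 5.0 Gbps -> 16 Gbps usable
# QDR: 4 lanes x 10.0 Gbps -> 32 Gbps usable
```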

The Xsigo chassis came with one 4x1-Gigabit card, a dual Fibre Channel controller and a dual-port Fibre Channel SAN card. The Fibre Channel controller can provide high availability (HA) to a virtualized infrastructure by connecting to another Xsigo box.

Each Xsigo box can serve several racks of servers. In an HA scenario, installing multipathing products such as EMC PowerPath on the primary servers is key for managing duplicate paths to disk. Nonetheless, the Xsigo box provides HA capability without the complexity.

The VMware plug-in impressed us the most. Through the VMware Infrastructure Client, administrators can manage all of the I/O of a virtual infrastructure. Because I/O is virtualized outside of physical servers, Xsigo's I/O Management software can generate any virtual connection on the fly, so administrators never have to shut down servers. Administrators can allocate resources at will without installing physical devices and drivers. Administration and deployment time is cut dramatically.

Quality of service (QoS) on vNICs and vHBAs is managed through profiles, and Xsigo did a great job simplifying QoS on virtual I/O connections. In addition, the I/O Management plug-in supports VLANs, so vNICs can be segmented inside a profile. Likewise, the software can map vHBAs to LUNs. Through profiles, administrators can configure LUN masks and all other host connections.
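To make the profile concept concrete, the sketch below shows the kind of information such a profile bundles together: per-vNIC VLAN and QoS settings alongside per-vHBA LUN masks. The class and field names are hypothetical illustrations for this review, not Xsigo's actual schema or API.

```python
# Hypothetical sketch of a server I/O profile: vNICs carry VLAN and QoS
# settings, vHBAs carry LUN masks. Names are invented for illustration
# and do not reflect Xsigo's actual data model or API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VNic:
    name: str
    vlan_id: int              # VLAN segment this vNIC belongs to
    rate_limit_mbps: int      # QoS ceiling applied to the vNIC

@dataclass
class VHba:
    name: str
    lun_mask: List[int]       # LUNs this vHBA is allowed to see

@dataclass
class ServerIOProfile:
    server: str
    vnics: List[VNic] = field(default_factory=list)
    vhbas: List[VHba] = field(default_factory=list)

# One profile carries a server's whole I/O personality, so reassigning the
# profile moves its network and storage identity without touching hardware.
esx01 = ServerIOProfile(
    server="esx01",
    vnics=[VNic("vnic0", vlan_id=10, rate_limit_mbps=2000),
           VNic("vnic1", vlan_id=20, rate_limit_mbps=1000)],
    vhbas=[VHba("vhba0", lun_mask=[0, 1, 5])],
)
print(esx01)
```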

The I/O Controller software also provides vHBA and vNIC statistics, and its I/O throughput charts provide real-time views of each virtual connection. Out of the box, the Xsigo Management software comes with various network policy profiles, and administrators can set up network access control layers to control port-level traffic as well.

The company has more than 100 hosts connected at proof-of-concept sites. Xsigo is offering partners a three-day hands-on training class that steps them through complete product management, including troubleshooting complex virtualized data centers, and covers everything they need to install and manage the product.