Fibre Channel over Ethernet is poised to revamp the storage industry, but is it worth considering just yet?
It does have a lot of potential, yet CRN Test Center reviewers ran into a hiccup during a remote test of an FCoE solution. Overall, though, we liked what we saw.
First, some background: The key idea behind FCoE is convergence—consolidating SANs that traditionally use Fibre Channel with Ethernet networks without having to make changes to the network infrastructure. Among FCoE's advantages are high-speed data transmission and "lossless" transmission over Ethernet: FCoE rides on 10-Gbit Ethernet links whose flow-control enhancements keep frames from being dropped under congestion, which is what makes transmission effectively lossless.
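At the protocol level, convergence simply means carrying whole Fibre Channel frames inside Ethernet frames tagged with the FCoE EtherType (0x8906). A minimal Python sketch of that framing follows; the FCoE version, SOF/EOF delimiters and padding defined by the standard are omitted for brevity, and the MAC addresses and payload are illustrative:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE traffic

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame (simplified).

    The real FCoE encapsulation also carries a version field, SOF/EOF
    delimiters and padding; this sketch keeps only the Ethernet framing
    to show the convergence idea.
    """
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

frame = encapsulate(b"\x0e\xfc\x00\x00\x00\x01",
                    b"\x00\x1b\x21\xaa\xbb\xcc",
                    b"FC-frame-payload")
assert struct.unpack("!H", frame[12:14])[0] == FCOE_ETHERTYPE
```

Because a full Fibre Channel frame can run to 2,148 bytes, FCoE links also need Ethernet frames larger than the standard 1,500-byte payload, one reason the solution calls for purpose-built 10-Gbit gear.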
FCoE vendors claim yet another advantage: the potential to lower TCO through better server utilization of network and storage resources, simplification of the infrastructure, and maintenance of the network's physical layout. With FCoE, there is no need to run separate Fibre Channel and Ethernet cabling, so less wiring is needed. That means less equipment and a corresponding reduction in power and cooling requirements. The scalability potential is vast as well.
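The cabling claim is easy to put numbers on. A back-of-the-envelope sketch, where the per-server adapter counts are our assumptions rather than figures from any vendor:

```python
# Rough illustration of the cabling consolidation argument; the
# per-server port counts below are assumptions, not review figures.
SERVERS = 100
LEGACY_PORTS = 2 + 2     # two Ethernet NIC ports plus two FC HBA ports
CONVERGED_PORTS = 2      # two CNA ports carrying both traffic types

legacy_cables = SERVERS * LEGACY_PORTS
fcoe_cables = SERVERS * CONVERGED_PORTS
print(f"Legacy cables: {legacy_cables}, "
      f"FCoE cables: {fcoe_cables}, "
      f"saved: {legacy_cables - fcoe_cables}")  # saved: 200
```

Every cable dropped also drops a switch port, a transceiver and their share of power and cooling, which is where the long-term TCO argument comes from.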
Aliso Viejo, Calif.-based QLogic Corp. has delivered its own Converged Network Adapter (CNA) designed to consolidate the data networking of a NIC with the storage networking of a Fibre Channel host bus adapter, all on a single 10-Gbit Ethernet adapter. QLogic's CNA works in tandem with San Jose, Calif.-based Cisco Systems Inc.'s Nexus 5000 series switch and Sunnyvale, Calif.-based NetApp's FAS3040 storage system. The three components make up a multivendor offering that QLogic touts as the first native FCoE storage solution. Cisco's switch is the first to come with both Fibre Channel and Ethernet interfaces.
QLogic's setup offers a remote, "hands-on" FCoE environment, giving reviewers the chance to experience the difference between Fibre Channel and FCoE.
Testing started off with the configuration of the Cisco switch, which is done through Cisco Fabric Manager. Fabric Manager gives a detailed, yet well-organized, view of logical objects in a network, such as SANs and VSANs connected to the switch, as well as physical objects like interfaces. There is also a graphical image of the Cisco switch and the devices connected to it. In this test, the switch showed the NetApp storage device connected and indicated that the FCoE connection to an ESX server was down.
The next step of configuration involved converting a port on the switch to a Virtual Interface Group (VIG) port. This is the first step in getting the port converted to an FCoE port.
The port conversion was easy, although we wouldn't exactly call it intuitive. Creating a VIG port and binding it to an Ethernet port involved right-clicking the VIG icon to create a new row, defining the port ID number and then selecting the Eth interface. Next, we set out to create the Virtual Fibre Channel interface, the last step in creating an FCoE port. Again, this involved a right-click, then selecting the switch name and the VIG port ID created in the step above. The task essentially took the same handful of steps as before; while the Fabric Manager console is, once again, not especially intuitive to navigate, the steps are consistent throughout.
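Fabric Manager is a graphical front end over the switch's NX-OS configuration. On Nexus 5000 software the equivalent command-line steps look roughly like the following; the interface and VSAN numbers are illustrative, and the exact syntax, including whether a separate VIG step is needed, varies by NX-OS release:

```
feature fcoe
! create a virtual Fibre Channel interface and bind it to a 10-GbE port
interface vfc4
  bind interface ethernet 1/4
  no shutdown
! place the new vFC interface in a VSAN
vsan database
  vsan 100 interface vfc4
```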
Other tasks we ran included setting up the NetApp device, creating a VMware Logical Unit Number (LUN) and mapping it to Fibre Channel and FCoE initiators. Reviewers were stopped dead during the hands-on review while trying to set up the VMware LUN. The system reported the following: "Error during configuration of the host: Failed to update disk partition information." We threw this back to QLogic's engineers to follow up. They found that a protocol on the NetApp device had become disabled, but when asked about it, they could not explain why. Once the protocol was re-enabled, however, testing continued glitch-free.
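For reference, the same LUN provisioning can also be done from the Data ONTAP command line on the NetApp filer. A rough sketch, with the volume path, LUN size and initiator WWPN all illustrative:

```
# create a 100 GB LUN with a VMware on-disk layout
lun create -s 100g -t vmware /vol/vol1/esx_lun0
# create an FCP initiator group for the ESX host and map the LUN to it
igroup create -f -t vmware esx_hosts 21:00:00:c0:dd:00:00:01
lun map /vol/vol1/esx_lun0 esx_hosts 0
```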
It was fairly simple via the interface to add the VMware LUN to a datastore on existing ESX servers and then add the LUN as a hard drive on a virtual machine. Once that was accomplished, it was a breeze to manage the virtual disk through Windows. The disk was partitioned the same way any physical drive would be within Windows Disk Management.
Reviewers witnessed the performance of this solution by using JetStress, a utility that simulates an Exchange environment. The test gave read-write counts for an Exchange server that could support up to 700 mailboxes, each averaging 200 MB, and showed how efficiently I/O and Exchange run over FCoE compared with a traditional SAN. This solution is sophisticated, offers hearty performance, and the three vendors' products seemed to integrate well with one another. The protocol issue with the NetApp device is still a bit worrisome. Nor is the solution cheap: The starting list price for the NetApp device alone is $47,500. However, per QLogic, additional channel pricing is set by the OEM. Although the initial startup costs may be high, the focus in a solution like this is a long-term reduction in costs.