A growing number of organizations, from cloud hosting companies to enterprise data centers, are pursuing higher-performance data centers and evaluating the emerging collection of products and technologies around Ethernet switching, storage convergence and network virtualization. Those products and technologies, combined with the trend toward increased server virtualization and the convergence of data and storage networks, are leading the market toward a network solution called an Ethernet data center “fabric.” (Ed. note: For recent news on that topic, see "Xsigo Challenging Data Center Giants With Virtualized Server Fabric.")
IT solution providers, charged with helping their clients keep pace with cost-saving and technology trends, need to know how to navigate the rapid changes in the network model for delivering services from a centralized network and/or the virtualized cloud.
The first step in the process of migrating to a fabric is answering this key question: “Just what is a data center fabric?” The answer lays the foundation for a plan to build a cost-effective converged data and storage network and/or cloud infrastructure. Simply defined, an Ethernet data center fabric is a high-performance network that provides high-speed, low-latency interconnectivity. The fabric comprises a non-blocking, non-oversubscribed Ethernet switched network that supports Layer 2 connectivity, promoting a flattened and often more cost-effective network design. An Ethernet data center fabric also provides performance attributes such as multiple active paths with fast failover and mesh connectivity in place of a Spanning Tree topology. Lastly, it integrates simplified management, configuration and service provisioning. Given this set of attributes, what is the best way to build an Ethernet-based fabric network for the cloud?
Today, controlling costs means everything. Enterprises and cloud hosting providers are seeking to build a network that better meets service level agreements (SLAs) while maintaining performance and flexibility. That can be achieved by seeking open, standards-based technologies, including data center bridging (DCB, a set of IEEE 802.1 standards) and edge virtual bridging (EVB), and by avoiding proprietary vendor technologies, which often lead to architecture lock-in and reduce pricing leverage.
The second step to achieving a cost-effective data center fabric is choosing appropriate connection speeds at the port. Emerging 40 Gigabit Ethernet (GbE) pricing (estimated at approximately $4,000 per port) compares favorably to early published 100GbE pricing, which lists at roughly five times that figure, making 40GbE ports the more cost-effective choice. Given that servers are moving to 10GbE performance, the access layer of the network downstream is also naturally based on 10GbE port connections. That means the interconnectivity, the fabric that ties access layer switches together, will move to 40GbE based on price and performance. Combined, the resulting data center network is an open-standards, high-density, high fan-out, non-blocking 40GbE data center fabric providing high-speed, low-latency interconnectivity.
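The price-performance math above can be sketched in a few lines. The figures are the rough ones cited here ($4,000 per 40GbE port, early 100GbE list pricing at about five times that), and the 48-port 10GbE access switch is a hypothetical example; real quotes and port counts will vary.

```python
# Back-of-the-envelope comparison of 40GbE vs. 100GbE uplink economics,
# using the approximate prices cited in the article.
PRICE_40G = 4_000           # approximate per-port price cited for 40GbE
PRICE_100G = 5 * PRICE_40G  # early 100GbE list pricing, ~5x per the article

cost_per_gbps_40g = PRICE_40G / 40
cost_per_gbps_100g = PRICE_100G / 100
print(f"40GbE:  ${cost_per_gbps_40g:.0f}/Gbps")   # $100/Gbps
print(f"100GbE: ${cost_per_gbps_100g:.0f}/Gbps")  # $200/Gbps

# For a non-blocking access switch, uplink capacity must match the
# aggregate downlink capacity (1:1, i.e. no oversubscription).
server_ports = 48                          # hypothetical 48 x 10GbE switch
downlink_gbps = server_ports * 10
uplinks_40g_needed = downlink_gbps // 40
print(f"{uplinks_40g_needed} x 40GbE uplinks for a non-blocking design")
```

The same 480 Gbps of uplink would also fit in five 100GbE ports, but at the cited pricing that capacity costs twice as much per gigabit.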
The third step is adding the functionality to provide active-active data paths with as few hops as possible. That is done for two reasons: first, to avoid taking multiple hops up and down a traditional Spanning Tree topology; second, to use bandwidth on all available paths. Here again, there are open, interoperable approaches to multi-path forwarding. Multi-chassis link aggregation (MLAG) is one solution that provides dual active-active links with fast failover, works on most existing infrastructure and interoperates with other network infrastructure devices. Alternative standards-based solutions such as TRILL or SPB may also be considered where newer network infrastructure is being deployed.
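The bandwidth argument for active-active paths can be made concrete with a small illustration. The dual-uplink access switch below is a hypothetical example, not a specific product configuration: Spanning Tree blocks redundant links, while MLAG (or TRILL/SPB) keeps every link forwarding.

```python
# Illustrative: usable uplink bandwidth from an access switch under
# Spanning Tree vs. an active-active scheme such as MLAG.
UPLINKS = 2      # hypothetical dual uplinks from one access switch
LINK_GBPS = 40   # 40GbE links

# Spanning Tree blocks redundant paths, leaving one active uplink.
stp_usable = 1 * LINK_GBPS

# MLAG (or TRILL/SPB) keeps all uplinks forwarding simultaneously.
active_active_usable = UPLINKS * LINK_GBPS

print(f"Spanning Tree usable: {stp_usable} Gbps")        # 40 Gbps
print(f"Active-active usable: {active_active_usable} Gbps")  # 80 Gbps
```

Doubling usable bandwidth on the same physical links, with fast failover when one path dies, is the core of the active-active case.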
Lastly, the requirement for simplified management is once again being met through an open-standards approach. OpenFlow holds great promise as an up-and-coming, breakthrough technology for provisioning, configuration and administration. Already, initiatives such as OpenStack are looking at OpenFlow as an open approach to provisioning in large, cloud-scale data centers. The Open Networking Foundation (ONF), which drives the OpenFlow effort, is backed by a large cross-section of both consumers and providers of the technology.
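The OpenFlow model behind that promise is simple: a central controller pushes match/action flow entries to switches, rather than each device being configured by hand. The sketch below is plain Python for illustration only, not a real controller API; the field names loosely mirror OpenFlow match fields, and the table-miss behavior (punting to the controller) is the part that enables centralized provisioning.

```python
# Conceptual sketch of an OpenFlow-style flow table: entries pair a
# match on packet fields with a forwarding action. Illustrative only.
flow_entry = {
    "match": {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.5"},
    "actions": [{"type": "OUTPUT", "port": 3}],
    "priority": 100,
}

def forward(packet, flow_table):
    """Return the output port for a packet, or None on a table miss."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"][0]["port"]
    return None  # table miss: in OpenFlow, the packet goes to the controller

table = [flow_entry]
pkt = {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
print(forward(pkt, table))  # 3
```

Because the flow table is just data, a controller can provision thousands of switches from one place, which is exactly the simplified-management property the fabric requires.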
Combined, these technologies, high-density, non-blocking, standards-based 40GbE interconnectivity, multi-path support and a standards-based provisioning solution such as OpenFlow, should provide a viable, open-standards answer to the demand for Layer 2 data center fabrics that can serve the movement to the cloud.
Shehzad Merchant is vice president of technology for Extreme Networks, where he drives strategy and technology direction for advanced networking, including LANs and data centers. With more than 17 years of industry experience and several patents to his name, Shehzad is a veteran of wired and wireless Ethernet and communications.