The Future Of Servers: Unified, Consolidated, Virtual

The days of companies buying racks full of "pizza boxes" may be coming to an end, as the increased adoption of server virtualization has businesses scrambling to consolidate the workloads of multiple individual server boxes onto larger, more powerful multicore servers.



Servers of the future are likely to be hybrids, combining the elements of traditional servers, storage devices and networking into a single hardware platform topped by virtualization technology.



Cisco, as part of the buzz surrounding its expected first foray into the server business, calls this trend "unified computing." But this is more than a Cisco strategy. Most of the top server vendors see this unified concept as the main focus in the server business going forward.



In fact, the first steps have already been taken. Sun and Intel, for instance, have both offered "unified" or "hybrid" server/storage combinations which, unlike traditional servers with internal storage, act as both servers and storage appliances.





Here's how the top vendors, including the biggest up-and-comer of all, Cisco, see server development going forward.

Padmasree Warrior, Cisco's CTO, used a blog post in January to describe Cisco's unified computing strategy and to help ignite buzz ahead of Monday's unveiling of that strategy.



In her blog, Warrior wrote that Cisco is innovating around an architectural approach she calls "Unified Computing."

She defined it as a way to advance toward the next-generation data center by unifying all resources in a common architecture, including the compute platform, the storage platform, the network and the virtualization platform. Such an integrated architecture breaks down the silos between compute, virtualization and connect, she wrote.

"IT architectures are changing, becoming increasingly distributed, utilizing more open standards and striving for automation. IT has traditionally been very good at automating everything but IT! Unified Computing and automation at an architectural level can lower operating costs while extending capital assets," Warrior wrote.

The commoditization of servers over the past few years was fine as long as performance doubled every two years, but that is no longer enough, said Tom Bradicich, IBM Fellow and vice president of systems technology at the company.



Instead, what is required is combining compute resources to meet customers' needs for increased efficiencies, lower costs, increased reliability and more environmental friendliness, Bradicich said.



That means some type of hybrid computing platform, one that includes general-purpose servers, technology for virtualizing applications and securing data, new compute-intensive applications, compatibility with Web devices and a networking subsystem.



And these all have to work together in the same manner as an orchestra, Bradicich said.



"You have the players, the conductor and some combination of experts to play the music to do it right," he said. "You could play Beethoven with 100 accordion players. But to do it right, you need the right combination."



Increased efficiencies from combining compute resources will make it possible to build efficient cloud-computing infrastructures by eliminating the time-and-place constraints on computing, Bradicich said.



"We were able to eliminate time-and-place constraints by opening 24-hour ATM machines," he said. "Then, when the Internet became robust, we did it again. If you can do that with computing, we have the cloud. Then the efficiencies of IT rise dramatically."

Even with a fall in the cost of computing, customers are focusing less on capital costs and more on operating expenses, said Anthony Dina, director of server strategy at Dell.



As a result, customers are looking at commonality and standardization between different components, and are insisting on heterogeneous connectivity, especially between blade servers and networks, Dina said.



Ethernet is becoming the fabric of choice, especially as virtualization takes hold in the data center, said Larry Hart, director of storage strategy for Dell. "Instead of having to deal with Fibre Channel, InfiniBand and Ethernet, we feel customers prefer to manage one fabric, Ethernet, whether it's Gigabit Ethernet, 10-Gbit Ethernet or the upcoming 40-Gbit Ethernet," Hart said.



As more and more servers get virtualized and connected to a SAN, the need for storage administrators to work with Ethernet will be critical, Hart said.

"To provide a unified fabric, they will need to converge virtualization; systems management, including a common framework; and network simplification," he said.



The cost of such a unified fabric is still a barrier to adoption, but companies like Dell are working to make 10-Gbit Ethernet ports cheaper than 4-Gbit Fibre Channel ports, Dina said.



"But there are also political issues," he said. "Companies are reluctant to collapse their connectivity into a single fabric. But this is changing."



About half of current blade servers are attached to a SAN via Fibre Channel blade switches in the chassis, all of which could be eliminated with a converged fabric, Dina said.

Jim Ganthier, director of BladeSystem marketing in Hewlett-Packard's Enterprise Storage and Servers Division, said that data centers today are built on islands of expertise, including storage, LAN and servers, that have to be brought together.





"IT needs to exist for one reason only: help drive business," Ganthier said. "It needs to help businesses decrease risk, decrease costs and put information into the hands that need it. So instead of multiple individual islands, businesses should have pooled resources of assets that can be assigned and then put back into the pool."



Ganthier said that servers will not only continue to get faster, they will also stop being viewed as islands of processing. Instead, he said, servers will be more flexible, modular and scalable, and part of integrated systems.



As a result, instead of focusing on what the next processor will be, customers are starting to demand that their data centers help them get more out of their assets, Ganthier said.



"It will lead to huge changes in the way servers will be done," he said. "The next level of integration? We're way beyond that. People should be asking for quantum changes. We need to look at what we are doing for storage, power, cooling and management."

Sun has already started the move towards "modular computing," which is what it calls the future convergence of server, storage and networking, said CTO Shane Sigler.



The company already produces hybrid server/storage devices, in particular its Amber Road series introduced late last year.



Sun also recently reorganized to put its server, storage, Solaris operating system, virtualization and software businesses into a single Systems Group.



The moves are the result of a combination of factors, Sigler said: increased processor density, which allows up to 256 processor cores in a 4U enclosure; the increasing availability of solid-state disk storage; and improvements in networking as InfiniBand gets ready to move to 40 Gbps and 10-Gbit Ethernet starts to take off.



"A lot of this is about driving efficiencies, including power, cooling and software," he said. "It's about making it a lot more efficient from the customer perspective."

The increased use of server virtualization will have the greatest impact on the future development of servers, said Margaret Lewis, director of commercial solutions and software for AMD.



Server virtualization lets customers consolidate underutilized server resources to cut energy consumption, cooling requirements and space in the data center, Lewis said.
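
As a rough, purely illustrative sketch of the consolidation math Lewis describes (the server counts, utilization figures and capacity ratios below are assumptions, not figures from any vendor), the arithmetic might look like this in Python:

    import math

    def hosts_needed(num_servers, avg_utilization, host_capacity, headroom=0.75):
        """Estimate how many virtualized hosts can absorb a set of underutilized servers.

        num_servers     -- count of existing "pizza box" servers (hypothetical)
        avg_utilization -- their average utilization, 0.0 to 1.0 (hypothetical)
        host_capacity   -- capacity of one new multicore host, in old-server units (hypothetical)
        headroom        -- fraction of each host's capacity allowed for steady-state load
        """
        total_load = num_servers * avg_utilization      # combined steady-state load
        usable_per_host = host_capacity * headroom      # capacity budgeted per host
        return math.ceil(total_load / usable_per_host)

    # Example: 40 old servers averaging 10 percent utilization, moved onto
    # multicore hosts each rated at 8x the capacity of one old server.
    print(hosts_needed(40, 0.10, 8))   # -> 1 host carries the steady-state load

Under those assumed numbers, a single well-utilized host replaces the rack, which is where the energy, cooling and space savings Lewis cites come from.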



Part of that includes a drop in demand for "pizza-box" rack-mount servers and an increase in demand for blade servers to cut power consumption, she said.



However, Lewis said, cloud computing could drive demand toward less powerful, lower-power-consumption servers. Such servers also might not need the kind of reliability required of servers in traditional applications, because their reliability would come from clustering.



"If you have big clusters of computers catering to many customers' needs, we may not want more powerful computers," she said. "We may want higher density, or several more smaller processors. It depends on the cloud providers. Some want fewer but denser racks. Or they want single smaller boxes with low-power processors."



More powerful processors, on the other hand, will be the basis on which servers, storage, networking, and other devices and functions will converge into a single device, Lewis said.



"Multicore processors are important for multiple technologies," she said. "But we also look at what things can be moved from software into the processor. For example, VMware and Connectix, which was acquired by Microsoft, started doing virtualization in software, and then came to AMD and Intel to look at the workload and determine which functions could be put on the processor.

Servers today are described in terms of processor, memory and applications, but as customers start moving to computing clouds, their servers will become more of a mysterious "black box" inside a building, said Eric Doyle, channel enterprise marketing manager at Intel.



As server, storage and networking resources become more and more converged, enterprise customers will find computing to be more efficient and flexible, Doyle said.



Server virtualization offers a glimpse of how that will happen. A year ago, customers who bought new servers couldn't pool them with older servers, Doyle said. "But now, with virtualization, they can do flexible pooling," he said. "Storage is in that situation today. We're chasing the bottleneck. Before, we chased the server bottlenecks. Next, we'll chase the storage bottlenecks, and then the networking bottlenecks."



That comes thanks to Moore's Law, which after all these years is still working, Doyle said.



"A few years ago, people chased clock frequencies," he said. "But we are still getting more transistors on the processors, and using the extra processing not just to increase the clock frequency but also to improve storage and networking."



Intel's vision, and hope, is that within two years, storage and networking will be as easy to virtualize as servers are today, Doyle said.



"I want to be able to hit one switch, and virtualize my server, my storage and my networking for the appropriate workload," he said.