How Virtualization Is Revolutionizing Business Data Centers

Charlotte Observer

What to do? The newspaper reversed course. Rather than deploy more servers, it has spent the past 14 months eliminating them by morphing single-purpose servers into virtual machines within host computers. Space opened up as the number of servers--currently 120--was reduced, says Geoff Shorter, the Observer's IT infrastructure manager. Power and cooling costs were cut in half, from $7,000 a month to about $3,500. Shorter now expects to be down to around 30 quad-core servers in three years, and he envisions shrinking the data center to 800 square feet, less than half its current size. "We plan to virtualize every server we have," he says.

Virtualization--the science of creating multiple self-contained application environments on a single physical server--is altering the way businesses manage computing resources and changing the skills they expect from their IT staffs. In an industry driven by--and just as often disappointed by--the Next Big Thing, virtualization is just that, delivering in spades on the promise of efficient hardware utilization, better resource allocation, flexible application services, and lower costs.

But there are pitfalls. Moving to a virtualized environment isn't cheap. Security is a wild card. Application performance can suffer. And vendor support for virtualized applications can be problematic. In short, the proliferation of virtual machines threatens to cause some of the same issues associated with the proliferation of servers.

But managed correctly--and technologies and techniques are emerging to do just that--virtualization gives IT managers a powerful set of tools to revamp data centers and harness the complexity that threatens to overwhelm them.



Small is beautiful in the virtual world, says VMware's Greene

Swarms of IT professionals descended on last month's VMworld conference (sponsored by market leader and EMC subsidiary VMware) to figure out how to get virtualization right. When it started two years ago, VMworld drew 1,200 devotees. This year, with attendance mushrooming to 7,000, it was moved from San Francisco's Moscone West Hall (capacity 4,000) to the cavernous Los Angeles Convention Center. "This whole community has the vibe of the early Mac community," said Andy Bechtolsheim, chief architect of Sun's Network Systems Group, at the event.

Walking a convention center hallway, Chuck Timm, a young network engineer, explains how he got from Edward Hospital in Naperville, Ill., to palm-studded Los Angeles. Like the Charlotte Observer, the hospital is running out of space in its data center. "I gave management the option of sending me to this conference or blowing out the walls," Timm says.

Right now, Fidelity National relies on virtual machines primarily as its software development servers, with about five VMs on each physical server. That's often a first step with virtualization, because software testing requires different operating environments. If Fidelity National deployed physical servers each time it needed a new machine, they'd be grossly underutilized, Ostager says. The next step for the company: running some business applications in VMs.

United Technologies' Pratt & Whitney subsidiary has moved beyond software testing into virtualized applications, Web servers, and Oracle and Microsoft databases. It's using VMware's ESX Server to run 110 virtual machines on 16 physical servers. "We're getting the appropriate people trained and feeling more confident with the process," says John Panatonni, IT systems designer.

A Virtual Primer

Virtualization has added its own lexicon to the computer industry. Here are a few terms:

>> Virtual Machine

One instance of an operating system running an application on a host computer within defined limits, such as the CPU cycles it may consume.

>> Hypervisor

An operating system kernel that answers calls from multiple virtual machines and translates them into requests to the underlying hardware, increasing efficiency.

>> Paravirtualization

A technique that allows a hypervisor to interact with the host operating system's device drivers; used by the Xen open source virtual machine engine.

>> Virtual Appliance

A combo of application and operating system optimized to run together and converted into a virtual file, ready to run in a virtual machine.

>> Virtual Hard Disk Format

Microsoft's format for virtual files. Microsoft has pledged to keep it open so other vendors' software may be virtualized with it and run in a Windows environment.

On a bus from VMworld back to his hotel at the end of a show day, Ahmed Mashaal, a curly-haired young man with a beard, tells how the FCC's wireless and telecommunications division has virtualized much of its IT infrastructure, including Web servers, development servers, and database servers. About three-quarters of the FCC's telecom applications run in VMs. Mashaal, a former IT staffer at the FCC and now a contract programmer there, says a VM is as easy to manage as a file on a PC. "You can copy it, back it up, move it over to a laptop," he says. "The flexibility is very powerful."
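Mashaal's point that a VM is as easy to manage as a file can be sketched in a few shell commands. The directory layout and file names below are invented for illustration, assuming a VMware-style layout in which a VM is a configuration file (.vmx) plus a virtual disk (.vmdk):

```shell
# Hypothetical example: a VM is just a handful of ordinary files,
# so standard file tools can copy, archive, and move it.
# (All paths and file names here are made up for illustration.)
mkdir -p /tmp/vms/webserver01
touch /tmp/vms/webserver01/webserver01.vmx    # VM configuration file
touch /tmp/vms/webserver01/webserver01.vmdk   # virtual disk file

# "You can copy it" -- clone the VM by copying its directory
cp -r /tmp/vms/webserver01 /tmp/vms/webserver01-clone

# "Back it up" -- archive it like any other set of files
tar -czf /tmp/webserver01-backup.tar.gz -C /tmp/vms webserver01
```

The same file operations, pointed at a laptop or a remote share, cover "move it over to a laptop" as well; that file-like handling is the flexibility Mashaal describes.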

VMware president Diane Greene says virtualization technology lets data center managers focus on smaller software units--those optimized to run in virtual machines--rather than monolithic software wed to standalone servers. But she's not preaching revolution: Adopters of virtualization can start slowly and proceed application by application. "Virtualization is the least disruptive of disruptive technologies," she says.

Part of the excitement around virtualization is that many adopters are moving beyond the server consolidation phase. In a study published in August based on 150 interviews with early implementors, Andi Mann, an IT consultant with Enterprise Management Associates, identified disaster recovery and business continuity as the No. 1 driver of virtualization. Copies of a company's software combinations--operating system, applications, databases--can be built and moved to off-site computers, ready to be powered up in virtual machines at a moment's notice. That "instantaneous recovery," as Greene puts it, is possible without an investment in dedicated hardware, because virtualized software can run in VMs on whatever hardware is available. Two data centers can serve as hot standbys for each other.
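The hot-standby idea boils down to keeping current copies of VM image files at a second location, ready to boot on whatever hardware is there. A minimal local sketch, with made-up directory names standing in for the two data centers (a real setup would replicate over the network, for example with rsync over SSH, on a schedule):

```shell
# Hypothetical sketch: a virtualized workload is just files (disk image
# plus config), so a standby site can hold warm copies of everything.
# Directory names are invented for illustration.
mkdir -p /tmp/primary/vm-images /tmp/standby/vm-images
touch /tmp/primary/vm-images/circulation-app.img   # stand-in for a VM disk image

# Replicate the primary site's images to the standby location
cp -a /tmp/primary/vm-images/. /tmp/standby/vm-images/

# On failure, the standby host boots the copy in a VM on whatever
# hardware is available -- no dedicated spare server required.
ls /tmp/standby/vm-images
```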

The same principle holds true for data center applications. The Charlotte Observer is virtualizing its circulation applications, which run on an Oracle database and are key to delivering newspapers, billing customers, and activating new subscribers. If a circulation app should fail, another can be started from a disk drive and run in a VM on any available server.


What's not immediately apparent about virtualization is that it fundamentally changes the relationship between software and underlying hardware. An application usually runs on an operating system that's been tuned to a particular piece of hardware. Virtualization breaks the operating system's tie to the hardware, then re-establishes it directly with the application itself.

That's led to so-called virtual appliances. Instead of being optimized to hardware, the operating system is configured to run a particular application, then operating system and application are combined in a virtual file format, ready to run in a VM. Virtual appliances can be downloaded from the Web and moved from one server to another. VMware has a virtual appliance marketplace, where 347 preconfigured applications are available.

When a virtual appliance is downloaded, it's ready to run, without the painful setup sometimes associated with new applications. The use of such virtualized files, says Rich Lechner, IBM's VP of virtualization, could knock down the maintenance portion of IT budgets--historically 70% of overall costs--by as much as 20%.

The flexibility provided by virtualization is one of the keys to getting away from the data center's collection of rigid silos and to achieving a more adaptive enterprise. But proliferating virtual machines pose their own challenges.

Systems management tools tend to see either physical or virtual servers, not both. Only CA's Unicenter ASM 11.1 and tools from a few startups can portray both types of servers and map between them. So far, virtualization vendors, including VMware and Virtual Iron Software, have been moving faster than systems management vendors such as Hewlett-Packard and IBM; systems administrators want virtualization management closely integrated with existing systems management consoles.

Mashaal, the FCC contractor and a user of VMware's Virtual Center console, says new tools make it easy to generate as many VMs as you want. Deleting them is nearly as easy, so you'd better be careful. He witnessed an IT manager choose the wrong command and wipe out a VM "like you'd delete a Word document." Fortunately, with the backup and recovery features built into the management console, the team was able to restore the VM in about 15 minutes.

Another gotcha: There can be problems getting technical support when running a packaged application in a VM. Support specialists don't know whether the problem lies with the application or another vendor's virtual machine software, so they may insist that the problem be duplicated on a real server before they provide support. Edward Hospital's Timm had that experience with Microsoft support.

Not all the security vulnerabilities connected with VMs have been discovered, warns Gordon Haff, an analyst with Illuminata. When you update a server operating system, there are few patching tools that ensure that each VM operating system gets patched as well. "What if a virtual machine is running that didn't get a security patch? It's foolish to say, 'No one knew it was there,' once an intruder has capitalized on it," Haff says.

And who's guaranteeing the security of VMs, especially in hosted settings where one company's application could be running just nanoseconds away from a competitor's? Fidelity National's Ostager isn't worried. Part of the beauty of VMs, he says, is that they can't transgress the memory space allotted by a systems administrator. They're self-policing, which affords the flexibility to deploy applications alongside one another on one physical server and not have them conflict.

Service-oriented architectures are a little less daunting with virtualization. Services are small applications that need to be available quickly, and one way to do that is to format them to run in VMs. Virtualization enables SOAs by making it easier to manage those applications as a series of small services, each running in its own isolated environment, says IBM's Lechner. But to get there, IT staffs will have to convert 90% of their applications to files running in VMs.


IBM, HP, and Sun Microsystems have had proprietary forms of virtualization for years. IBM used virtualization in its VM operating system, which ran "guests" on the mainframe. HP can generate VMs with its Virtual Server Environment, and it teams with VMware to market the Virtual Desktop Infrastructure, which gives end users virtual applications off a central server.

Basic virtualization technology, such as VMware's original GSX Server and Microsoft's Virtual Server, duplicates an operating system and application in each VM and relies on a host operating system to pass instructions to the hardware. Operating system replication makes VMs voracious memory hogs. In the case of Windows, it adds significant license charges, too.

VMware's more advanced technology, ESX Server, and the open source Xen virtual machine engine are what are called hypervisors: a single piece of virtualization software handles the calls for hardware resources from multiple VMs on a server. That's more efficient because one hypervisor can support many VMs and hand software instructions to the hardware directly rather than through an operating system layer.
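As a rough illustration of the hypervisor model, a Xen guest is described by a small text configuration that tells the hypervisor what resources to carve out for that VM. The values below are invented for illustration; the keys (name, memory, kernel, disk, vif) follow the Xen-style config format of the era:

```shell
# Write a hypothetical Xen-style guest definition. The hypervisor, not a
# full host OS, multiplexes hardware among guests defined this way.
cat > /tmp/guest01.cfg <<'EOF'
name    = "guest01"
memory  = 512                      # MB of RAM allotted to this VM
kernel  = "/boot/vmlinuz-xenU"     # guest kernel booted by the hypervisor
disk    = ['file:/var/xen/guest01.img,xvda,w']
vif     = ['bridge=xenbr0']
EOF

# On an actual Xen host, one would then start the guest, e.g.:
#   xm create /tmp/guest01.cfg
grep '^name' /tmp/guest01.cfg
```

Each additional VM is another small config like this one, which is why a single hypervisor can juggle many guests without replicating a full operating system per VM.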

All virtualization technology imposes a performance penalty, up to 15% by some estimates. And performance is a sensitive topic: VMware requires in its contracts that customers not disclose performance data regarding its products unless it has reviewed and approved the test process. So far, nobody's taken it up on the offer, a spokesperson says.

At the Observer, Shorter has run tests on Virtual Iron's Virtual Iron 3, which is based on the Xen hypervisor. A circulation application that takes 45 minutes to run natively on a 3-year-old Sun 3800 four-way server takes 11 to 13 minutes to run unvirtualized on a Dell 2850 dual-core server, he says. Run again in a Virtual Iron virtual machine, it takes the same 11 to 13 minutes. "We were comparing apples to apples," he says, proving virtualization can be engineered to avoid most of the performance hit.

Sun's virtualization technology, known as "containers," lets several applications run under one Solaris 10 operating system. The overhead is "close to zero," says Sun product line manager Joost Pronk van Hoogeveen. Sun has demonstrated a four-way Solaris server running 500 containers, each running an application, though van Hoogeveen concedes that in the demo "none of them was doing very much."

Virtuozzo from startup SWsoft does the same thing for Windows and Linux. With Virtuozzo, several applications can run under a single copy of the operating system. Most companies are running one kind of operating system on a server, even if it's in several virtual machines, says SWsoft marketing manager Carla Safigan. "They don't necessarily want to mix Windows and Linux," she says.


VMware, founded in 1998, emerged as the virtualization market leader in 2004, displacing IBM. VMware dominates the field for virtualizing servers based on Intel's X86 architecture, a popular but underutilized resource in most data centers. Placing virtual machines on multicore servers, the latest hardware trend, is one of the few ways to capitalize on the full capabilities of those machines without rewriting applications into parallel systems, a difficult task.

An open source project may offer a compelling price-performance alternative to VMware. Simon Crosby, CTO of XenSource, the company behind the open source Xen engine, is a Cambridge University-educated South African with a mop of rebellious reddish-brown hair. He says IT managers should focus more closely on the advantages of the Xen VM engine, which is built on a "paravirtualization" approach. That means, under Xen, the Linux or Windows operating system is aware it's been virtualized and requires the hypervisor to do less work communicating with the server's hardware devices.

As a result, XenSource doesn't need to supply hardware device drivers for each server platform it supports. Many of VMware's 2,400 employees are engaged in producing device drivers for ESX Server, Crosby claims, while XenSource can rely on the drivers provided by Red Hat or Suse Linux, or in the case of Windows, those supplied by Microsoft, letting Xen offer 80% of the functionality of VMware at 20% of the cost. VMware charges $3,750 per ESX Server installation compared with $750 for a XenSource-supported version of Xen.

Microsoft is positioning itself for a major role in the virtualization market. Right now, however, it's a laggard, offering Virtual Server as a VM engine under Windows when both VMware and XenSource have moved to the hypervisor stage. At the FCC, Mashaal has tested Microsoft's Virtual Server against VMware's ESX Server and says Microsoft is at least three years behind.


Less is more, says Shorter, as he eliminates servers

Getting Windows into the emerging virtual appliance marketplace may prove difficult. Windows isn't constructed modularly the way Linux is, so components can't easily be added or removed, and that modularity is what lets an operating system be pared down to fit a particular application. So far, the virtual appliance market has been almost exclusively a Linux play.

Microsoft has enlisted XenSource and Novell for help in bringing Linux virtualization to Windows Longhorn, the server version of Vista. Sometime after Longhorn's release next year, Microsoft will add its still-in-development Viridian hypervisor, which will run Linux as a VM.

If all goes according to plan, in the data center of tomorrow, systems administrators will manipulate virtual appliances, virtualized files, and a mix of virtual machines to allocate and reallocate resources. But it won't happen overnight.

"Virtualization is an evolutionary path," says Mendel Rosenblum, a Stanford University associate professor and co-founder and chief scientist of VMware. Rosenblum did the original work on virtualizing the X86 architecture, and at VMworld he enjoyed guru status. He's also Greene's husband, and echoes her thoughts on the step-by-step nature of virtualization. "You can start with server consolidation, then drop in virtual appliances," he says. "The end result is a radical change."

Radical change without the revolution. It explains virtualization's rapid acceptance--and promising future.

Illustration by Ryan Etter

Continue to the sidebar:
Virtualization Spawns Startup Companies