At just 600 Kbytes (smaller than some device drivers), Microsoft's Hyper-V hypervisor is poised to change the virtualization landscape. Like Xen, the virtualization technology from the now Citrix-owned XenSource Inc., Hyper-V maintains a microkernel and takes advantage of the Windows driver model without binding device drivers into the hypervisor itself. The kernel remains at 600 Kbytes even after the service has been turned on; in other words, Hyper-V does not carry VMware's more expansive device driver architecture.
Hyper-V uses a virtual network switch driver to control Windows Server network I/O. The switch abstracts the physical adapter by shutting down the adapter's internal services, inserting itself between the adapter and the virtual machines (including the applications and services running on Windows Server 2008), and then turning on equivalent services of its own. Once in place, the switch mediates all traffic between the virtual machines and the Windows Server 2008 physical adapter.
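To make the switch's role concrete, here is a minimal Python sketch of the idea: a software switch owns the ports, delivers frames addressed to attached VMs locally, and hands everything else to the physical adapter. The class names, MAC addresses and adapter name are all invented for illustration; this is a toy model, not Microsoft's implementation.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    src_mac: str
    dst_mac: str
    payload: bytes

class VirtualSwitch:
    """Toy virtual switch sitting between a physical adapter and VMs."""

    def __init__(self, physical_adapter_name: str):
        self.physical_adapter = physical_adapter_name
        self.ports = {}  # MAC address -> per-VM receive queue

    def attach_vm(self, mac: str) -> list:
        """Give a VM a port on the switch; returns its receive queue."""
        queue = []
        self.ports[mac] = queue
        return queue

    def forward(self, frame: Frame) -> str:
        """Deliver a frame to an attached VM's port, or send it out the
        physical adapter when the destination MAC is not local."""
        if frame.dst_mac in self.ports:
            self.ports[frame.dst_mac].append(frame)
            return "delivered to VM"
        return f"sent via {self.physical_adapter}"

switch = VirtualSwitch("Intel PRO/1000")  # adapter name is made up
vm1_rx = switch.attach_vm("00:15:5D:00:00:01")
result = switch.forward(Frame("00:15:5D:00:00:02", "00:15:5D:00:00:01", b"ping"))
print(result)  # a frame for an attached VM never touches the physical NIC
```

The key design point the sketch captures is that the switch, not the adapter, decides where each frame goes, which is what lets Hyper-V slide it underneath both the VMs and the host's own networking services.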
Some early problems, none of them showstoppers, became evident during testing. While creating a Windows Server 2003 virtual machine, lab reviewers, with the acknowledgement of Microsoft engineers, found a small bug in Hyper-V (keep in mind that Hyper-V is still in beta): Hyper-V failed to automatically recognize the network adapters, and the virtual network switch did not bind itself to an adapter. The problem surfaced on a Gateway quad-core Xeon server running Windows Server 2008 Enterprise with two built-in NICs, only one of which was connected to the lab LAN when the bug was discovered. And that's in addition to the blue-screening with VMware noted earlier. It was aggravating, but it's a problem that can be resolved.
Testing also turned up some erratic behavior when creating switches in Hyper-V's Virtual Network Manager. Reviewers had to activate networking in the correct sequence among Windows Server 2008, Hyper-V and a virtual machine; otherwise, virtual machines could not properly load Hyper-V's Integration Services disk. Moreover, older operating systems such as Windows Server 2003 require that Service Pack 2 be installed before the disk can load.
Once the integration components are installed on Windows Server 2003, solution providers should open Device Manager to verify that Hyper-V's VMBus driver inserted itself and created a virtual network stack; Windows Plug and Play kicks in and places the drivers in the right location. This is the best way to confirm that the networking stack was installed properly.
If you make it through those minor rubs, the potential becomes jaw-dropping. (Yes, jaws actually dropped when it was installed in the Test Center.)
During our testing of Windows Server 2008, it was possible to install Windows Server 2003, Ubuntu, Fedora 8 and openSUSE operating systems on Hyper-V-based virtual machines. In each case, Microsoft's management console allowed easy changes to memory and hard drive allocation, networking and other functions. And, remember, consolidating servers means consolidating other tasks that all cost money. Using Microsoft's Windows Server Backup in the lab, it was possible to schedule a backup of the entire system and its VMs to a NAS device. Backing up the equivalent of four servers took a few minutes, using software that will simply be included in Microsoft's operating system.
Microsoft has also built Server 2008 to be more efficient than past OSes during installation. Server 2003, for example, loads every possible server feature onto the system during installation; administrators must then disable, one by one, the features they don't need or want. In Server 2008, though, Microsoft has done the opposite: It loads very few features during installation, so administrators only have to enable what they need. The result is that installing Server 2008 takes a fraction of the time it takes to install Server 2003.
Windows Server 2008 was tested on the same hardware on which we tested Windows Server 2003, so companies can run the newer operating system on the same physical servers. However, if you're going to upgrade, it's best to use quad-core servers with between 8 and 16 Gbytes of memory when running native operating system services alongside virtual machines on the same physical server; with less memory, launching a virtual machine can stall against memory limitations.
The VMBus architecture is a virtual I/O bus designed to maximize I/O performance. Essentially, VMBus provides a large shared-memory channel between virtual machine partitions so that operating systems can transfer data through shared memory buffers. VMBus does not depend on physical I/O devices; it is a bus that resides entirely in software.
VMBus works like a client/server architecture: It controls communication between a provider and a client. In this case, the provider and client work on a virtual I/O stack. The client is just a driver that plugs into the virtual I/O stack. In fact, the client is a miniport that resides at the bottom of the I/O stack and it is shown in a VM's Device Manager tree. By contrast, the provider runs internally and makes use of the physical hardware devices.
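The provider/client split described above can be sketched in a few lines of Python. Here a pair of in-memory queues stands in for VMBus's shared-memory channel; the Provider plays the role of the parent-partition service that touches real hardware, and the Client is the thin driver at the bottom of the guest's virtual I/O stack. All class and method names are illustrative assumptions, not actual Hyper-V APIs.

```python
from collections import deque

class Channel:
    """Stand-in for the shared-memory channel between partitions."""
    def __init__(self):
        self.requests = deque()
        self.responses = deque()

class Provider:
    """Runs in the parent partition and drives the physical hardware."""
    def __init__(self, channel: Channel):
        self.channel = channel

    def service(self):
        # Drain pending guest requests and post completions back.
        while self.channel.requests:
            request = self.channel.requests.popleft()
            # A real provider would touch the device here; we just ack.
            self.channel.responses.append(f"completed:{request}")

class Client:
    """Thin driver at the bottom of the guest's virtual I/O stack."""
    def __init__(self, channel: Channel):
        self.channel = channel

    def submit(self, request: str):
        self.channel.requests.append(request)

    def poll(self):
        return self.channel.responses.popleft() if self.channel.responses else None

channel = Channel()
client, provider = Client(channel), Provider(channel)
client.submit("read-block-42")  # guest issues an I/O request
provider.service()              # parent partition services it
print(client.poll())            # prints "completed:read-block-42"
```

The point of the split is that the guest-side client never touches hardware at all; it only moves requests and completions through the channel, which is why the client can be a small miniport driver while the provider keeps all the device-specific code.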
By abstracting the I/O stack, VMBus can encapsulate just about any hardware controller. For instance, it can encapsulate SCSI commands and control them through a virtualization service client. Here, Hyper-V's microkernel hypervisor has an edge over VMware's because it does not depend on I/O transfers crossing its kernel; with VMware's hypervisor, driver execution is embedded in the kernel, so I/O activity affects the kernel as a whole.
VMBus uses a similar architecture on the network stack: the virtual network uses Network Driver Interface Specification (NDIS) commands. Video is also virtualized using this architecture.
Microsoft has also simplified clustering VMs on physical servers; just pointing to the VMs makes them cluster. This feature will let Web servers, for instance, scale up and remain highly available.