The Virtual Desktop: Everything Old Is New Again

Some technologies are like fashions: wait long enough and they come back in style. Witness the current upswing of the virtual desktop. Introduced by IBM in the 1960s, virtualization has popped up in various forms over the decades, only to fade away like a pair of old jeans.

Since VMware brought virtualization to x86 systems in 1999, processor makers have continued to boost the speed and power of their parts, driving up server density and fueling rapid consolidation of the data center. In this fertile ground, virtualization and its cloud-computing cousin have matured from niche technologies into widespread infrastructure relied upon by millions.

The capability to quickly and easily virtualize just about any operating environment has renewed enterprise IT departments' interest in centrally managing the desktop, and a number of off-the-shelf solutions can be sold for a relatively small up-front cost plus monthly maintenance.

One example is the SmartStyle Architecture from Zenith Infotech. This node-based private cloud employs one or more server nodes that virtualize the client operating system for IP-based delivery to new or existing client nodes. Benefits include remote control, centralized administration and real-time backup. Up-front costs to the reseller total around $2,000, and per-user recurring revenues can be whatever the market will bear. Options include server redundancy, remote administration and centralized data snapshots.

For building a private VDI for a small office or department, Hewlett-Packard offers solutions that include its ProLiant ML350 dual-Xeon server running VMware ESXi Server 4.1 along with a 64-bit Atom-based t5740 thin client. With this HP solution, resellers have the flexibility to deploy one or a combination of session-broker clients from Microsoft (RDP), Citrix (XenDesktop) and VMware (VMware View), or HP's own TeemTalk terminal emulation client for accessing legacy platforms. Server options include stand-alone, blade or rack-mount hardware, each with the usual complement of fail-over and backup options.
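
For resellers scripting client rollouts, those broker connections can be stamped out programmatically. The Python sketch below is a hypothetical helper, with a placeholder host name and display settings, that writes a minimal .rdp file pointing a client at an RDP session broker; it is not part of any HP or Microsoft tooling.

```python
# Hypothetical rollout helper: generate minimal .rdp connection files
# for clients behind an RDP session broker. The host name and option
# values are placeholders; the field names are standard .rdp settings.

RDP_TEMPLATE = """\
full address:s:{host}:3389
screen mode id:i:2
desktopwidth:i:1280
desktopheight:i:800
redirectprinters:i:1
audiomode:i:0
"""

def write_rdp_file(host, path):
    """Write a bare-bones .rdp file aimed at the given broker host."""
    with open(path, "w") as f:
        f.write(RDP_TEMPLATE.format(host=host))

write_rdp_file("vdi-broker.example.com", "pooled-desktop.rdp")
```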

With new technologies come new challenges, and large-scale VDI deployments invite a unique style of disaster.

Sondra Padalecki is a senior solutions architect for IT outsourcing and global solutions provider Worldwide TechServices, where she handles global sales support and client implementation and maintains a VDI blog. In her experience, the main shortfall in large-scale virtual desktop deployments has been insufficient I/O sizing. "You get I/O storms from rapid boot and concurrent log-ins," she said, and that often leads to a "frantic rush and a significant cost in additional equipment to overcome these issues."
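
A back-of-the-envelope calculation shows how badly a boot storm can outrun steady-state sizing. All of the figures in this Python sketch are illustrative assumptions, not vendor measurements:

```python
import math

# Rough I/O sizing for a VDI boot storm. Every number below is an
# illustrative assumption, not a vendor specification.
desktops    = 200   # concurrent virtual desktops
steady_iops = 10    # per-desktop IOPS once logged in (assumed)
boot_iops   = 80    # per-desktop IOPS during boot/log-in (assumed)
disk_iops   = 180   # IOPS one 15K spindle sustains (common rule of thumb)

steady_load = desktops * steady_iops   #  2,000 IOPS
storm_load  = desktops * boot_iops     # 16,000 IOPS

print(f"Steady state: {steady_load:,} IOPS -> "
      f"{math.ceil(steady_load / disk_iops)} spindles")
print(f"Boot storm:   {storm_load:,} IOPS -> "
      f"{math.ceil(storm_load / disk_iops)} spindles")
# Sizing for the average instead of the storm is exactly the shortfall
# Padalecki describes.
```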

Alex Miroshnichenko has spent a lot of time optimizing I/O stacks. As founder and CTO of Virsto, a company focused on eliminating I/O bottlenecks in highly virtualized systems, he explained why the problem exists in the first place. "Every operating system has a huge number of I/O caches, page caches, database flushes, etc. Once you implement those as guests on a server, those I/Os have to go through a limited number of tunnels in the physical I/O channel, where they choke up."

As the number of virtualized desktops on each server grows, so does the number of highly random, independent disk I/O operations, all mixing together in what has come to be known as the VM I/O blender. "We realized that the core problem is that the I/O blender, the highly random I/O going to disk, kills disk performance no matter what hypervisor you use," Miroshnichenko said.
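
A toy simulation makes the blender effect concrete. In the Python sketch below, each guest issues a perfectly sequential stream of block addresses, yet round-robin interleaving at the hypervisor leaves the merged stream with essentially no sequential runs; the guest count and block ranges are arbitrary choices:

```python
# Toy model of the VM I/O blender: every guest is sequential on its
# own, but interleaving at the hypervisor makes the disk see chaos.

def sequential_fraction(stream):
    """Fraction of requests whose block immediately follows the last one."""
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / (len(stream) - 1)

# 20 guests, each reading 1,000 consecutive blocks from its own region.
vm_streams = [list(range(base, base + 1000))
              for base in range(0, 20_000_000, 1_000_000)]

print(f"single VM: {sequential_fraction(vm_streams[0]):.0%} sequential")

# Round-robin interleaving, one request per guest per turn.
blended = [s.pop(0) for _ in range(1000) for s in vm_streams if s]
print(f"blended:   {sequential_fraction(blended):.0%} sequential")
```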

Virsto One works by turning those randomized I/O requests into a sequential stream and feeding them to the storage system at the highest rate it can sustain. Miroshnichenko claims that, depending on hardware, Virsto One can improve performance by as much as three times. The product is currently available for Microsoft's Hyper-V; a version for VMware is under development and expected later this year.
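
The generic idea behind such sequentialization is log-structured storage: append every write to one sequential log and keep a map from each virtual block to its latest log offset. The Python sketch below illustrates only that general pattern; it is not Virsto's actual design or API:

```python
# Generic log-structured sketch, not Virsto's implementation: random
# guest writes become sequential appends plus a map update.

class SequentialLog:
    def __init__(self):
        self.log = []        # append-only list standing in for a log device
        self.block_map = {}  # virtual block number -> offset in the log

    def write(self, vblock, data):
        """A random-address write turns into a sequential append."""
        self.block_map[vblock] = len(self.log)
        self.log.append(data)

    def read(self, vblock):
        """Reads follow the map to the block's latest home in the log."""
        return self.log[self.block_map[vblock]]

store = SequentialLog()
for vblock in (9041, 7, 52118, 300):     # scattered guest addresses
    store.write(vblock, f"payload-{vblock}")
print(store.read(52118))                 # -> payload-52118
```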

Of course, resellers need to work with enterprise IT staff to extensively test a VDI system before deployment. Until recently, though, major infrastructure companies were not always forthcoming with the best testing tools, says Padalecki. "Microsoft, VMware and Citrix have internal tools they can use for testing, [but they're] not available to individual companies unless they engage one of these vendors." That has changed as ITKO, Scapa Technologies and other traditional software test-tool makers have caught up with the technology.
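
Even without the vendors' suites, resellers can run crude smoke tests of their own. The Python sketch below times concurrent TCP handshakes against a broker's RDP port as a rough stand-in for a log-in storm; the host name, port and session count are placeholders, and the test complements rather than replaces the dedicated VDI tools above:

```python
import socket
import time
from concurrent.futures import ThreadPoolExecutor

# Crude log-in-storm smoke test: open many TCP connections to the
# session broker's RDP port at once and time the handshakes.
# BROKER is a placeholder; point it at a real lab host.
BROKER, PORT, SESSIONS = "vdi-broker.example.com", 3389, 50

def connect_once(_):
    start = time.perf_counter()
    try:
        with socket.create_connection((BROKER, PORT), timeout=5):
            return time.perf_counter() - start
    except OSError:
        return None   # refused or timed out under load

with ThreadPoolExecutor(max_workers=SESSIONS) as pool:
    results = list(pool.map(connect_once, range(SESSIONS)))

ok = [r for r in results if r is not None]
print(f"{len(ok)}/{SESSIONS} connects, worst handshake {max(ok):.3f}s"
      if ok else "all connects failed")
```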

Aside from the technology itself, the obstacles to VDI adoption are the same as those of any other CapEx project. "The biggest obstacle is getting your audience to understand it, more specifically your finance dept," says Padalecki. "It always starts with creating the great business case, understanding licensing, scaling your storage and network environment, and of course, cost."

As with any solutions project, the first step is to know the requirements. Identify, document and double-check them, and revisit them early and often during the project. Scope the solution from end to end: include all local and remote users, both current and future; calculate current licensing as well as the cost of ongoing maintenance; and test the infrastructure not only with testing tools but with the type and number of applications the systems will be expected to support.
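
A simple model can anchor that business case before the finance conversation starts. Every figure in the Python sketch below is an assumption to be replaced with real vendor quotes:

```python
# Toy annual cost model for the line items Padalecki lists. All
# numbers are placeholder assumptions, not real pricing.
users            = 150
license_per_user = 110.00  # desktop/connection licensing per user-year (assumed)
gb_per_user      = 40      # persistent image + profile storage (assumed)
cost_per_gb      = 1.50    # per GB-year, including snapshots (assumed)
maintenance_rate = 0.18    # annual maintenance as a share of licensing (assumed)

licensing   = users * license_per_user
storage     = users * gb_per_user * cost_per_gb
maintenance = licensing * maintenance_rate

print(f"Licensing:   ${licensing:>10,.2f}/yr")
print(f"Storage:     ${storage:>10,.2f}/yr")
print(f"Maintenance: ${maintenance:>10,.2f}/yr")
print(f"Total:       ${licensing + storage + maintenance:>10,.2f}/yr")
```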

Most signs indicate that a virtualized world is headed our way. As transport and display protocols mature, they are likely to consolidate and become standardized, making cloud-based VDI a virtual certainty. "Companies have to find a way to control costs and work smarter," said Padalecki. "I think we will see more VDI in the cloud as security gets better [and we] have a global workforce we have to be able to accommodate and [allow to] collaborate in a seamless environment."