Next Year's Data Center ... In A Box
It's a dream no more.
With the delivery of Intel Corp.'s Nehalem platform for servers, combined with Microsoft Corp.'s Windows Server 2008 and Hyper-V virtualization software, everything has changed. In the span of less than an hour, it's now possible to boot up a server and deploy 20 virtual servers in the confines of a 1U rackable box.
And that box consumes no more than about 120 watts of power and runs no hotter than roughly 87 degrees Fahrenheit, all while hosting 20 virtual servers that can each run a moderate workload in a stable environment. The result: Solution providers, now more than ever, will need to understand the current state of a business's power, cooling and infrastructure needs so that the gains made in deploying these new servers aren't lost in back-room inefficiency. Here's what you need to know:
When Intel launched its latest server processor platforms, including the Xeon 5570, it began shipping to market the most powerful industry-standard CPU yet created.
The New Technology
The CRN Test Center spent several days evaluating an AsusTek server built with a 2.93GHz Intel Xeon 5570, 24 GB of memory and two 300-GB hard disk drives. Windows Server 2008 Enterprise Edition was loaded onto the system to get started. First, benchmark testing with Primate Labs' Geekbench 2.1.1 in 64-bit mode produced a score of 14,981. That compares with a score of 7,912 that the Test Center measured in November when it tested a then-new server running an Advanced Micro Devices "Shanghai" processor. It is simply the highest benchmark score ever recorded in the CRN Test Center lab. (In 32-bit mode, the Geekbench score came in at 13,600.)
But Intel has designed this latest round of server processors to be about more than just benchmark speed. The company has specifically said the platform was built for, among other things, faster virtualization. We tried it out.
With Server 2008 running, we created 20 additional virtual Server 2008 instances (through cloning) in about 30 minutes. We then set out to launch the 20 virtual servers on the test box, assigning each 1 GB of memory. One at a time, we launched each virtual server, started a workload and then moved on to launch the next. All down the line, not a single server took more than a minute to boot, even with 19 other servers doing work, all running off the same CPU. We did begin to notice some slowing in the launch of VMs once we got to 20, as we reached the host system's memory limit. However, all VMs continued running and performing their assigned workloads without a hiccup.
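For solution providers repeating this kind of rollout, it's easy to put a stopwatch on each guest rather than eyeball the console. The sketch below is one minimal way to do it, assuming the guests have known IP addresses and expose a known TCP port (Remote Desktop on 3389 is used here); the addresses, port and timeout are placeholders for illustration, not part of the Test Center's setup.

# Rough sketch: time how long each virtual server takes to come online by
# polling a TCP port (here, RDP on 3389). The guest IP addresses and port
# are assumptions for illustration, not the Test Center's configuration.
import socket
import time

GUEST_ADDRESSES = ["192.168.1.%d" % n for n in range(101, 121)]  # hypothetical IPs
PORT = 3389              # RDP; swap in any port the guests are known to expose
TIMEOUT_SECONDS = 120

def seconds_until_reachable(host, port, timeout):
    """Poll host:port once per second; return elapsed seconds, or None on timeout."""
    start = time.time()
    while time.time() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=2):
                return time.time() - start
        except OSError:
            time.sleep(1)
    return None

for host in GUEST_ADDRESSES:
    elapsed = seconds_until_reachable(host, PORT, TIMEOUT_SECONDS)
    if elapsed is None:
        print("%s: not reachable within %d seconds" % (host, TIMEOUT_SECONDS))
    else:
        print("%s: online after %.1f seconds" % (host, elapsed))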
When we added it all up, we were able to create the equivalent of a 21-server data center, with each server running a workload, in less than an hour. (That includes the host as well as the VMs.) From scratch. On startup, the server consumed 118 watts of power, and that figure remained stable during testing on a variety of workloads. The unit never rose above 87 degrees Fahrenheit. Taken together, the implications for next-generation data centers, from a performance as well as an energy-efficiency point of view, will be profound.
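A quick back-of-the-envelope calculation puts that 118-watt figure in perspective. The numbers for the conventional one-workload-per-box alternative below (250 watts per standalone server, 10 cents per kilowatt-hour) are assumptions chosen purely for illustration; only the 118-watt host measurement comes from our testing.

# Back-of-the-envelope comparison: 21 workloads on one consolidated host vs.
# 21 discrete servers. The per-box draw and electricity rate are hypothetical;
# only the 118-watt measurement comes from the Test Center's evaluation.
MEASURED_HOST_WATTS = 118
WORKLOADS = 21                  # host OS plus 20 virtual servers
ASSUMED_WATTS_PER_BOX = 250     # hypothetical draw of a standalone 1U server
RATE_PER_KWH = 0.10             # hypothetical electricity rate, dollars

HOURS_PER_YEAR = 24 * 365

watts_per_workload = MEASURED_HOST_WATTS / WORKLOADS
consolidated_kwh = MEASURED_HOST_WATTS * HOURS_PER_YEAR / 1000
discrete_kwh = ASSUMED_WATTS_PER_BOX * WORKLOADS * HOURS_PER_YEAR / 1000

print("Watts per workload (consolidated): %.1f" % watts_per_workload)  # about 5.6 W
print("Annual energy, consolidated: %.0f kWh" % consolidated_kwh)      # about 1,034 kWh
print("Annual energy, 21 discrete boxes: %.0f kWh" % discrete_kwh)     # about 45,990 kWh
print("Annual savings at $%.2f/kWh: $%.0f" % (RATE_PER_KWH,
      (discrete_kwh - consolidated_kwh) * RATE_PER_KWH))               # about $4,500, before cooling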
Ever since its launch little more than a year ago, Windows Server 2008 has shown huge potential to radically transform the data center. The one big hurdle: CPU and other hardware support to make full use of the software's potential. With the Asus server built with the Xeon 5570 processor that we examined, we believe that hurdle has finally been cleared. Solution providers now have a single-CPU solution that carries the weight of significant virtualization deployments in a highly manageable way.
We believe it won't take long for the market to acknowledge the power of this technology and begin deploying it as enterprises strive for savings in energy, management costs and real estate. (Indications are clear that many businesses are seeking additional revenue or cost-saving opportunities by turning big, old data center areas into work space.)
As enterprises of all sizes seek to leverage the power of the new Intel platform with various virtualization solutions, including Microsoft's, VARs will see opportunities to deliver upgrades in other critical data center areas.
Thinking About The Data Center
Power and cooling become even more critical as the data center footprint shrinks. If one set of fans, for example, is keeping 20 servers running, well, by gosh, it's important to make sure those fans are working properly.
Here are products the Test Center has reviewed and recommends for data center consolidations, particularly for small or midsize businesses or for wiring closets:
Power Management:
We gave Raritan's power management offering a look. Keeping tabs on power usage trends in a data center is crucial to keeping power-related costs under control. Raritan's solution is built around the Dominion PX, a remotely accessible power distribution unit (PDU), essentially a souped-up, manageable power strip. The PX has a Web interface that shows current power usage statistics for each outlet on the PDU. An administrator also can set the device to send alerts when certain thresholds are exceeded, such as temperature or humidity levels. That's yet another cost-saving advantage: the ability to head off damage to equipment from environmental issues. The device also can alert on excessive power usage, an excellent way to flag spikes in power consumption. Raritan also offers a handy virtual appliance called Power IQ, which centralizes management of PDUs across an enterprise and allows an administrator to create a graphical layout of data centers. Power IQ comes with a comprehensive dashboard view showing the health and stats of PDUs, and its reporting capabilities can generate trend analyses of active power.
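The threshold-and-alert pattern behind that kind of device is easy to reason about, and the sketch below illustrates the logic in the abstract. It is not Raritan's interface: the get_readings() function, the threshold values and the mail-server details are all placeholders standing in for what the PX and Power IQ handle on the device itself.

# Illustration of the threshold-alert pattern a managed PDU applies to its
# sensor readings. This is NOT Raritan's API: get_readings(), the threshold
# values and the SMTP settings are placeholders for the real device.
import smtplib
from email.message import EmailMessage

THRESHOLDS = {
    "temperature_f": 85.0,   # alert above 85 degrees F (placeholder value)
    "humidity_pct": 60.0,    # alert above 60 percent relative humidity
    "outlet_watts": 400.0,   # alert if any single outlet exceeds 400 W
}

def get_readings():
    """Placeholder for one poll of the PDU; returns a value per monitored sensor."""
    return {"temperature_f": 78.2, "humidity_pct": 44.0, "outlet_watts": 210.0}

def send_alert(sensor, value, limit):
    """E-mail a simple alert; addresses and mail host are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = "PDU alert: %s at %.1f (limit %.1f)" % (sensor, value, limit)
    msg["From"] = "pdu@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content("Threshold exceeded; check the rack.")
    with smtplib.SMTP("mail.example.com") as server:
        server.send_message(msg)

for sensor, value in get_readings().items():
    limit = THRESHOLDS[sensor]
    if value > limit:
        send_alert(sensor, value, limit)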
Data Center Automation:
Maintaining a data center 24x7 can be manpower- and cost-intensive. Even for a small or midsize business, monitoring uptime is critical, yet costs can often get out of hand. Automating the monitoring process, though, is simple enough if the right product is chosen, and the Test Center continues to prefer American Power Conversion Corp.'s NetBotz. The rackable device monitors temperature, humidity, sound levels and, with a built-in camera, the physical security of a data center, all remotely and all capable of sending out SMS or e-mail alerts if, for example, sound monitoring reveals unusual activity. With a list price of $1,275 for the NetBotz 320 Rack Appliance with Camera and a mature channel program from APC behind it, the product earns the Test Center's recommendation for its ability to cut costs in many data centers.
Uninterruptible Power:
Nobody ever wants to need one, but nobody can do without one. The UPS is about as unexciting as it gets in the world of IT. That is, until the lights go out. But Eaton Corp. has delivered a nice package in the Eaton 9130 UPS, which the company bills as cost-effective, especially for emerging and evolving elements of the data center, such as VoIP and Wi-Fi, that can get cranky during power fluctuations. We looked at the tower configuration of the 9130, which is part of Eaton's Powerware series. Very cool: hot-swappable batteries that don't require the unit to power down during battery replacement. It's built into a 15-by-10-by-6.5-inch form factor, which is nice. And we liked the Load Segment feature, which permits remote reboots and managed, sequential startups of connected equipment.
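The idea behind sequential startups, bringing equipment back in a defined order after a power event, can be sketched as a simple staging schedule. The segment names and delays below are made up for illustration; on the 9130, this sequencing is configured on the UPS itself through the Load Segment feature rather than by an external script.

# Illustration of staged, sequential startup after a power event. The segment
# names, delays and power_on() stub are hypothetical; the Eaton 9130 handles
# this on the unit via its Load Segment feature.
import time

STARTUP_ORDER = [
    ("network switch", 0),        # bring the network up first
    ("storage array", 30),        # wait 30 seconds, then storage
    ("virtualization host", 90),  # finally the server hosting the VMs
]

def power_on(segment):
    """Placeholder for whatever actually energizes the load segment."""
    print("Powering on: %s" % segment)

elapsed = 0
for segment, delay in STARTUP_ORDER:
    time.sleep(delay - elapsed)  # wait out the remainder of this segment's delay
    elapsed = delay
    power_on(segment)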
These products and technologies, we've found, are nice complements to any server consolidation project. They offer VARs the stability of working with mature vendors, each of which has a robust channel program, and they can be deployed cost-effectively even for small businesses during a data center consolidation or rearchitecting.
—Samara Lynn contributed to this article.