Efficiency Or Bust: Data Centers' Drive For Low-Power Solutions Prompts Channel Growth

The Data Center Dilemma

In the tech world today, "going green" isn’t just a nod to Mother Nature. Sometimes it’s a sink-or-swim initiative on which a business's future rests -- especially when it comes to data centers.

Amid rising energy costs, data center operators are on a quest for efficiency. The search is driven largely by financial interest, as electricity and cooling equipment carry hefty operational costs. For many data centers, though, the need for efficiency is even more pressing. SMBs, especially, face physical space constraints and finite power supplies that -- if not maximized -- will stunt their business's growth, eliminating any chance of expansion.

Data centers today consume almost 1.5 percent of the world's total electricity production and cost nearly $44.5 billion a year to run, according to industry analyst Linley Gwennap.

What’s more, DataCenterDynamics, another market researcher, projects that this consumption will rise a whopping 20 percent in 2012, putting total data center power draw around 31 GW -- enough to power all the homes in France, Italy or the U.K.


If this growth estimate proves true, energy costs will skyrocket. "Looking at the growth projections for data center usage and the future of power generation growth, this trajectory is unsustainable," Gwennap warns. "A new paradigm for developing data centers based on energy efficiency will certainly help make data centers scale realistically with future demand growth."

Luckily, this new paradigm -- or at least a shift in its direction -- is here. Cognizant of data centers’ hunger for efficiency, solution providers in the processor and server space are taking significant strides toward the delivery of low-power, cost-saving alternatives. These advances, said to offer efficiency without sacrificing performance, will cut data center costs, provide a platform for SMB growth, and arm VARs with never-before-seen opportunities in the server and processor space.

Vendors Gear Up To Go Green

Intel, the market share leader in the processor space, hasn’t historically had the most robust offering of low-power processors. As data center consumption becomes a more prominent issue, however, the chipmaker has joined the ranks. In addition to its revamped Xeon 5600 processors, Intel has launched several server management solutions, intended to monitor and manage power and cooling resources within data centers.

One such solution, the Intel Intelligent Power Node Manager, brings component instrumentation to the platform level to optimize the use of every watt consumed. The second-generation release can report on system-level consumption, along with processor and memory subsystem consumption.

Jay Kyathsandra, product marketing manager for the data center and connected systems group at Intel, said that most data center racks peak at only 50 or 60 percent of their provisioned power capacity. Power Node Manager can push those peak levels up to 80 or 90 percent.

"In a rack -- say you have a limit of five or seven kilowatts -- you can now go up to 80 or 90 percent of that full power capacity knowing fully well that, if there is a spike, the Power Node Manager will help the whole rack continue to run at a slightly dropped performance level. So you can peak up to the peak available power, and still not have an outage, which has always been a fear," Kyathsandra said. "We believe there is a lot of headroom there that can be capitalized with the right tools and technologies, while making sure that reliability and risk is addressed. That’s where Intel is coming from."
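The throttle-instead-of-outage behavior Kyathsandra describes can be sketched in a few lines. This is an illustrative model only -- the limits, readings, and function names below are assumptions for the sketch, not the actual Node Manager API.

```python
# Sketch of rack-level power capping in the spirit of Intel's Power Node
# Manager: run close to the rack budget, and on a spike scale every node
# down proportionally rather than tripping the rack.
# All wattages here are hypothetical round numbers.

RACK_LIMIT_W = 5000          # provisioned rack budget (e.g., a 5 kW rack)
TARGET_UTILIZATION = 0.90    # run up to 90% of the budget

def plan_caps(node_draws_w):
    """Given current per-node power draw, return per-node caps that keep
    the rack under budget, throttling proportionally during a spike."""
    total = sum(node_draws_w)
    budget = RACK_LIMIT_W * TARGET_UTILIZATION
    if total <= budget:
        return list(node_draws_w)              # headroom left: no throttling
    scale = budget / total                     # slight performance drop...
    return [draw * scale for draw in node_draws_w]  # ...instead of an outage

# A spike: the four nodes want 4,650 W against a 4,500 W working budget.
caps = plan_caps([900, 1100, 1250, 1400])
print([round(c) for c in caps], round(sum(caps)))
```

The point of the sketch is the trade-off in the quote: every node keeps running, just slightly slower, and the rack as a whole never exceeds its provisioned power.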

While Intel's power management solutions help data center operators control and optimize cooling procedures and performance, fellow chipmaker Nvidia is enabling energy conservation through the use of its GPUs and parallel processing architectures.

As GPUs continue to serve as low-power, high-performing complements to traditional CPUs within supercomputers, their ability to reduce costs within data centers is becoming more and more evident.

"GPUs are dramatically better when it comes to energy efficiency [compared to traditional CPUs]," said Sumit Gupta, director, high performance computing products at Nvidia. "And the number one thing I hear when I talk to data center guys is 'I’m power-limited. I’m limited in how much power I have in my data center, but the engineers, or the business unit, or the scientists are asking for more performance.' They’re feeling that pressure."

In response to this pressure, Nvidia has dedicated itself to GPUs, the parallel computing model, and the low-power ARM architecture, all said to offer more performance per watt than traditional CPU models.

And the chipmaker isn’t alone. PC giant Hewlett-Packard announced in early November a partnership with startup server vendor Calxeda in an effort to bring less power-hungry ARM-based servers to data centers. The initiative, now deemed Project Moonshot, has thrust HP to an influential, front-and-center position within the move toward efficiency.

As part of Project Moonshot, HP unveiled its Redstone Server Development Platform based on quad-core EnergyCore ARM Cortex processors. The development platform will serve as testing grounds for customers and partners to explore new avenues for data center power reduction.

AMD Steps Up

Alongside HP, Intel, and Nvidia, AMD is responding to market demand for more energy-efficient servers and data center infrastructures.

The firm’s most recent response can be seen with its launch of a new server-oriented CPU series, Opteron 6200 (or "Interlagos"). AMD claims Opteron 6200 is the world’s first 16-core x86 server processor, said to boost performance by up to 84 percent -- without compromising efficiency.

The Thermal Design Power (TDP) Power Cap feature is perhaps one of the most significant, and most energy-conserving, advancements in the new AMD Opteron processors. The cap allows data centers to maximize server density by configuring TDP power limits in granular, one-watt increments. Rather than having to select different TDPs for multiple processors, a data center can leverage a single power range and simply modulate down when needed.

The elimination of TDP on a per-processor basis enables data centers to tap into unused space and install more processors within a pre-determined power budget.
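The density argument is simple arithmetic: lowering the per-CPU power cap shrinks each server's share of a fixed rack budget, so more servers fit. The figures below are hypothetical round numbers for illustration, not official Opteron 6200 specifications or AMD tooling.

```python
# Illustrative density math: servers that fit in a fixed rack power
# budget at a given per-CPU TDP cap. Wattages are assumptions.

RACK_BUDGET_W = 10_000   # hypothetical 10 kW rack budget

def servers_per_rack(tdp_w, cpus_per_server=2, overhead_w=150):
    """Whole servers that fit in the budget, counting a fixed non-CPU
    overhead (disks, fans, memory, PSU losses) per server."""
    per_server_w = tdp_w * cpus_per_server + overhead_w
    return RACK_BUDGET_W // per_server_w

uncapped = servers_per_rack(tdp_w=115)   # stock TDP
capped   = servers_per_rack(tdp_w=85)    # capped down in one-watt steps
print(uncapped, capped)
```

Under these assumed numbers, capping each CPU from 115 W to 85 W lets several additional servers into the same rack -- the "unused space" the article refers to.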

"This is a huge feature for us because it’s not only helping reduce power on the platform side, but it’s a knob for the data center guys to potentially integrate into their data center cooling infrastructure, where we provide exact thermals for the processors themselves so that they can make their own deterministic times as to when they want to ratchet power down," Brent Kerby, senior product manager for AMD Opteron processors, told CRN. "They have flexibility to fine tune their performance per watt."

While this level of fine-tuning may once have required customized CPUs, the TDP Power Cap brings it to off-the-shelf Opteron processors -- and to a much broader market.

Efficiency Presents New Opportunities, Clients To VARs

As customizable solutions tend to do, the control-your-own-power trend lends itself to resellers. According to Kerby, the TDP Power Cap’s ability to resonate so well within the channel was a driving force behind AMD’s development efforts.

"It’s to the benefit of the resellers or the VARs because it allows them to customize their solutions into these large opportunities as well," Kerby explained. "There are several large guys out there in the cloud that leverage channel-based solutions and might not go to a big guy, because of the ability for them [VARs] to be agile and provide a more custom solution for their particular environment."

Lyle Epstein, president of Kortek Solutions, a Las Vegas-based VAR and service provider, agreed that this demand for flexibility within data centers is, in fact, attracting those "larger guys."

"I think it’s brought us into larger customers, whereas before we might have had smaller customers with just a few servers," Epstein said. "Now, we have larger customers who are really looking for optimization, and to get more out of their servers or replace those servers to lower their annual costs. We have definitely had more opportunity in the last couple years with that than we’ve had previously."

Virtualization has been a particularly popular choice among clients looking to infuse more flexibility within their data centers, Epstein said. As the industry eyes the trend more and more, data centers are turning to service providers for guidance. In one case, Epstein was able to reduce a client’s server count from 17 to one, after exposing the costly implications of server maintenance and hardware. "It just saves so much money," he said.
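A back-of-the-envelope calculation shows why a 17-to-1 consolidation "saves so much money" on electricity alone. Every figure below -- the utility rate, server draw, and cooling overhead -- is an assumption for the sketch, not data from Epstein's client.

```python
# Illustrative savings math for consolidating 17 physical servers onto
# one virtualization host. All inputs are hypothetical.

KWH_RATE = 0.10            # $/kWh, assumed utility rate
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(server_count, avg_draw_w, cooling_factor=1.5):
    """Yearly electricity cost; cooling_factor folds in the extra power
    spent on cooling (a PUE-style multiplier)."""
    kwh = server_count * avg_draw_w / 1000 * HOURS_PER_YEAR * cooling_factor
    return kwh * KWH_RATE

before = annual_power_cost(17, avg_draw_w=300)   # 17 lightly loaded boxes
after  = annual_power_cost(1, avg_draw_w=500)    # one beefier host
print(round(before), round(after), round(before - after))
```

Even before counting hardware refresh and maintenance contracts, the assumed numbers put the electricity-and-cooling savings in the thousands of dollars per year.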

While not all data centers are taking the virtualization leap, many choose to upgrade their traditional servers and cooling equipment as another low-power alternative.

Curtis Irwin, senior engineer and small business specialist at Michigan-based service provider Fusion IT, told CRN that he "most definitely" sees a growing demand for efficiency within his data center clients. This demand, Irwin noted, is evidenced most through clients’ eagerness to upgrade old servers and cooling equipment.

"In most cases, if you are staying relatively up-to-date with equipment and cooling, you will be gaining efficiency," Irwin explained. "For the most part, we are seeing some benefits from the hardware upgrades to new equipment. It is definitely easier to sell someone on a new solution when you can make a clear cost/benefit analysis that shows them that an upgrade will clearly save them money in the long run."

The end gain of this search for efficiency extends beyond the walls of the data center. The movement is offering VARs an opportunity to meet client demands in new ways, including cloud computing and virtualization, and, in many cases, is leading to net new customers. Although vendors, clients and VARs alike are taking steps toward less power-hungry data centers, the quest for efficiency has always been in motion, and will no doubt continue, said AMD’s Kerby.

"Back when we first launched Opteron [in 2003], we actually had very compelling leadership in performance per watt and low-power computing," Kerby said. "From there, it’s been an interesting ride."