The data center industry is a major consumer of electricity, but new technologies and design concepts are being employed by data center owners and operators to decrease the cost of powering and cooling those structures.
The amount of power consumed by data centers in the U.S. and around the world continues to grow, but not as fast as previously estimated, according to a recent study sponsored by the New York Times.
According to the study, total data center power consumption from servers, storage, communications, cooling, and power distribution equipment accounted for between 1.7 percent and 2.2 percent of total electricity use in the U.S. in 2010.
This was up from 0.8 percent of total U.S. power consumption in 2000 and 1.5 percent in 2005. However, it is down significantly from the 3.5 percent previously expected, based on historical trends.
The less-than-expected data center power consumption growth stems from a leveling off in the server installed base, and not from operational improvements or new technologies. Going forward, the server installed base is not expected to grow, according to the study.
More research is needed to understand the impact on data center power consumption from increasing storage capacity, the adoption of cloud computing, higher server processing power, and the percentage of servers which are powered on but not being used, the study's author said.
However, all these factors are important considerations when designing and operating a data center.
John Snider, CEO of NOVA, an Albuquerque, N.M.-based operator of data centers for the U.S. Department of Defense, said that increased virtualization and cloud computing, while decreasing the number of servers needed for a given operation, can still lead to increased power consumption, as customers look to pack more processing and storage capability into smaller spaces.
"We're seeing exponentially higher computing power as servers get more dense without making them more efficient," he said. "So consumption is going up."
Philip Fischer, data center business development manager at APC by Schneider Electric, said that while increased virtualization can cut back on the number of servers needed, it can also lead to lower data center efficiency.
"UPS and cooling equipment have the greatest efficiency when working at full load," he said. "If you decrease the load, efficiency may drop 5, 10, or 15 percent. But it's still a good thing, as total power use is reduced. So this raises the question: Is decreasing efficiency always a bad thing?"
Transitions In Data Center Design
A properly designed data center, in which power consumption factors are addressed from the beginning, has the biggest impact on the cost of running the facility.
How well a data center is designed is reflected in its power efficiency, as measured by PUE, or Power Usage Effectiveness. PUE is the ratio of the total power used by a data center to the power used to run the IT equipment alone. A PUE of 2.0 means that, for every kWh of power consumed by IT equipment, another kWh is needed to run the data center infrastructure.
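The ratio above can be sketched in a few lines. This is a minimal illustration using hypothetical energy figures, not data from the study:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures: 1,000 kWh of IT load plus 1,000 kWh of
# infrastructure overhead (cooling, UPS losses, power distribution).
print(pue(2000.0, 1000.0))  # 2.0: one extra kWh of overhead per IT kWh
print(pue(1300.0, 1000.0))  # 1.3: only 0.3 kWh of overhead per IT kWh
```

A PUE of 1.0 would mean every watt entering the building reaches the IT equipment; real facilities always sit above that floor.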
Improved designs mean that newer data centers in general are much more energy efficient than older ones, said Dave Leonard, senior vice president of data center operations at ViaWest, Denver, which operates 22 data centers and rents data center space to customers.
"Older data centers we acquire have a PUE of about 2.0," he said. "But newer-designed building PUE falls to about 1.3 over time. So there's a big difference between different building generations."
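The gap Leonard describes translates directly into infrastructure overhead. As a rough sketch with a hypothetical annual IT load (the load figure is illustrative, not from the article):

```python
def overhead_kwh(it_kwh: float, pue: float) -> float:
    """Infrastructure energy beyond the IT load implied by a given PUE."""
    return it_kwh * (pue - 1.0)

it_load = 1_000_000.0  # hypothetical annual IT consumption, in kWh

old = overhead_kwh(it_load, 2.0)  # 1,000,000 kWh of overhead
new = overhead_kwh(it_load, 1.3)  # about 300,000 kWh of overhead
print(round(old - new))  # 700000 kWh saved per year at this load
```

In other words, dropping from a PUE of 2.0 to 1.3 cuts infrastructure overhead by roughly 70 percent for the same IT load, which is why the difference between building generations matters so much to operating cost.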