Data Centers In Transition: Controlling Power, Cooling Costs

Printer-friendly version Email this CRN article


One new technique for improving data center cooling efficiency is eliminating the raised floor that has for decades been used to deliver cool air to equipment racks, and replacing it with overhead cooling.

Snider said this is a move whose time has arrived. "Think about it--heat rises," Snider said. "So why push cold air through the floor?"

Snider said he believes that, with the right education, raised floors will no longer be designed into data centers within five years. "The problem is, people like me who know networking are doing the designs, not the experts on the mechanical side," he said.

If one thinks only that cool air sinks and hot air rises, delivering cool air from overhead instead of through the floor might seem reasonable, Henigin said. The problem with that approach, however, is that the cool air mixes with hot air rather than sinking en masse, he said.

"We like raised floors because they provide fine-grained control of cold air delivery," he said. "We use perforated floor tiles to provide 15 percent to 56 percent air flow, and put dampers in the tiles to control the amount of cold air released. You can't do that kind of control with overhead air."

Data centers are also starting to experiment with ways to cut back on air conditioner use by running IT equipment at higher temperatures than in the past.

Traditional data centers have focused on keeping air temperatures at around 68 degrees Fahrenheit, but operators have been trying to push that temperature to between 70 and 75 degrees.
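To see why operators bother, a back-of-envelope estimate helps. A commonly cited rule of thumb (an assumption here, not a figure from the article) is roughly 4 percent cooling-energy savings per degree Fahrenheit of setpoint increase:

```python
# Back-of-envelope sketch: estimated cooling-energy savings from a
# higher setpoint. The linear model and the 4 percent-per-degree
# figure are illustrative assumptions, not article data.

def cooling_savings_pct(old_f, new_f, pct_per_degree=4.0):
    """Approximate percent reduction in cooling energy when the
    setpoint moves from old_f to new_f degrees Fahrenheit."""
    return (new_f - old_f) * pct_per_degree

# Pushing from 68 F to 75 F under this rule of thumb:
print(cooling_savings_pct(68, 75))  # 28.0 (percent)
```

Real savings depend on climate, chiller design, and economizer use, so the linear rule only indicates the order of magnitude at stake.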

They have been helped in part by equipment vendors who are providing more leeway in terms of what temperatures are suitable for their equipment. Dell, for instance, was the first vendor to say its servers can operate in temperatures of up to 113 degrees Fahrenheit.

ASHRAE, the American Society of Heating, Refrigerating and Air-Conditioning Engineers, has said that inlet temperatures can now safely be as high as 80 degrees, Leonard said. However, making such a change is difficult in a multi-tenant environment.

"If it's an enterprise data center serving one customer, you only have to convince one guy that it's more energy efficient to run at 78 degrees than at 70 degrees," he said. "But we deal with multiple customers. We can't make the change so fast."

Raising temperatures could have negative impacts if not handled correctly, Loeffler said. For instance, small UPS units with lead acid batteries will work in higher temperatures, but the batteries may need to be replaced more often. "Bigger data centers have larger systems with batteries placed in separate rooms, making it easier to keep them cool," he said.

Another new technology making its way into data centers is free cooling: bringing cool outside air into the data center to replace part of the air-conditioned air.

Snider, whose company is building a new data center in Albuquerque, N.M., said that location is ideal for free cooling because of its 5,000-foot elevation. "Many parts of the country can use free cooling," he said. "Companies can filter the air and get free cooling eight months out of the year."
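Snider's "eight months out of the year" figure amounts to the fraction of hours when outside air is cool enough to use directly. A minimal sketch of that calculation, where the usability threshold and the sample readings are assumptions for illustration:

```python
# Sketch: fraction of hours eligible for free cooling, given hourly
# outside-air temperatures. The 60 F threshold and the sample
# readings are made-up assumptions, not article data.

def free_cooling_fraction(hourly_temps_f, max_usable_f=60.0):
    """Share of hours whose outside-air temperature is at or below
    the maximum temperature usable for free cooling."""
    usable = sum(1 for t in hourly_temps_f if t <= max_usable_f)
    return usable / len(hourly_temps_f)

sample = [45, 52, 58, 63, 71, 66, 55, 48]  # hypothetical readings
print(free_cooling_fraction(sample))  # 0.625
```

Run against a full year of hourly weather data for a site, this kind of check is how an operator would verify a claim like eight free-cooling months.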

CoreLink is currently implementing free cooling in its data centers to cut operating costs by 30 percent to 40 percent per year and be more price competitive, Duckett said. "Technology solutions are needed for cooling, but we can always count on Fall, Winter, and Spring," he said.

Next: What About Going Green?
