Liebert President Says IT Managers Must Cope With Change

Bauer said companies need to "expect the unexpected" when it comes to change, and to do so while stretching their budgets further. "There's more pressure on cost -- do more with less," he said. Bauer pointed to three trends that will have a "profound impact" on the future: virtualization, consolidation and green strategies.

In regard to virtualization, Bauer said there is a lot of speculation on how consolidation will take place. "We see situations where we're not going to be reducing but redeploying and replacing servers," he said. "We're really going to change the diversity programs of these rooms, and you're challenged on how you're going to use the technology in respect to spreading out the loads and using the infrastructure as you create a more diverse environment."

Bauer said he expects an industry-wide 20 percent reduction in server space requirements, which will let companies repopulate racks with new equipment. That should be the focus, he said, rather than trying to carve out the freed space for another purpose.

The architecture of the infrastructure will be equally important, Bauer said. "With business being powered by blade server technology and high-density computing, this impacts high density cooling, rack power distribution and puts a focus on efficiency." The centralization of critical and non-critical applications, he said, becomes a more economical way to serve customers.

Bauer focused intently on ways of improving energy efficiency, citing a 2007 Data Center User Group survey that ranked it as the third-highest facility/network concern. He said companies adopting an efficiency strategy must first decide whether the goal is to eliminate waste or to reduce the power consumed for useful work. "Are you looking at things that waste power or reducing power consumed for work?" he asked. "The order in which you approach these issues is important."

Bauer compared infrastructure efficiency to the miles-per-gallon (MPG) rating used to measure fuel efficiency in cars. "We don't have a version of MPG in this industry; that's something we clearly need to work on," he said. Factors ranging from ordering servers that best match the system architecture to measuring productivity at the server level complicate spending decisions.

"The biggest piece [of power consumption] may not be the area you want to spend money on," he said. "You may want to spend money on whatever is on the top of the list to improve your MPG." Once an MPG-like measuring system is in place, Bauer said IT managers would be able to determine how they're affecting efficiency as it relates to performance. "A hit-and-miss strategy is not something we want to have," he cautioned.

Bauer also took time to discuss future strategies in preparation for Web 3.0 -- in a sense, laying the groundwork for a topic not often discussed at this point. "In an age of information everywhere, workspace everywhere, how does our architecture handle this?" he asked. Bauer said build-out cycles need to be shorter and deployments need to be quicker.

Flexibility is the most important issue concerning the future of IT infrastructure, he said. "The future is really about dynamic critical infrastructure," he said. "We need to continually plan and assess, design and build for continuous change, optimize performance during operation, and manage changes without online disruption."

Rack-level capacity, not room-level capacity, will be important in the future, Bauer predicted. He pointed to Sun Microsystems, which adopted the concept of pods: 500-square-foot facilities housing between 50 and 75 racks of equipment and full-standing mid-scale servers. "The concept is repeatable, standardized infrastructure configurations," he said. "You build every one of these the same way with flexibility. They create their own best practice: in-house infrastructure adaptive to change."

Monitoring systems must also evolve, he said. "We must move to real-time feedback so we can understand what the real energy performance of the infrastructure is," he said. Tracking and managing change will be critical. "There's going to be more dialogue between the facilities people and the IT managers," he said. "We've got to have some better systems to track those assets on both sides."

Cooling issues are a "tremendous" challenge, he said, though he felt there are many ways to approach the problem, including choices between open and closed architectures. "If you don't have a plan for this, it's going to give you some problems down the road," he cautioned.

Bauer closed the speech by pointing out two best-practice cases -- Time Warner for infrastructure flexibility, and Hess Corp. for cooling capacity. "Time Warner deployed an architecture which allowed them to continually add capacity and to use excess capacity to serve as redundancy." He lauded the company for its ability to maneuver in the face of changing customer demand. "This is already in place," he said. "There are people out solving some of these problems."

Hess' implementation of a cooling system placed close to the heat source and easily moved to hot spots impressed Bauer with its flexibility and scalability, he said.

The solution particularly appealed to Warren Grossenbacher, senior project manager of strategic account services at DVL Incorporated. "We've got several clients with power density issues," he said. "They're asking us: How can we get cooling into small spaces?" A flexible, targeted cooling solution is the future, he said, agreeing with Bauer. "Rack-level cooling is definitely where it's going."