
Wednesday, July 25, 2007

Power & Cooling Meet The New Tech Wave

How Innovative Technologies Are Changing Data Center Infrastructures

Innovations in data center technologies continue to transform productivity throughout the enterprise, giving way to increased flexibility and less downtime. Yet as a parallel consequence, many of these same innovations are forcing data center and facilities managers to alter their overall outlook toward power and cooling needs.

In the long run, changes forced by newer technologies can be positive because the increases in productivity can eventually outpace the increased costs to cool the devices. However, older data center rooms aren’t necessarily equipped to adequately handle new equipment, such as blade servers, and in turn can encounter far more heat issues than what existed with older devices.

“Most advances in technology require additional horsepower to take advantage of new product features and benefits,” says Kevin Houston, virtualization and consolidation practice leader at Optimus Solutions (www.optimussolutions.com), which helps firms plan, build, and maintain their IT infrastructure. “Data center consolidation projects can be complex and can easily overwhelm an IT organization struggling to maintain current operations.”

Blades Burrow In

Ask any data center manager to select a technology that’s sparking the most change in power and cooling requirements, and you’re likely to hear “blade servers” as the answer. Although these devices save power on one hand, they often require overhauled cooling infrastructures on the other.

“Blade servers are challenging the existing data center design and requiring audits to identify and propose the additional power and cooling resources to maintain an environment that meets the server manufacturer’s optimum operation requirements,” says Bill S. Annino Jr., director of the converged network solutions group at Carousel Industries (www.carouselindustries.com), a communications systems integrator and reseller.

Unlike traditional servers, blade servers forgo redundant components in their chassis, instead using single components and sharing them among the computers in the chassis. Blade enclosures also use a single power source for all the blades within them, which helps to boost the overall efficiency of the server environment.

That single blade rack can replace several traditional racks, which will help enterprises save power. “We’ve seen a tremendous increase in interest for blade technologies, as customers recognize the benefits of physical consolidation through a centralized ecosystem of power and cooling,” Houston says. “Both HP and IBM have made significant investments in their blade chassis to reduce the power and cooling needs for a server environment by as much as 50% over an equivalent physical rackmount environment.”

On the downside, many older data centers in small and midsized enterprises are designed to deliver a consistent level of cooling throughout the entire environment. The high-density form factors of blades require more concentrated cooling in specific areas, which means that a transition to blades could require facilities to be revamped to accommodate hot spots. After all, the cold air requirements for blade racks can be quadrupled, or more, when compared to the requirements for traditional racks.
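To see why those hot spots form, it helps to run the numbers on airflow. The minimal sketch below applies the common rule of thumb that roughly 3.16 CFM of cold air is needed per watt of heat per degree Fahrenheit of supply-to-return temperature rise; the per-rack wattages and the 20-degree rise are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope airflow estimate for rack cooling.
# Rule of thumb: CFM ~= 3.16 * watts / delta_T_F, derived from
# BTU/hr = watts * 3.412 and CFM = BTU/hr / (1.08 * delta_T_F).
# The rack wattages below are assumptions for illustration only.

def required_cfm(rack_watts: float, delta_t_f: float = 20.0) -> float:
    """Approximate cubic feet per minute of cold air needed to remove rack_watts of heat."""
    btu_per_hr = rack_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

if __name__ == "__main__":
    traditional_rack_w = 4_000   # assumed mid-2000s rack of 1U/2U servers
    blade_rack_w = 20_000        # assumed fully populated blade rack

    for label, watts in [("traditional", traditional_rack_w), ("blade", blade_rack_w)]:
        print(f"{label:11s} rack: {watts/1000:>5.1f} kW -> ~{required_cfm(watts):,.0f} CFM")
```

Under those assumptions, the blade rack needs roughly five times the airflow of the traditional rack, which is why room-level cooling designed for an even load struggles once blades are concentrated in a few tiles.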

This challenge becomes more difficult when SMEs mix blades with traditional servers because they still need traditional cooling methods but also need to remove the heat generated in specific areas by the blades. In these mixed environments, some experts recommend removing the traditional racks near the blades to allow for greater heat dispersion.

Call For A Change

Other increasingly popular technologies are also making their mark on power and cooling demands, albeit in different ways. In particular, VoIP is enjoying increased popularity in SMEs, and managers are witnessing a trade-off between power consumption and overall costs.

“Although VoIP phones require additional power consumption by drawing greater Power over Ethernet, or PoE [power], it is important to always evaluate the total cost of ownership,” Houston says. “Although power costs may increase, the boost in productivity from operational efficiencies and application integration (not to mention long-distance savings) outweighs any increase to the energy bill in most scenarios.”
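A rough total-cost comparison makes Houston’s point concrete. The sketch below assumes an 8W average PoE draw per handset, a $0.10/kWh electricity rate, and a per-user savings figure; all of these numbers are assumptions for illustration, not data from the article.

```python
# Rough comparison of PoE power cost for IP phones vs. offsetting savings.
# Every number here is an assumption chosen for illustration only.

PHONE_DRAW_W = 8.0          # assumed average PoE draw per handset
HOURS_PER_YEAR = 8760       # phones typically stay powered around the clock
KWH_RATE = 0.10             # assumed electricity cost, $/kWh
SAVINGS_PER_USER = 60.0     # assumed annual long-distance/operational savings per user, $

def annual_power_cost(phones: int) -> float:
    """Yearly electricity cost of powering the given number of PoE phones."""
    kwh = PHONE_DRAW_W * HOURS_PER_YEAR * phones / 1000.0
    return kwh * KWH_RATE

if __name__ == "__main__":
    users = 500
    power = annual_power_cost(users)
    offset = SAVINGS_PER_USER * users
    print(f"{users} phones: ~${power:,.0f}/yr in PoE power vs ~${offset:,.0f}/yr assumed savings")
```

With those assumptions, the added energy bill is a small fraction of the assumed savings, which is the trade-off Houston describes.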

Elizabeth King, general manager of Hitachi’s Servers System Group (www.hitachi.us), notes that VoIP creates stress on networks, storage, and computing resources. Further, while the technology hasn’t reached mainstream status, “there is enough experience to indicate that it will require corporations to expand their computing and network capabilities,” she says.

Of course, more capabilities often lead to more equipment, and more equipment means increased cooling and power demands. “The impact of the VoIP revolution has definitely added to the computer room,” explains Gregory T. Royal, chief technology officer and executive vice president of Cistera Networks (web.cistera.com), a company providing enterprise application platforms and engines for IP communications. “The vendors in this space have yet to face up to the heat and power requirements of this growth. Over time I expect that we will move to more efficient servers and predominantly blade servers [to handle the requirements].”

According to Royal, in theory, the endpoints accomplish 90% of the media (or heavy) work involved with new VoIP solutions. For example, he says, a SIP (Session Initiation Protocol) server provides “traffic cop” capabilities, rather than actively participating in the calls. Again in theory, this means that a given CPU can support far more endpoints than traditional PBXes, he says.
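The gap between signaling work and media work explains that scaling claim. The minimal sketch below compares the handful of SIP messages a proxy touches per call with the steady RTP packet stream anything in the media path must handle; the message count, call length, and G.711 20 ms packetization are common defaults used here as assumptions.

```python
# Illustration of the "traffic cop" point: a SIP proxy handles only a few
# signaling messages per call, while a device in the media path processes a
# continuous RTP stream. Figures are assumed defaults for illustration.

SIGNALING_MSGS_PER_CALL = 8    # assumed: INVITE/180/200/ACK plus BYE/200 and a few retransmits
RTP_PACKETS_PER_SEC = 50       # 20 ms packetization => 50 packets/sec per direction
CALL_SECONDS = 180             # assumed average call length
DIRECTIONS = 2                 # RTP flows both ways

media_packets = RTP_PACKETS_PER_SEC * CALL_SECONDS * DIRECTIONS
print(f"Signaling-only work per call: ~{SIGNALING_MSGS_PER_CALL} messages")
print(f"Media-path work per call:     ~{media_packets:,} RTP packets")
print(f"Ratio: roughly {media_packets // SIGNALING_MSGS_PER_CALL:,}x more packets in the media path")
```

Under those assumptions, the media path sees thousands of times more packets per call than the signaling path, which is why a signaling-only server can front far more endpoints than a system that terminates the media itself.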

“There has, however, been a trend for a long time for PBX to shrink into what are now essentially soft switches,” Royal says. “However, there is a great increase in media gateways, media conferencing, video, applications, and other CPU-intensive applications. Unified voicemail, conferencing, notification, and quality assurance systems are now all mainstream in IP PBX environments.”

While VoIP technologies continue to push more equipment into data centers, their presence has removed the need for larger, older PBX installations that consumed plenty of floor space, similar to mainframes of the past, Royal says.

Virtual Help

Additional technologies, such as load balancers, application accelerators, caching servers, and middleware, are also increasingly being deployed in today’s data centers. But Royal notes that they also bring benefits. “A lot of these new technologies now use standard, off-the-shelf technology such as Intel CPUs, which means there are better economies of scale in dealing with [power and cooling] issues,” he says.

Power and cooling concerns are also being mitigated by virtualization technologies, which allow SMEs to boost production without increasing the number of devices in their data centers. And thanks to advances in virtualization management tools, managers can now more easily integrate the technology into their data centers.

“VMware Infrastructure 3, for example, offers a complete solution for management and optimization of virtual servers, storage, and networking,” Houston says. “We’ve seen consolidation of infrastructure through virtualization deliver transformative cost savings by greatly reducing power and cooling requirements.”
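Simple consolidation arithmetic shows where those savings come from. The sketch below compares the power drawn by a fleet of lightly loaded physical servers with the same workloads running on a smaller number of virtualization hosts; the server counts, wattages, consolidation ratio, and cooling factor are all assumptions for illustration.

```python
# Illustrative consolidation arithmetic: power before and after moving
# physical servers onto virtualization hosts. All figures are assumptions.

PHYSICAL_SERVERS = 40
WATTS_PER_PHYSICAL = 300      # assumed average draw of an older 1U/2U server
CONSOLIDATION_RATIO = 10      # assumed VMs per virtualization host
WATTS_PER_HOST = 600          # assumed draw of a larger virtualization host
FACILITY_FACTOR = 1.5         # assumed total facility watts per watt of IT load (cooling, power losses)

def annual_kwh(it_watts: float) -> float:
    """Yearly facility energy use, including cooling overhead, for a given IT load."""
    return it_watts * FACILITY_FACTOR * 8760 / 1000.0

before_w = PHYSICAL_SERVERS * WATTS_PER_PHYSICAL
after_w = (PHYSICAL_SERVERS // CONSOLIDATION_RATIO) * WATTS_PER_HOST

print(f"Before: {before_w/1000:.1f} kW IT load -> ~{annual_kwh(before_w):,.0f} kWh/yr incl. cooling")
print(f"After:  {after_w/1000:.1f} kW IT load -> ~{annual_kwh(after_w):,.0f} kWh/yr incl. cooling")
```

Even with modest assumptions, cutting 40 boxes down to four hosts removes most of the IT load, and the cooling burden falls with it.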

Cool It Down

Regardless of whether new technologies can run more efficiently or help to save space, there’s no denying that the number of overall devices in data centers is ramping upward as enterprises converge their data centers with business practices. But while this alignment can serve to help the business in general, it can cramp the style of managers looking to push down power and cooling costs.

BT Group (www.bt.com), which is building an all-IP network known as the 21st Century Network, requires plenty of power to run its data centers, but it is deploying innovative methods to handle the needs of today’s newer technologies. One of these involves the use of DC power, which BT has chosen over AC to handle its power-hungry data centers.

“BT estimates that this change alone reduces power consumption by 30%,” says Steve O’Donnell, global head of data center and customer experience management at BT. “While the acquisition of switches and servers that run on DC power is more expensive, the savings in power consumption offsets the cost.”
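O’Donnell’s trade-off lends itself to a simple payback calculation. The sketch below takes the 30% reduction from the article and pairs it with an assumed baseline load, electricity rate, and DC equipment premium; those three inputs are illustrative assumptions, not BT figures.

```python
# Rough payback sketch for DC-powered gear: higher purchase cost, lower draw.
# The 30% reduction comes from the article; the baseline load, electricity
# rate, and capex premium are assumptions for illustration.

BASELINE_KW = 100.0        # assumed AC-powered IT load for a small facility
REDUCTION = 0.30           # BT's estimated reduction from moving to DC power
KWH_RATE = 0.10            # assumed electricity cost, $/kWh
DC_PREMIUM = 150_000.0     # assumed extra purchase cost of DC switches and servers, $

annual_savings = BASELINE_KW * REDUCTION * 8760 * KWH_RATE
payback_years = DC_PREMIUM / annual_savings

print(f"Annual energy savings: ~${annual_savings:,.0f}")
print(f"Simple payback on the DC premium: ~{payback_years:.1f} years")
```

Under these assumptions the premium pays for itself within the typical service life of the equipment, which is the offset O’Donnell describes.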

The company also has worked with suppliers to adapt equipment from recirculated-air cooling to fresh-air cooling, which O’Donnell says allows BT’s equipment to run within a range of 5 to 50 degrees Celsius (41 to 122 degrees Fahrenheit), compared with traditional higher ranges. BT has also revamped the physical structure of its data centers to reduce contamination from fresh air.