Too hot to handle: Is your data center cool enough?

How important is proper data center cooling? If your equipment matters to you, it should be critically important. A data center with poor cooling measures can see temperatures rise 30 degrees in just one hour. That’s a problem, as constantly warm temperatures (80 degrees Fahrenheit and up) can damage equipment and shorten its useful lifespan.

Although cooling accounts for around 40 percent of annual data center operating costs, it usually isn’t the first priority when small to mid-size data centers are built. But as computing needs grow and heat production increases, inadequate cooling solutions compromise equipment performance and cause shutdowns. Data center expansion only makes these heat problems worse.

Costly cooling capacity upgrades aren’t the only answer. Data center operators can make small changes that improve the efficiency of the overall IT infrastructure without stretching the budget. In most cases, heat-related problems can be solved by following rack cooling best practices that optimize airflow, prevent downtime, and reduce costs.

Here are four suggestions for keeping your data center cool while driving down energy usage.

1. Optimize airflow to reduce energy use.

The typical server rack exhausts air at 115 degrees Fahrenheit – 30 degrees higher than incoming air. When rack bays and cabling are improperly arranged, hot air recirculates, thwarting your cooling efforts.

You can optimize airflow by arranging racks in a hot-aisle/cold-aisle configuration, where rack fronts face each other across cold aisles and exhausts face each other across hot aisles. This setup keeps cold supply air from mixing with hot exhaust air, so heat is easier to contain and remove; a hot-aisle/cold-aisle layout can cut energy usage by up to 20 percent. Though rearranging a data center for this setup will take time, it will pay off in the long run.

2. Increase efficiency with dedicated cooling.

General facility air conditioning is not enough to cool your data center. Because the general HVAC system’s thermostat is probably mounted on a wall, far from the racks, it’s unlikely to reflect the temperature at the rack level, leaving your equipment vulnerable to temperature spikes. In addition, the large temperature swings that occur when the air conditioning cycles on and off can shorten equipment life.

Optimizing cooling and airflow in a data center can trim power consumption by up to 25 percent, and purpose-built cooling solutions, such as Computer Room Air Conditioning (CRAC) units, are a more efficient way to prevent equipment from overheating.

3. Improve layout to prevent downtime.

Restricted airflow and hot spots, often caused by unmanaged cabling and high-density, high-wattage loads, can cause systems to overheat and fail unexpectedly. Poor cabling installation also increases the amount of time needed to repair problems and fix installer errors.

Clean up ad hoc cabling by using overhead managers, ladders, and troughs, or by switching to extra-wide or extra-deep racks. Data center operators can avoid concentrated areas of heat output by spreading loads throughout the data center and separating components that draw heavy power. And as your data center expands, keep high-draw loads separated when you design the layout of new server infrastructure.

4. Maintain proper temperature.

Colder is not necessarily better: very cold temperatures are unnecessary for a data center and only increase cooling costs. You might be surprised to learn that the ideal temperature for a data center is 77 degrees Fahrenheit, so operating within a safe range of roughly 64 to 80 degrees is the best way to minimize costs and maximize the life of your equipment.
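
If you want a quick sanity check on individual readings, the short sketch below (Python, with thresholds taken from the range above; the reading itself is assumed to come from whatever rack-inlet sensor you already have) classifies a single intake temperature and shows the Celsius equivalent for reference.

```python
# Safe intake window from the range discussed above (degrees Fahrenheit).
# Adjust these thresholds to match your equipment vendors' specifications.
SAFE_MIN_F = 64.0
SAFE_MAX_F = 80.0

def to_celsius(temp_f: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def classify_intake(temp_f: float) -> str:
    """Classify a single rack intake reading against the safe window."""
    if temp_f < SAFE_MIN_F:
        return "below range (likely overcooling and wasting energy)"
    if temp_f > SAFE_MAX_F:
        return "above range (risk of damage and shortened equipment life)"
    return "within range"

# Hypothetical reading from a rack inlet sensor.
reading_f = 77.0
print(f"{reading_f:.1f} F ({to_celsius(reading_f):.1f} C): {classify_intake(reading_f)}")
```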

Stop adjusting the thermostat – do this instead

So your data center might be hot. Now what? The first step is an easy one: take temperature samples. Though you should measure the overall room temperature, the most important data to gather is the air intake temperature at each rack.
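
If you don’t yet have rack-level monitoring, even a simple script can turn those samples into something actionable. The sketch below is illustrative only: it assumes intake readings have already been collected into a CSV file with hypothetical columns timestamp, rack, and intake_f (how you collect them depends on the sensors or environmental probes you have), and it summarizes each rack and flags samples that fall outside the 64 to 80 degree window from the previous section.

```python
import csv
from collections import defaultdict

SAMPLES_FILE = "intake_samples.csv"    # hypothetical log: timestamp,rack,intake_f
SAFE_MIN_F, SAFE_MAX_F = 64.0, 80.0    # safe intake window discussed above

readings = defaultdict(list)           # rack name -> list of intake readings (F)
out_of_range = []

with open(SAMPLES_FILE, newline="") as f:
    for row in csv.DictReader(f):
        temp_f = float(row["intake_f"])
        readings[row["rack"]].append(temp_f)
        if not SAFE_MIN_F <= temp_f <= SAFE_MAX_F:
            out_of_range.append((row["timestamp"], row["rack"], temp_f))

# Report the hottest racks first so problem areas stand out.
for rack, temps in sorted(readings.items(), key=lambda kv: max(kv[1]), reverse=True):
    avg = sum(temps) / len(temps)
    print(f"{rack}: max {max(temps):.1f} F, avg {avg:.1f} F ({len(temps)} samples)")

print(f"{len(out_of_range)} samples outside {SAFE_MIN_F:.0f}-{SAFE_MAX_F:.0f} F")
for timestamp, rack, temp_f in out_of_range[:10]:   # show the first few offenders
    print(f"  {timestamp}  {rack}  {temp_f:.1f} F")
```

Run something like this against a day’s worth of samples and the racks that need attention will be obvious before you ever touch the thermostat.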

Another easy way to reduce energy usage and costs is to replace or remove unnecessary heat sources such as baseboard heaters and incandescent light bulbs. Developing a process of continual improvement and keeping existing policies and procedures up to date will also help you spot weaknesses in your data center’s efficiency and cooling infrastructure.

A full energy and cooling audit is the best way to understand how your data center consumes energy and generates heat. While reconfiguring a data center around a modern cooling solution may seem like an expensive proposition, these changes can cut energy costs and extend equipment lifespans while reducing the likelihood of costly shutdowns and equipment failures.

About the author

Craig Watkins is the Rack & Cooling Solutions Product Manager at Tripp Lite. Craig managed the development of Tripp Lite’s award-winning portable AC unit as well as its popular line of rack enclosures. Most recently, Craig oversaw the expansion of Tripp Lite’s successful line of wall-mount racks and the introduction of the first-ever rack-mounted AC unit.
