The Data Center Temperature Debate

Although never directly articulated by any data center authority, the prevailing practice around these critical facilities has long been “the cooler, the better.” However, some leading server manufacturers and data center efficiency experts share the view that data centers can run much hotter than they do today without sacrificing uptime, with huge savings in both cooling costs and CO2 emissions. One server manufacturer recently announced that its server rack can operate with inlet temperatures of 104 degrees F.

Why push the envelope? Because the cooling infrastructure consumes a lot of energy. Running 24/7/365, it draws considerable electricity to create the optimal computing environment, which can range from 55 to 65 degrees F. (ASHRAE’s current “recommended” range is 18-27 degrees C, or 64.4 to 80.6 degrees F.)

To achieve efficiencies, several influential end users are running their data centers warmer and advising their peers to do the same. But the process is not as simple as turning up the thermostat in your home. Here are some of the key arguments and considerations:

Claim: Raising the server inlet temperature will result in significant energy savings.
Arguments for:
o Sun Microsystems, a leading hardware manufacturer and data center operator, estimates a 4% savings in energy costs for every degree increase in server inlet temperature (Miller, 2007). (A back-of-the-envelope sketch of this arithmetic follows this list.)
o A higher temperature setting can mean more hours of “free cooling” possible through airside or waterside economizers. This information is especially compelling for an area like San Jose, California, where outdoor (dry bulb) air temperatures are 70°F or below for 82% of the year. Depending on geography, annual savings from economizing could exceed six figures.
Counterarguments:
o The cooling infrastructure has certain design setpoints. How do we know that increasing the server inlet temperature will not create a false economy, placing additional and unnecessary strain on other components such as server fans, pumps, or compressors?
o Free cooling, while great for new data centers, is an expensive proposition for existing ones. The entire cooling infrastructure would require re-engineering and could be cost-prohibitive and unnecessarily complex.
o The costs of temperature-related equipment failures or downtime will offset the savings from a higher temperature setpoint.
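To make the 4%-per-degree rule of thumb concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 4% figure comes from the Sun estimate cited above; the annual cooling cost is an assumed, purely illustrative number, and the 62-to-70 degree F change mirrors the example discussed later in this article.

    # Back-of-the-envelope savings estimate based on the ~4% per degree F figure
    # attributed to Sun Microsystems (Miller, 2007). The baseline cooling bill
    # below is a made-up illustration, not a benchmark.

    SAVINGS_PER_DEGREE_F = 0.04      # 4% of cooling energy cost per 1 degree F raise
    annual_cooling_cost = 250_000    # assumed annual cooling spend in dollars (illustrative)

    old_setpoint_f = 62              # example starting setpoint from the article
    new_setpoint_f = 70              # example raised setpoint from the article

    degrees_raised = new_setpoint_f - old_setpoint_f

    # Simple linear reading of the rule of thumb: 4% of the bill per degree raised.
    linear_savings = annual_cooling_cost * SAVINGS_PER_DEGREE_F * degrees_raised

    # A compounding reading (4% of the remaining bill per degree) comes out slightly lower.
    compound_savings = annual_cooling_cost * (1 - (1 - SAVINGS_PER_DEGREE_F) ** degrees_raised)

    print(f"Raising the setpoint by {degrees_raised} F:")
    print(f"  linear estimate:     ${linear_savings:,.0f} per year")
    print(f"  compounded estimate: ${compound_savings:,.0f} per year")

Either reading of the rule lands in the tens of thousands of dollars per year for this assumed bill, which is why the claim draws so much attention.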
Claim: Increased server inlet temperature complicates equipment reliability, recovery, and warranties.
Arguments for:
o Inlet air and exhaust air are often mixed in a data center. Temperatures are kept low to compensate for this mix and keep the server inlet temperature within the ASHRAE recommended range. Raising the temperature could exacerbate already existing hotspots.
o Cool temperatures provide a blanket of cool air in the room, an advantage in the event of a cooling system failure. Staff can have more time to diagnose and repair the problem and, if necessary, properly shut down the equipment.
o For the 104 degree F server, what is the probability that every piece of equipment, from storage to network, is reliable? Would all warranties still be valid at 104 degrees F?
Counterarguments:
o Raising the temperature of the data center is only one part of an efficiency program. Any temperature increase should follow airflow-management best practices: use blanking panels, seal cable cutouts, remove cable obstructions under the raised floor, and implement some form of air containment. These measures effectively reduce the mixing of hot and cold air and allow a practical, safe temperature increase.
o The 104 degree F server is an extreme case meant to encourage thoughtful discussion and critical inquiry among data center operators. After such study, a facility that once operated at 62 degrees F might comfortably operate at 70 degrees F. Changes of that size can significantly improve energy efficiency without compromising equipment availability or warranties.
Claim: Servers are not as fragile and sensitive as you might think. Studies conducted in 2008 underscore the resilience of modern hardware.
Arguments for:
o Microsoft ran servers in a tent in the humid Pacific Northwest from November 2007 to June 2008. They experienced no failures.
o Using an airside economizer, Intel subjected 450 high-density servers to the elements: temperatures of up to 92 degrees F and relative humidity ranging from 4 to 90%. The server failure rate during this experiment was only marginally higher than that in Intel’s enterprise installations.
o Data centers can operate in the 80s and still be ASHRAE compliant. In 2008, ASHRAE raised the upper limit of its recommended temperature range to 80.6 degrees F (up from 77 degrees F).
Counterarguments:
o High temperatures, over time, affect server performance. Server fan speed, for example, will increase in response to higher temperatures. This wear can shorten the life of the device.
o Studies from data center giants like Microsoft and Intel may not be relevant to all companies:
o Their huge data center footprints make them more tolerant of the occasional server failure that can result from excessive heat.
o They can leverage their purchasing power to receive gold-plated warranties that allow for higher temperature settings.
o They will most likely refresh their hardware at a faster rate than other companies. If a server burns out after 3 years, no problem. A smaller business may need that server to last well beyond 3 years.
Claim: Higher inlet temperatures can create uncomfortable working conditions for data center staff and visitors.
Arguments for:
o Consider the 104 degree F rack. Its hot aisle could be anywhere between 130 and 150 degrees F. Even the high end of the ASHRAE recommended range (80.6 degrees F) would result in hot aisle temperatures of around 105 to 110 degrees F. Staff who perform maintenance on these racks would endure very uncomfortable working conditions.
o In response to higher temperatures, server fan speeds will increase to move more air. Faster fans raise the noise level in the data center, and that noise can approach or exceed OSHA sound limits, requiring occupants to wear hearing protection.
Counterarguments:
o It goes without saying that as the server inlet temperature increases, so does the hot aisle temperature. Companies must carefully balance worker comfort and energy efficiency efforts in the data center.
o Not all data center environments have a high volume of users. Some high-performance/supercomputing applications operate in a lights-out environment and contain a homogeneous collection of hardware. These applications are suitable for higher temperature setpoints.
o The definition of data center is more fluid than ever. The traditional brick-and-mortar installation can add instant computing power through a data center container without an expensive construction project. The container, separated from the rest of the building, can operate at higher temperatures and achieve higher efficiency (some close-coupled cooling products work in a similar way).
Recommendations

The movement to raise data center temperatures is gaining ground but will face opposition until these concerns are addressed. Reliability and availability sit at the top of any IT professional’s performance plan. For this reason, most have so far decided to err on the side of caution: keep it cool at all costs. However, higher temperatures and reliability are not mutually exclusive. There are ways to safeguard your data center investments and still become more energy efficient.

Temperature is inseparable from airflow management; data center professionals must understand how air enters, circulates through, and exits the server racks. Computational fluid dynamics (CFD) can help by analyzing and plotting projected airflow across the data center floor, but because cooling equipment does not always perform to spec and the input data can miss key bottlenecks, on-site monitoring and adjustment are critical to ensure that the CFD data and calculations remain accurate.
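As a rough illustration of why on-site measurement matters, the sketch below compares measured rack inlet temperatures against CFD-predicted values and flags racks that drift from the model or exceed the ASHRAE recommended maximum of 80.6 degrees F. The rack names, predictions, readings, and deviation tolerance are hypothetical and stand in for whatever monitoring system is actually in place.

    # Minimal sanity check enabled by on-site monitoring: compare measured rack
    # inlet temperatures with CFD-predicted values and flag racks that deviate
    # noticeably or exceed the ASHRAE recommended maximum.
    # All rack names and readings below are hypothetical.

    ASHRAE_RECOMMENDED_MAX_F = 80.6
    DEVIATION_THRESHOLD_F = 3.0      # assumed tolerance between model and measurement

    cfd_predicted_f = {"rack-a1": 72.0, "rack-a2": 74.5, "rack-b1": 76.0}
    measured_f      = {"rack-a1": 73.1, "rack-a2": 79.8, "rack-b1": 82.4}

    for rack, predicted in cfd_predicted_f.items():
        measured = measured_f[rack]
        deviation = measured - predicted
        flags = []
        if abs(deviation) > DEVIATION_THRESHOLD_F:
            flags.append(f"deviates {deviation:+.1f} F from CFD model")
        if measured > ASHRAE_RECOMMENDED_MAX_F:
            flags.append("above ASHRAE recommended max")
        status = "; ".join(flags) if flags else "ok"
        print(f"{rack}: predicted {predicted:.1f} F, measured {measured:.1f} F -> {status}")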

Overcooled data centers are prime candidates for raising the temperature setpoint. Those with hot spots or insufficient cooling can start with inexpensive remedies like blanking panels and grommets. Close-coupled cooling and containment strategies are especially relevant, since server exhaust air, often the source of thermal challenges, is isolated and prevented from entering the cold aisle.

With airflow addressed, users can focus on finding their “sweet spot” – the ideal temperature setting that aligns with business requirements and improves energy efficiency. Finding it requires proactive measurement and analysis. But the rewards — smaller energy bills, improved carbon footprints, and a corporate responsibility message — are well worth the effort.
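One way to frame that search, sketched here under stated assumptions (the current setpoint, inlet offset, and cooling cost are illustrative; the 4%-per-degree figure is the Sun estimate cited earlier): choose the highest setpoint whose hottest measured inlet still stays within the ASHRAE recommended range, then estimate the payoff.

    # Illustrative "sweet spot" search: given the measured gap between the supply
    # setpoint and the hottest rack inlet (a stand-in for how well airflow is
    # managed), find the highest setpoint that keeps every inlet within the ASHRAE
    # recommended range, then estimate savings with the 4%-per-degree rule of
    # thumb. All numbers are assumptions for illustration only.

    ASHRAE_RECOMMENDED_MAX_F = 80.6
    SAVINGS_PER_DEGREE_F = 0.04

    current_setpoint_f = 62.0
    worst_inlet_offset_f = 6.5       # hottest measured inlet runs this far above the setpoint
    annual_cooling_cost = 250_000    # assumed annual cooling spend in dollars

    # Highest setpoint whose hottest inlet still stays within the recommended range.
    target_setpoint_f = ASHRAE_RECOMMENDED_MAX_F - worst_inlet_offset_f
    degrees_raised = max(0.0, target_setpoint_f - current_setpoint_f)
    estimated_savings = annual_cooling_cost * SAVINGS_PER_DEGREE_F * degrees_raised

    print(f"Candidate setpoint: {target_setpoint_f:.1f} F "
          f"(+{degrees_raised:.1f} F over the current {current_setpoint_f:.1f} F)")
    print(f"Estimated annual savings: ${estimated_savings:,.0f}")

In practice the offset would come from the monitoring described above, and any new setpoint would be approached gradually and re-measured rather than applied in a single jump.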
