1. Keep hot and cold separate. A typical data center has rows and rows of servers, Weihl explained, each taking in chilled air from the front and blowing hot air out the back. Arranging the servers so that fronts face fronts and backs face backs creates alternating cold and hot aisles. The aisles are then sealed off, often with a plastic roof over the server rows and heavy plastic curtains, like those used in meat lockers, at each end to allow access. Keeping the cold air from mixing with the hot exhaust lowers cooling costs.
2. Turn up the thermostat. Because typical data centers don't have good control over airflow, they need to keep thermostat settings at 70 degrees Fahrenheit or lower, said Weihl. Google runs its centers at 80 degrees, and suggests they can go higher. "Look at the rated inlet temperature for your hardware. If the server can handle 90 degrees then turn the heat up to 85, even 88 degrees," he counseled.
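For readers who want to turn that advice into a rule of thumb, here is a minimal sketch in Python, assuming a simple fixed safety margin below the rated inlet temperature; the function name, the margin, and the example figures are illustrative, not Google's.

    def suggested_setpoint_f(rated_inlet_f, margin_f=2.0):
        """Return a cold-aisle thermostat setting a safety margin below the
        hardware's rated inlet temperature, in degrees Fahrenheit."""
        return rated_inlet_f - margin_f

    # A hypothetical server rated for 90-degree inlet air: run the aisle near 88.
    print(suggested_setpoint_f(90))       # 88.0
    print(suggested_setpoint_f(90, 5.0))  # 85.0, a more conservative margin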
3. Give your chillers a rest. This means using fresh air to cool servers whenever possible, and relying on evaporative cooling towers, which shed heat through water evaporation, much the way perspiration removes heat from the human body.
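As a rough illustration of how much chiller time outside air can save, the sketch below counts the hours in a day of hypothetical outdoor readings that fall at or below the 80-degree setpoint mentioned above; the readings and the simple threshold test are assumptions for illustration, not a Google method.

    def free_cooling_hours(outdoor_temps_f, setpoint_f=80.0):
        """Count hourly outdoor readings cool enough that outside air or an
        evaporative cooling tower can hold the setpoint without the chillers."""
        return sum(1 for t in outdoor_temps_f if t <= setpoint_f)

    # Invented hourly readings for one mild day.
    day = [62, 60, 59, 58, 58, 60, 65, 70, 74, 78, 81, 84,
           86, 87, 85, 83, 80, 77, 73, 70, 68, 66, 64, 63]
    print(free_cooling_hours(day))  # 18 of the 24 hours need no chiller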
There"s more. Weihl counseled to "know your PUE," or power usage effectiveness, a metric used to determine the energy efficiency of a data center. (PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it.) While typical data center PUEs range from 2.0 to 3.0, Google"s run around 1.2. Said Weihl: "A PUE of 1.5 or less should be achievable in most facilities."