Computer cooling

Cooling is important when running a data center. As new computer hardware becomes smaller, it uses the same amount of electricity or more, so it makes more heat. The main goal of cooling is to move this waste heat away from the hardware.[1]

This can be managed by using sensors. The sensors measure the temperature so it can be kept under control. Ways to manage heat include air conditioning and water cooling.[2]
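
As a simple illustration, a monitoring system can compare each sensor reading with a target value and turn the cooling up or down. The short Python sketch below is only an example: the target values and the read_sensor and set_fan_speed functions are invented for this illustration and are not part of any real product.

# Minimal sketch of sensor-based temperature control (hypothetical functions).
TARGET_C = 24.0       # keep the inlet air at or below this temperature (example value)
HYSTERESIS_C = 2.0    # avoid switching the fans up and down too often

def control_step(read_sensor, set_fan_speed, current_speed):
    """Read one temperature sensor and adjust the fan speed."""
    temperature_c = read_sensor()          # e.g. inlet air temperature in degrees C
    if temperature_c > TARGET_C:
        current_speed = min(1.0, current_speed + 0.1)   # more cooling
    elif temperature_c < TARGET_C - HYSTERESIS_C:
        current_speed = max(0.2, current_speed - 0.1)   # less cooling, save energy
    set_fan_speed(current_speed)
    return current_speed

# Example call with dummy sensor and fan functions:
# control_step(lambda: 27.5, print, 0.5)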

Techniques

The aim of cooling is to transfer heat away from the data center. There are two main ways to remove this heat: using air or using a liquid (typically water).

Air Cooling

Heat flows from warmer objects to cooler ones. Using this, fans and other cooling parts can move heat away from the hardware. Some hardware makers take this into account when designing IT equipment.[3]

Room-based cooling, as an air cooling system, uses the most classical refrigerating technique: coolers push cold air into the room and extract warm air to the outside. The conditioners not only cool the air but also mix it to avoid hot spots. This approach is efficient if the power used for mixing the air is a small fraction of the total load of the data center. Simulations show that it works for relatively low power load densities (320–750). Problems arise when hot and cold air mix too much, because the separation between them is lost.[4]
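
To see why mixing matters, it helps to estimate how much air a room-based system has to move. The Python sketch below uses the standard sensible-heat formula Q = m × c_p × ΔT with rough textbook values for air; the 10 kW load and the 10 °C temperature difference are only example numbers, not figures from this article.

# Rough estimate of the airflow needed to remove a given heat load with air.
heat_load_w = 10_000          # example IT load: 10 kW
delta_t = 10.0                # example temperature rise between supply and return air, in degrees C
cp_air = 1005.0               # J/(kg*K), specific heat of air (approximate)
rho_air = 1.2                 # kg/m^3, density of air (approximate)

mass_flow = heat_load_w / (cp_air * delta_t)     # about 1.0 kg/s
volume_flow = mass_flow / rho_air                # about 0.83 m^3/s
print(f"air needed: {mass_flow:.2f} kg/s, about {volume_flow:.2f} m^3/s")

If hot and cold air mix, the usable temperature difference shrinks and the fans must move even more air, which is why the power spent on mixing must stay a small fraction of the total load.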

Hot/Cold Aisle Configuration

Hot/cold aisle is a layout for racks of servers and other IT equipment inside a data center, where the rack fronts of one row face the rack fronts of the next row. When the cold aisle containment approach is used, row doors, aisle ceilings or overhead vertical walls are installed to keep the cold air coming from the cooling systems inside the cold aisles. In hot aisle containment it is the opposite: the hot air is contained so that the CRAC (computer room air conditioner) units receive only hot air.[5]

The aim of this layout is to use less energy and lower the cost of cooling by controlling the air flow.

Blanking Panels

If racks have empty space that is not used, hot exhaust air can flow back through the gaps and mix with the cold air in the data center's airflow. Blanking panels fill this unused space and stop the hot air from getting back into the data center's airflow.

Rack Placement

The top of the rack is usually the place where the most heat builds up. Careful placement of the rack components can therefore reduce hot spots. The rule is to order the components so that the most heavily loaded equipment sits in the lower part of the rack. Because this equipment moves the most air, placing it low means less heat spreads to the top of the rack, which gives better cooling.
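
This rule of thumb can be written down as a tiny sorting step: put the highest-power devices in the lowest rack units. The Python sketch below is only an illustration; the device list and the assign_rack_units helper are invented for this example.

# Place the most power-hungry devices at the bottom of the rack (hypothetical data).
devices = [
    {"name": "switch",   "power_w": 150},
    {"name": "storage",  "power_w": 450},
    {"name": "server-a", "power_w": 700},
    {"name": "server-b", "power_w": 650},
]

def assign_rack_units(devices):
    """Return devices ordered bottom-to-top: highest power first."""
    return sorted(devices, key=lambda d: d["power_w"], reverse=True)

for unit, device in enumerate(assign_rack_units(devices), start=1):
    print(f"rack unit {unit} (from the bottom): {device['name']} ({device['power_w']} W)")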

Free Cooling

The free cooling method has two main forms: air-side economization and water-side economization. The first uses air from the outside environment to control and adjust the temperature of the IT equipment. This technique is cheap to run, since it simply uses outside air, but it has drawbacks: it can bring humidity and moisture into the data center.

In a similar way, water-side economization also makes use of outdoor air, usually together with evaporation, to cool the water that is then used for cooling.
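
A very simplified way to picture air-side economization is a check on the outdoor conditions: if the outside air is cool and dry enough, use it; otherwise fall back to mechanical cooling. The limits in the Python sketch below are made-up example values, not recommendations from this article.

# Simplified air-side economizer decision (example thresholds only).
MAX_OUTDOOR_TEMP_C = 18.0      # outside air must be cooler than the wanted supply air
MAX_OUTDOOR_RH_PERCENT = 60.0  # avoid bringing too much moisture into the data center

def choose_cooling_mode(outdoor_temp_c, outdoor_rh_percent):
    if outdoor_temp_c <= MAX_OUTDOOR_TEMP_C and outdoor_rh_percent <= MAX_OUTDOOR_RH_PERCENT:
        return "free cooling (outside air)"
    return "mechanical cooling"

print(choose_cooling_mode(12.0, 40.0))   # -> free cooling (outside air)
print(choose_cooling_mode(28.0, 40.0))   # -> mechanical cooling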

Liquid Cooling

Liquid cooling can improve the efficiency of a data center because it gives more targeted cooling. For example, chilled water can be sent directly to a rack of servers, which is useful when cooling is needed in specific places. Instead of trying to balance a desirable temperature across an entire room, the cooling starts exactly where it is needed: at a particular rack or cabinet. On the other hand, liquid-based systems inside a data center have some drawbacks. Possible leaks are a serious threat to the IT equipment, moving liquid around can cause condensation, and the chilled liquid must be stored somewhere, so more equipment and a suitable framework are needed inside the data center. For these reasons, although liquid-based systems are more reliable and effective, they are more expensive than air cooling techniques and slightly harder to implement.[3]

Single Phase Cooling

In this liquid-based setup, the IT equipment is cooled by water or other cooling liquids pumped at server or rack level: cold plates are placed as close as possible to the heat-generating components. This technology uses the higher heat transfer capacity of liquids, which saves energy; single-phase cooling systems can reduce, and sometimes remove, the need for CRAC units.
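
The "higher heat transfer capacity" can be made concrete with the sensible-heat formula. The Python sketch below compares how much water and how much air must flow to carry away the same example load; the numbers are rough textbook values, not figures from this article.

# Compare the mass flow of water vs. air for the same heat load (rough values).
heat_load_w = 10_000     # example load: 10 kW
delta_t = 10.0           # example temperature rise of the coolant, in degrees C
cp_water = 4186.0        # J/(kg*K), specific heat of water (approximate)
cp_air = 1005.0          # J/(kg*K), specific heat of air (approximate)

water_flow = heat_load_w / (cp_water * delta_t)   # about 0.24 kg/s
air_flow = heat_load_w / (cp_air * delta_t)       # about 1.0 kg/s
print(f"water: {water_flow:.2f} kg/s, air: {air_flow:.2f} kg/s")

Per kilogram, water carries roughly four times as much heat as air for the same temperature rise, which is why cold plates close to the chips can be so effective.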

One of the most important disadvantages is that they need high pumping power. Other side effects are microbe growth, the electrical conductivity of the liquids, and erosion caused by high fluid velocities.

As the surface to be cooled gets larger, it becomes harder to keep the temperature uniform (isothermal).[6]

The main advantage of an on-chip cooling approach is that the liquid leaves the components as a direct warm stream, so the energy is easy to capture and the waste heat can be recovered.

One of the pioneers of this technology has been IBM, with the Aquasar supercomputer, in 2009.[7]

Two-phase Cooled Systems

Two-phase cooling means using devices that turn at least part of the cooling fluid into vapour when it touches warm IT components. The phase change allows a much more efficient heat exchange for the same mass of cooling medium: to remove a fixed amount of heat, two-phase cooling needs less refrigerating liquid than single-phase cooling.

It was shown[8] that multiphase systems can use about 4 times lower mass-flow rates, 10 times less pumping power and half the facility size.
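
The saving comes from the latent heat of vaporization: evaporating a kilogram of fluid absorbs far more heat than warming it by a few degrees. The Python sketch below illustrates this with water as the working fluid; real two-phase systems use other refrigerants and only partly evaporate the liquid, so the numbers are only an order-of-magnitude illustration.

# Latent vs. sensible heat for the same 10 kW load (water, approximate values).
heat_load_w = 10_000
cp_water = 4186.0          # J/(kg*K), sensible heating of liquid water
delta_t = 10.0             # example temperature rise in a single-phase loop, in degrees C
h_fg_water = 2.26e6        # J/kg, latent heat of vaporization of water (approximate)

single_phase_flow = heat_load_w / (cp_water * delta_t)   # about 0.24 kg/s
two_phase_flow = heat_load_w / h_fg_water                # about 0.004 kg/s
print(f"single-phase: {single_phase_flow:.3f} kg/s, two-phase: {two_phase_flow:.3f} kg/s")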

The core of the evaporator is a mini- or microchannel heat sink, which maximizes the plate surface; minichannels have a pitch of 2 to 8 mm, while microchannels have a pitch of less than 1 mm. Microchannels transfer heat better because of their larger surface, but they are more easily clogged, corroded and eroded.[6]

A two-phase cooling loop has a few key components: the evaporator, the condenser, the accumulator and the pump. A pre-heater is often placed before the evaporator to bring the fluid temperature close to the saturation point.

Cooling System Alternatives

There are many options available to facility or IT experts when choosing cooling solutions. These options differ according to the size and the particular type of solution needed.

Airflow direction

Large floor-mounted systems move air in a downward direction, in an upward direction, or, in some cases, horizontally.[1]

Fire, smoke, and water detection devices

These devices are best used in combination with IT monitoring and building management systems for quicker notification. They also provide early warning or automatic shut off during undesirable events.

Humidifiers

Humidifiers increase humidity and can play an important role by preventing problems caused by static electrical discharge. They are usually placed inside precision cooling devices to replace water vapor that is lost during the cooling process. As long as the room has no existing high or low humidity problems, humidifiers can be used in all computer room air conditioners and air handlers.

Reheat systems

Reheat systems add heat to the conditioned cold air coming out of a precision cooling device. This lets the whole system provide extra dehumidification of the room air. They are best used in rooms without vapor barriers and in humid climates.

Economizer coils

In this approach, a glycol solution (ethylene glycol) is used to cool the IT environment, in a way that is similar to a chilled water system. The advantage is that it can give a surprisingly large reduction in operating costs.

Insights

Data center technology is complex and advanced, and because of this it needs huge amounts of energy. Data centers therefore have to be as efficient as possible, and environmental control plays an important part in this.

The recommended temperature for a data center is between 21°C and 24°C. In addition, some studies of data center temperatures suggest that it may not be cost-effective for firms to keep the temperature below 21°C.[9] Typical limit temperatures have been estimated at 85°C for processors and DIMMs, while disk drives must work in stricter conditions, with a maximum of about 45°C.
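
Using the temperature limits mentioned above, a monitoring script can flag components that get too hot. The Python sketch below is a minimal example; the limits for processors, DIMMs and disks come from the figures in this section, while the sensor readings are invented.

# Check example sensor readings against the limits given above.
LIMITS_C = {"cpu": 85.0, "dimm": 85.0, "disk": 45.0}

readings_c = {"cpu": 78.0, "dimm": 62.0, "disk": 47.5}   # example readings only

for component, temperature in readings_c.items():
    limit = LIMITS_C[component]
    status = "OK" if temperature <= limit else "TOO HOT"
    print(f"{component}: {temperature:.1f} C (limit {limit:.0f} C) -> {status}")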

It is also remarkable that, while conventional heating, ventilation and air conditioning (HVAC) systems for rooms of a similar size deal with heat fluxes in the order of 40–90,[10] cooling systems in data centers work with much higher heat loads: modern data center infrastructures may need to cool equipment with a power density of 6–10.[11] In the next few years, the power density is estimated to rise further, up to 15.[12]

Google, for example, stresses the importance of improving energy efficiency, whether running a small data center or a huge service. According to Google, many manufacturers build equipment that can be used at temperatures higher than the standard 21°C mentioned above. Next generation servers may therefore be able to run at higher temperatures, which could let data centers use less cooling equipment and save money. For effective management and control, data centers should be able to perform accurate temperature analysis by measuring the amount of energy spent on cooling. Even so, the best solution may be to consult the equipment manufacturer, whose expertise can point out the most suitable approach.

Many experts point out that data centers could perform better if they adopted a mixture of cooling methods. Another important factor for good performance is the location of the data center. Building data centers in places with cooler climates, or close to sources of cold water, can greatly increase energy efficiency and also reduce costs by making use of the external environment.[9]

References

  1. "The Different Technology for Cooling Data Centers" (PDF).
  2. "Raritan Blog".
  3. "Basics of Data Center Cooling". The Data Center Journal.
  4. "A review of data center cooling technology, operating conditions and the corresponding low-grade waste heat recovery opportunities" (PDF).
  5. "Approaches to Data Center Containment". datacenterknowledge.com. 8 November 2012.
  6. "Webinar on pumped two-phase cooling". 1-act.com. Archived from the original on 2020-08-06. Retrieved 2018-06-29.
  7. Ganapati, Priya. "Wired.com on IBM Supercomputer". Wired.
  8. Hannemann R, Marsala J, Pitasi M. "Pumped liquid multiphase cooling".
  9. "A beginner's guide to data center cooling systems".
  10. Rambo J, Joshi Y. "Modeling of data center airflow and heat transfer: state of the art and future trends". Distrib Parallel Databases 2007; 21:193–225.
  11. Rasmussen N (2005). "Guidelines for specification of data center power density". White Paper No. 120.
  12. Little AB, Garimella S. "Waste heat recovery in data centers using sorption systems". J Therm Sci Eng Appl 2012; 4(2): 021007.