Temperature chaining

Temperature chaining is also referred to as thermal or energy chaining, or thermal cascading.[1]

Temperature chaining was introduced as a concept at Datacentre Transformation[2] in Manchester by the company Asperitas[3] as part of a vision on a Datacentre of the Future.[4] It is a method of transforming the electrical energy consumed in datacentres into usable heat. The concept is based on creating large temperature differences in a water-based cooling circuit in a datacentre. The premise is that every system in a datacentre can be connected to a shared water infrastructure which is divided into multiple stages with different temperatures. The different temperatures are achieved by arranging liquid cooling technologies with different temperature tolerances in a serial cooling setup, as opposed to a single parallel circuit. This creates a large temperature difference with a low water volume and results in a datacentre environment which is capable of supplying water at a constant temperature to a re-user, thus transforming the facility from an electrical energy consumer into a thermal energy producer.
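The effect of serial versus parallel routing on flow and return temperature can be illustrated with the basic energy balance Q = ṁ·c_p·ΔT. The sketch below uses illustrative stage loads and a 6 K per-stage temperature rise; the numbers are assumptions for demonstration, not figures from the cited sources.

```python
# Illustrative comparison of a parallel cooling circuit and a serial
# (temperature-chained) circuit. All loads and temperatures are assumptions
# chosen for demonstration, not figures from the cited sources.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def flow_for_load(load_w, delta_t):
    """Mass flow (kg/s) needed to absorb load_w watts at a given delta-T (K)."""
    return load_w / (CP_WATER * delta_t)

stage_loads = [100e3, 100e3, 100e3]  # three cooling stages, 100 kW each
supply_temp = 18.0                   # facility supply water, °C
per_stage_dt = 6.0                   # temperature rise per stage, K

# Parallel: every stage receives supply water and warms it by per_stage_dt.
parallel_flow = sum(flow_for_load(q, per_stage_dt) for q in stage_loads)
parallel_return = supply_temp + per_stage_dt

# Serial: one flow passes through all stages, so the temperature rises add up.
serial_flow = flow_for_load(stage_loads[0], per_stage_dt)
serial_return = supply_temp + sum(q / (CP_WATER * serial_flow) for q in stage_loads)

print(f"Parallel: {parallel_flow:.1f} kg/s total, return at {parallel_return:.1f} °C")
print(f"Serial:   {serial_flow:.1f} kg/s total, return at {serial_return:.1f} °C")
```

With these assumed values, the serial chain needs roughly a third of the water flow and returns water around 36 °C instead of 24 °C.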

History

Temperature or energy chaining is applied in heating systems where hydraulic designs allow for return loops and serial heaters.[5]

The temperature chaining principle is also used in refrigeration systems which adopt cascading circuits.[6][7]

The Amsterdam Economic Board has presented the fourth generation of district heating networks, which will adopt thermal cascading to increase flexibility and to make the district networks future-proof.[8]

Within datacentres, the traditional approach treats the critical IT load as something to be cooled. Temperature chaining works on the premise that the IT is a heat source. To harvest this heat, liquid cooling is used, which allows hydraulic heating designs[5] to be applied to the datacentre.

Liquid cooling infrastructure in datacentres

Introducing water into the datacentre whitespace is most beneficial within a purpose-built set-up. This means that the design of the datacentre must focus on absorbing all the thermal energy with water. This calls for a hybrid environment in which different liquid-based technologies coexist to allow for the full range of datacentre and platform services, regardless of the type of datacentre.

The adoption of liquid-cooled IT in datacentres allows for more effective utilisation or a reduction of the datacentre footprint: an existing facility can be better utilised to house more IT.

The higher heat capacity of liquids allows for denser IT environments and higher IT capacity. With most liquid technologies, the IT itself also becomes more efficient, owing to the reduced or eliminated dependence on air handling within the IT chassis. Individual components are cooled more effectively and can therefore be operated at higher power and closer together. When liquid penetrates the IT space, internal fans are reduced or eliminated entirely, which saves energy and also reduces the emergency power requirements within the facility.
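The heat-capacity argument can be made concrete with a rough back-of-the-envelope comparison, using approximate textbook values at about 20 °C (an illustration, not data from the cited sources): per unit volume and per degree of temperature rise, water absorbs roughly 3,500 times more heat than air.

```python
# Rough comparison of volumetric heat capacity, water vs. air, using
# approximate textbook values at ~20 °C (illustration only).

rho_water, cp_water = 998.0, 4186.0  # density kg/m^3, specific heat J/(kg*K)
rho_air,   cp_air   = 1.2,   1005.0  # density kg/m^3, specific heat J/(kg*K)

vol_cap_water = rho_water * cp_water  # J per m^3 per K
vol_cap_air   = rho_air * cp_air      # J per m^3 per K

print(f"Water: {vol_cap_water / 1e3:.0f} kJ/(m^3*K)")
print(f"Air:   {vol_cap_air / 1e3:.1f} kJ/(m^3*K)")
print(f"Water absorbs roughly {vol_cap_water / vol_cap_air:.0f}x more heat "
      f"per unit volume and per kelvin than air")
```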

Liquid datacentre technologies

Liquid cooling technologies can be roughly divided into four categories: cooling at the room level, the rack level, the chip level, and immersion.

Computer Room Air Conditioning units or Air Handlers (CRAC/CRAH) can be water-cooled.

Indirect Liquid Cooling (ILC)[9] involves water-cooled racks with (active) rear-door or in-row heat exchangers. The advantage of active rear doors is that all the heat from air-cooled IT is absorbed by the water circuit as soon as it leaves the rack, which eliminates the need for CRACs, even in partial ILC implementations. This makes the cooling system very efficient, and provides a limited efficiency gain on the IT itself by assisting ventilation.

Direct Liquid Cooling (DLC)[10] cools parts of the IT with purpose-built coolers which combine cold plates and pumps, mounted directly onto the chips instead of a traditional heat sink. This improves energy efficiency on the IT side due to the reduced fan energy. Although the water circuit captures the heat from the largest heat sources inside the chassis, this approach may still require CRAC units or a combination with ILC to reject the thermal energy from the remaining IT components.

Total Liquid Cooling (TLC)[11] completely immerses the IT components in liquid. There is hardly any energy loss and the IT equipment becomes very energy efficient, as no energy is spent on fans inside the IT. Since water conducts electricity, an intermediate dielectric liquid is required, with heat transferred by forced or natural convection. This dielectric can be oil-based or a synthetic fluid. The infrastructure and power advantages are maximised with this approach and the energy footprint is fully optimised.

Since no single solution fits all workloads, each part of a platform should be set up with the technology that is optimal for it, resulting in a mix of optimised technologies. For example, storage environments are the least suitable for direct liquid cooling due to their low heat production and their common dependency on moving parts; these can be set up in water-cooled racks. High volumes of servers which require the least maintenance are best positioned in a Total Liquid Cooling environment. Varying specialised server systems which require constant physical access are best situated in Direct Liquid Cooled environments.

A prerequisite for each technology, before it can be applied in a temperature chaining scenario, is a level of control (by PLC) over its own cooling infrastructure, as well as compatibility in terms of fittings and liquids.

Temperature chaining

Example of temperature chaining in a datacentre

By adopting a hybrid model, systems can be connected to different parts of a cooling circuit operating at different temperatures. Each liquid technology has different temperature tolerances, and especially where the liquid penetrates the chassis, the stability of temperatures becomes less of a concern. Different technologies can therefore be set up in an optimised order of tolerance to allow a multi-step increase in temperature within the cooling circuit.

This means that the water infrastructure becomes segmented. Instead of feeding each cooling setup from a parallel infrastructure, the inlets of different technologies or different parts of the infrastructure are connected to the return circuit of another part of the infrastructure. In essence, the output of a liquid cooled rack is not routed to a cooling installation, but to a different type of liquid cooling environment. By chaining the segmented liquid circuits in larger environments, very high return temperatures can be achieved, which enables the practical and effective reuse of thermal energy and decreases the investment needed to make large scale heat reuse viable.
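A minimal sketch of such a chained circuit (hypothetical segment names, heat loads and flow rate, assuming no mixing losses between segments) shows how the water temperature rises in steps as each segment's return feeds the next segment's inlet:

```python
# Minimal model of a temperature chain: the return of each segment feeds the
# inlet of the next. Segment names, heat loads and flow rate are illustrative
# assumptions, not values from the cited sources.

CP = 4186.0  # specific heat of water, J/(kg*K)

def chain_outlets(inlet_c, segments, flow_kg_s):
    """Propagate one water flow through serial segments; return outlet temps."""
    outlets, t = [], inlet_c
    for name, load_w in segments:
        t += load_w / (CP * flow_kg_s)  # temperature rise across this segment
        outlets.append((name, round(t, 1)))
    return outlets

# Hypothetical ordering: each downstream segment must tolerate the warmer inlet
# (compare the tolerance table below).
segments = [("ILC racks", 80e3), ("DLC racks", 120e3), ("TLC modules", 150e3)]
for name, outlet_c in chain_outlets(inlet_c=20.0, segments=segments, flow_kg_s=5.0):
    print(f"{name}: water leaves at {outlet_c} °C")
```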

The different liquid technologies can be applied at different temperature levels. There is a difference between normal optimised environments and more “extreme” environments where the solutions and IT equipment are specialised for, or more tolerant of, high temperature operation.

Example of temperature tolerances for different technologies

Technology      | Inlet range (normal) | Inlet range (extreme) | Outlet range (normal) | Outlet range (extreme) | Maximum delta/rack
CRAC (generic)  | 6-18 °C              | 21 °C                 | 12-25 °C              | 30 °C                  | N/A
ILC (U-Systems) | 18-23 °C             | 28 °C                 | 23-28 °C              | 32 °C                  | 12 °C
DLC (Asetek)    | 18-45 °C             | 45 °C                 | 24-55 °C              | 65 °C                  | 15 °C
TLC (Asperitas) | 18-40 °C             | 55 °C                 | 22-48 °C              | 65 °C                  | 10 °C
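The tolerance figures can also serve as the basis for the control prerequisite mentioned above: before routing one segment's return into another segment's inlet, a controller can check that inlet and delta-T limits are respected. The following is a hypothetical sketch using the normal ranges from the table, not an actual PLC implementation.

```python
# Hypothetical check of a proposed temperature chain against the normal inlet
# and delta-T limits from the table above (values in °C / K).

INLET_MAX = {"ILC": 23, "DLC": 45, "TLC": 40}   # upper end of normal inlet range
DELTA_MAX = {"ILC": 12, "DLC": 15, "TLC": 10}   # maximum delta per rack

def validate_chain(supply_c, stages):
    """stages: list of (technology, expected delta-T in K). Returns warnings."""
    warnings, t = [], supply_c
    for tech, dt in stages:
        if t > INLET_MAX[tech]:
            warnings.append(f"{tech}: inlet {t:.1f} °C exceeds {INLET_MAX[tech]} °C")
        if dt > DELTA_MAX[tech]:
            warnings.append(f"{tech}: delta-T {dt} K exceeds {DELTA_MAX[tech]} K")
        t += dt  # this segment's outlet becomes the next segment's inlet
    return warnings or ["chain stays within normal tolerances"]

print(validate_chain(18.0, [("ILC", 5), ("DLC", 10), ("TLC", 8)]))
```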

Liquid Temperature Chaining can be implemented by adopting intermediate cooling circuits with different temperature ranges. Segmented environments can be connected with supply and return loops, mixing valves and buffer tanks to stabilise and optimise the return temperatures and volumes of each individual segment.

A major advantage of this strategy is that the temperature difference (dT) within a cooling circuit can be drastically increased. This reduces the volume of liquid required in a facility and reduces the overhead of cooling installations.

After all, it is much more efficient to remove heat as a large dT in a small volume of water than as a small dT in a large volume of water.
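This follows from the standard energy balance for a water loop (a general relation, not specific to the cited sources): for a fixed heat load Q, the required volumetric flow is inversely proportional to the temperature difference. The example figures below are illustrative assumptions.

```latex
% Required volumetric flow for heat load Q at temperature difference \Delta T
% (standard energy balance; example numbers are illustrative assumptions)
\dot{V} = \frac{Q}{\rho \, c_p \, \Delta T}
\qquad\text{e.g. } Q = 1\,\mathrm{MW}:\;
\Delta T = 6\,\mathrm{K} \Rightarrow \dot{V} \approx 40\,\mathrm{l/s},\quad
\Delta T = 24\,\mathrm{K} \Rightarrow \dot{V} \approx 10\,\mathrm{l/s}
```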

Heat reuse infrastructure example

This example only provides insight into optimised liquid infrastructures, to explain the concept of Temperature Chaining and how different liquid technologies can fit into it. For simplicity, redundancy scenarios are not outlined, and return loops, buffer tanks and intermediate pumps to deal with volumetric and pressure aspects within the different stages are not detailed.

The open circuit heat reuse infrastructure is by far the most sustainable infrastructure. In this situation, the datacentre receives water of a certain temperature and all the heat generated by the IT equipment is delivered to another user through this water circuit. The facility rejects not only the heat but also the water which carries it, allowing an external party to transport and use the warmed-up liquid. As a result, no cooling installations are required and the datacentre effectively acts like a large water heater: water flows into the datacentre and comes out at a high temperature.

The ILC racks in this setup effectively function as air handlers which maintain the room temperature and absorb the thermal energy leakage from the DLC and TLC environments.

Temperature chaining concept for heat reuse

Micro infrastructure example

Micro datacentre temperature chaining for reuse

In smaller footprints, temperature chaining can be achieved by creating a small water circuit with a mixing valve and buffer tank. This allows the output of the liquid installation to be routed back to the cooling input, gradually increasing the temperature of the cooling circuit and achieving a constant high output temperature. Although this is not the multi-stage approach, it is a common and well-proven practice for achieving constant input or output temperatures.

The advantage of this approach is its compatibility with the variable input temperatures which are common with dry cooling installations.
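A sketch of the micro circuit described above, with all values as illustrative assumptions: a three-way mixing valve recirculates part of the warm return into the variable cold supply from a dry cooler, so the IT loop sees a stable inlet and delivers a constant output temperature.

```python
# Hypothetical micro-circuit sketch: a three-way mixing valve blends warm return
# water back into a variable cold supply (e.g. from a dry cooler) so that the
# IT inlet stays at its set point and the output temperature stays constant.
# Flow, load and temperatures are illustrative assumptions.

CP, FLOW = 4186.0, 2.0   # J/(kg*K), loop mass flow in kg/s
IT_LOAD = 60e3           # heat from the liquid-cooled IT, W
TARGET_INLET = 35.0      # desired IT inlet temperature, °C

def mix_fraction(supply_c, return_c, target_c):
    """Fraction of return water to recirculate to reach the target inlet."""
    if return_c <= supply_c:
        return 0.0
    return min(max((target_c - supply_c) / (return_c - supply_c), 0.0), 1.0)

return_c = TARGET_INLET + IT_LOAD / (CP * FLOW)  # steady-state return temperature
for supply_c in (12.0, 20.0, 28.0):              # variable dry-cooler supply
    f = mix_fraction(supply_c, return_c, TARGET_INLET)
    inlet_c = f * return_c + (1 - f) * supply_c
    print(f"supply {supply_c:4.1f} °C -> recirculate {f:.0%}, "
          f"IT inlet {inlet_c:.1f} °C, output {return_c:.1f} °C")
```

In this sketch the output temperature stays at roughly 42 °C for any of the assumed supply temperatures; only the recirculated fraction changes.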

References

  1. ^ "Cascaderen – DatacenterWorks". datacenterworks.nl (in Dutch). Retrieved 2018-02-12.
  2. ^ Angel Business Communications. "Datacentre Transformation Manchester". www.dtmanchester.com. Retrieved 2017-07-25.
  3. ^ "Asperitas". asperitas.com. Retrieved 2017-07-25.
  4. ^ "The datacentre of the future by Asperitas – Asperitas". asperitas.com. Retrieved 2017-07-25.
  5. ^ a b "Hydraulics in building systems". Siemens. 2017-07-04.
  6. ^ US 3733845, Lieberman, Daniel, "CASCADED MULTICIRCUIT, MULTIREFRIGERANT REFRIGERATION SYSTEM", published May 22, 1973 
  7. ^ US 7765827, Schlom, Leslie A. & Becwar, Andrew J., "Multi-stage hybrid evaporative cooling system", published August 3, 2010 
  8. ^ Amsterdam Economic Board (2016-02-22). "Presentatie 4th Generation Thermal Networks and Thermal Cascading".
  9. ^ "ColdLogik Rear of Cabinet Cooling Solution | USystems". www.usystems.co.uk. Retrieved 2017-07-25.
  10. ^ "Data Center, Server, and PC Liquid Cooling - Asetek". www.asetek.com. Retrieved 2017-07-25.
  11. ^ "AIC24 – Asperitas". asperitas.com. Retrieved 2017-07-25.