CP Manual v6.6 - Part 2

This document discusses trends and technologies related to data center cooling. It covers topics like continuous cooling, free cooling approaches using air-side or water-side economization, thermal storage, and the use of outside air for "ventilation-only" cooling. It provides examples of leading organizations that employ innovative cooling strategies like direct evaporative cooling with no mechanical backup. The document aims to educate about best practices for minimizing mechanical cooling through free cooling techniques.

Module 4

Trends & Technologies

v6.6 1
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

2
Hidden Risk with High Density

Rate of Temperature Change

• ASHRAE TC9.9 2015


– 5°C/h (9°F/h) for data centres employing tape drives and 20°C/h
(36°F/h) for data centres employing disk drives
– No more than 5°C variation in any 15 minute period

• Issues with Thermal Runaway


– Equipment produces heat during UPS autonomy, cooling awaits
generator start
– Temperature rise moderated by thermal inertia
• Fabric (walls, ceiling, floor)
• Cabinets and other metal

3
Temperature rise example, no inertia

A 7.5°C rise already exceeds the ASHRAE limit

8 kW per rack gives roughly 0.4°C of temperature rise per second
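The 0.4°C/s figure can be sanity-checked with a quick sketch; the ~17 m³ of room air per rack below is an assumed illustrative volume (not a value from this manual), chosen to show how little air mass is available once thermal inertia is ignored.

```python
# Minimal sketch: rate of temperature rise when cooling stops and the heat has
# nowhere to go but the room air. The 17 m^3 of air per rack is an assumed
# illustrative volume, not a figure from this manual.

AIR_DENSITY = 1.2    # kg/m^3 at ~20 deg C
AIR_CP = 1005.0      # J/(kg*K), specific heat of air

def rise_rate_c_per_s(heat_w: float, air_volume_m3: float) -> float:
    """Temperature rise rate (deg C/s) with no thermal inertia from walls or metal."""
    air_mass = AIR_DENSITY * air_volume_m3      # kg of air absorbing the heat
    return heat_w / (air_mass * AIR_CP)         # dT/dt = Q / (m * cp)

rate = rise_rate_c_per_s(heat_w=8000, air_volume_m3=17.0)   # 8 kW rack
print(f"Rise rate: {rate:.2f} C/s")                         # ~0.39 C/s
print(f"Time to a 7.5 C rise: {7.5 / rate:.0f} s")          # well under a minute
```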

4
Actual recorded rise

(chart: rate of temperature rise in °C per second)

5
Actual recorded rise

6
Thermal Storage
• To maintain Continuous Cooling through the changeover between mains power
and generator power, you can either provide UPS power to the aircon
system or use thermal storage
• This is not to be confused with the (similar) thermal storage used in e.g.
district cooling schemes – cooling for the Data Centre must always be
available, so the storage is not used day-to-day but only in an emergency

7
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

8
Why are we “Cooling”?

• Data center designers and operators are striving to improve


energy efficiency. This has caused many to question many
long held practices and beliefs

• Do we really need to “cool” the IT equipment (implying mechanical


cooling) or can we simply “ventilate” the exhaust heat?

• Traditionally Data Centers have been designed and operated as


closed loop air conditioning, with minimal introduction of outside air.
While such systems still represent the majority of existing data centers
and most current designs, a growing percentage of new data
centers are incorporating the use of ventilation as part (or all) of the
“cooling” scheme.

9
Trends Influencing Data Center Cooling Design

New Approach
• Internet Search, Social Media & Web Services
– Different availability, reliability, and fault-tolerance criteria allow
testing and deploying "unconventional" cooling strategies
• Moves Physical Reliability of a site to IT architecture for overall system
availability (shift loads and backups to other sites)
• Internet Search, Social Media & Web Services are the Missionaries:
Google - Yahoo – Facebook – Microsoft - Amazon
• Commercial – Leading Edge DC designs
• However, even some conventional DCs (Enterprise, Colo & Hosting)
are adopting some of the technologies and practices
– NetApp – Free cooling – Airside Economization - Energy Star rated
– Deutsche Bank - Adiabatic cooled

10
Cooling Trends

• 2011 witnessed a shift in the thinking of data center design


(ASHRAE TC9.9 Thermal Conditions)

• While Web Services, Social Media and Internet Search are


generally considered to be on the “Leading Edge” regarding
the use of outside air as the primary method of “cooling”, it
has now spread and has begun to enter the thinking of
designers and some operators in other markets (Enterprise
and Colocation)

• ASHRAE and The Green Grid, as well as the Global Data


Center Energy Efficiency Task Force have openly endorsed
the concept of outside air economizers as the primary
method of heat removal from the data center

11
ASHRAE TC 9.9 2015 Class A1- A4 Summary

12
Which Group do you belong to?
• Keep the temperature at 65°F (18.5°C) and humidity tightly
controlled at 50% RH
– “I won’t get fired if the utility bill is high, but I will if the application
goes down.”
• Keep the temperature at 72-75°F (22-24°C) and humidity
at 50% ±5% RH
– “We typically run at moderate temperatures 22-24°C (72-75°F) now
and may consider going a little higher, and loosening the humidity
range, to save energy …. but waiting to see what others are doing”
• I am a leading edge type and will do anything to show how
much energy I can save. I want to run as hot as possible and
ignore humidity controls
– “Facebook and Google are too conservative”
– “In fact I want to use some of my racks as coffee roasters”
– “I want the most effective and efficient cooling system for my DC”

13
Best Energy Practice - “Free Cooling”

• Free Cooling is the ability to minimize or eliminate the


amount of time Mechanical Cooling is in operation, by taking
advantage of ambient conditions

• Effectiveness is Location Dependent


– Generally Based on the number of “Degree Days”

• Economizers
– Water - Effective in Colder Climates– requires additional equipment
– Air - More Hours - Simplest – Requires Filtration

14
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

15
Air-Side Free Cooling Techniques

• Direct Air Side Economizing


– With or Without Supplementary Mechanical Cooling
– With or Without Adiabatic or Evaporative Cooling

• Air to Air Heat Exchangers


– Indirect Evaporative Air Cooling (Ecoflair)
– Heat Wheel (Kyoto cooling)
– Other designs

16
Yahoo’s “Chicken Coop”
Lockport NY

17
Simple Ventilation Only – Yahoo!

18
Facebook – Direct Evaporative Cooling
(no mechanical supplement)

• The system basis of design utilizes a direct evaporative cooling concept


where no chillers or compressors are needed for cooling the IT load
• The design utilizes a built-up system where the mechanical airside
functions are located in a field-constructed upper storey

19
Facebook – Direct Evaporative Cooling
(no mechanical supplement)

See details on next page

20
Facebook
(OpenCompute.org)

1. Outside air (OA) enters through drainable louvers in the penthouse
2. The air proceeds into the OA intake corridor
3. OA mixes with data center return air and passes through the filter room
4. Air enters the evaporative cooling/humidification room and may get sprayed by the misting system (determined by temperature-humidity conditions)
5. The air passes through mist eliminators to prevent water carryover (not shown)
6. The air enters the supply fan room and is supplied into the data center cold aisles
7. The air enters the front of the server cabinets, passes through to the contained hot aisles, and then enters the return air plenum. The air is then returned to the filter room or exhausted out of the building by natural pressure and/or relief fans

21
Adiabatic cooling principle
• Water is atomised into air stream

22
Evaporative Air-Side Economizer
(with Supplementary Mechanical Cooling)

(Diagram: temperature labels 24°C @ 13°C, 13°C @ 13°C, and 13°C)

Source: Mike Scofield and Tom Weaver Conservation Mechanical Systems

23
Air-Side Economizer Summer / Hot Day Operation
(Diagram: outside air and economizer dampers closed; cooling coil modulating.
Outside air 35°C (95°F); supply air to data center 18°C (65°F);
hot return air from data center 32°C (90°F), recirculated through the cooling coil.)

24
Air-Side Economizer Mild Temperature Operation
(Diagram: economizer dampers modulating to mix outside air with return air; cooling coil modulating as needed.
Outside air 16 to 32°C (65 to 90°F); supply air to data center 18°C (65°F);
hot return air from data center 32°C (90°F).)

25
Air-Side Economizer Cold Day Operation
(Diagram: cooling coil closed (no mechanical cooling); economizer dampers mix cold outside air with return air.
Outside air <16°C (<60°F); supply air to data center 18°C (65°F);
hot return air from data center 32°C (90°F).)

26
Sample Energy Saving Air Side Economizer
Outside Air (°F) | Hrs/Yr | % of Year | Outside Air | Recirc/Mix | Mech. Cooling | Saving | Notes
30  |  200 |  2.3% |  40% |  60% |   0% | 100% | "FREE COOLING"
35  |  300 |  3.4% |  50% |  50% |   0% | 100% |
40  |  400 |  4.6% |  60% |  40% |   0% | 100% |
45  |  500 |  5.7% |  70% |  30% |   0% | 100% |
50  |  600 |  6.8% |  80% |  20% |   0% | 100% |
55  |  700 |  8.0% |  90% |  10% |   0% | 100% |
60  |  800 |  9.1% | 100% |   0% |   0% | 100% | Transition to Mix-Reheat, SAT = Min Temp
65  |  900 | 10.3% | 100% |   0% |   0% | 100% |
70  | 1000 | 11.4% | 100% |   0% |   0% | 100% |
75  | 1400 | 16.0% | 100% |   0% |   0% | 100% | Target SAT = 75°F
80  | 1200 | 13.7% | 100% |   0% |  25% |  25% | Partial "Free Cooling"
85  |  550 |  6.3% | 100% |   0% |  50% |  50% |
90  |  150 |  1.7% | 100% |   0% |  75% |  75% |
95  |   50 |  0.6% |  50% |  50% | 100% |   0% | Transition to Recirc – 100% Mech Cooling
100 |   10 |  0.1% |   0% | 100% | 100% |   0% |
105 |    0 |  0.0% |   0% | 100% | 100% |   0% |
Total/average | 8760 | 100% | 71% | 29% | 28% | 72% |
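The Total/average row above is a simple average of the table's columns; for an annual energy estimate it is more usual to weight each temperature bin by its hours. A minimal illustrative sketch of that roll-up, using the bin values copied from the table:

```python
# Minimal sketch (not the manual's tool): rolling the bin data above up on an
# hours-weighted basis. Each tuple is (hours per year in the bin, fraction of
# mechanical cooling required in that bin), copied from the table.
bins = [
    (200, 0.00), (300, 0.00), (400, 0.00), (500, 0.00), (600, 0.00),
    (700, 0.00), (800, 0.00), (900, 0.00), (1000, 0.00), (1400, 0.00),
    (1200, 0.25), (550, 0.50), (150, 0.75), (50, 1.00), (10, 1.00), (0, 1.00),
]

total_hours = sum(h for h, _ in bins)                           # 8760
full_free = sum(h for h, m in bins if m == 0.0) / total_hours   # bins needing no mech. cooling
weighted_mech = sum(h * m for h, m in bins) / total_hours       # hours-weighted mech. fraction

print(f"Hours with 100% free cooling: {full_free:.0%} of the year")      # ~78%
print(f"Hours-weighted mechanical cooling fraction: {weighted_mech:.1%}")
```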

27
Case Studies
Air Side Economization
(with supplementary mechanical cooling)

• NetApp
– First “Energy Star” Data Center
– Fully operating data center
– Located in North Carolina

• Deutsche Bank
– Major Financial Firm
– Proof of Concept – in operation May 2011
– Located in New York City Metro Area

28
Air Side Economization
Case Study – NetApp, Inc.

© NetApp Inc.

29
Air Side Economization
Case Study – NetApp, Inc.

• NetApp Inc. (Raleigh, NC) was the first to achieve the EPA’s Energy
Star® for Data Centers rating. (July 2010)

• They maintained 23.3°C (74°F) average supply air temperature

• They allow up to 75% RH before activating mechanical dehumidification


– (Note this is above ASHRAE 2008 Recommended – within allowable )

30
Air Side Economization
Case Study – NetApp, Inc.
The Result:

• They are able to utilize only outside air 67% of the time
resulting in:

– ~ 5870 hours of “Free Cooling” per year

– (no mechanical cooling, just fan energy)

• They achieved an annualized PUE of 1.35 *

*11-12 months of verified energy use required for EPA Program


(not just a PUE snapshot at 2 a.m. on a cold night)
including all losses; UPS, etc.

31
Air Side Economization
Case Study Deutsche Bank

32
Air Side Economization
Case Study Deutsche Bank
This case study is noteworthy on several fronts:
• It was designed to be cooled using only outside air nearly 100
percent of the year

• More significantly, unlike the Internet Search, Social Media or Web


Services sites, this is an enterprise data center with production servers
for a major financial firm

• The design uses Adiabatic Cooling, an adaptation of the age-old


“swamp cooler” method of evaporative cooling. The moisture is added
to the air going directly into the data center, which raises the humidity
while lowering the intake air temperature (when ambient air conditions
permit). As a result, the site has a projected PUE of 1.17 (including the
UPS losses) and is expected to operate without any mechanical cooling
99 percent of the time

33
Air Side Economization “Free Cooling” –
Other Costs..
• Direct Outside Air - Requires high MERV Filters
(typically MERV 13)
– High frequency of replacement – cost – sustainability ("greenness", carbon footprint)
may require re-usable (washable) filters
• Trade-off: energy use vs sustainability (landfill full of filters)
• High MERV filtering of outdoor air increases fan energy
• ASHRAE TC9.9 Particulate and Gaseous Contamination
Guidelines for Data Centers

• Early Equipment Failure ? – Unknown – X Factor – covered in


Section 5

34
Direct Air-Side Economisers –
Problems
• The “Y-Factor” …. Humidity

• The “Z-Factor” …. Dust

• Other air contaminants (including gaseous bio threats)

• Filter replacement costs & impact on fan power

• Impact on fire protection (outdoor smoke/dust triggers detectors)

Alternatively, using Air-To-Air Heat Exchangers


Outside Air does not enter the Data Center

35
Indirect Evaporative Air Cooling

• For example, Schneider Ecoflair

• 3 Modes
– Air-to-Air HX
– Evaporative Cooling
(indirect- water sprayed on tubes)
– DX Cooling

• Modular
– 250 kW or 500kW per Module

36
Ecoflair

37
Ecoflair

38
EcoBreeze

39
Air-To-Air Heat Exchangers
Outside Air Does Not Enter Data Center
• Kyoto Heat Wheel Design

• “Kyoto Cooling” utilizes a “heat wheel” to transfer the heat


from the computer room to the outside air

• Wheel provides isolation for computer room and outside air

• Hot Aisle isolated from Cold Aisle


– Flooding of computer room with “cold” air possible
– Hot Aisle is contained and ducted back to Heat Wheel
– Claimed to begin to become effective at a 5°F/3°C temperature
differential between the outside air and the source air temperatures
– Requires a significant temperature differential for 100% economization

40
Heat Wheel Cooling - Illustration
(Illustration: Heated Data Centre air is collected above the Data Centre ceiling and flows through the
heatwheel, where it is cooled down to a temperature of 18-27°C (adjustable). The heatwheel provides
physical separation of hot and cold air; 'cool' outside air passes through the other side of the wheel
and leaves as exhaust air. Cold supply air is delivered in front of the IT equipment – the room is flooded.)

41
Indirect Air Economiser with Adiabatic Cooling

1. Exhaust air discharge to outside
2. Hot air <45°C
3. Outside air, summer max 35°C
4. Cool supply air 25°C
42
Indirect Air Economiser Istanbul

Resultant pPUE cooling 1.13

43
Air-To-Air Heat Exchangers vs
Direct Air-Side Economizers
• Direct Air-side Economizers introduce outside air into the
Data Center
– This requires filters – in many cases 2 levels –
• First level pre-filter for larger particles (MERV 8) and
• Secondary level filter for smaller particulate and dust (MERV 13)
– While Air Side Economizers generally are the most energy efficient heat
removal systems, the introduction of outside air may not be acceptable to
some data center operators
• Air-to-Air Heat Exchangers offer energy savings without introducing
outside air in the data center
– Can be used without additional cooling up to 24˚C (75F), i.e. differential of
3C/5F
– They require twice the fan energy at peak outside temperatures, since
they use separate fans
– Need a lot of space and $$
• Both Types need a particular building shape for large air intake/exhaust

44
Humidity Control ?
Do we still need it?
– Geographic Location Dependent
– Broader Range as of TC 9.9 2015
• A1 “Allowable” 8-80% RH
• Room level – Not in each CRAC / CRAH
– Consider a separate Ultrasonic system, not electrically heated Steam
– Saves direct energy – no electric heating
– Also provides Cooling- Adiabatic effect
Concerns
• High Humidity
– Corrosion Issues?
– The Y-Factor
• Low Humidity, Static Discharge Damage “ESD” ?
– Now less of a problem – Source ESDA (see www.esda.org )
– ASHRAE TC9.9 allows down to 8%

45
Air-Side Economizers: Energy Savings Potential
Relative availability of economizer hours for selected US cities as a function of supply air temperature (prepared by DLB Associates)

Water-Side Economizer Hours with no required mechanical cooling
Outdoor Air Wetbulb Bin °F (°C) | CWS °F (°C) | Supply Air Temp °F (°C) | Los Angeles | San Jose | Denver CO | Chicago | Boston MA | Atlanta | Seattle   (% of year below wetbulb)
59 (15) | 66 (19) | 70 (21) | 68% | 78% | 93% | 75% | 75% | 56% | 90%
53 (12) | 60 (16) | 64 (18) | 36% | 46% | 77% | 64% | 63% | 44% | 68%
47 (8)  | 54 (12) | 58 (14) | 13% | 21% | 63% | 55% | 52% | 33% | 45%
41 (5)  | 48 (9)  | 52 (11) |  3% |  6% | 51% | 46% | 41% | 22% | 21%

Air-Side Economizer Hours with no required mechanical cooling
Outdoor Air Drybulb Bin °F (°C) | Supply Air Temp °F (°C) | Los Angeles | San Jose | Denver CO | Chicago | Boston MA | Atlanta | Seattle   (% of year below drybulb)
69 (21) | 70 (21) | 86% | 80% | 82% | 80% | 83% | 65% | 65%
63 (17) | 64 (18) | 59% | 64% | 72% | 70% | 71% | 51% | 51%
57 (14) | 58 (14) | 32% | 39% | 61% | 62% | 61% | 41% | 41%
51 (11) | 52 (11) |  6% | 18% | 51% | 52% | 50% | 29% | 29%

Adiabatically Humidified/Cooled Air-Side Economizer Hours with no required mechanical cooling
Outdoor Air Wetbulb Bin °F (°C) | Supply Air Temp °F (°C) | Los Angeles | San Jose | Denver CO | Chicago | Boston MA | Atlanta | Seattle   (% of year below wetbulb)
69 (21) | 70 (21) | 99% | 100% | 100% | 93% | 95% | 82% | 100%
63 (17) | 64 (18) | 87% |  93% |  99% | 83% | 85% | 65% |  98%
57 (14) | 58 (14) | 53% |  70% |  89% | 71% | 72% | 51% |  85%
51 (11) | 52 (11) | 23% |  39% |  73% | 60% | 60% | 39% |  62%

46
“That’s all very well, but we can’t do air-side
free cooling in a place like Singapore…”
“… can we??”

47
“That’s all very well, but we can’t do air-side
free cooling in a place like Singapore…”
“… can we??”

Toshiba container at NTU – direct airside free cooling with mechanical


supplement (DX) – 28-32°C, <80%RH (24°C dew point), PUE = 1.31
48
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

49
Water-Side Free Cooling

Air-Side Free cooling depends on moving huge volumes of air into the
data centre and back out again – that means it can only be done with
certain building shapes and without neighbouring blockages.

On the other hand, if we can get a water source that is cool enough,
we can use that cool water as the ‘chilled water’ supply to the CRAH
units, thus skipping the chillers.

Any arrangement that bypasses the Mechanical Cooling (i.e. the refrigeration

cycle) in this manner is called Water-Side Free Cooling (or Water
Economisation).

Note that the warmer the data centre, the warmer we can use ‘cool’
water, and so the more economisation we can do.

50
Water-Side Free Cooling
(Diagram, valves not shown: the data center is served by a multi-port chilled-water distribution manifold
(supply and return), with all manifold ports valved and capped, pre-installed. The first chiller array of
packaged chillers (4 x 55 ton, 100-500 kW cooling capacity) sits on the roof alongside a "FREE COOLING"
fluid cooler array with pump and tank; motorized diverter valves and a free-cooling valve control package
allow the fluid coolers to feed the manifold directly, bypassing the chillers. Future chiller and fluid
cooler arrays can be connected to the spare manifold ports.)

51
Evaporative cooling principle
• Water is applied to the heat exchanger which evaporates, increasing
the cooling effect
• Common applications include cooling towers and evaporative cooling of
air heat exchangers (e.g. Ecobreeze coil)
• We can use this to enhance the water-side economisation

52
Water side economiser options

Combination system

53
Water-Cooled DX CRAC
with Economizer Coil

(Diagram: the economizer coil control valve opens when the condenser water is cooler than the return air,
pre-cooling the air ahead of the evaporator coil and reducing compressor load. Components shown: return air
entering the CRAC unit, filter, economizer pre-cooling coil, evaporator (DX) cooling coil, supply air fan,
refrigerant compressor, refrigerant condenser, cool condenser water supply / warmer condenser water return,
condenser water pump, and an outdoor dry cooler rejecting heat to ambient air. Dotted lines refer to
refrigerant flow.)
• Can use a Cooling Tower instead of a Dry Cooler

54
“Free Cooling”

• Water Side Economizers (chilled water)


– No impact on the interior data center layout – works with existing
CRAH systems
– No Outside Air introduced, no additional Filtration requirements
– Requires a minimum temp differential to Chilled Water Return
temp
– 5F (3˚C) Typically minimum differential (RWT-OAT) for partial
Economization
– 5-10F (3-5.5˚C) for “substantial” mechanical cooling load reduction
• (depending on size and type of outside heat exchange system)
– 10-20F (5.5-11˚C) differential (SWT-OAT) Typical for 100% free
cooling
– Lower ambient Temps required – fewer Free Cooling days
compared to Air Side Economizers

55
“Free Cooling”

• Economizers allow you to utilize so called “free cooling” for


your datacenter

• The Green Grid has online tools to help you calculate if


your site has enough free cooling days or total annual
hours to justify spending valuable time and money on an
economizer

• You can choose from two common types of economizers:


air and water

• Generally, the effective operational temperature range for


air side systems is broader than for water-based
economizers

56
Annual cooling degree days

City          | For 18°C | For 25°C | For 35°C
London        |   171 |    7 | 0
San Francisco |   254 |   11 | 0
Tokyo         |  1004 |  184 | 0
Moscow        |   270 |   37 | 0
Singapore     |  3592 | 1054 | 0
Sydney        |   909 |   94 | 2

Cooling degree days are a measure of how much (in degrees), and for how long (in
days), the outside air temperature was above a certain level.
http://www.degreedays.net/
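A minimal sketch of how a cooling-degree-day figure like those above is typically built up from daily mean temperatures (assumed method, consistent with the definition quoted above; the daily values are made up for illustration):

```python
# Minimal sketch (assumed method): cooling degree days from daily mean
# temperatures. Each day contributes max(0, mean_temp - base_temp) degree-days.

def cooling_degree_days(daily_mean_temps_c, base_c: float) -> float:
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

# Tiny illustrative week of daily mean temperatures (made-up values):
week = [16.0, 19.5, 22.0, 25.5, 28.0, 24.0, 17.5]
print(cooling_degree_days(week, base_c=18.0))   # CDD to an 18 C base -> 29.0
print(cooling_degree_days(week, base_c=25.0))   # CDD to a 25 C base  -> 3.5
```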

57
Air-Side Free Cooling
Maps Class A2

©The Green Grid

58
Water-Side Free
Cooling Maps

©The Green Grid

59
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

60
ASHRAE TC 9.9
Whitepaper Excerpts
• The purpose of the recommended envelope was to give guidance to data
center operators on maintaining high reliability and also operating their
data centers in the most energy efficient manner.

• To allow for the potential to operate in a different envelope that might


provide even greater energy savings, this whitepaper provides general
guidance on server metrics that will assist data center operators in
creating a different operating envelope that matches their business
values.

• Any choice outside of the recommended region will be a balance


between the additional energy savings of the cooling system versus the
deleterious effects that may be created in reliability, acoustics, or
performance.

61
ASHRAE TC 9.9

X Factor

62
ASHRAE TC9.9
The X-Factor

63
The “X” Factor - IT Equipment Reliability

The values in the columns marked “x-factor” are the relative failure rate for
that temperature bin.

As temperature increases, the equipment failure rate will also increase.


A table of equipment failure rate as a function of continuous (7 days x 24
hours x 365 days per year) operation is given in Appendix C for volume
servers.
Example: Time-at-temperature weighted failure rate calculation for
IT equipment in the city of Chicago.

Reference “X” Factor = 1.0 (Tightly controlled @ 20°C)


For an air-side economizer, the net time-weighted average reliability for a data
center in Chicago is 0.99 which is very close to the value of 1.0 for a data center
that is tightly controlled and continuously run at a temperature of 20°C.
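A minimal sketch of the time-at-temperature weighting described above. The temperature bins, hour fractions and x-factor values are illustrative placeholders only, not the ASHRAE Appendix C data:

```python
# Minimal sketch of a time-at-temperature weighted x-factor calculation.
# Bin fractions and x-factors below are illustrative placeholders, NOT the
# ASHRAE Appendix C values for any real city.

bins = [
    # (fraction of year at this inlet-temperature bin, relative failure rate x-factor)
    (0.30, 0.85),   # coolest bin
    (0.50, 0.95),
    (0.15, 1.15),
    (0.05, 1.35),   # warmest bin
]

weighted_x = sum(frac * x for frac, x in bins)   # time-weighted average x-factor
print(f"Net time-weighted x-factor: {weighted_x:.2f}")   # ~0.97 vs the 1.0 baseline at 20 C
```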

64
The “X” Factor - IT Equipment Reliability
Major US Cities – Air Side Economizer
• Time weighted failure rate x-factor calculations for Class A2 for air side
economization for selected major US cities.

• The data assumes a 2.7°F (1.5°C) temperature rise between the


outdoor air temperature and the equipment inlet air temperature.

65
The “X” Factor - IT Equipment Reliability
Major US Cities – Water Side Economizer
• Time weighted failure rate x-factor calculations for Class A2 for water side
dry-cooler type economization for selected major US cities.

• The data assumes a 21.6°F (12°C) temperature rise between the outdoor
air temperature and the equipment inlet air temperature.

66
The “X” Factor - IT Equipment Reliability
Worldwide Cities – Air Side Economizer
• Time weighted failure rate x-factor calculations for Class A2 for air side
economization for selected major Worldwide cities.

• The data assumes a 2.7°F (1.5°C) temperature rise between the


outdoor air temperature and the equipment inlet air temperature.

67
The “X” Factor - IT Equipment Reliability
Worldwide Cities – Water Side Economizer
• Time weighted failure rate x-factor calculations for Class A2 for water side
dry-cooler type economization for selected Worldwide cities.

• The data assumes a 21.6°F (12°C) temperature rise between the outdoor
air temperature and the equipment inlet air temperature.

68
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

69
Fan Laws
• Fan Power Commonly Expressed in Horse Power “HP”
• 1 HP = 746 watts (direct conversion shaft /blade HP
– excludes motor efficiency 80-90%)
• VFD – for AC motors – Better Efficiency with EC Fans
• Efficiency can also be expressed as CFM per Watt
– (Higher is more Efficient)

• Highly dependent on fan type and design, as well Back


Pressure (Static Pressure)
• Smaller fans need faster speeds to move the same
volume of air as larger fans using slower speeds – that
makes them less efficient
• 1 cfm=0.47 litres/sec = 0.00047 m3/s
1 l/s = 2.1 CFM

70
Fan Laws
• Airflow varies directly with the fan speed (RPM):

  Airflow2 / Airflow1 = rpm2 / rpm1

• Pressure (P) varies approximately as the square of the air flow, so:

  P2 / P1 = (rpm2 / rpm1)²

• Power (kW) varies according to Airflow x Pressure:

  kW2 / kW1 = (rpm2 / rpm1) x (P2 / P1)

• Therefore power (kW) varies approximately as the cube of the airflow:

  kW2 / kW1 = (Airflow2 / Airflow1)³
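A minimal sketch applying these idealised affinity ratios (real fans deviate, as the next slide notes); the base pressure and power values below are arbitrary examples:

```python
# Minimal sketch of the fan affinity (fan law) ratios above. Idealised only:
# static pressure and fan efficiency changes will shift real results.

def scaled(airflow_ratio: float, base_pressure_pa: float, base_power_kw: float):
    """Return (pressure, power) after changing fan speed/airflow by airflow_ratio."""
    pressure = base_pressure_pa * airflow_ratio ** 2   # P2/P1 = (rpm2/rpm1)^2
    power_kw = base_power_kw * airflow_ratio ** 3      # kW2/kW1 = (rpm2/rpm1)^3
    return pressure, power_kw

p, kw = scaled(airflow_ratio=0.8, base_pressure_pa=250.0, base_power_kw=10.0)
print(f"80% speed -> pressure {p:.0f} Pa, power {kw:.1f} kW")   # ~160 Pa, ~5.1 kW
```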

71
Fan Affinity Laws
These "laws" only help to estimate actual power vs speed/CFM and only
approximate real-world results
– Static Pressure will impact actual performance
– Variation in fan aerodynamic efficiency at difference speeds
– Actual fan performance curve may be closer to “square” law

In CRAC/CRAH
– EC Fans are generally more efficient than Centrifugal Fans/Blowers
– Additional frictional energy losses for Belt Driven fans/blowers

72
Fan Energy – Delta-T
• Q = M x Cp x dT, so a smaller dT needs a bigger M, i.e. more
air flow, and thus more fan power
• Fan Energy Impact of low Delta-T across CRAC/CRAH
• Example
• 10 KW Fan Power (@ 100% Speed) w/VFD drive or EC fan
Air-Side Delta-T = 8°F [4.67C], CRAH supply fans with VFD
Total CRAH Fan Power = 10 kW
Assume New Air-Side Delta-T = 12°F [7C] and fan VFDs turn down
New CFM = (8/12) = 0.67 i.e. 2/3 of the previous air flow
New FAN Power = 10KW x (0.67 x 0.67 x 0.67) = 3KW (up to 70% Savings!)
(actual fan power expected in the range 3-to-4 kW)
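The same worked example expressed as a short sketch, assuming the ideal cube law holds:

```python
# Minimal sketch of the worked example above: raising the air-side delta-T
# lowers the required airflow (Q = M x Cp x dT), and fan power falls with the cube.

def fan_power_after_delta_t_change(power_kw: float, dt_old: float, dt_new: float) -> float:
    airflow_ratio = dt_old / dt_new         # same heat load -> airflow scales with 1/dT
    return power_kw * airflow_ratio ** 3    # affinity (cube) law

new_kw = fan_power_after_delta_t_change(power_kw=10.0, dt_old=8.0, dt_new=12.0)
print(f"New fan power: {new_kw:.1f} kW")    # ~3 kW, as on the slide
```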

73
Delta-T vs Airflow vs Fan Power
BTU (1 kW) | 1.08 | ∆T (°F) | CFM   | Fan Speed | Fan Power | % Change | Notes
3413 | 1.08 | 15 | 210.7 | 133% | 112.4 | 237% | Very Low ∆T
3413 | 1.08 | 16 | 197.5 | 125% |  92.6 | 195% |
3413 | 1.08 | 17 | 185.9 | 118% |  77.2 | 163% |
3413 | 1.08 | 18 | 175.6 | 111% |  65.0 | 137% | Low ∆T
3413 | 1.08 | 19 | 166.3 | 105% |  55.3 | 117% |
3413 | 1.08 | 20 | 158.0 | 100% |  47.4 | 100% | REFERENCE (CFM-Speed-Power)
3413 | 1.08 | 21 | 150.5 |  95% |  40.9 |  86% |
3413 | 1.08 | 22 | 143.6 |  91% |  35.6 |  75% |
3413 | 1.08 | 23 | 137.4 |  87% |  31.2 |  66% |
3413 | 1.08 | 24 | 131.7 |  83% |  27.4 |  58% |
3413 | 1.08 | 25 | 126.4 |  80% |  24.3 |  51% | High ∆T
3413 | 1.08 | 26 | 121.5 |  77% |  21.6 |  46% |
3413 | 1.08 | 27 | 117.0 |  74% |  19.3 |  41% |
3413 | 1.08 | 28 | 112.9 |  71% |  17.3 |  36% |
3413 | 1.08 | 29 | 109.0 |  69% |  15.5 |  33% |
3413 | 1.08 | 30 | 105.3 |  67% |  14.0 |  30% | Very High ∆T

74
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

75
Energy Monitoring is Key to Improving
Energy Efficiency
• Industry research has shown that after IT equipment, a data center’s
cooling system is usually the 2nd-largest energy consumer (and is
sometimes the largest!)
• The irony of this is that many data center managers have no real visibility
into their cooling system’s effectiveness or efficiency
• To begin to review a data center’s cooling system efficiency, the first step is
to measure how much energy the entire cooling system uses
• To analyse a data center’s cooling system efficiency, the second level is to
measure how much energy each sub-system of the cooling system uses:
– Fans (CRAC or CRAH) within the Data Center
– Compressors (internal to CRAC or External Chillers)
– Pumps (Chilled Water and/or Condenser Water)
– External Condensers, Dry Coolers, or Tower Fans
• To fully optimize a data center’s cooling system, you need granular data,
including temperatures, humidity, fan speed, etc

76
Energy Monitoring

• Basic & Advanced Key Measurement Points


– Basic: Total Cooling Energy –> Main Cooling Panel
– Advanced: – More Granular View (each sub-system)
• CRACs & Fan Decks (individually if possible)
• CRAHs & Chiller Plant (Air and Water Temps)

• Integrated Management software can integrate energy and


cooling delivered to help analyze and so help to optimize
each CRAC/CRAH unit’s cooling load and effectiveness
(also need supply and return temps)

77
Basic! (you only need two numbers)

78
Going One Step Further – Energy Monitoring

79
PUE

• PUE has established itself as a data center standard, but it’s not
perfect, and there are concerns that it may not report properly:
– Originally based on power (kW) as instantaneous measurements
– (easy to fool)

– Some PUE claims exclude items


• This allowed boasting or misrepresentation by omitting items including
back-up generators, exterior systems, district cooling schemes, condensers
fans, dry cooler fans, pumps and other items

80
PUE - Category 1-3 Points of Measurement

What if we e.g. use


District Cooling?

81
PUE Version 2

• Now provides better methodology for mixed use buildings


• Provides Specific Source Energy Equivalent factors for
externally supplied:
– Electricity = 1.0 (reference)
– Chilled Water = 0.31
– Condenser Water = 0.03

source: Recommendations for Measuring and Reporting Overall Data Center Efficiency
Version 2 – Measuring PUE for Data Centers, 17th May 2011
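A minimal sketch of one plausible way to apply the quoted weighting factors when part of the cooling is bought in (for example as district chilled water); the purchased-energy and IT-energy figures are made up for illustration:

```python
# Minimal sketch (illustrative reading of the factors quoted above): converting
# externally supplied energy into electricity-equivalent source energy before
# computing PUE. All quantities other than the weighting factors are made up.

WEIGHTS = {"electricity": 1.0, "chilled_water": 0.31, "condenser_water": 0.03}

purchased = {                      # annual purchased energy, kWh (illustrative)
    "electricity": 1_200_000,
    "chilled_water": 400_000,      # thermal kWh of district chilled water
}
it_energy_kwh = 900_000

total_source = sum(WEIGHTS[k] * v for k, v in purchased.items())
pue_v2 = total_source / it_energy_kwh
print(f"PUE (source-weighted): {pue_v2:.2f}")   # ~1.47 in this made-up case
```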

82
The Green Grid – NEW Metrics

• PUE version 2 is hardly the last word in data center


efficiency that we should consider for cooling systems

• Additional metrics are available:


– pPUE: Partial PUE

– WUE: Water Usage Effectiveness

– CUE: Carbon Usage Effectiveness

83
pPUE: Partial PUE

• PUE = Total Facility Energy divided by the IT Equipment Energy


– This takes into account energy use within an entire facility

• Partial PUE is for energy use within a boundary

• pPUE = Total Energy use within a boundary divided by the IT Equipment


Energy within that boundary

• It is a self-improvement tool, and is definitely not for comparison with


any other data centre

84
pPUE: Partial PUE
• Partial PUE is for energy use within a boundary (such as a Container)
• pPUE = Total Energy within a boundary area divided by the IT
Equipment Energy within that boundary area

PUE = 600 / 475 = 1.26


(includes Chiller, Transf, etc)

pPUE of Container Only = 500 / 475 = 1.05


(excludes everything outside the box)
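The same boundary arithmetic as a short sketch (values taken from the example above):

```python
# Minimal sketch of the pPUE boundary arithmetic above (container example).

it_kw = 475.0
container_total_kw = 500.0   # everything measured inside the container boundary
site_total_kw = 600.0        # adds the chiller, transformer, etc. outside the box

print(f"pPUE (container only) = {container_total_kw / it_kw:.2f}")   # ~1.05
print(f"PUE  (whole site)     = {site_total_kw / it_kw:.2f}")        # ~1.26
```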

85
WUE: Water Usage Efficiency

• Large Data Centers typically utilize Cooling Towers which use


evaporation as part of the cooling process which uses millions of
gallons per month.
• Cooling Towers are used because evaporation is more “Energy Efficient”
than air-cooled (DX condenser or dry cooler) systems
• Water usage was not recognized as part of the common “efficiency”
calculations i.e. PUE or even the EPA, but may soon be, since water is a
natural resource that needs to be conserved
• Water will become significantly more expensive and could become a
constrained resource like power
• The water usage issue is not new, but in many cases water is supplied by
municipal water systems which cannot meet the rising demand as more
Mega-Centers are built

86
WUE: Water Usage Effectiveness

WUE = Annual Site Water Usage / Annual IT Energy Usage

• WUE, a site-based metric is an assessment of the water used on-site for


operation of the data center. This includes water used for humidification and
water evaporated on-site for energy production or cooling of the data center
and its support systems
• The units of WUE are litres/kilowatt-hour (L/kWh)

• WUEsource, a source-based metric that includes water used on-site and water
used off-site in the production of the energy used on-site. Typically this adds
the water used at the power-generation source to the water used on-site

WUEsource = (Annual Source Water Usage + Annual Site Water Usage) / Annual IT Energy Usage

87
CUE: Carbon Usage Effectiveness

• CUE = Annual CO2 Emissions From Data Center / Annual IT Energy Usage

• The units of the CUE metric are
– kilograms of carbon dioxide equivalent per kilowatt-hour: kgCO2eq / kWh

88
CUE: Carbon Usage Effectiveness
• Total CO2 Emissions
– This component includes the CO2 emissions from local and
energy grid–based energy sources
• CO2 emissions should be determined for the mix of
energy delivered to the site
– e.g. the electricity may have been generated from varying
CO2-intensive plants — coal or gas generate more CO2
than hydro or wind.
• The mix also must include other energy sources such as
natural gas, diesel fuel, etc.
• Total CO2 emissions value will include all GHGs, such as
CO2 and methane (CH4)
• All emissions will need to be converted to CO2
equivalents

89
Module Topics

Continuous Cooling

Leaders at the Edge

Free Cooling – Air Side

Free Cooling – Water Side

The X Factor

Fan Energy & Efficiency

Energy Efficiency Metrics

Other Cooling Considerations

90
Impact of Cooling Systems Choices and
Designs - Data Center Size and Locations
• Dedicated Purpose-Built Data Center
– Can be design-optimized to maximize cooling efficiency
– State of the art systems
– Best Practices

• Office Park Low Rise – Mixed use building


– Sometimes limited by building restrictions
– Cooling system piping (CW-Glycol-Refrig) may need to run through
other tenant spaces or routed in less than optimal manner
– May not allow Chillers, or cooling towers
– Limited roof space, or cosmetic or acoustic noise issues with
ground level equipment

• High Rise – Metropolitan Office Building


– All of the Office Park Issues x 10

91
Design Goals and Assumptions

• Plan for change – Life cycle of data center


• Costs: CapEx vs OpEx
– Two different Groups
– Two different economic motivations and goals
– Include maintenance costs (e.g. cooling towers are high maintenance)
• Phased or Modular Build-out
• Lifecycle of Cooling Equipment
• Is Building Owned or Leased - Lease terms 5-10-15 years ?
• Expected operation life of data center
– “Old Think” 15-20Yrs vs “New Think” 7-10 years, consider TCO

92
Consider Segregated Zones
• Servers have broadest (and widening) environmental envelopes
• Storage (disk based) have more moderate environmental envelopes
• Tape has tightest environmental requirements, but lowest heat load
and airflow
• Who should decide the proper environmental parameters, and who owns
any resulting increase in equipment failures?
– Facilities or IT, or both?
SSD   <<< Separate >>>   Servers   <<< Separate >>>   Tape


93
Temperature Control

• Return air temperature is how most old CRACs regulate


their operation and overall data center temperature and
humidity

• However, it is the most ineffective way to control the


cooling system in a data center, since it is not a true
indicator of the air temperature at the IT equipment

• Temperature sensors placed at the front of the racks are


ideal for monitoring IT equipment air inlet temperatures

• Most effective when used with aisle containment

94
Temperature Control

• By controlling the Supply Air Temperature, (within ASHRAE


guidelines) this should help ensure that the inlet air
temperature to the IT equipment is within target
parameters

• However, it does not account for (uncontained) cold aisles


or rack level recirculation issues which could cause higher IT
inlet temperatures

• A more intelligent system is required which looks at all 3


temps:
– Supply Air (at discharge from CRAC/CRAH)
– Cold Aisle or Rack Face
– Return Air (at CRAC/CRAH)

95
Humidity Control
Impacts Cooling System Efficiency
• Beside Cooling, a presumed function of a CRAC is Humidity Control and the CRAC
Humidification & Reheat Process is thought important for maintaining “proper”
levels of humidity. This is still a very common mindset. However this consumes
significant additional energy and the reheat may not be effective.

• In most cases each individual CRAC/CRAH monitors its own return air and uses
it to try to control humidity
– This is the most common practice, but is a very poor practice
– Most often results in “battling CRACs”
– Consider Disabling the humidification unit on some of the CRACs
….. better yet, disable them all
– ASHRAE 2004 40-55% RH
– ASHRAE 2008 Slightly Broader DP 8°F/5.5°C - 60% RH
– ASHRAE 2011 – “Allowable” 20-80%
– ASHRAE 2015 – “Allowable” 8% - 80%

96
Humidity Control
Impacts Cooling System Efficiency
• Consider a moderate level of humidity control Hi-Low set points:
– Broadening the set points i.e. 35% - 70% RH will lower energy use
– Above “Recommended” (15°C dew point) – but less than 80% RH
“Allowable”*
(but should ensure the air stays below the 17°C dew point allowable limit)

• Shut off or stagger set-points on multiple units


– “Primary” CRAC/CRAHs set to 35%-70%
– “Secondary” (Back-up) set points 30-75% will trigger if Primary fails or is
overwhelmed – (indication of problems in a closed envelope)

• Better not to use CRAC/CRAH for humidity control – independent


system or centralized humidification control is preferred
• Measure dew point rather than relative humidity
– Does not vary with temperature
– Truer measure of air moisture content
– Measure only one or two places in the room
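A minimal sketch of converting a temperature/RH reading to dew point using the standard Magnus approximation (not a formula from this manual), illustrating why dew point is the steadier control variable:

```python
# Minimal sketch (standard Magnus approximation, not from this manual):
# controlling on dew point rather than RH, since dew point does not change
# as the same air warms or cools on its way through the room.
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    a, b = 17.62, 243.12                      # Magnus coefficients for water
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

print(f"{dew_point_c(24.0, 50.0):.1f} C dew point")   # ~12.9 C at 24 C / 50% RH
print(f"{dew_point_c(27.0, 60.0):.1f} C dew point")   # ~18.6 C, above the 17 C allowable ceiling noted above
```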
97
Cooling or Ventilating other Areas

• Larger Data Centers have segregated areas for sub-systems


• UPS Rooms – Mechanical Cooled or may use outside air
ventilation (with cooling only to limit temp/humidity
extremes) – Check UPS Specs!
• Battery Rooms – Watch temps closely – 68-72°F/20-22°C
preferred
– 77°F/25°C recommended maximum
– Severe Reduction in battery life from heat above design temperature
– De-rate life by:
• Wet Cell: 2.5% per °C (per 1.8°F)
• VRLA: 5% per °C (per 1.8°F)
– At 90F/32C Battery Design Life would be reduced by approx 50%
– Ventilation:
• Wet Cells require higher Ventilation levels – prevent gas buildup
• VRLA normally non-gassing - just normal ventilation
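A minimal sketch of the derating rule of thumb above; the 22°C reference temperature is an assumption taken from the "preferred" range on this slide, so check the battery manufacturer's own derating curve:

```python
# Minimal sketch of the linear battery-life derating rule of thumb above.
# The 22 C reference temperature is an assumption (top of the "preferred" range);
# real manufacturers publish their own derating curves.

def derated_life_fraction(temp_c: float, rate_per_c: float, ref_c: float = 22.0) -> float:
    return max(0.0, 1.0 - rate_per_c * max(0.0, temp_c - ref_c))

print(f"VRLA at 32 C:     {derated_life_fraction(32, 0.05):.0%} of design life")   # ~50%, as on the slide
print(f"Wet cell at 32 C: {derated_life_fraction(32, 0.025):.0%} of design life")  # ~75%
```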

98
Cooling or Ventilating other Areas..
continued
• Transformer Rooms can use outside air ventilation
– Note temp rise specs of transformers

• Generator Rooms – Must use significant outside air ventilation

• Switchrooms and Mechanical rooms (Water Cooled Chillers): consider using ventilation
– Be wary of the effects of high humidity on electronics e.g. power meters

• Carrier Entrance – Demarc – Telco hand-off


– Typically NEBS Standards* – (higher temps and less/no Humidity control)
– Caution: some “Telco” equipment is really IT gear (routers, etc)
* For Singapore refer ‘COPIF’ for MDF room

99
Raised Floor Design?

• Hiding cabling under raised floors has been one of the traditional reasons
to have raised floors since its inception. In practice it has been proven
that the raised floor plenum performs best when empty (no cables).
However, it can also contain chilled or condenser water piping, and thus
offers protection of the IT gear in the event of leaks
• Perforated floor tiles and even high flow floor grates are usually still
limited to the 1 cold tile per rack which limits the practical maximum
airflow rates
• Raised floor is an added capital and maintenance expense
• Higher cabinet equipment densities result in cabinets that can range
from 900-1360kg (2000-3000lbs) and thus require a higher weight rating
resulting in a more expensive raised floor system
• Ramp can use a significant portion of the whitespace
Alternatively consider a no-raised floor design, where the room is flooded
with cool air e.g. with hot aisle containment

100
Solar Heat Load Summary

• Solar heat gain is a substantial burden in a single-story data

center with a flat roof (or on the top floor of a mixed-use
building), even in cooler geographic areas

• Do not ignore or underestimate its effects on the direct


building heat gain and on the de-rating effect on roof
mounted cooling equipment.

• Heat load = 100-120 W/sqft solar energy!


– Geographic dependent

101
Reducing Roof Heat Loads …after Insulation & White Paint

Solar heat

Average Return
24C

∆ 8C

Supply 16C

• Increased Cooling Requirements


• Potential for ACUs to run over capacity unnoticed as power consumption
suggests there is still capacity left

102
Reducing Roof Heat Loads
Thermal Barrier Utilisation
Solar heat

Venting Venting

Average Return
24C

∆ 6C

Supply 18C

• Create a thermal barrier removing additional solar heat load

103
Reducing Roof Heat Loads
Maximising the Thermal Barrier
Solar heat

Average Return
24C

∆ 6C

Supply 18C

• Remove and also utilise the additional load with


a thermal barrier made up of solar panels

104
Hybrid Approach

• Different Types of Cooling Systems can used together in the


same room

• High Density areas can have different systems (row or


contained cooling), while Low Density may be addressed by
traditional raised floor CRACs

• CFD Modeling can help Optimize Airflow

105
Hybrid Design

Best of All Technologies


(can be used in the same space)
• Up to 5 kW per cabinet
– Raised Floor with Perforated Tiles or Grates
– Offers Flexibility
• 5-15 kW per cabinet
– Bring the Cooling close to the load (e.g. in-row)
– Hot or Cold Aisle Containment
– Hot Air Extraction – Chimney Cabinets or Rear Door
• 15+ kW per cabinet
– Cabinet Contained Cooling
– Hot Air Extraction – Chimney Cabinets or Rear Door

106
Any Questions?

107
EXERCISE

108
BONUS EXERCISE

109
Module 5
The Future – Looking Forward

110
What do YOU see?

111
Module Topics
In this module we will cover:

Air Flow Variation

Liquid Cooling

Containers

Energy Re-use

Thinking Forward

112
How Much Air Flow Do I Need?

Q = M x Cp x dT

So you think you know how much air flow you need?

113
How Much Air Flow Do I Need?

Q = M x Cp x dT

So you think you know how much air flow you need?

The air flow needed by new IT equipment is now expected to


vary according to:
1. The IT processing load (so in a new data centre, expect
the IT fan speed/air flow to be constantly changing)
and
2. IT intake air temperature
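A minimal sketch of the airflow arithmetic behind Q = M x Cp x dT, in both metric units and the usual imperial shortcut (these are the same numbers used in the balancing examples that follow):

```python
# Minimal sketch: required airflow for a given heat load and air temperature
# rise, using the same Q = M x Cp x dT relation as above. The 1.08 factor is
# the common imperial shortcut (BTU/h = 1.08 x CFM x dT_F, sea-level air).

def airflow_m3_per_s(heat_kw: float, delta_t_c: float) -> float:
    rho, cp = 1.2, 1.005                 # kg/m^3 and kJ/(kg*K) for air
    return heat_kw / (rho * cp * delta_t_c)

def airflow_cfm(heat_btu_per_h: float, delta_t_f: float) -> float:
    return heat_btu_per_h / (1.08 * delta_t_f)

print(f"{airflow_m3_per_s(1.0, 11.0) * 1000:.0f} L/s")   # 1 kW at an 11 C rise -> ~75 L/s
print(f"{airflow_cfm(3412, 20.0):.0f} CFM")              # 1 kW at a 20 F rise  -> ~158 CFM
```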

114
Typical Server Airflow vs Intake Temperature

Source ASHRAE TC-9.9

115
Power & Airflow of 1U Energy Star Server

(Chart: at full load the heat load is 495 W (1690 BTU/h) with a temperature rise of 15.8°C (28.5°F);
at idle the heat load is 135 W (460 BTU/h) with a temperature rise of 9.9°C (18°F).
Airflow figures of roughly 5 to 25 CFM, 15 CFM and 56 CFM appear on the chart, rising with load.)
116
IT Equipment Intake Temps and Airflow
• Higher Intake Air Causes the Fans in the IT gear to Speed Up
• The CRAC/CRAH must then also raise its own Fan Speed/Airflow
to meet the IT gear’s need for more airflow and deliver the
required cooling
• The overall result of higher intake air temps is that both the IT
fan and the CRAC/CRAH fans increase in speed – using more
energy.
• It is therefore important to “fine tune” the data center
temperature to find the ideal intake temperature vs total energy
point – beyond a certain point, the total energy may rise with
the temperature instead of decreasing!
• Studies have found that this begins to
occur at approx. 25-25.5°C (77-78°F)
for typical IT equipment, however,
each data center is different
(see “knee” in chart)

117
Balancing Act – if the IT Fans Speed Up (result: recirculation)

             CRAC                    IT
Power:       1 kW                    1 kW
Heat:        3412 BTU/h              3412 BTU/h
Delta-T:     20°F (11°C)             15°F (8°C)
Airflow:     160 CFM (75 L/s)        210 CFM (100 L/s)

The IT equipment draws more air than the CRAC supplies, so hot exhaust air recirculates to the intakes.

118
Balancing Act – if Excess Cool Air (result: bypass, loss of efficiency)

             CRAC                    IT
Power:       1 kW                    1 kW
Heat:        3412 BTU/h              3412 BTU/h
Delta-T:     20°F (11°C)             25°F (14°C)
Airflow:     160 CFM (75 L/s)        125 CFM (60 L/s)

The CRAC supplies more air than the IT equipment draws, so cool air bypasses the load and returns unused.

119
Balancing Act - Ideal

             CRAC                    IT
Power:       1 kW                    1 kW
Heat:        3412 BTU/h              3412 BTU/h
Delta-T:     20°F (11°C)             20°F (11°C)
Airflow:     160 CFM (75 L/s)        160 CFM (75 L/s)

Start with selecting the CRAC correctly to match the IT air flow …. but then IT changes!
120
Design Challenges

• As can be seen, it is no longer simple to calculate airflow


requirements in a data center with highly dynamic heat
loads
• Newer Energy Star IT equipment will have a much wider
airflow range based on power used and intake temperature
• In addition, raising intake temperatures will cause all IT
equipment fans to speed up
• While each data center is different, it will become more
important to monitor and compensate for these varying
airflow conditions
• Airflow rates and static pressure measurements, as well as
temperature may become necessary inputs to control
CRAC/CRAHs and other airflow units

121
Strategies for Mis-Matched Airflows

Airflow to the Rack –


• IT gear is constantly changing each time IT equipment is added or changed
• Individually or Technology Refresh
• New IT equipment has a much Broader Range of Airflow

Total At the Room Level Cooling System (CRAC/CRAH)


• Variable Speed Fans can be controlled by
Temperature
– Where to Measure?
• Static Air Pressure
• Vary airflow at the tile level - match tile to cabinet airflow requirement
• Optimum would be to take chip temperature information from the IT
equipment, and use that to control CRAC fan speed … is this the Future?

122
Module Topics

Air Flow Variation

Liquid Cooling

Containers

Energy Re-use

Thinking Forward

123
Water vs Air
• Air Cooling
– Universal - Easy … Common Standard
– Easy to install, service and remove
– Lower Heat Transfer – Far More Fan Energy Required than water pumps

• Water is ~3,500 times more effective than air as a heat transfer medium! (by
volume and delta T)
– Requires Plumbing
– CRAH Computer Room Air Handler - very common
– Air to Water heat exchangers (really air cooling)
(most large data centers use “water via CRAH – via raised floor”)
"Close Coupled" In-row – sometimes referred to as Liquid Cooling – a misnomer

124
Liquid Cooling?
Is this a liquid-cooled server?

125
Liquid Cooling?
Is this a liquid-cooled server?

No! (almost, but no)


Although the heat is initially taken away from the chip by a liquid, the liquid
does not take the heat away from the server. The server still requires full
air flow to remove the heat
126
Liquid Cooling

Immersion, internal-to-external liquid circulation, and cooling pads are all


forms of liquid cooling – the liquid takes the heat out from the computer,
thereby reducing the amount of air flow required through the IT equipment
127
Liquid Cooling:
• Can use Water, Glycol, Refrigerant, Oil or a combination
• Requires pipes going into cabinets
• “Non-standard” IT equipment (but liquid-cooled servers
that had previously been limited to research projects and
high-performance computing, are now becoming
commercially available)
• High Density will initially drive
the adoption of liquid cooling
• Once the energy efficiency
benefits are realized, it might
also be adopted for medium, or
even low density cabinets

• An 80 kW cabinet is available commercially

128
ASHRAE Latest Release “Liquid Cooling”
Thermal Guidelines for Liquid Cooled Data Processing Environments -
Released September 2011 by ASHRAE TC 9.9, Updated 2015
• Describes classes for the temperature ranges of the facility supply of
water to liquid cooled IT equipment.
• Also reinforces some of the information provided in the Liquid Cooling
Guidelines book on the interface between the IT equipment and
infrastructure in support of the liquid cooled IT equipment.
• Because of the energy densities found in many high performance
computing (HPC) applications, liquid cooling can be a very appropriate
technology.
• 30 kW racks are typical with densities extending as high as 80 to 120 kW.
Without some implementation of liquid cooling these higher powers
would be very difficult if not impossible to cool.
• Advantages of liquid cooling increase as the load densities increase.
• More details can be found in “Liquid Cooling Guidelines for Datacom
Equipment Centers”, part of the ASHRAE Datacom Series.

129
ASHRAE Thermal Guidelines for
Liquid Cooled Data Processing Environments

130
Highlights of ASHRAE “Liquid Cooling”
Equipment Classes
• Class W1/W2
– Typically a data center that is traditionally cooled using chillers and a
cooling tower but with an optional water side economizer to improve on
energy efficiency depending on the location of the data center. See Figure
3a slide 51/52
• Class W3
– For most locations these data centers may be operated without chillers.
Some locations will still require chillers. See Figure 3a slide 51/52
• Class W4
– To take advantage of energy efficiency and reduce capital expense, these
data centers are operated without chillers. See Figure 3b slide 51/52
• Class W5
– To take advantage of energy efficiency, reduce capital expense with chiller-
less operation and also make use of the waste energy, the water
temperature is high enough to make use of the water exiting the IT
equipment for heating local buildings. See Figure 3c slide 51/52

131
Highlights of ASHRAE “Liquid Cooling”
Equipment Classes

Source : ASHRAE TC9.9 2015


132
Highlights of ASHRAE “Liquid Cooling”
Equipment Classes (Metric)

133
Highlights of ASHRAE “Liquid Cooling”
Equipment Classes (Imperial Units)

134
Highlights of ASHRAE “Liquid Cooling”

135
Temperature Stack
• Figure B-2 shows the stack of
approach temperatures, figuratively
depicting the temperature rise from
ambient (cooling tower or dry
cooler) through to the liquid
provided to the IT equipment.
• The left stack is for a dry cooler
from the 99.5°F value found in the
lower chart in Figure B-1 through a
Cooling Distribution Unit (CDU) and
any system preheat (e.g. series cooling inside
the IT) and finally the temperature rise
through to the case temperature

• The right stack uses a cooling tower


with the 79.7°F wet bulb
temperature from Figure B-1.

136
IBM Hydro Cluster
Liquid Cooled CPUs

IBM Corp

137
IBM Hydro Cluster

IBM Corp

138
IBM
Hydro
Cluster

IBM Corp
139
LIQUID COOLING EXAMPLE : AIST

USES NOVEC
140
LIQUID COOLING EXAMPLE : SUBMER

Only connect power and network


Rated at 6 kW of IT, but accounting for the omission of IT fans and
improved chip cooling (better processing efficiency), it is
equivalent to a higher kW of traditional air-cooled IT
141
Module Topics

Air Flow Variation

Liquid Cooling

Containers

Energy Re-use

Thinking Forward

142
Containers

• While containers may not be suitable for every data center,


some of their underlying design elements can be examined
and may be useful in a more conventional DC

• ‘Close Coupled’ Cooling

• Outside Direct Air Cooling

• Outside Indirect Air Cooling

• Minimal use of Chilled Water or DX cooling (Economising)

143
Container and Modular Data Centers

• Chilled Water Cooled


• Air Cooled- w/ DX for “Peak Temp Trim”
• ISO Container sized (20 & 40ft)
• Larger Modular systems
– Typically shipped in container sized sections


144
Container with Isolated Air Side
Economizer Air-to-Air Heat
Exchanger and Exterior Adiabatic Cooling

145
Module Topics

Air Flow Variation

Liquid Cooling

Containers

Energy Re-use

Thinking Forward

146
Thinking Forward

• Densities will continue to rise and energy efficiency will


become even more important

• New cooling technologies, as well as “acceptable” operating


conditions and practices in the data center are evolving

• This trend will continue to accelerate..

• However, while previously we were satisfied if cooling


systems kept the IT equipment at a “safe” operational
temperature, we are now looking for ways to recover and
re-use the “waste” heat energy

147
Thinking Future

• Improving the design of cooling systems has been


primarily focused on optimizing effectiveness to
meet rising heat density loads, while trying to
improve the energy efficiency of fans,
compressors, etc.
• If we were able to reduce the average PUE of
data centers from the present value of 2.0
(majority ranging 1.5-2.5), to a very low range of
PUE of 1.1 – 1.2, it would represent a huge
improvement.
• However, the IT load still converts virtually every
kW into heat, which is “waste” energy that is
expelled to the environment, so it directly
contributes to global warming.

148
Thinking Further

• Only recently has the concept of


capturing and re-using the heat been
given any serious consideration

• This is difficult, since the heat from IT


equipment is “low grade” heat, however
it is still possible to use it to:
– Warm other buildings
– Heat water
– Serve any other purpose that would otherwise have burned fuel or used electrical energy

149
Moving Beyond PUE
ERE: Energy Reuse Effectiveness
I am going to be re-using waste heat from my data
center in another part of my site:

Then my PUE will be 0.8! ✗ (wrong)


• While re-using excess energy from the data center can be a
good thing to do, it cannot be rolled into PUE

•The definition of PUE does not allow this

•There is a separate metric to do this! ERE

© 2011, The Green Grid

150
ERE: Energy Reuse Effectiveness

PUE = Total Energy / IT Energy = 150 MWh / 100 MWh = 1.5

– ERE is similar to PUE, but allows the offset of Reuse Energy:

ERE = (Total Energy – Reuse Energy) / IT Energy

– ERE range:
• Minimum "0" (all energy re-used)
• No maximum (little or no energy re-used)

– Example: ERE = (Total 150 MWh – Reuse 20 MWh) / IT Energy 100 MWh = 1.3

Note: PUE is still 1.5
© 2011, The Green Grid

151
ERE: Energy Reuse Effectiveness

152
153
ERE: Energy Reuse Effectiveness

• Under the definition of ERE, the “Waste Heat” must be re-used outside
of the *data center boundary area, such as office space, other
buildings. The Waste Heat can be re-used directly as Heat or converted
to:
– Cooling (heat converted to cooling via an absorption chiller)
– Electrical Energy (converted via a technology)

• If the Waste Heat is re-used within the data center boundary area, then
it is not considered as ERE, but it will still be beneficial and help lower the
PUE, since it would lower the “Total Energy” within the Data Center
boundary area

*(Data center whitespace or directly related support areas – UPS. Battery, mechanical or electrical rooms)

154
155
ERE: Energy Reuse Effectiveness

– ERF: Energy Reuse Factor

– ERF is a sub-set of ERE

ERF = Reuse Energy / Total Energy

– ERF range:
– Minimum 0 (no reuse)
– Maximum of 1.0 (all energy re-used)

– Example: Reuse 20 MWh (directly or energy equivalent)
ERF = 20 MWh / 150 MWh = 0.133

© 2011, The Green Grid
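A minimal sketch of the PUE / ERE / ERF arithmetic from the two examples above:

```python
# Minimal sketch of the ERE / ERF arithmetic shown above (values from the examples).

total_mwh, it_mwh, reuse_mwh = 150.0, 100.0, 20.0

pue = total_mwh / it_mwh                   # 1.5 (unchanged by reuse)
ere = (total_mwh - reuse_mwh) / it_mwh     # 1.3
erf = reuse_mwh / total_mwh                # 0.133

print(f"PUE={pue:.2f}  ERE={ere:.2f}  ERF={erf:.3f}")
```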


156
ERE: Energy Reuse Effectiveness

Impediments to Re-Use of waste heat

• “Low Grade Heat” (low temperature) which is difficult to use


effectively.

• High/Additional Capital Cost


– Adds significantly to the cost, especially if retro–fit, with poor ROI

• Complexity:
– While in some cases some of the heat can be used to simply warm office
space during the winter, it must still be rerouted outside for the rest of
the year, adding to the complexity and cost

157
ERE: Energy Reuse Effectiveness
Potential technologies to allow for more effective Reuse of waste heat
• Low Grade Heat (simplest)
– Warm other space or heat water (even partially)

• Upgrade to Higher Grade Heat


(Super Heater or other technology- may require some additional energy)
– Warm other space or heat water (complete heat source)
– To operate Absorption Chiller
– To operate Stirling Cycle Engine (Heat Differential engine)
– Steam Turbine

• Convert to Electrical energy


– Thermocouple - Peltier Device
– Stirling Cycle Engine (drive Generator)
– Other technologies to be developed

158
IBM HydroCluster
• Envisions Re-using Heat from Servers to heat homes or other buildings

159
Spot the Data Centre in this Picture

160
Energy Energy Energy

• As we continue to improve cooling efficiencies, PUE is getting closer to nearly perfect (e.g. 1.1) and perhaps lower

• The next challenge is to make more common and effective use of the major underlying byproduct – the waste heat

• Rather than simply being satisfied that we have improved cooling system efficiencies to the point where only minor improvements can be made, we should make energy re-use and recovery a target

161
Module Topics

Air Flow Variation

Liquid Cooling

Containers

Energy Re-use

Thinking Forward

162
Thinking Forward

• The Data Center as we know it is changing rapidly.

• What was previously heresy to most designers and operators, such as 27°C (80°F) cold aisles or using outside air, is becoming commonly accepted practice

• It takes approximately 18-24 months for a data center to be designed, built and commissioned

• In that time, the IT gear will have moved ahead 1-2 generations

• Cooling equipment technologies, as well as IT equipment environmental operating envelopes, may have progressed by the time the site is commissioned

163
Sea, River and Lake Water Cooling

• Many non-traditional cooling systems are being developed and tested to deal with increasing cooling demands while improving energy efficiency

• Using natural bodies of water as a source of cool or cold water eliminates the need for mechanical cooling, and thereby saves energy

• While utility power plants have used natural bodies of water for cooling for many years, only recently have data centers begun to test this

• While it can improve PUE, since it eliminates the need to have or run mechanical chillers, it still “dumps” waste heat into the environment

• It does not reuse the heat for a useful purpose

164
Microsoft Seawater Cooling

165
Standards are Developing

BS EN 50600-2-3:2014

166
If Your Data Center Goes Down and
Nobody Notices – Does it Matter?
Are Two Tier 1 sites better than one Tier 4 Data Center?
• Part of the inherent IT design philosophy of Internet search, social media and web services sites is the concept of site redundancy and overlap
• They are able to fail over and transfer the compute request in the event of the loss of:
– A single server
– A cluster of servers
– An entire data center
• This has lessened their reliance on infrastructure redundancy, significantly improving energy efficiency, and placed the emphasis on the IT architectural design
• This allows some sites to accept a Tier 1 (N) design with minimal risk, because there are 2 or more geographically separated data centers, each capable of sharing or servicing the compute load

167
IT equipment becoming more robust,
can operate at a much wider environmental window

• Currently Available Storage Product


– Solid State Drives “SSD”
– Enterprise Drive 400 GB
– 2.5” Form Factor – Hot swap
• Interchangeable with existing equipment
• Operating Temp 32 to 140°F / 0-60°C
• Humidity 5-95% RH
• Shock Operating 1000Gs
• Average Idle Power (W) 5.92
• Average Operating Power (W) 6.67

168
Current trend
Which free air cooling methods are you currently using in your data centre(s)?

Figures are % of respondents; N = unweighted sample size.

| Method | Asia Pacific (N=562) | Middle East & Africa (N=271) | North America (N=368) | Latin America (N=359) | Primary Europe (N=503) | Secondary Western Europe (N=314) | Eastern Europe (N=272) | Total (N=2649) |
|---|---|---|---|---|---|---|---|---|
| Air cooled chillers with free cooling | 19.5 | 29.3 | 27.1 | 22.4 | 30.8 | 38.0 | 18.5 | 26.7 |
| Direct air cooling (fresh air from outside) | 20.3 | 32.0 | 28.5 | 23.6 | 27.0 | 28.4 | 31.1 | 26.5 |
| Water-side economizers (water cooled CRAC) | 33.4 | 16.8 | 41.7 | 17.5 | 30.2 | 26.8 | 8.6 | 26.3 |
| Water cooled chillers with free cooling | 23.3 | 15.1 | 40.7 | 12.8 | 35.7 | 32.6 | 15.8 | 26.1 |
| Evaporative cooling | 24.2 | 14.4 | 42.6 | 13.8 | 17.5 | 13.5 | 9.2 | 19.4 |
| Indirect air economizers | 6.8 | 12.1 | 7.0 | 7.3 | 7.9 | 6.3 | 6.6 | 7.6 |
| Other methods | 9.0 | 13.4 | 9.3 | 16.2 | 12.1 | 12.9 | 13.0 | 12.2 |

169
Future trend
Which do you plan to introduce into your datacenter(s)?

Figures are % of respondents; N = unweighted sample size.

| Method | Asia Pacific (N=562) | Middle East & Africa (N=271) | North America (N=368) | Latin America (N=359) | Primary Europe (N=503) | Secondary Western Europe (N=314) | Eastern Europe (N=272) | Total (N=2649) |
|---|---|---|---|---|---|---|---|---|
| Direct air cooling (fresh air from outside) | 28.9 | 19.5 | 29.4 | 34.7 | 30.3 | 26.5 | 17.9 | 28.3 |
| Water cooled chillers with free cooling | 20.4 | 21.3 | 22.6 | 31.7 | 18.5 | 20.7 | 14.7 | 22.2 |
| Air cooled chillers with free cooling | 18.8 | 16.6 | 24.2 | 30.2 | 22.0 | 11.5 | 14.2 | 21.0 |
| Indirect air economizers (thermal wheel/Kyoto cooling) | 23.1 | 23.8 | 23.8 | 27.2 | 12.7 | 21.2 | 6.9 | 20.3 |
| Water-side economizers (water cooled CRAC) | 21.5 | 21.3 | 16.2 | 29.7 | 13.9 | 19.6 | 9.6 | 19.8 |
| Evaporative cooling | 16.6 | 15.9 | 16.7 | 21.6 | 9.5 | 11.2 | 7.3 | 14.7 |
| Other methods | 14.0 | 15.5 | 12.1 | 14.0 | 6.9 | 12.3 | 16.5 | 12.5 |

170
The Ultimate “Cooling” System Design Goals

• Be Inherently Resilient (Redundant or Fail Safe)


• Use Very Little or No Electrical Energy
• Use Very Little or No Water
• Provide Energy Recovery/Reuse
• Be Simple to Operate and Maintain
• Be constructed with overall sustainability in the materials
and construction process chain (e.g. no refrigerant gases)
• Do all of the above, at minimal Capital Expense

171
Final Thoughts
• Q> Which is the “Best” cooling system?
– A> It depends.
• Q> Which is the most efficient cooling system?
– A> It depends.
• Q> Which is the most cost-effective cooling system?
– A> It depends.
• Q> Which is the worst cooling system?
– A> It depends.
• Q> Will power densities continue to rise?
– A> It depends.
• Make sure that you design for flexibility, because when it
comes to IT, the only constant is … change!

172
Don’t keep your head in the same old sand when designing cooling systems

173
Any Questions?

174
EXERCISE

175
Module 6
CFD Fundamentals

176
Module Topics

In this module we will cover:

Introduction to CFD

The Business Case

Case Studies

Calibration & Validation

Review and Looking Forwards

177
CFD

Computational Fluid Dynamics

Usually abbreviated as CFD, is a branch of fluid mechanics that uses


numerical methods and algorithms to solve and analyze problems that
involve fluid flows. Computers are used to perform the calculations
required to simulate the interaction of liquids and gases with surfaces
defined by boundary conditions.

Airflow Modelling for Data Centres

178
CFD as an Option

Lets spend a couple of minutes discussing what we know…

• Benefits

• Concerns

179
CFD – History

Navier-Stokes equations – 1823

Space industry – 1960s

Data centres – 2000 onwards

It’s not new! The application of CFD has been evolving for over 50 years now

180
CFD – Where Did it Come from?

Timeline: 1960s – 1970s – 1980s – 2000

In-house CFD numerical methods / code development

1st Generation: general purpose, commercially available

2nd Generation: problem specific, e.g. electronics cooling

3rd Generation: data centre specific

Over time, software has become more accessible and user friendly, with many packages now targeting specific industries

181
CFD – How does it work?
A 3-dimensional model is first created, incorporating all of the important objects that contribute to blockage of airflow and to the thermal conditions within the room. The fewer assumptions that are made, the more accurate the predictions will be.

[Model inputs illustrated: ACU cooling curves / control strategies, server fan curves, operating conditions]

182
CFD – How does it work?

The model is then divided into many cells (gridding) to accurately represent each individual item within the model.

Just like sensors, each cell can hold information on many variables:
• Velocity
• Temperature
• Pressure
• Humidity
• Concentration

The more cells, the greater the detail and resolution, but the longer the model will take to solve. Software capable of greater resolution also tends to be more expensive.

183
CFD – How does it work?
The discretised model can now be solved iteratively. For every iteration, values are calculated for each cell. The imbalance between cells for each variable is then summed to give a total (residual) error for the solution (see graph below). If the residual error is too large, the process repeats for another iteration until the error is reduced to an acceptable level.

[Graph: residual error falling from the starting error towards an acceptable error as the iteration number increases]
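The convergence loop described above can be illustrated with a minimal sketch: a toy 2-D steady-state heat-conduction problem solved by Jacobi iteration. This is only an illustration of the iterate-until-the-residual-is-small idea, not the scheme used by any particular data centre CFD package (real solvers also handle momentum, turbulence and pressure coupling):

```python
import numpy as np

# Toy grid: start everything at 20 C, fix a 40 C "hot" boundary on the left edge.
T = np.full((50, 50), 20.0)
T[:, 0] = 40.0

tolerance = 1e-4              # acceptable residual error
for iteration in range(20_000):
    T_new = T.copy()
    # Each interior cell takes the average of its four neighbours.
    T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    # Residual = summed imbalance between successive iterations.
    residual = np.abs(T_new - T).sum()
    T = T_new
    if residual < tolerance:
        print(f"Converged after {iteration + 1} iterations")
        break
```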

184
CFD – How does it work?
Once complete, the predicted thermal information can then be extracted in many different forms:

• Block colours
• Result planes
• Airflow animations
• Tabular reporting

185
The Application of CFD to the Data Centre
Industry
Equipment Manufacturer IT / Procurement Facilities Manager

Simulation has the ability to bring together all of the thermal aspects from the chip to the data centre (the IT supply chain).
This in turn provides the opportunity to use it not only as an analysis tool but also as a communication tool, bringing together all associated departments, from IT to facilities, including management.

186
The Application of CFD to the Data Centre
Industry
IT Equipment Simulation tools have been used in the
production of IT equipment for over 30 years
(Every electronic device that you personally purchase today will have had some form of simulation applied to it!)
Market competition, the pressures of time to
market, cost, and resilience of product have all
been factors in the successful adoption of CFD
into a standard design process
Extremely fast technology changes - Server
model refresh can be as little as 6 months! No
place for physical prototypes
Usually a single department with experienced
design engineers controlling all parameters

187
The Application of CFD to the Data Centre
Industry
Data Centre Seen in the data centre arena around the year
2000 when chip densities started to become an
issue
Utilised predominantly in the design phase or
one-off troubleshooting exercises.
Most simulations are carried out by external
consultants to the end customer as they feel
they do not have the engineering capability in-
house
Not recognised as part of a standard process so
seen by many as time consuming and costly
(This is changing slowly!)
Multiple departments have input into the space
Still evolving

188
The Application of CFD to the Data Centre
Industry

Both electronics cooling and data centre cooling are about managing the power distribution and cooling of electrical components within a box.

For electronics the design remains fixed; a data centre design changes with time, so the data centre problem is much greater.

The Problem: essentially an electronics cooling problem at room scale
The Challenge: how to incorporate simulation into existing processes, and to create software that does not require the skills of a rocket scientist

189
Section Topics

In this module we will cover:

Introduction to CFD

The Business Case

Case Studies

Calibration & Validation

Review and Looking Forwards

190
Operational Intent

[Chart: utilization vs time – utilized capacity (space, power, cooling, network) grows towards the design capacity by the time the next data center is needed]

The operational intent is to reach 100% capacity over the proposed lifetime of the data centre

191
The Reality…

[Chart: utilization vs time – utilized capacity (space, power, cooling, network) levels off below the design capacity, leaving lost capacity]

Operational changes create a mid-life crisis and lost capacity.

192
Data Centre Capacity

[Diagram: capacity assumption vs reality]

Capacity is made up of four components: Space (U), Power (kW), Cooling (kW), Network (ports).

Capacity is lost when there is no location in the data centre where all the components are
available together in sufficient capacity for a proposed installation of equipment.

193
Data Centre Fragmentation
[Diagram: five cabinets, each showing Available U: 33 and Available kW: 4, against the room's total space and total power]

All cabinets start off with 100% available capacity

194
Data Centre Fragmentation
Capacity is lost when there is no location in the data centre where all the components are available together in sufficient capacity for the proposed install.

[Diagram: the five cabinets now show partially used capacity – Available U values of 13, 15, 26, 17 and 17, and Available kW values of 1.2, 1.7, 0, 2 and 1.7. The next change is a cluster needing a total of 11U of space and 1.4kW of power.]

All cabinets fail on at least one of the parameters
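The fit check behind this statement is easy to express in code; a minimal sketch with illustrative cabinet values (not taken from the diagram), checking only space and power for brevity – cooling and network ports extend the same test:

```python
# Each cabinet's remaining capacity: (space in U, power in kW).
cabinets = {
    "A01": (13, 1.2),
    "A02": (4, 1.7),
    "A03": (26, 0.0),
    "A04": (9, 2.0),
}

def find_location(cabinets, needed_u, needed_kw):
    """Return cabinets where the whole install fits; capacity is lost when none do."""
    return [name for name, (u, kw) in cabinets.items()
            if u >= needed_u and kw >= needed_kw]

# Proposed install: an 11U cluster drawing 1.4kW.
fits = find_location(cabinets, needed_u=11, needed_kw=1.4)
print(fits or "No single location fits - this remaining capacity is effectively stranded")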

195
Data Centre Fragmentation
[Diagram: the remaining space (13U to 26U) and power (0kW to 2kW) in each cabinet, shown against the room's total space and total power]

At this stage, available capacity becomes lost capacity

196
Lost (Unavailable) Capacity – it all adds up

With the advent of DCIM tools, it is becoming easier to handle space, power and networking to minimise lost capacity, as these are visible and power/network may have some flexibility – and once you install, you know how much you've got

Imagine how much more difficult it would be if power, data and network cables were invisible and uncertain in value...

Unfortunately, for a complete picture on capacity planning there is one variable we really need to assess... and it is invisible!

197
Cooling is Invisible!
• Most IT Equipment is air cooled
– There are no standards for cooling design.
– IT equipment designs change over time.
– IT deployment decisions often do not
consider air flow or cooling.
– Data Center cooling infrastructure is fixed
over its lifetime.
– The owner/operator has the responsibility to
deliver sufficient airflow and cooling to
every piece of IT equipment at all times.
• But air is invisible!
– Hotspots can lead to lost capacity.
– Do you really know how much effective cooling capacity you have available?

198
Planning an IT layout without considering
airflow causes capacity to be lost

Mid-range equipment,
2 layouts, one works... one fails.

199
Example: IT demands do not match design
assumptions
• We will consider a simple small example data hall:

• Design Specification
– 200kW data hall
– Raised Floor
– Range of 3kw and 4kw Cabinets
– N+1 cooling redundancy

• Design Assumptions
– Hot aisle cold aisle arrangement
– All IT equipment breathes front to back
– Temperature rise across the rack is 12°C
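The 12°C ΔT assumption above fixes the airflow each rack needs per kW; a minimal sketch of that relationship, using the standard sensible-heat formula with typical air properties (the rack sizes are taken from the design specification, everything else is a generic assumption):

```python
# Airflow needed to remove heat at a given air temperature rise (sensible heat only).
rho_air = 1.2      # air density, kg/m3 (approximate, at ~20 C)
cp_air  = 1005.0   # specific heat of air, J/(kg*K)

def airflow_m3_s(power_kw, delta_t_c):
    """Volumetric airflow required for a given IT load and air temperature rise."""
    return (power_kw * 1000.0) / (rho_air * cp_air * delta_t_c)

for rack_kw in (3, 4):                       # rack sizes from the design spec
    q = airflow_m3_s(rack_kw, delta_t_c=12)  # 12 C rise across the rack
    print(f"{rack_kw} kW rack: ~{q:.2f} m3/s ({q * 2118.88:.0f} cfm)")
```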

200
Confirming Design Assumptions through CFD

[Diagram: CFD model of the hall – ACU capacities of 40kW, 40kW, 30kW, 30kW, 30kW and 30kW against a total IT load of 200kW, with cooling utilization over time reaching 100%]

201
Accommodating IT Demand

• Day 1 IT demand is to migrate some existing free standing mid-range


equipment from an existing data centre

• Design Specification
– 200kW data hall
– Footprint
– N+1 cooling redundancy

• Design Assumptions
– Hot aisle cold aisle arrangement
– All IT equipment breathes front to back
– Temperature rise across the rack is 12°C

202
The mid-range equipment fits into the room
based on design and installation
specifications

[Diagram: the migrated mid-range row at 80kW, the other rows at 0kW, shown against the cooling capacity and utilization gauges for a planned total load of 140kW]

203
The other four rows are loaded up according
to the original design
[Diagram: the mid-range row at 80kW and the other four rows at 10kW each, shown against the cooling capacity and utilization gauges for a planned total load of 140kW]

204
Lost Capacity is only realised when things go
wrong
[Diagram: the mid-range row at 80kW and the other four rows at 15kW each (total load 140kW); 30% of the design capacity is shown as lost capacity]

205
But this Lost Capacity could have been
predicted when the problem was caused

• Let’s wind back the clock…

206
The Lost Capacity was caused by the
installation of the first 40%
[Diagram: immediate effect – the mid-range row at 80kW, the other rows at 0kW]

Simulation shows that the original deployment was OK at the time, but what are the long term effects?

207
A simulation of the long term effects would
have shown that this deployment sacrifices
30% of the capacity of the data center
[Diagram: the future – the mid-range row at 80kW and the other rows at 15kW each (total load 140kW); the assumption was full utilization of the design capacity, the reality is 30% lost capacity]

208
Utilisation and Stranded Capacity - Summary

Problem:
• Any IT deployed in a data centre will disrupt the airflow and
cooling even in empty zones of the room.

Symptom:
• The simple example we just looked at illustrates a common
feature of data centres that is not well understood:
• If your data centre is running at 40% of design load, you do not
have 60% capacity left!
• Thermal hotspots will occur before the data centre reaches
capacity.

Reaction:
• Resilience concerns prevent further installation of IT devices
• Data Centre is effectively ‘closed’.
• More data centres are built to gain more capacity.

209
Foresight is necessary to avoid lost capacity

Any short term change could sacrifice


available capacity in the long term.

To evaluate a change it is necessary to predict short and long term effects:

• Predict resilience and efficiency in the short term

• Predict any lost capacity in the long term

210
First simulation is to check the initial
resilience
[Diagram: the mid-range row at 80kW, the other rows at 0kW, checked against the cooling capacity at the desired resilience]

The original deployment was OK at the time, but what are the long term effects?

211
Simulating the future shows that the room
will be loaded to 90% before warnings
[Diagram: the mid-range row at 80kW and the other rows loaded further, for a total load of 180kW; warnings appear at 90% of the design load, leaving 10% lost capacity]

212
The Business Case: Build a new data centre?

Do we end up spending $ ?? Million on a new Data Centre?

Or spending $ ?? Million on trying to upgrade aircon for the other data centres?

[Diagram: several existing data centres each stuck at 60% utilization, plus building a new data centre at a cost of $??M*]

* Annualised Cost Estimates based on Uptime Institute true TCO calculator based on a 1000m2
Cost of IT equipment ignored

213
The Business Case : Better Data Centre Management

Minimise stranded capacity in the original data centres

[Diagram: recovering +20% utilization in each of the existing data centres, currently at 60%, is equivalent to the $??M cost of building a new data centre]

Annualised Cost Estimates based on Uptime Institute true TCO calculator


Cost of IT equipment ignored

214
Business Case : Energy Savings

Potential Applications for Energy Savings:

• Utilisation of Free Cooling

• Reduce airflow to optimise fan power

• Increase ACU and chilled water set points

215
Frequency of change in an operational data
centre

[Diagram: Projects & Engineering, Facilities and IT all making changes to the operational data centre]

• Changes are made continuously, deviating from initial design assumptions.
• Different departments make changes independently.
• Any change could sacrifice resilience and cause lost capacity.

216
The Virtual Model – Predict before you
commit

[Diagram: Projects & Engineering, Facilities and IT test changes against the Virtual Model before applying them to the real facility]

• Changes are first checked in a Virtual Model.
• Departments make changes with knowledge of other departments’ plans.
• Lost capacity is minimised.

217
The Virtual Model (or “Digital Twin”)

The Virtual Model is a full 3-dimensional mathematical representation of the physical data center at any point in time

218
Take a Phased Approach – Designing a New Data Centre

1. Validate M&E Design
• Create Virtual Facility from drawings.
• Estimate load distribution in racks.
• Match ΔT of IT to design assumptions.
• Test cooling system at desired resilience.
(Phase 1 will not guarantee resilience of future IT layouts.)

2. Validate IT Layout
• Incorporate IT plans into Virtual Facility.
• Identify any lost capacity.
• Revise IT layout to match cooling.
• Improve cooling delivery for IT layout.
(Phase 2 will not guarantee against future lost capacity if plans change.)

3. Change Management / Revalidation (ongoing Deploy – Re-Calibrate – Load Plan cycle)
• Predict the consequences of change.
• Deploy new hardware with confidence.
• Adapt load management plans.

219
Take a Phased Approach – Improving an Existing Data Centre

1. Improve Facility Resilience & Efficiency
• Create Virtual Facility from survey.
• Calibrate against site measurements.
• Identify resilience and efficiency issues.
• Simulate fixes and make changes.
(Phase 1 will not guarantee resilience of future IT layouts.)

2. Maximise IT Utilisation
• Incorporate IT plans into Virtual Facility.
• Identify any lost capacity.
• Revise IT layout to match cooling.
• Improve cooling delivery for IT layout.
(Phase 2 will not guarantee against future lost capacity if plans change.)

3. Change Management / Revalidation (ongoing Deploy – Re-Calibrate – Load Plan cycle)
• Predict the consequences of change.
• Deploy new hardware with confidence.
• Adapt load management plans.

220
Section Topics

In this module we will cover:

Introduction to CFD

The Business Case

Case Studies

Calibration & Validation

Review and Looking Forwards

221
Case Study 1 – Troubleshooting/Assessing Options

• 100 m2
• Underfloor Obstructions
• Chilled Water Pipes
• Power Cables
• Cable Trays
• 25 Floor Grilles
• 4 ACUs
• 120 kW of cooling capacity
• 24 Cabinets
• 40 kW of IT load
• IBM Blades – 4 Units
• Rackable C3106 – 12 Units
• HP Proliants DL360 G4 – 15 Units
• NetApp DS14 MK2 - 12 Units
• Cisco Systems Catalyst 6509 – 4
Units

222
Case Study – Simulation Results
| Cooling requirement | Key results | Key conclusions |
|---|---|---|
| ACU Supply to Grille | 16,950 cfm delivered, 7,633 cfm required – air supply is 2x more than is needed | Air supply can be reduced |
| Grille to Equipment Inlet | 73% of cooling air is bypassing | Air supply can be reduced; ACU performance limited by short circuiting of supply air |
| Exhaust to ACU Return | 44% of exhaust air is recirculating | Cooling paths are ill defined, leading to potential overheating of equipment |
| Equipment Thermal Safety | Doubled-up networking switches are overheating | Overheating must be controlled before the cooling system can be optimized |

223
Case Study – Design Options
Option 1
• Adjust damper settings for floor grilles
• Shut off ACU-02; reduce supply flow rate by 25%
• Redistribute networking equipment

Option 2
• Option 1 +
• Blanking & baffling
224
Case Study – Design Options
Option 3
• Option 1 +
• Option 2 +
• Chimney cabinets
• Addition of a false ceiling

Option 4
• Remove peripheral ACUs
• Blanking & baffling
• Add in-row coolers
• Add hot-aisle containment

225
Case Study - Results

226
Case Study - Results

227
Case Study 2 : Innovative Design
New Build, Turkish Financial Institution with Economiser Cooling System

• 3 aisles, 72 racks, 4-10kW per rack
• Contained hot aisle

| Economiser System | Economiser annual run time | Refrigeration annual run time | Refrigeration COP | Refrigeration annual energy | Total annual cooling energy | pPUE |
|---|---|---|---|---|---|---|
| Indirect air side economiser | 8327 hrs | 433 hrs | 2.9 | 67,230 kWh | 195,543 kWh | 1.05 |
| Water side economiser | 3518 hrs | 5242 hrs | 3.9 | 604,899 kWh | 1,074,452 kWh | 1.27 |
| DX | 0 hrs | 8760 hrs | 2.9 | 1,359,310 kWh | 1,587,721 kWh | 1.40 |
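The pPUE column can be read as (IT energy + cooling energy) / IT energy. A minimal sketch – the annual IT energy is not stated on the slide, so the ~3.97 GWh figure below is an assumption chosen to be consistent with the published pPUE values:

```python
it_energy_kwh = 3_970_000   # assumed annual IT energy (kWh) - not given on the slide

cooling_options = {
    "Indirect air side economiser": 195_543,
    "Water side economiser":        1_074_452,
    "DX":                           1_587_721,
}

for name, cooling_kwh in cooling_options.items():
    ppue = (it_energy_kwh + cooling_kwh) / it_energy_kwh
    print(f"{name}: pPUE ~ {ppue:.2f}")   # ~1.05, ~1.27, ~1.40
```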
228
Design Verification New Build, Turkish Financial Institution
Stress Testing

• No issues present when running in normal operation

[CFD plot: cabinets coloured by server inlet temperature, with the failed ACUs marked]

• Downtime through maintenance or failure of the central ACU will raise inlet temperatures past ASHRAE limits
229
Design Verification New Build, Turkish Financial Institution
Analysis
[CFD plots: return air of 38°C vs 34°C at different ACUs, and supply flows of 15000 l/s vs 13300 l/s]

• Utilising different sets of information, we can see the reason why inlet temperatures increase even though there is adequate cooling capacity in the system

• A variation in return temperatures triggers a variation in supply flow, causing the cooling capacity imbalance

230
Design Verification New Build, Turkish Financial Institution
Solution

[Diagram: revised return sensor location]

• Moving all 3 return sensors to the same location should remove the variation in recorded temperature
• This in turn will also remove any variation in supply air flow, thus balancing the load across all available ACUs

231
Design Verification New Build, Turkish Financial Institution
Verification

• No issues present when running in normal operation

[CFD plot: cabinets coloured by server inlet temperature, with the failed ACUs marked]

• Downtime through maintenance or failure of the central ACU will now have little effect on the server inlet temperatures
232
Section Topics

In this module we will cover:

Introduction to CFD

The Business Case

Case Studies

Calibration & Validation

Review and Looking Forward

233
Calibration & Validation against existing
installations
When using a model to fix/upgrade/optimise/manage an existing data centre, it is best to calibrate the model against actual measurements

• Once a model is calibrated, it can be used to investigate immediate issues and plan future changes to the facility before they are done in real life.

• Calibration, to ensure the virtual model is an accurate representation of the real facility, will give confidence in the CFD results.

• Physical measurements of temperature, flow rates and pressure should be taken to allow for comparison between the real facility and the virtual model.
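A calibration check is essentially a comparison of measured and simulated values at matching points; a minimal sketch (the sensor locations, values and the acceptance threshold are all illustrative – agree a threshold per project):

```python
# Measured vs simulated server inlet temperatures (C) at matching locations.
measured  = {"Rack A1": 22.5, "Rack B3": 24.1, "Rack C2": 27.8}
simulated = {"Rack A1": 23.0, "Rack B3": 23.4, "Rack C2": 25.1}

threshold = 1.5   # acceptable absolute difference in C (assumed)
for location, m in measured.items():
    diff = simulated[location] - m
    status = "OK" if abs(diff) <= threshold else "RE-CHECK MODEL / SURVEY"
    print(f"{location}: measured {m:.1f}C, simulated {simulated[location]:.1f}C, "
          f"diff {diff:+.1f}C -> {status}")
```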

234
Calibration & Validation against existing
installations – Survey Equipment

• Become familiar with the airflow system: supply air paths, equipment air paths and return air paths
• Everything starts and finishes at the cooling unit
• Manufacturer specifications are a great starting point, but do not assume they are correct

Survey equipment:
• Vane anemometer – server airflow, grille airflow
• Balometer – grille airflow, grille temperature
• Temperature sensors – ACU temperature, IT temperature, aisle temperature
• Wilson grid / manometer – ACU airflow, void pressure
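Turning survey readings into flow figures is simple arithmetic; a minimal sketch for a floor grille (the velocity, grille size and free-area ratio are example values – in practice the manufacturer's effective-area factor for the grille should be used):

```python
# Volumetric flow from a vane anemometer traverse of a floor grille.
avg_velocity_ms = 1.8          # average face velocity from the traverse (m/s)
grille_area_m2  = 0.6 * 0.6    # nominal 600mm x 600mm grille
free_area_ratio = 0.56         # open fraction of the grille (assumed)

flow_m3_s = avg_velocity_ms * grille_area_m2 * free_area_ratio
print(f"Grille airflow ~ {flow_m3_s:.3f} m3/s "
      f"({flow_m3_s * 3600:.0f} m3/h, {flow_m3_s * 2118.88:.0f} cfm)")
```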

235
Calibration & Validation against existing
installations

236
Calibration & Validation against existing
installations
• Ensure your measurement techniques are providing the correct numbers!

The hood on a balometer adds an extra obstruction, so some of the air finds it easier to leave through an adjacent grille. The recorded number will therefore be wrong.

An alternative method would be to use a vane anemometer in conjunction with the balometer.

237
Section Topics

In this module we will cover:

Introduction to CFD

The Business Case

Case Studies

Calibration & Validation

Review and Looking forward

238
CFD Summary - Benefits
• Design/Optimise the architectural form (e.g. test raised floor height)

• Manage IT deployment in the room to:


– Maximize cooling/energy efficiency
– Maximize space utilization
– Maximise capacity utilisation
– Compare alternatives
– Test changes before they go live
– Ensure redundancy/resilience by testing failure scenarios

• Design and plan your equipment configuration at:


– Room level
– Cabinet level
– Or if you are a supplier even the detailed design within the equipment itself
– Analyse outdoor rooftop equipment air flows

239
CFD Summary - Concerns
• $ Cost:
– Is it in the budget?
– ROI is unclear
• Quality/Accuracy of Software (not all CFD programs apply the equations in the same way)
• Requires expert knowledge to utilise and interpret
– Complex software
– Qualified staff required for model checking and result interpretation
• Rubbish in, rubbish out
– Bad model design
– Data collection issues
– Easy to make mistakes
• Time consuming
– Models take time to build and then to run (and adjust/rerun)
– Not part of previous project processes, therefore deemed an overhead

240
CFD Summary - Evaluate your needs

• What are your Requirements


– Design
– Troubleshooting
– Management

• Long / Short term benefits


– Are there savings to be made?
• Need to extend data centre life?
• Need to improve efficiency?
• Need to increase capacity?
• Need to increase resilience?
• Need to plan changes?

241
CFD Summary - Evaluate your needs

• Cost
– Utilisation of external consultants
– Bring in-house
• Software
– License Costs
– Level of Accuracy
– User Friendliness
– Customer Support
– Productivity Gains
– Integration into existing company processes / software
• Resourcing Costs

242
CFD – What The Future Holds

• CFD is becoming a more common inclusion in the design process, although (with some justification) there remains some scepticism about accuracy

• Complete Data Centre Infrastructure Management by including cooling predictions

• Day-to-Day updating of model (Digital Twin)

• Input to Artificial Intelligence (Digital Twin)

243
Any Questions?

244
