Example DC EE Assessment


Agency X

Data Center Energy Efficiency Assessment

Lawrence Berkeley National Laboratory

Final Report
November 20, 2015

Disclaimer

This document was prepared as an account of work sponsored by the United States
Government. While this document is believed to contain correct information, neither the
United States Government nor any agency thereof, nor The Regents of the University of
California, nor any of their employees, makes any warranty, express or implied, or assumes
any legal responsibility for the accuracy, completeness, or usefulness of any information,
apparatus, product, or process disclosed, or represents that its use would not infringe
privately owned rights. Reference herein to any specific commercial product, process, or
service by its trade name, trademark, manufacturer, or otherwise, does not necessarily
constitute or imply its endorsement, recommendation, or favoring by the United States
Government or any agency thereof, or The Regents of the University of California. The views
and opinions of authors expressed herein do not necessarily state or reflect those of the
United States Government or any agency thereof or The Regents of the University of
California.

This report is prepared as the result of visual observations, environmental monitoring, and
discussions with site staff. The report, by itself, is not intended as a basis for the engineering
required for adopting any of the recommendations. Its intent is to inform the site of
potential energy saving opportunities and estimated cost savings. The purpose of the
recommendations and calculations is to determine whether measures warrant further
investigation.

Acknowledgments

Authors

Steve Greenberg, Dale Sartor, William Tschudi


TABLE OF CONTENTS
1. EXECUTIVE SUMMARY
2. FACILITY OVERVIEW
3. FACILITY ENERGY USE
4. COOLING SYSTEM DESCRIPTION
5. ELECTRICAL SYSTEM DESCRIPTION
6. BENCHMARKING
6.1 OVERALL ENERGY EFFICIENCY METRIC
6.2 AIR MANAGEMENT AND AIR DISTRIBUTION METRICS
6.3 COOLING PLANT METRICS
6.4 ELECTRICAL POWER CHAIN METRICS
7. RECOMMENDED ENERGY EFFICIENCY MEASURES


List of Abbreviations

AC – Alternating Current
ASHRAE – American Society of Heating, Refrigerating, and Air-Conditioning Engineers
BTU/sf-y – British Thermal Units per square foot per year
CRAC – Computer Room Air-Conditioner (with internal refrigerant compressor)
CRAH – Computer Room Air Handler (with chilled water coil)
DC – Direct Current
EEM – Energy Efficiency Measure
ECM – Electronically Commutated Motor
°F – degree(s) Fahrenheit
GWh/yr – GigaWatt Hours per year (millions of kWh/yr)
HVAC – Heating, Ventilating, and Air-Conditioning
IT – Information Technology
kV – kiloVolts (thousands of volts of electrical potential)
kVA – kiloVolt-Amperes of apparent power
kW – kiloWatts of real power
kWh – kiloWatt hour
PDU – Power Distribution Unit
PUE – Power Usage Effectiveness
RCI – Rack Cooling Index
RTI – Return Temperature Index
RH – Relative Humidity
sf – square foot
TCO – Total Cost of Ownership
UPS – Uninterruptible Power Supply
V – Volt(s)
VFD – Variable Frequency Drive (for operating motors at variable speed)
W/cfm – Watts (of electrical power input) per cubic feet per minute (of air flow)
W/gpm – Watts (of electrical power input) per gallon per minute (of water flow)
W/sf – watts per square foot


1. Executive Summary
This energy assessment, sponsored by AGENCY X and the Federal Energy Management
Program (FEMP), focuses on the AGENCY X data center on the East Coast. AGENCY X leases the
data center facility from the General Services Administration (GSA) which, in turn, leases it from
a local vendor.

Lawrence Berkeley National Laboratory (LBNL) staff performed the assessment, which
established an estimate of baseline energy end use and identified potential energy-efficiency
measures (EEMs). Observation of the building physical conditions, environmental conditions,
and energy use led to the recommendations for operational and energy efficiency improvement
opportunities identified in this report.

This data center is planned to be one of the central consolidation sites for a federal agency. As
such, the center plans to increase the amount of IT equipment, resulting in higher electrical load
and heat density. Efficiency measures will need to consider the changing nature of the IT
configuration and be able to adjust to maintain efficient operation over a range of electrical
loading.

It should be noted that this report is not based upon an investment-grade assessment. The
precision of the calculations used to determine Energy Efficiency Measures (EEMs) is limited
because:
• The data center is embedded in a large office building.
• Only limited power and environmental measurements were obtained during operation, and only for short durations.
• UPS power input readings were not observable at the meters as configured; only output readings appeared to be available.
• A common chilled-water system serves the entire building, not just the data center.
• As-built information for the facility was not generally available.

Despite these limitations, valuable observations and recommendations have been made.
Assumptions and calculation methods are noted throughout the report.

Energy-Efficiency Measure Summary


Table 1 summarizes the EEMs and potential savings identified by the LBNL assessment; further
details for each EEM are contained in Section 7 of this report.
A number of energy efficiency opportunities with varying payback periods were identified
during the assessment. Based on an estimated energy cost of $0.065/kWh, energy cost
savings of approximately $314,000/yr are possible through measures that have an average
payback period of 3.3 years and represent approximately 42% savings in overall data
center energy use (relative to the June 2015 baseline). Table 1 below summarizes the
projected economics for the recommended measures:


Grouped Energy Efficiency Measures (EEMs) | Estimated Installed Cost | Estimated Yearly Energy Savings (kWh) | Estimated Yearly Dollar Savings | Estimated Simple Payback (years)
1. Install a monitoring system for data center infrastructure management (DCIM) | $202,000 | 373,000 | $24,300 | 8.3
2. and 3. Implement a comprehensive air-management program; rebuild the CRAH units with variable-speed plug fans and supply-air temperature controls | $761,000 | 3,849,000 | $250,000 | 3.0
4. and 5. Install a small chiller for the print shop; re-commission the chiller plant, including chilled-water temperature reset, condenser-water temperature reset, and the water-side economizer | $80,000 | 465,000 | $30,200 | 2.6
6. Turn off unneeded UPS modules and PDUs | $3,000 | 144,000 | $9,400 | 0.3
Totals | $1,046,000 | 4,832,000 | $314,000 | 3.3

Table 1: Savings and Payback Summary

Thanks to a pro-active staff, in only 4 months the AGENCY X data center has
progressed toward being an effective and energy-efficient facility (from a PUE of 2.3
to 1.7). Implementing the recommendations in this report would make it truly
exemplary, with better cooling and power service to the IT equipment and a PUE of
about 1.3.


2. Facility Overview
The data center is embedded in a large building that currently houses other functions (e.g.,
offices and a print shop). It was built in 2007 and is now slated to be a federal agency's
consolidation site; the quantity of IT equipment will likely be increasing for several years.
The data center has a 3 ft. raised floor, and the distance from the raised floor to the dropped
ceiling is just over 9 feet. The height of the plenum above the dropped ceiling is 18'. The IT
equipment layout in the main data center is generally configured in a hot-aisle/cold-aisle
arrangement, as shown in Figure 1 below. Note that in this report we use the abbreviation
CRAH (Computer Room Air Handler) rather than CRAC (Computer Room Air Conditioner),
which is how the units are labeled on site, because CRAH more precisely describes the units.

The main area of the East Coast data center is about 22,600 sf within an 81,000 sf facility.
Two separate rooms contain the uninterruptible power supplies (UPSs) that serve the IT
equipment through power distribution units (PDUs) located in the data center spaces.

Figure 1. Main Data Center (MSF; Room 167). The 12 rows of IT racks are in red, arranged
in alternating hot and cold aisles; the CRAH units (labeled as CRACs) are the large gray
rectangles.


3. Facility Energy Use


The total electrical demand for the entire building was on average approximately 1400 kW
with a yearly energy use of approximately 12 GWh/yr. Approximately 45% of this energy
use was related to the IT equipment. The data center was not separately sub-metered. The
assessment team estimated the data center energy use through a combination of temporary
sub-metering, equipment energy use estimates, spot measurements and spreadsheet
calculations.

IT Equipment Loads
Table 2 below summarizes the average IT equipment power use.

Data Center Areas | Area (sf) | IT equipment load (kW) | Power Density (W/sf)
All | 22,600 | 616 | 27

Table 2. IT equipment load and density

Data Center Energy End Use


The electrical end use breakdown associated with the data center space was determined
and is listed in Table 3 and illustrated in Figure 2. This breakdown is based on June 2015
site visit data, adjusted for the increased IT load found in October 2015. Thus it shows a
baseline Power Usage Effectiveness (PUE, the ratio of total data center energy to IT input
energy) of approximately 2.3, prior to subsequent operational improvements described
elsewhere in this report.


Data Center End Use | Average Load (kW) | Percent of Data Center Total (%)
IT equipment | 616 | 44
Transformer and PDU loss | 24 | 1.7
UPS loss | 63 | 4.6
Chillers | 202 | 15
Chilled Water pumps | 27 | 2.0
Condenser Water pumps | 25 | 1.8
Cooling Towers | 20 | 1.5
Lighting | 10 | 0.7
CRAH fans | 140 | 10
CRAH Humidity control | 259 | 19
TOTALS | 1386 | 100

Table 3 - Summary of Data Center Electrical End Use

Figure 2. Electrical Breakdown by End Use, based on June 2015 operation updated to the
October 2015 IT load.
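
To make the PUE calculation explicit, the sketch below derives it from the Table 3 averages; this is illustrative arithmetic only, not the assessment team's spreadsheet, and it reproduces the reported value of approximately 2.3 within rounding.

    # PUE = total data center power / IT equipment power, using Table 3 average loads (kW).
    end_use_kw = {
        "IT equipment": 616,
        "Transformer and PDU loss": 24,
        "UPS loss": 63,
        "Chillers": 202,
        "Chilled water pumps": 27,
        "Condenser water pumps": 25,
        "Cooling towers": 20,
        "Lighting": 10,
        "CRAH fans": 140,
        "CRAH humidity control": 259,
    }

    total_kw = sum(end_use_kw.values())            # 1386 kW
    pue = total_kw / end_use_kw["IT equipment"]    # ~2.25, reported as approximately 2.3
    print(f"Total load: {total_kw} kW, PUE: {pue:.2f}")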


4. Cooling System Description


Cooling for the data center is provided by 37 down-flow computer room air handler (CRAH)
units spaced around the data center spaces (34 20-ton, two 5-ton, and one 12-ton). There
are also four 30-ton, four 5-ton, and one 15-ton up-flow CRAHs cooling the rooms housing
the UPS and power distribution equipment. The IT equipment was mostly arranged in hot
and cold aisles. Cool air is supplied from a 3-foot raised floor and delivered into the cold
aisles through perforated floor tiles; hot air is returned to the CRAHs and enters at the top
of the units. It was observed that some aisles contained tiles with large openings that likely
contributed to low under-floor static pressure in some areas. All of the tiles are equipped
with adjustable dampers.
The CRAH units individually control temperature and humidity for the air they supply
(using sensors in the return air). At the October site visit, readings and measurements
from the CRAH units in the MSF data center space showed that air was being returned in the
range of 69 to 77° F, with an average of 74° F. The supply air temperatures for these units
were in the range of 49 to 68° F, with an average of 56° F. These data indicate that the
supply air temperature was, in general, much cooler than necessary. In June, the CRAHs'
return-air set-points were 68° F and 42% RH for the data center and 70° F and 40% RH in
the UPS rooms; the temperature dead-bands were +/- 2 degrees and the RH dead-bands
were +/- 2% RH. In October, the set-points were 70° F for the data center units and 68° F
for the UPS room units; the humidity controls were set to 45% RH.
At the February 2015 visit, it was observed that CRAH units were “fighting” each other with
some units humidifying and some dehumidifying. At the June 2015 visit, nearly all of the
units were actively dehumidifying (maximum cooling) and most of these were reheating to
keep the temperature from dropping too low (even so, several were in low-temperature
alarm). At the October 2015 visit, all of the data center units were either cooling or
dehumidifying only, with no humidification or (re)heating in evidence, which produced the
large savings noted below. Most modern IT equipment is designed to operate reliably when the
intake air humidity is maintained between 20% and 80% RH (see Table 4). Maintaining
tight humidity control comes at an energy cost and often results in CRAH units
simultaneously humidifying and dehumidifying, a form of wasteful simultaneous heating
and cooling. Reheating with electric heat adds to the chiller plant load and reduces the
cooling capacity available to cool the IT equipment. The staff is to be congratulated for
recognizing and largely correcting this situation.


Table 4. ASHRAE recommended and allowable temperature and humidity ranges for IT inlet air

For much of the year, it is possible to maintain an acceptable upper humidity limit without
ever needing to actively dehumidify. For the lower end of the humidity range, ASHRAE has
recently published research results showing that low humidity has a negligible effect on
electrostatic discharge (as long as appropriate grounding is applied), and it is revising its
thermal guidelines accordingly; as a result, humidification will not be necessary except in
extremely dry conditions (less than a 14° F dew point).
It should be noted that the data center’s proactive staff adjusted temperature setpoints
upward between the June and October visits, and disabled the reheat in the CRAH units.
They also changed the chiller plant pump operation, and the combination of reduced cooling
load and a more-efficient chiller plant has resulted in very large energy savings and a
reduction in the PUE from 2.3 to 1.7. The savings associated with these changes are
captured in the CRAH retrofit measure (EEM 3) and described in Section 7 of this report.
Chilled water is supplied to the CRAH units from a central chilled-water plant that serves
the entire building. The data center, however, is the dominant load on the chilled-water
plant. Chilled water is supplied at 42° F, which is much cooler than is required to cool the
IT equipment. The set point is driven by the print shop in the building, which requires a
low setting for humidity control. Cooling this relatively small load adversely affects both
the efficiency of the data center and the capacity of the chiller plant. The chilled-water
plant consists of three 250-ton screw-type chillers with variable-speed chilled-water pumps
and cooling-tower fans. As found at the June 2015 visit, two chillers and all three of the
chilled-water pumps, condenser-water pumps, and tower fans were run all year, but the
staff has since adjusted the operation to two each of the pumps and towers. A plate-and-frame
heat exchanger for free cooling (rated at 83 tons at the design condition of 50-degree
condenser water but capable of 500 tons at 40-degree condenser water) was installed as
part of the original construction, but it was not operating during the site visits and is not
currently in use even when conditions allow it.


See EEMs 4 and 5 in Section 7 for recommended changes to the chiller plant and its
operation.

5. Electrical System Description

Utility Feed and General Description:


There are two separate electric utility feeds to the building, each with a 3750 kVA
transformer. The system is double-fed from these transformers, through the UPSs and
PDUs, to the IT equipment in the racks. The system voltages are 13 kV to the utility
transformers, 480 V through the switchgear and UPSs to the PDUs, and 208 V to the IT
power supplies.

UPS System:
Eight 500 kVA/450 kW Uninterruptible Power Supply (UPS) systems (MGE model EPS
7000) provide power conditioning and battery backup for the main power supplied to the IT
equipment. The topology of these units is double-conversion, meaning all of the power is
converted from AC to DC and then back to AC. The output power meters on the UPS systems
were functional and gave readings reasonably consistent with the other readings taken
during the June site visit (the UPS outputs, one-time checks with portable metering, and the
PDU inputs agree to within about 3%); these meters were used again at the October site
visit. It should be noted that the A-3 output meter reads high and might be reporting the
total from the paralleling unit; this problem should be corrected.

Table 5 shows the loading of the A and B UPS systems: the output is from the built-in
meters, the percent load is calculated from the output and the UPS rating, the efficiency is
taken from the manufacturer's curve, and the input and losses are calculated from the output
and the efficiency. These units are lightly and uniformly loaded, with good efficiency given
their light loading; see Section 6.4 for further discussion.

Units | UPS-A (total of 4) | UPS-B (total of 4) | Combined
UPS Input (kW) | 356 | 347 | 703
UPS Output (kW) | 321 | 312 | 633
Losses (kW) | 35 | 35 | 70
Efficiency (%) | 90 | 90 | 90
Load Factor (%) | 17.8 | 17.3 | 17.6

Table 5. UPS Electrical Measurements
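
For reference, a sketch of how the Table 5 values relate to one another, assuming four 450 kW modules per system as described above; the 90% efficiency itself comes from the manufacturer's curve, and small differences from the table are rounding.

    # Back-calculate UPS input, losses, and load factor from the metered output (Table 5).
    MODULE_RATING_KW = 450
    MODULES_PER_SYSTEM = 4

    def ups_metrics(output_kw, efficiency):
        input_kw = output_kw / efficiency                       # e.g. 321 / 0.90 ~ 357 kW
        losses_kw = input_kw - output_kw                        # ~ 35 kW
        load_factor = output_kw / (MODULE_RATING_KW * MODULES_PER_SYSTEM)
        return input_kw, losses_kw, load_factor

    for name, output_kw in {"UPS-A": 321, "UPS-B": 312}.items():
        inp, loss, lf = ups_metrics(output_kw, efficiency=0.90)
        print(f"{name}: input {inp:.0f} kW, losses {loss:.0f} kW, load factor {lf:.1%}")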


Distribution transformers/PDUs:
Twenty-four Power Distribution Units (PDUs) with built-in transformers are used to
distribute electrical power to the IT equipment; 16 PDUs are rated at 150 kVA and 8 are
rated at 300 kVA. The PDUs are equipped with meters for reading input and output power
(kW) and energy (kWh). These meters were used to check the UPS meter readings and to
further validate the IT input power and thus the PUE; they also provided data to estimate
the power loss of the transformers in the PDUs. Several of the meters were not reading
correctly (specifically D4, E/CB4, S1A1, S1A5, S1B2, and S1D1, with kWh registers blank,
or kW, kWh, or both showing outputs greater than inputs), so some estimating was
necessary. Two of the PDUs were energized but were serving no IT load.

Lighting:
The data center contained a standard dropped ceiling with fluorescent T-8 lighting fixtures,
and there were no automatic lighting controls. There were 79 fixtures, each with four 32 W
lamps; with electronic ballasts, these fixtures draw about 120 watts each.
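
As a quick check (illustrative arithmetic only), the fixture count and per-fixture draw are consistent with the roughly 10 kW lighting load shown in Table 3:

    # Lighting load estimate from fixture count and per-fixture draw.
    fixtures = 79
    watts_per_fixture = 120   # four 32 W T-8 lamps on an electronic ballast
    lighting_kw = fixtures * watts_per_fixture / 1000
    print(f"Estimated lighting load: {lighting_kw:.1f} kW")   # ~9.5 kW; Table 3 lists 10 kW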

Standby Generation:
Three standby generators, each rated at 1890 kVA/1512 kW at 480 volts, provide back-up
power in the event of a power failure. Each of the generators has an electric engine block
heater to keep the engine warm to facilitate rapid starting and ability to pick up load. The
block heaters are controlled by thermostats.


6. Benchmarking
The purpose of this section is to summarize the metrics that were calculated as part of the
assessment process and compare them to data from other facilities, where available.

6.1 Overall Energy Efficiency Metric


The PUE (total energy/IT energy) metric was calculated based on the June 2015 data and
found to be 2.3, which is worse than average. Based on data taken in October 2015,
operational improvements had dropped the PUE to about 1.7, which is about average. See
also Figure 3.

Figure 3 - Data Center Power Usage Effectiveness (PUE), with the AGENCY X values for
June 2015 and October 2015 indicated

6.2 Air Management and Air Distribution Metrics


Representative IT equipment intake and exhaust temperatures were collected from a
sample of IT equipment in the main data center. In addition, measurements of supply and
return air temperatures were taken from the CRAHs. The goal was to establish an
understanding of the air-management performance and to identify any issues such as hot
spots or inadequate airflow. From these temperature measurements, the following metrics were
calculated:

Rack Cooling Index (RCI):


RCI is a dimensionless measure of how effectively the IT equipment is cooled within the
desired intake air temperature range (ASHRAE recommended values—see Table 4). It
provides a measure of the conditions at the high (HI) end and at the low (LO) end of the
specified temperature range. RCIHI=100% means that no intake temperature is above the
maximum recommended, and RCILO=100% means that no intake temperature is below the
minimum recommended. Using the ASHRAE Class A1 temperature recommendation, “poor”
conditions are ≤90% whereas “good” conditions are ≥96%.
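
For reference, a sketch of the RCI calculation from a set of rack intake temperatures, following the commonly published definition of RCI; the thresholds shown are the ASHRAE Class A1 recommended (64.4-80.6° F) and allowable (59-89.6° F) limits assumed here, and the sample temperatures are hypothetical, not measurements from this assessment.

    # Rack Cooling Index (RCI) from IT intake temperatures (deg F).
    def rci_hi(intakes, rec_max=80.6, allow_max=89.6):
        over = sum(max(t - rec_max, 0.0) for t in intakes)          # total over-temperature
        return (1.0 - over / ((allow_max - rec_max) * len(intakes))) * 100.0

    def rci_lo(intakes, rec_min=64.4, allow_min=59.0):
        under = sum(max(rec_min - t, 0.0) for t in intakes)         # total under-temperature
        return (1.0 - under / ((rec_min - allow_min) * len(intakes))) * 100.0

    sample = [72, 75, 83, 86, 63, 78, 81, 70]   # hypothetical intake temperatures
    print(f"RCI-HI = {rci_hi(sample):.0f}%, RCI-LO = {rci_lo(sample):.0f}%")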


Return Temperature Index (RTI):


The Return Temperature Index (RTI) is a dimensionless measure of the actual
temperature differential in the equipment room as well as a measure of the level of
net by-pass or net recirculated air in the data center. 100% is generally the target;
values above 100% indicate net recirculation air, and values below 100% indicate net
by-pass air.
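
A sketch of the RTI calculation using the average temperatures reported in this section and in Table 6 (CRAH return 74° F, supply 56° F, rack intake 76° F, and the 6° F rack rise implying roughly 82° F at the rack outlets); the formula is the standard one, and the variable names are ours.

    # Return Temperature Index: CRAH (AHU) delta-T divided by IT equipment delta-T, in percent.
    def rti(crah_return_f, crah_supply_f, rack_outlet_f, rack_inlet_f):
        return (crah_return_f - crah_supply_f) / (rack_outlet_f - rack_inlet_f) * 100.0

    print(f"RTI = {rti(74, 56, 82, 76):.0f}%")   # 18/6 -> 300%, indicating heavy recirculation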

Table 6 below summarizes the metrics, calculated from data taken in the main data center
room at the October 2015 visit, and provides interpretation.

Metric Name | Unit | Value | Interpretation
CRAH Temperature Differential | F | 18 | Good, but artificially high due to some units being turned off.
Average Rack Temperature Rise | F | 6 | Low; consistent with air-management opportunities.
Return Temperature Index (RTI), measure of by-pass air and recirculation air | % | 300 | Poor. Large amounts of recirculation air due to higher rack than CRAH air flows.
Rack Intake Temperatures (average) | F | 76 | High considering how low the CRAH setpoints are.
Rack Cooling Index-High (RCIHI), measure of conformance with the ASHRAE recommended intake temperature specification, high end of the temperature range | % | 64 | Poor. Excessively high temperatures at many locations; indicative of insufficient supply air.
Rack Cooling Index-Low (RCILO), measure of conformance with the ASHRAE recommended intake temperature specification, low end of the temperature range | % | 80 | Poor. Excessively low temperatures at many locations; indicative of low setpoint temperatures at the CRAHs.
Airflow Efficiency | W/cfm | 0.5 | Good.
Ratio of Total CRAH Flow to Total Rack Flow | None | 0.33 | Poor. Very low CRAH flows relative to IT flows; ideally 1.0, but best practice is a bit higher than 1.
Fan motor efficiency | % | 87.5 | Good, but not premium efficiency; typical of 20-ton units.
Econ Utilization Factor | % | 0 | No air-side economizer.

Table 6. Air Management and Air Distribution Metrics

6.3 Cooling Plant Metrics


Table 7 below summarizes the cooling plant metrics, and Figure 4 compares the Chiller
Plant Wire to Water Efficiency and Chiller Rated Efficiency at Design to other data centers.


Metric Name | Unit | Value | Notes/Interpretation
Chiller Plant Wire-to-Water Efficiency | kW/ton | 0.87 | Roughly average; lower is better. As found at the June site visit; this is the efficiency of the entire plant. Improved to 0.82 in October.
Chiller Rated Efficiency at Design | kW/ton | 0.532 | Good; lower is better. From the NPLV on the Mechanical Schedule, drawing M6.0; this is the efficiency of the chiller by itself, at design conditions.
Cooling Tower Design Efficiency | gpm/HP | 50 | Data from drawing M6.0. This efficiency is good.
Cooling Tower Design Approach | F | 7 | From M6.0. Typical; 5 degrees is the design value used with a water-side economizer.
Chilled Water Pumping Efficiency | W/gpm | 23 | This is the design value, not measured.
Condenser Water Pumping Efficiency | W/gpm | 15 | This is the design value, not measured.
Pump and fan motor efficiency | % | 91.0-92.4 | High (good), but not premium efficiency.
Chiller Water-Side Econ Utilization Factor | % | 0 | The existing system could be used to improve efficiency. See Section 7.

Table 7. Cooling Plant Metrics

Figure 4 – Chilled-water plant and chiller rated efficiency compared to other data centers,
with the AGENCY X chiller rated efficiency and chiller plant efficiency indicated


6.4 Electrical Power Chain Metrics


The UPS system represents an efficiency opportunity in most data centers. In the
AGENCY X data center, the UPS was on average loaded to approximately 18% of its rated
capacity. Since UPS efficiency is higher at higher load factors, loading to 50% total for a 2N
system, or 40% for each module, is good practice from an efficiency point of view. The
efficiency of the AGENCY X units at an 18% load factor is approximately 90% according to
the manufacturer, which is better than average among the systems benchmarked at this
load factor.
Table 8 below summarizes the metrics that were collected. Figure 5 plots the AGENCY X
UPS efficiency. Figure 6 compares AGENCY X UPS load factor to other data centers, and
Figure 7 shows measured IT load density.


Metric Name | Unit | Value | Interpretation
UPS Load Factor | - | 18% | Low but common
UPS System Efficiency | % | 90 | Good for this topology and load
Transformer Efficiency (upstream of UPS system) | % | 98 | Assumed
PDU (with built-in transformer) System Efficiency | % | 97 | From PDU meters
IT Average Power Density | W/sf | 27 | Low but common; grew from 25 to 27 from June to October
IT Peak Power Density (design) | W/sf | 64 | Assumes 100% of UPS output for A or B
IT Rack Power Density | kW/rack | 1.6 | Low average. Lots of growth potential.
IT Rack Power Density (design) | kW/rack | 4.6 | Moderate average.
UPS Output Voltage | V ac | 480 | Good. More efficient than 208 V.
Stand-by Generator Block Heater Power | kW total | 45 | Assumed at 1% of generator rating

Table 8. Electrical Power Chain Metrics

Figure 5 - Measured UPS Efficiency Curves. Factory measurements of UPS efficiency
(tested using linear loads): efficiency versus percent of rated active power load for
flywheel, double-conversion, and delta-conversion UPS topologies, with the AGENCY X
operating point indicated


Figure 6 - UPS Load Factor (AGENCY X compared to other benchmarked data centers)

Figure 7 - Measured IT Load Density (AGENCY X compared to other benchmarked data centers)


7. Recommended Energy Efficiency Measures


The following measures are recommended for further evaluation:

Grouped Energy Efficiency Measures (EEMs) | Estimated Installed Cost | Estimated Yearly Energy Savings (kWh) | Estimated Yearly Dollar Savings | Estimated Simple Payback (years)
1. Install a monitoring system for data center infrastructure management (DCIM) | $202,000 | 373,000 | $24,300 | 8.3
2. and 3. Implement a comprehensive air-management program; rebuild the CRAH units with variable-speed plug fans and supply-air temperature controls | $761,000 | 3,849,000 | $250,000 | 3.0
4. and 5. Install a small chiller for the print shop; re-commission the chiller plant, including chilled-water temperature reset, condenser-water temperature reset, and the water-side economizer | $80,000 | 465,000 | $30,200 | 2.6
6. Turn off unneeded UPS modules and PDUs | $3,000 | 144,000 | $9,400 | 0.3
Totals | $1,046,000 | 4,832,000 | $314,000 | 3.3

Table 9. Savings from Energy Efficiency Measures
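
To make the arithmetic behind Table 9 (and Table 1) explicit, the yearly dollar savings follow from the kWh savings at the $0.065/kWh rate stated in the Executive Summary, and the simple payback is the installed cost divided by the yearly dollar savings. A minimal Python sketch, illustrative only; small differences from the table values are rounding.

    # Illustrative check of the Table 9 economics (costs and kWh savings copied from the table).
    ENERGY_COST = 0.065   # $/kWh, per the Executive Summary

    eems = {
        "1. DCIM monitoring":                        {"cost": 202_000, "kwh_saved": 373_000},
        "2-3. Air management + CRAH rebuild":        {"cost": 761_000, "kwh_saved": 3_849_000},
        "4-5. Print-shop chiller + recommissioning": {"cost": 80_000,  "kwh_saved": 465_000},
        "6. Turn off unneeded UPS modules and PDUs": {"cost": 3_000,   "kwh_saved": 144_000},
    }

    for name, m in eems.items():
        dollars = m["kwh_saved"] * ENERGY_COST          # yearly dollar savings
        payback = m["cost"] / dollars                   # simple payback, years
        print(f"{name}: ${dollars:,.0f}/yr, payback {payback:.1f} yr")

    total_cost = sum(m["cost"] for m in eems.values())
    total_dollars = sum(m["kwh_saved"] for m in eems.values()) * ENERGY_COST
    print(f"Totals: ${total_dollars:,.0f}/yr, payback {total_cost / total_dollars:.1f} yr")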

Data Center Infrastructure Management Measure (EEM 1)

We recommend installing a data center infrastructure management (DCIM) system at the
facility. This system would have the following functionality:
• Track power usage vs. capacity in real time
• Track cooling requirement vs. capacity in real time
• Monitor cooling plant and computer-room AC units for performance and alarms
  o temperatures, flows, thermal power
• Monitor switchgear, UPS, and PDUs for performance and alarms
• Track PUE (overall and subcomponents) in real time
• Monitor rack inlet and outlet conditions for normal operation and alarms
• Many components are already in place or in progress:
  o Site Link in place to UPS, CRAHs
  o Site Scan being deployed
  o improved user interface and reporting needed
• Add or repair meters as needed for real-time PUE monitoring:
  o chiller plant input power and cooling delivered to the building
    - individual chillers
    - chilled water pumps—usually from a VFD interface
    - condenser water pumps—will need additional meters
    - cooling tower fans—usually from a VFD interface
    - chilled water flow and temperatures—refurbish existing meters
  o CRAH input power—either at CRAHs or distribution panels
  o Lighting—at lighting panels

Many DCIM systems also provide for tracking of IT equipment, e.g., which software is
running on which hardware and how heavily each piece of hardware is utilized.
This feature can identify "zombie" servers that take up space, power, and cooling
but contribute no computing value, and it can point to virtualization and power-down
opportunities.

The energy savings shown in the table assume 5% of the overall data center energy
can be saved: 2% in IT, 2% in cooling, and 1% in electrical losses. Note that while
the payback period is relatively long, no credit was taken for non-energy operation
and maintenance benefits, which can easily dominate the overall savings.

Air Management and CRAH Measures (EEMs 2 and 3)

Good air management is critical to reliable operation of the IT equipment and to
energy-efficient operation of the cooling system. The basic concept of air
management is to deliver air from the CRAHs to the IT inlets and to deliver air from
the IT outlets back to the CRAHs with as little mixing of hot and cold air as possible.
Mixing (bypass of cold air to hot air and recirculation of hot air to cold air) may
increase fan energy, reduce the efficiency and capacity of both the CRAHs and the
chiller plant, and result in IT inlet temperatures elevated by outlet air recirculation.

Air management typically includes hot-aisle/cold-aisle arrangement of the IT
equipment racks (ideally with either hot or cold aisle isolation), blocking off bypass
and recirculation paths with blanking panels, floor cutout seals, etc., and balancing
the perforated floor tiles with the airflow requirements of the IT equipment. All of
these areas have significant energy-savings opportunities when combined with the
ability to adjust the supply air temperature and the flow rate, and the AGENCY X
center is a prime example. In general, hot and cold aisles have been established in
the data center, though there is a notable exception in the MSF between rows C and
D, racks 13-22, which should be a hot aisle but has perforated floor tiles (these tiles
should be used only in cold aisles). There are also several places where equipment is
installed in racks with the air flowing backwards relative to the rest of the
equipment in the rack or with respect to cold aisle-to-hot aisle air flow direction.
And there are numerous racks with incomplete blanking panels where no IT
equipment is installed; such panels are especially important in racks such as those in
the AGENCY X center that are open side-to-side from one rack to the next. Without
blanking panels, internal recirculation of hot air will seriously compromise the
ability of the cooling system to adequately serve the IT equipment, and IT reliability
will suffer as a result. There are many cable penetrations through the floor that
could use better sealing devices; reducing the amount of cold air leaking from the
underfloor past the IT equipment is needed to prevent a reduction in CRAH
capacity. Isolating the hot aisles with barriers such as strip curtains, in
combination with using the ceiling space as a hot-air return plenum, would greatly
reduce the hot air recirculation in evidence. This return plenum would include open
“egg-crate” ceiling panels above the hot aisles, and typically sheet-metal return air
chimneys extending from the ceiling to the CRAH inlets. The savings from air
management, realized in the CRAH measure (and indirectly in the chiller plant) are
listed in Table 9 with the CRAH measure; the cost of the air-management is included
in the combined measure 2 and 3.

There are two significant parts to improving the energy efficiency of the CRAHs in
the data center. The first is adjusting the control set-points so that the IT inlet
conditions conform to the ASHRAE recommendations, which allow air temperatures
up to 80.6 °F and a dew point between 41.9 °F and 59.0 °F (with relative humidity
no higher than 60%). See Table 4 above, and note that the lower humidity limit is
being lowered further to a 14 °F dew point, as discussed in Section 4 above. Since
the CRAHs are presently controlled using return air conditions, the temperature
set-point will be substantially higher, and the humidity control should be turned
off. Eliminating the humidifying, dehumidifying, and CRAH-unit fighting will reduce
the total CRAH energy by roughly 65% while still keeping the IT inlet conditions in
the recommended ASHRAE range.

The second part related to the CRAHs is to rebuild them using plenum fans with
direct-drive, electronically commutated (ECM) variable-speed motors and to change
the control from return-air to supply-air temperature. Such fans are inherently much
more efficient than the existing squirrel-cage fans in down-flow, underfloor-plenum
applications; the inefficiency, maintenance, and particle generation of the belt drives
are eliminated, and the variable-speed motors are controlled to maintain the
necessary underfloor pressure without over-provisioning the system. Together,
changing the fans and controlling them on underfloor pressure will reduce fan
power by about 70%. Because the IT inlet air temperature is what matters, and the
inlet air comes from the CRAH supply air, using supply air temperature to control
the chilled-water valve is much better than using the return air temperature. The
overall savings of the controls and fan retrofits is about 89% of the existing CRAH
usage. As noted above, a significant fraction of this opportunity has already been
realized by the pro-active staff at AGENCY X.
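
To see why variable-speed control is so powerful, recall that fan power scales roughly with the cube of airflow (the fan affinity laws). The sketch below is illustrative arithmetic only; the roughly 70% reduction cited above also reflects the higher efficiency of the plug fans and ECM motors themselves, not just the flow turndown.

    # Fan affinity law: at a given system characteristic, fan power scales ~ (flow ratio)^3.
    def fan_power_fraction(flow_fraction):
        return flow_fraction ** 3

    for flow in (1.0, 0.8, 0.7, 0.65):   # illustrative airflow turndowns
        print(f"airflow {flow:.0%} -> fan power {fan_power_fraction(flow):.0%} of original")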

HVAC Measures: Chiller Plant (EEMs 4 and 5)


Two main chiller plant measures (EEMs 4 and 5) are bundled together in the
analysis, assuming they would be done together. The first chiller plant measure
(EEM 4) involves installing a dedicated small chiller and chilled-water pump to
serve the print shop. This measure is key to other chiller plant efficiency measures,
which in turn increase the CRAH efficiency. The print shop requires 42-degree water
for dehumidification purposes, and this roughly 20-ton load is the ‘tail wagging the
dog’ of the 500-ton chiller plant, which could otherwise supply water up to 50
degrees or higher with an appropriate reset schedule. A dedicated air-cooled chiller
could be installed and operated as needed to meet the print shop’s special needs and
allow the chiller plant to meet the cooling requirements of the data center and office
spaces. Not only would the plant operate more efficiently, but it would have more
capacity at the increased chilled-water temperatures.

The second chiller plant measure (EEM 5) is to re-commission the chilled-water
temperature reset, the condenser-water temperature reset, and the existing
water-side economizer. The original design called for a reset of the chilled water
temperature from 44 to 50 degrees depending on load; this range should be
revisited to determine if even higher temperatures are feasible at times. The higher
the chilled-water temperature, the more savings from the water-side economizer
and the more efficient the chillers will be when they operate. A related measure
(that has already been implemented between June and October 2015) is to operate
only two of the chilled-water pumps instead of all three. Not only does the extra
pump use energy, but most of that energy ends up in the chilled water, which
increases the load on the plant and decreases the cooling available to the building
loads.

The second part of EEM 5 is to re-commission the condenser water temperature
reset. The original design called for two modes: first, under chiller-only operation,
use a reset from 85 to 65 degrees depending on wet-bulb temperature (WB
temperature plus an “approach” temperature of 7 degrees, which is the design
approach); this range should be revisited to determine if even lower temperatures
are feasible (note the towers are presently controlled to maintain 80 degrees F all
year). The second mode, economizer plus chiller, uses a 42-50 degree reset with a
5-degree approach while keeping the water temperature above 40 degrees F using
the condenser-water bypass. The lower the condenser water temperature can be, the
more savings there will be from the water-side economizer and the more efficient
the chillers will be when they operate. According to the chiller manufacturer, the
leaving condenser water temperature should be at least 25 degrees F higher than
the leaving chilled water temperature; this can be accomplished with low condenser
water supply from the cooling towers by modulating the existing bypass valve in the
chiller condenser water piping. A related measure (again already implemented by
the staff between June and October 2015) is to operate only two of the condenser
water pumps instead of all three; not only does the extra pump use energy, but most
of that energy ends up in the condenser water, which increases the load on the
towers and decreases their ability to make cool condenser water for the chillers.
Another part of good operation of the condenser water system is to enable the fans
in all cooling towers that have water flowing over them; with the existing variable-
speed fans in the AGENCY X towers, this scheme will minimize the fan energy
needed to provide the necessary heat rejection from the plant, as well as providing
on-line redundancy.


The last part of EEM 5 is to re-commission the existing water-side economizer.
At the specified design condition, the existing heat exchanger has the capacity of 83
tons, or a third of one of the 250-ton chillers. But when the condenser water is cold
enough (42 to 50 degrees per the original design), it can provide even more cooling
than a single chiller and carry the entire load. The condenser-water temperature is
presently controlled to 80 degrees F all year and should be reset both for
economizer operation and to increase chiller efficiency (see above). The economizer
heat exchanger is piped in series with the chiller evaporators, allowing it to operate
in “integrated” mode, i.e., as a pre-cooler to the chillers. This configuration allows
the exchanger to pick up as much load as possible before the chillers bring the
chilled water down to the needed temperature. This operation will allow the
chillers to be turned off during the winter and to operate in a top-off function for
some of the fall and spring, whenever the condenser water is cold enough. Since the
chillers use about ¾ of the total chiller plant energy, reducing or eliminating their
operation for much of the year will result in significant energy savings.

The savings in Table 9 include the increased usage from the small print shop chiller
as well as the savings from the temperature resets and use of the water-side
economizer. The combination of the resets, shutting off the extra pumps, and use of
the economizer improves the overall annual plant performance from 0.87 kW per
ton of cooling to 0.49 kW per ton, a savings of 44%.
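
As a check of that percentage (illustrative arithmetic only; the kW/ton values are those stated above):

    # Percent improvement in annual chiller plant performance (lower kW/ton is better).
    baseline_kw_per_ton = 0.87
    improved_kw_per_ton = 0.49
    savings = 1 - improved_kw_per_ton / baseline_kw_per_ton
    print(f"Plant energy savings per ton-hour of cooling: {savings:.0%}")   # ~44%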

Electrical Measures (EEM 6)


The UPS system represents an efficiency opportunity in most data centers.
In this data center, the UPS was on average loaded to approximately 18% of its
rated capacity. Since efficiency increases with load, there is an opportunity to shut
down modules in each of the A and B UPS systems and still provide full redundancy.
Even if the entire A or B system were down, the three on-line modules in the other
system would be loaded to only about 46% of capacity, and the loss of one additional
module would bring the remaining modules to about 69%. Until the load in the
center grows to preclude this option, turning off one module in each of the A and B
sets would save energy and still provide 2(N+1) redundancy.
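
A sketch of the redundancy arithmetic behind those percentages, using the combined UPS output from Table 5 (about 633 kW) and the 450 kW module rating; it assumes the full dual-corded IT load transfers to the surviving system, and small differences from the 46% and 69% cited above reflect the exact load at the time.

    # Loading on the surviving UPS modules if one whole system (A or B) is lost.
    TOTAL_IT_LOAD_KW = 633    # combined UPS output, Table 5
    MODULE_RATING_KW = 450

    for modules_online in (3, 2):   # one module per system turned off; then one more module lost
        load_factor = TOTAL_IT_LOAD_KW / (modules_online * MODULE_RATING_KW)
        print(f"{modules_online} modules online: {load_factor:.0%} loaded")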

In addition to the UPS opportunity, we found that two of the PDUs were energized
but not supplying any IT equipment; thus their no-load losses are going to waste.
Turning off these units (D5 and S1A3) until such time as they are needed would
directly save energy in their transformers and indirectly by requiring less cooling.

The savings in Table 9 reflect the combination of the above-mentioned UPS and
PDU measures.


Additional Measures

In addition to the recommendations above, the following strategies are recommended:

1. It is recommended that agency management investigate and adopt
a "total cost of ownership" (TCO) approach to their data centers.
Energy costs are already eclipsing the cost of the IT equipment over
its life, and this will only get worse as energy prices rise. If actions
requiring capital investment were not taken in the past because the
large energy savings over time were not understood, this practice
should be reviewed.
2. An energy manager should be established with responsibility for
monitoring energy performance and tracking improvements over
time. Specific goals (targets) for energy reduction should be
implemented along with the commitment for capital expenditures
where the return on investment or TCO warrants such an investment.
3. If the agency operates multiple data centers, a mechanism to share
best practices should be established.

The pie charts below show the current data center energy breakdown (Figure 8)
along with the projected energy breakdown (Figure 9) after implementation of the
recommended measures. Note that the estimate of the absolute number of 811
average kW in Figure 9 assumes the IT load stays constant. As the IT load grows, the
absolute total number will grow, and the absolute energy use of the electrical and
cooling infrastructure will grow, but the PUE typically decreases since the
infrastructure generally gets more efficient as the load increases.


Figure 8. Current (June 2015) Facility Performance

Figure 9. Projected Facility Performance


Thanks to a pro-active staff, the AGENCY X data center has rapidly progressed
toward being an effective and energy-efficient facility (from a PUE of 2.3 to 1.7).
Implementing the recommendations in this report would make it truly exemplary,
with a PUE of about 1.3, and with better cooling and power service to the IT
equipment.
