Data Centers ENR116

Data Centers

Energy Conservation Opportunities


Data Center
A Data Center is a repository for the storage, management and
dissemination of data

• Data center spaces can consume up to 100 times as much electricity as
standard office spaces

• With such large power consumption, they are prime targets for energy-
efficient design measures that can save money and reduce electricity use.

• However, the critical nature of data center loads elevates many design criteria
Data Centers

– Accounts for about 2% of anthropogenic CO2 emissions

– Roughly equivalent to the aviation industry
– IT energy usage will double in the next 4 years

– https://www.youtube.com/watch?v=iuZDylVFbhs
How is energy typically used in the data center?
• Data center resource: 55% power and cooling, 45% IT load
• Server/storage hardware: 70% power supply, memory, fans, planar, drives . . . , 30% compute resources
• Processor: 80% idle, 20% usage rate
Green Data Center
A Green Data Center has mechanical, lighting, electrical and computer
systems designed for maximum energy efficiency and minimum
environmental impact

Opportunities in:
• Reduction in power and cooling
• Increase server/storage utilization
• Improvement in Data Center space

• https://www.youtube.com/watch?v=Q7ysv9UY2rE
Servers
• Servers take up most of the space and drive the entire operation.

• The majority of servers run at or below 20% utilization most of the
time, yet still draw close to full power while doing so.
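The waste implied by the bullet above can be sketched with a simple linear power model. The wattage and utilization figures below are hypothetical, chosen only to illustrate the slide's claim that a lightly used server still draws most of its full-load power.

```python
# Illustrative sketch: energy drawn by servers that sit mostly idle.
# The idle/peak wattages are assumptions for illustration only.

def annual_server_energy_kwh(idle_power_w, peak_power_w, utilization):
    """Approximate annual energy (kWh) using a linear power model:
    power = idle + (peak - idle) * utilization."""
    avg_power_w = idle_power_w + (peak_power_w - idle_power_w) * utilization
    return avg_power_w * 8760 / 1000  # 8760 hours per year -> kWh

# Hypothetical 500 W server that draws 350 W even when idle
busy = annual_server_energy_kwh(350, 500, 1.0)   # fully utilized
idle = annual_server_energy_kwh(350, 500, 0.2)   # typical 20% utilization
print(f"At 20% utilization the server still consumes "
      f"{idle / busy:.0%} of its fully-loaded annual energy")
```

This is why consolidation and virtualization (covered later in the deck) pay off: fewer servers at higher utilization do the same work for much less idle power.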
Storage Devices
• Power consumption is roughly proportional to the number of storage
modules used.

• Storage redundancy needs to be rationalized and right-sized to avoid
rapid scale-up in size and power consumption.
Power Supplies
• Rack servers tend to represent the largest portion of energy load in a
typical data center

• Most data center equipment uses internal or rack-mounted
alternating current/direct current (AC-DC) power supplies.

• A typical rack server's power supply converts AC to DC at 60-70%
efficiency.
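At the 60-70% efficiency quoted above, a large share of the input power never reaches the server and instead becomes heat that the cooling system must also remove. A minimal sketch, with a hypothetical 300 W DC load:

```python
# Sketch: heat lost in a rack server's AC-DC power supply.
# The 300 W load and the comparison efficiencies are assumptions.

def psu_loss_w(dc_load_w, efficiency):
    """AC power drawn and heat dissipated by a supply delivering dc_load_w."""
    ac_input_w = dc_load_w / efficiency
    return ac_input_w, ac_input_w - dc_load_w

for eff in (0.65, 0.90):  # legacy supply vs. a high-efficiency unit
    ac_in, loss = psu_loss_w(300, eff)
    print(f"{eff:.0%} efficient PSU: draws {ac_in:.0f} W AC, "
          f"wastes {loss:.0f} W as heat")
```

Every watt lost in the supply is paid twice: once at the meter and again as added cooling load.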
Videos - IBM
• https://www.youtube.com/watch?v=ymtrqigTEXU
Air Management
• Air management for data centers entails all the design and configuration
details that go into minimizing or eliminating mixing between the cooling
air supplied to equipment and the hot air rejected from the equipment.

• A few key design issues include the configuration of equipment’s air intake
and heat exhaust ports, the location of supply and returns, the large scale
airflow patterns in the room, and the temperature set points of the airflow.

• Effective air management implementation minimizes the bypass of cooling
air around rack intakes and the recirculation of heat exhaust back into rack
intakes.
Cable Management
• Under-floor and over-head obstructions often interfere with the
distribution of cooling air. Such interferences can significantly reduce
the air handlers’ airflow as well as negatively affect the air
distribution.

• Cable congestion in raised-floor plenums can sharply reduce the total
airflow as well as degrade the airflow distribution through the
perforated floor tiles.

• Both effects promote the development of hot spots.


Cable management strategy
• A minimum effective (clear) height of 24 inches should be provided for raised floor
installations. Greater under floor clearance can help achieve a more uniform pressure
distribution in some cases.
• A data center should have a cable management strategy to minimize air flow
obstructions caused by cables and wiring. This strategy should target the entire
cooling air flow path, including the rack-level IT equipment air intake and discharge
areas as well as under-floor areas.
• Persistent cable management is a key component of maintaining effective air
management.
• Instituting a cable mining program (i.e. a program to remove abandoned or
inoperable cables) as part of an ongoing cable management plan will help optimize
the air delivery performance of data center cooling systems.
Aisle Separation and Containment
• A basic hot aisle/cold aisle configuration is created when the equipment
racks and the cooling system’s air supply and return are designed to
prevent mixing of the hot rack exhaust air and the cool supply air drawn
into the racks.
• Data center equipment is laid out in rows of racks with alternating cold
(rack air intake side) and hot (rack air heat exhaust side) aisles between
them.
• Strict hot aisle/cold aisle configurations can significantly increase the air-
side cooling capacity of a data center’s cooling system.
• All equipment is installed into the racks to achieve a front-to-back airflow
pattern that draws conditioned air in from cold aisles, located in front of
the equipment, and rejects heat out through the hot aisles behind the
racks.
Aisle Separation and Containment
• Rows of racks are placed back-to-back, and holes through the rack
(vacant equipment slots) are blocked off on the intake side to create
barriers that reduce recirculation.
• Additionally, cable openings in raised floors and ceilings should be
sealed as tightly as possible.
• With proper isolation, the temperature of the hot aisle no longer
impacts the temperature of the racks or the reliable operation of the
data center; the hot aisle becomes a heat exhaust.
• The air-side cooling system is configured to supply cold air exclusively
to the cold aisles and pull return air only from the hot aisles.
Raising Temperature Set Points
• A higher supply air temperature and a higher difference between the return
air and supply air temperatures increase the maximum load density
possible in the space and reduce the size of the air-side cooling equipment
required.

• The lower required supply airflow due to raising the air-side temperature
difference provides the opportunity for fan energy savings.

• Lower supply airflow can ease the implementation of an air-side
economizer by reducing the size of the penetrations required for outside air
intake and heat exhaust.
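The fan energy savings mentioned above come from the fan affinity laws: fan power scales roughly with the cube of airflow, so a modest airflow reduction yields a disproportionate power reduction. A sketch of that relationship (the 20% airflow reduction is a hypothetical example):

```python
# Fan affinity-law sketch: power scales ~ with the cube of airflow.
# Real fans and drives deviate somewhat from this ideal relationship.

def fan_power_fraction(flow_fraction):
    """Fraction of full fan power needed at a given fraction of full airflow."""
    return flow_fraction ** 3

# A wider supply/return delta-T lets the same heat load be carried with
# less airflow; suppose airflow drops to 80% of the original value.
print(f"80% airflow -> {fan_power_fraction(0.8):.0%} of full fan power")
```

This is also why VFD-driven CRAH fans (recommended later in the deck) save so much energy at part load.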
Air Handlers - Central vs. Modular Systems
• Better performance has been observed in data center air systems that
utilize specifically-designed central air handler systems.

• A centralized system offers many advantages over the traditional multiple
distributed unit system that evolved as an easy, drop-in computer room
cooling appliance (commonly referred to as a CRAH unit).

• Centralized systems use larger motors and fans that tend to be more
efficient. They are also well suited for variable volume operation through
the use of VSDs and maximize efficiency at part-loads.
High-Efficiency Chilled Water Systems
• Use efficient water-cooled chillers in a central chilled water plant.

• A high-efficiency VFD-equipped chiller with an appropriate condenser
water reset is typically the most efficient cooling option for large
facilities.

• Chiller part-load efficiency should be considered since data centers
often operate at less than peak capacity. This can be optimized with
VFD compressors, high evaporator temperatures and low entering
condenser water temperatures.
Free Cooling Air-Side Economizer
• The cooling load for a data center is independent of the outdoor air
temperature.

• Most nights and during mild winter conditions, the lowest cost option
to cool data centers is an air-side economizer; however, a proper
engineering evaluation of the local climate conditions must be
completed to evaluate whether this is the case for a specific data
center.
Uninterruptible Power Supplies (UPS)
• UPS systems provide backup power to data centers, and can be based on battery banks,
rotary machines, fuel cells, or other technologies; efficiency ranges from 86% to 95%.

• A portion of all the power supplied to the UPS to operate the data center equipment is
lost to inefficiencies in the system.

• The first step to minimize these losses is to evaluate which equipment, if not the entire
data center, requires a UPS system.

• Redundancy in particular requires design attention; operating a single large UPS in
parallel with a 100% capacity identical redundant UPS unit (N+1 design redundancy)
results in very low load factor operation, at best no more than 50% at full design
buildout.
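The redundancy penalty above can be made concrete with a small load-factor calculation. The kW figures are hypothetical; the point is that a 1+1 pair caps each unit at 50% load, while spreading redundancy over more, smaller modules lets each run closer to its efficient operating region.

```python
# Sketch: per-unit load factor of parallel-redundant UPS designs.
# IT load and unit capacities below are illustrative assumptions.

def ups_load_factor(it_load_kw, unit_capacity_kw, n_units):
    """Average load factor when it_load_kw is shared across n_units
    identical UPS modules operating in parallel."""
    return it_load_kw / (unit_capacity_kw * n_units)

# Hypothetical 500 kW IT load on two 500 kW units (1+1 redundancy)
print(f"1+1 at full buildout: {ups_load_factor(500, 500, 2):.0%} load factor")
# The same load on four 167 kW units (3+1) runs each unit much harder
print(f"3+1 at full buildout: {ups_load_factor(500, 167, 4):.0%} load factor")
```

Since UPS efficiency typically falls off at low load factors, the modular arrangement wastes less of the 86-95% efficiency range cited above.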
Power Distribution Units (PDU)
• A PDU passes conditioned power that is sourced from a UPS or generator
to provide reliable power distribution to multiple pieces of equipment.

• It provides many outlets to power servers, networking equipment and
other electronic devices that require conditioned and/or continuous power.

• Maintaining a higher voltage in the source power lines fed from a UPS or
generator allows for a PDU to be located more centrally within a data
center. As a result, the conductor lengths from the PDU to the equipment
are reduced and less power is lost in the form of heat.
Distribution Voltage Options
• Another source of electrical power loss, for both AC and DC distribution, is the chain of
conversions required to go from the original voltage supplied by the utility to the
voltage at each individual device within the data center.

• Minimize the resistance by increasing the cross-sectional area of the distribution path
and making it as short as possible.

• Maintain a higher voltage for as long as possible to minimize the current.

• Use switch-mode transistors for power conditioning.

• Locate voltage regulators close to load to minimize distribution losses at lower voltages
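The bullets above all attack the same quantity: resistive loss in the conductors, which is I²R. Higher voltage means less current for the same power, and shorter or thicker conductors mean less resistance. A sketch with hypothetical values:

```python
# I^2 * R conductor-loss sketch. The 10 kW load, the 0.05 ohm run
# resistance, and the two feed voltages are illustrative assumptions.

def i2r_loss_w(power_w, voltage_v, resistance_ohm):
    """Conductor heating loss for a given delivered power and line voltage."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

load_w, r = 10_000, 0.05
for v in (208, 415):
    print(f"{v} V feed: {i2r_loss_w(load_w, v, r):.0f} W lost in the conductors")
```

Doubling the voltage halves the current and cuts the conductor loss to a quarter, which is why the PDU section recommends keeping the voltage high until close to the load.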
DC Power
• In a conventional data center power is supplied from the grid as AC power and
distributed throughout the data center infrastructure as AC power.
• However, most of the electrical components within the data center, as well as the
batteries storing the backup power in the UPS system, require DC power.
• As a result, the power must go through multiple conversions resulting in power loss and
wasted energy.

• One way to reduce the number of times power needs to be converted is by utilizing a DC
power distribution. This has not yet become a common practice and, therefore, could
carry significantly higher first costs, but it has been tested at several facilities.
• A study done by Lawrence Berkeley National Labs in 2007 compared the benefits of
adopting a 380V DC power distribution for a datacom facility to a traditional 480V AC
power distribution system. The results showed that the facility using the DC power had a
7% reduction in energy consumption compared to the typical facility with AC power
distribution.
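The energy argument for DC distribution is simply that series conversion efficiencies multiply, so removing a stage raises the end-to-end figure. The stage efficiencies below are hypothetical illustrations, not the values from the LBNL study:

```python
# Sketch: end-to-end efficiency of a power conversion chain.
# All stage efficiencies below are illustrative assumptions.
from math import prod

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency of conversions applied in series."""
    return prod(stage_efficiencies)

# Hypothetical AC path: UPS double conversion, PDU transformer, server PSU
ac_chain = chain_efficiency([0.94, 0.96, 0.90])
# Hypothetical DC path: one rectification stage, server DC-DC conversion
dc_chain = chain_efficiency([0.96, 0.92])
print(f"AC chain: {ac_chain:.1%}  DC chain: {dc_chain:.1%}")
```

With these illustrative numbers the DC path comes out several points ahead, the same order of magnitude as the 7% reduction reported in the 2007 LBNL comparison.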
Lighting
• Data center spaces are not uniformly occupied and, therefore, do not
require full illumination during all hours of the year.
• UPS, battery and switch gear rooms are examples of spaces that are
infrequently occupied.

• Zone-based occupancy sensors throughout a data center can have a
significant impact on reducing the lighting electrical use.
• Careful selection of an efficient lighting layout (e.g. above aisles and not
above the server racks), lamps and ballasts will also reduce not only the
lighting electrical usage but also the load on the cooling system.
Use of Waste Heat
• Waste heat can be used directly or to supply cooling required by the data center
through the use of absorption or adsorption chillers, reducing chilled water plant
energy costs

• The higher the cooling air or water temperature leaving the server, the greater
the opportunity for using waste heat.

• Direct use of waste heat for low temp heating applications such as preheating
ventilation air for buildings or heating water will provide energy savings.

• Heat recovery chillers provide an efficient means to recover and reuse heat from
data center equipment environments for comfort heating of typical office
environments.
Thermal storage solution
• Shift energy usage to off-peak hours, saving up to 30%
• Provide extra cooling capacity to enable growth and survive grid
failures
Cooling System with PCM
[Diagram] Thermal storage device (PCM) placed between the computer room air conditioners and the chillers: heat flows from the HVAC unit into PCM storage, then to the chiller, which rejects it through the cooling tower.
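The "up to 30%" figure above is a tariff-arbitrage effect: the storage lets the chiller run at night and discharge during the day. A sketch of the cost arithmetic, with hypothetical tariff rates and a hypothetical shifted fraction:

```python
# Sketch: cost effect of shifting cooling energy to off-peak hours.
# The daily kWh, tariff rates, and shifted fraction are all assumptions.

def daily_cooling_cost(kwh_per_day, peak_rate, offpeak_rate, offpeak_fraction):
    """Cost when offpeak_fraction of the cooling energy is bought off-peak."""
    return (kwh_per_day * offpeak_fraction * offpeak_rate
            + kwh_per_day * (1 - offpeak_fraction) * peak_rate)

baseline = daily_cooling_cost(2000, 0.20, 0.08, 0.0)  # all cooling on-peak
shifted = daily_cooling_cost(2000, 0.20, 0.08, 0.5)   # half shifted via PCM
print(f"Savings: {1 - shifted / baseline:.0%}")
```

Note this shifts cost, not energy: total kWh is unchanged (or slightly higher due to storage losses), so the benefit depends entirely on the peak/off-peak rate spread.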
Efficiency Metrics
Data Center Metrics and Benchmarking
• Energy efficiency metrics and benchmarks can be used to track the
performance of data centers and identify potential opportunities to
reduce energy use.
Power Usage Effectiveness (PUE)
• PUE is defined as the ratio of the total power to run the data center
facility to the total power drawn by all IT equipment:

PUE = Total Facility Power / IT Equipment Power

Data Center Infrastructure Efficiency (DCiE)
• DCiE is defined as the ratio of the total power drawn by all IT
equipment to the total power to run the data center facility, or the
inverse of the PUE:

DCiE = IT Equipment Power / Total Facility Power = 1 / PUE
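Both metrics can be computed directly from two power readings. A minimal sketch (the kW readings are hypothetical):

```python
# Sketch: PUE and DCiE from facility-level and IT-level power readings.
# The 1800 kW / 1000 kW readings below are illustrative assumptions.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: lower is better, 1.0 is the ideal."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data Center Infrastructure Efficiency: higher is better (= 1/PUE)."""
    return it_equipment_kw / total_facility_kw

total_kw, it_kw = 1800, 1000
print(f"PUE  = {pue(total_kw, it_kw):.2f}")
print(f"DCiE = {dcie(total_kw, it_kw):.0%}")
```

A PUE of 1.8 means that for every watt delivered to IT equipment, 0.8 W goes to cooling, power distribution losses, lighting and other overhead.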
Videos
• Google: https://www.youtube.com/watch?v=voOK-1DLr00
• IBM: https://www.youtube.com/watch?v=ymtrqigTEXU
• Microsoft: https://www.youtube.com/watch?v=0uRR72b_qvc
• https://www.youtube.com/watch?v=Q7ysv9UY2rE
• Google: https://www.youtube.com/watch?v=XZmGGAbHqa0
• https://www.youtube.com/watch?v=xGSdf2uLtlo
• https://www.youtube.com/watch?v=O96PwWkJdUo
Best Practices
Data center utilization, management and planning
It is important to develop a holistic strategy and management approach to the data
center to support economic efficiency and environmental benefits.

 Organizational groups & general policies
  Involve organizational groups: create a board with approval authority and
representatives from different departments (e.g. software, ICT, power, cooling and
other facilities)
  Ensure that the existing equipment has optimal use before making any new investment

 Resilience level and provisioning:
  Ensure Business Continuity and Disaster Recovery (BC/DR) requirements are in
accordance with the architecture
  Avoid unnecessary fixed losses from provisioning excess power and cooling capacity
  Maximize architecture design efficiency using a variable ICT electrical load
ICT equipment and services
Best practices
Selection of new ICT and telecom equipment:
 Tender process considering energy performance, humidity and temperature
 Measurement of energy efficiency performance (eco-rating, service level, Energy Star)
 Maximum temperature & humidity supported
 Compliance with green regulations (REACH and WEEE)
 Energy & temperature reporting hardware (IPMI, DCMI and SMASH)
 Selection of equipment suitable for the data center: power density and airflow direction
Results
 Reduce power and cooling for the ICT equipment
 Maximize efficiency in refrigeration and free cooling
 Reduction of the use of hazardous materials
 Suitable use and control of the electrical network
ICT equipment and services
Best practices
Deployment of new ICT services
 Virtualization and consolidation of servers
 Select/develop efficient software
 Reduce hot/cold standby equipment
Management of existing ICT equipment and services
 Audit existing physical equipment and services
 Decommission unused and low-value services
 Management systems to control energy: ICT workloads
Data management
 Define policies for efficient storage of information
 Select lower-power storage devices
 Use technologies such as de-duplication, compression, snapshots and thin provisioning
Results
 Reduce physical infrastructure
 Accurate information about ICT assets
 Improve storage efficiency
 Reduce large volumes of data not required
 Meet the business service level requirements defined in the data management policy
Cooling
Best practices
Airflow design and management
 Equipment should share the same airflow direction
 Design raised floor or suspended ceiling height
 Separate from the external environment
Cooling management
 CRAC settings with appropriate temperature and relative humidity
 Regular maintenance of the cooling plant
Results
 Airflow protection of equipment
 Uniform equipment inlet temperatures
 Allow set points to be increased
 Control over CRAC
Cooling system
Best practices
 Temperature and humidity settings: expanded ICT environmental conditions
 Free and economized cooling: air and water, direct/indirect free cooling
 High-efficiency cooling plant: select adequate CRAC units, cooling towers,
refrigerants, compressors, ...
 Computer room air conditioners (CRAC): calculate the adequate cooling capacity,
disposition and quantity of CRAC units
 Reuse of data center waste heat: recycle the heat rejected from the data center
(heat pumps can be used to raise its temperature)
 Reuse energy from the environment (air, waste heat, water, ...)
Results
 Optimizes the cooling plant's efficient operation without compromising reliability
 Improvement of the CRAC system: reduce overcooling, decrease server temperatures
 Increase server reliability and density
Data center power equipment

Power equipment normally includes uninterruptible power supplies, power
distribution units, and cabling, but may also include backup generators and
other equipment.
Best practices
Selection and deployment of power equipment
 Power systems, UPS and cabinet panels
 Energy-efficient batteries
 Direct current (DC) power technology
 Use new and renewable energy: solar, wind, hydraulic and geothermal
Management of power equipment
 Distribute power equally across equipment
 Optimal power density to equipment
 Wire power cables under the raised floor
 Load balance management
Results
 Reduction of capital cost and fixed overhead losses
 Reduce the amount of carbon emissions
 Prevent damage and malfunction in data center equipment
Monitoring
The development and implementation of an energy monitoring and reporting
management strategy is core to operating an efficient data center.

Best practices
Energy use and environmental measurement
 Meters for measuring: incoming energy, ICT equipment, air temperature and humidity
Energy use and environmental collection and logging
 Periodic manual reading
 Automatic daily and hourly reading
Energy use and environmental reporting
 Periodic written reports on energy consumption
 Energy and environmental reporting console to monitor energy efficiency
ICT reporting
 Server, network and storage utilization
Results
 Improve visibility of data center infrastructure
 Managing the energy
 Proper use of ICT equipment and network
Design of network

Best practices
 Selection of network equipment (switches, routers, etc.) with the best
energy-efficiency performance
 Network design: minimize the number of internal network elements ("grey ports")
 Plan for run-time energy consumption profiling of the network
 Establish extended energy conservation policies for network devices
 Use the network as a medium to propagate energy conservation policies
throughout the DC
Results
 Maximize egress bandwidth
 Reduce network management complexity
Energy Efficiency Measures in Data Centers
• Improve UPS Load Factor
• Take UPS units offline by transferring their load to the online units,
thus increasing the load factor of the remaining units

• Chilled Water Plant
• Install/integrate a waterside economizer
Other Measures
• UPS Rooms Cooling and Genset Block Heater
• Minimize cooling by widening the temperature range
• Minimize cooling and fan energy by running CRAHs with VFDs
• Minimize energy use by the standby generator block heater

• Full Lighting Retrofit
• Reposition light fixtures from above racks to above aisles
• Reduce lighting levels
• Install occupancy sensors to control fixtures; install multiple circuits for large
areas
Air Management Adjustment

• Seal all floor leaks, including those from floor-mounted electrical panels
• Rearrange the perforated floor tiles, locating them only in cold aisles
• Contain hot air to avoid mixing with cold air, as was done in one center
• Utilize contained racks with exhaust chimneys, as was done in one of the centers
• Seal spaces between and within racks
• Raise the supply air temperature (SAT)
• Install variable frequency drives (VFDs) for CRAH fans and control fan speed by air
plenum pressure
• Convert computer room air handler return air temperature control to rack inlet air
temperature control
• Raise the chilled water supply temperature, thus saving energy through better chiller
efficiency
Sources
• Best Practices Guide for Energy-Efficient Data Center Design, EERE

• Case Study: Opportunities to Improve Energy Efficiency in Three
Federal Data Centers, EERE

• Google, IBM, Microsoft
