BACHELOR THESIS
Data Center in Space domain application: basic design
concepts of infrastructure
By
NGUYEN HUU THANH
Internship period: 10 May – 10 August 2020
Department of Space and Applications
University of Science and Technology of Hanoi
ACKNOWLEDGEMENTS
LIST OF ABBREVIATIONS
LIST OF TABLES
ABSTRACT
I. INTRODUCTION
1.1 Context and Motivation
1.2 Objectives of the thesis
1.3 Structure of the thesis
II. BACKGROUND AND RELATED WORKS
2.1 WHAT IS THE DATA CENTER?
2.1.1 Data Center – basic concepts
2.1.2 Data Center in daily application
2.2 DATA CENTER CLASSIFICATION AND INFRASTRUCTURE COMPONENTS
2.2.1 Data Center classification
2.2.2 Data Center Infrastructure: main components
2.3 DATA CENTER APPLICATION IN SPACE DOMAIN
2.4 OVERVIEW OF DATA CENTER IN VIETNAM
III. BASIC CONCEPT DESIGN OF A DATA CENTER IN USTH
3.1 TECHNICAL STANDARDS REFERENCES FOR DESIGN CONCEPT
3.1.1 Technical standards for design of physical facilities in Data Center
3.1.2 Facts of the IT infrastructure in the USTH
3.2 SIMPLE CONCEPT OF DATA CENTER DESIGN IN UNIVERSITY
3.2.1 Site installation and main components consideration
3.2.2 Proposal design concept of the Data Center in USTH
IV. CONCLUSIONS
REFERENCES
ACKNOWLEDGEMENTS
LIST OF ABBREVIATIONS
LIST OF TABLES
LIST OF FIGURES
Figure 13: Some data centers in Vietnam certified by the Uptime Institute (updated 2020), https://uptimeinstitute.com/uptime-institute-awards/
Figure 14: Topologies of different Tier systems [19, 30, 31]
Figure 15: Typical cooling technologies: row-based (InRow) and room-based (InRoom) [32]
Figure 16: Data center at the University of Science and Technology of Hanoi, 5th floor of the 9-floor building: IT room of 60 m2 with 03 rack cabinets, 8 CPUs, and other storage
Figure 17: Actual data center at the University of Science and Technology of Hanoi, 5th floor (USTH building), with comfort air conditioning only: no UPS, no backup generator, no access control, no raised-floor system
Figure 18: Layout of the current data center at the University of Science and Technology of Hanoi, 5th floor: 03 rack cabinets, 8 CPUs, and other storage
Figure 19: Layout of the new data center at the University of Science and Technology of Hanoi, 5th floor: 20 rack cabinets
Figure 20: Topology for the data center at USTH according to the Uptime Institute Tier III for M&E infrastructure [19, 30, 31]
Figure 21: Layout of the data center at USTH according to the Uptime Institute Tier III
ABSTRACT
The 21st century is a booming era of digital technology; it is also the age of cloud computing, in which internet-based data is handled from remote places. Data is entered, stored, processed, deposited, and backed up in central information infrastructure located in dedicated buildings that we call information technology rooms or data centers. A data center is a place where all the information technology servers, storage, and network facilities are gathered in compliance with the state of the art, and where the equipment requires 24/7 continuous operation. Data centers are becoming as important a part of business operations as office, retail, and industrial assets. Big Data, the Internet of Things, the Industry 4.0 revolution, and smart cities are all trends that will increase demand for computing power, and thus for data centers, on an exponential rather than linear scale. However, knowledge about data center architecture, especially infrastructure design for a typical location, remains limited. The first part of this report covers the application of data centers in the space domain, where a large amount of data collected daily from satellites must be stored, computed on, and processed for specific applications, such as weather forecasting, that require high-performance computing and storage facilities. This thesis presents a comprehensive literature review that accounts for the definition, basic design requirements, applications, and infrastructure topology of data centers. Based on these observations, an important part of the report aims at the basic infrastructure design of a small data center at USTH. The primary design follows best practices and international reference standards for data center infrastructure, such as the Tier III level recommended by the Uptime Institute. The design is scaled to the current IT equipment of USTH as well as the expected expansion over the next five years. We conclude by describing key components and challenges for future research on constructing effective and accurate data center infrastructures.
Keywords: Data center, Cloud computing, Server, Infrastructure topology, Tier level.
I. INTRODUCTION
In Vietnam [3], the new Cybersecurity Law is among the factors expected to drive more demand for data centers. The law, which came into effect earlier this year, requires technology businesses to store Vietnamese users' data within the country and to provide it to the Ministry of Public Security on request. In the International Telecommunication Union's 2018 survey, Vietnam ranked 50th out of 175 countries in cybersecurity. The country's emerging, tech-savvy population is another factor: it has 64 million Internet users. The fast-growing trend of colocation data centers in the Asia-Pacific is underpinned by the rapid pace of digitization and a surge in demand for cloud-based services across the region. In Vietnam, there are currently 17 colocation data centers in three areas (Hanoi, Ho Chi Minh City, and Da Nang). The large number of people using Internet services and smartphones has also created a trend of big technology companies moving factories and transferring servers to Vietnam. The country is likewise developing its 5G information technology infrastructure, aiming toward smart cities. The data center market in Vietnam is therefore considered to have great potential. However, a full understanding of the infrastructure and operation of data centers is still limited, concentrated mainly within a few large service providers such as Viettel IDC and CMC Telecom. Currently, there are no statistical reports or aggregate assessments of the number of data centers in Vietnam, nor guidelines for the design and operation of data center infrastructure.
1.3 STRUCTURE OF THE THESIS
The thesis is structured into three main parts starting with the introduction of the
topic. The rest of the work is organized as follows:
Part II introduces data center concepts and briefly presents data center architecture. It explains the importance of the power supply and cooling air supply, and describes how they are used to ensure the reliable 24/7 operation of the IT load (servers, network, storage). It also presents the data center's important applications for businesses and organizations in different fields, such as space science. Finally, it presents the classification of data centers according to power capacity and number of rack cabinets, and according to the Uptime Institute Tier I-IV classification. An important part covers the standards for infrastructure design, such as the Uptime Tiers, ANSI/TIA-942, and the recent ASHRAE TC 9.9 thermal guidelines (2015). We also give a short, up-to-date overview of the state of data centers in Vietnam.
Part III presents the proposed design for a typical data center at the University of Science and Technology of Hanoi. We propose a data center infrastructure (focused on the power and cooling systems) designed to the Tier III standard for the University of Science and Technology of Hanoi. The design aims to allow expansion in the future, as the number of students at the university can increase to up to 5,000 students per year; the racks will be used by the university's different departments (Space and Applications; Energy; Information and Communication Technology; Water, Environment, Oceanography; ...), prioritizing the storage of all the university's databases, such as student records, faculty records, and other important USTH databases.
Part IV summarizes the work, recalling the purpose of constructing the concept design for the data center. It states a few observations about the design standards used and the first concept obtained.
II. BACKGROUND AND RELATED WORKS
Figure 2: Typical layout of a data center arranged in three main areas: server room, power room, and NOC [9]
To ensure the stable, continuous 24/7 operation of IT devices, two main concerns arise in the early design stage and during the operation phase of a data center (Figure 3): (i) what are the core components of a data center, and (ii) what is in a data center facility? The core components include routers, switches, firewalls, storage systems, servers, and application delivery controllers; these components store and manage business-critical data and applications. They comprise the network infrastructure, which connects servers (physical and virtualized), data center services, storage, and external connectivity to end-user locations; the storage infrastructure, which holds the data that fuels the center, a valuable commodity; and the computing resources, which provide processing, memory, local storage, and network connectivity through servers. On the facility side, a data center requires significant physical infrastructure to support its hardware and software, including power subsystems, uninterruptible power supplies (UPS), ventilation and cooling systems, fire suppression, backup generators, and connections to external networks.
Figure 3: Two main groups: core components (IT systems) and physical infrastructures
2.1.2 Data Center in daily application
There are many applications of the data center, and the criticality of data centers has been fueled mainly by two aspects. First, the ever-increasing demand for data computing, processing, and storage by a variety of large-scale cloud services (such as Google, Facebook, and Amazon), by telecommunication operators, and by banks and government organizations has resulted in the proliferation of large data centers with thousands or millions of servers and CPUs. Second, the requirement to support a vast variety of applications, ranging from those that run for a few seconds to those that run persistently on shared hardware platforms, has promoted the building of large-scale computing infrastructures. As a result, data centers have been touted as one of the key enabling technologies for the fast-growing IT industry; the global data center market is expected to reach revenues of around $174 billion by 2023 [10].
Table 1: Typical data centers – the biggest in the world, 2019 [11]
Rank 1: Range International Information Group, Langfang, China – 6,300,000 sq. ft.
Data centers are designed to support business applications and research activities that include: (i) email and file sharing, which need long-term storage; (ii) customer relationship management (CRM), i.e., managing all of a company's relationships and interactions with current and potential customers, where customer data uploaded to and stored in a data center allows companies to access it anytime, anywhere; (iii) big data, artificial intelligence, and machine learning, which require storing and processing a large amount of data in a short time; and (iv) virtual desktops, communication, and collaboration services.
2.2 DATA CENTER CLASSIFICATION AND INFRASTRUCTURE
COMPONENTS
2.2.1 Data Center classification
Data centers can generally be classified in the following ways: by owner and service provision purpose; by the number of rack cabinets (or the amount of IT equipment); and by the Uptime Institute rating standard.
Firstly, two broad categories of data center ownership are enterprise and colocation.
Enterprise data centers are built and owned by large technology companies such as
Amazon, Facebook, Google, Microsoft, Yahoo, as well as government agencies, financial
institutions, insurance companies, retailers, and other companies across all industries.
Enterprise data centers support web-related services for their organizations, partners, and
customers. Colocation data centers are typically built, owned, and managed by data center
service providers such as Coresite, CyrusOne, Digital Realty Trust, DuPont Fabros...
These data center service providers do not use the services themselves but rather lease the
space to one or multiple tenants [12].
The Data Center Institute classifies data centers into six size groups (Table 2), measured by space or number of racks [12, 13]. The Uptime Institute created a standard Tier Classification System (Table 3) with four tiers to consistently evaluate the infrastructure performance, or uptime, of data centers [14].
Table 2: Data center size classifications by rack number and location space [12]
Table 3: Data center infrastructure classification by Tier

Tier I – Basic Capacity (uptime 99.671%, downtime 28.8 hours/year): the data center provides dedicated site infrastructure to support IT beyond an office setting, including a dedicated space for IT systems, an uninterruptible power supply, dedicated cooling equipment that does not shut down at the end of normal office hours, and an engine generator to protect IT functions from extended power outages.

Tier II – Redundant Capacity Components (uptime 99.749%, downtime 22 hours/year): the data center includes redundant critical power and cooling components to provide select maintenance opportunities and an increased margin of safety against IT process disruptions that would result from site infrastructure equipment failures. The redundant components include power and cooling equipment.

Tier III – Concurrently Maintainable (uptime 99.982%, downtime 1.6 hours/year): the data center has no shutdowns for equipment replacement and maintenance. A redundant delivery path for power and cooling is added to the redundant critical components of Tier II, so that each component needed to support the IT processing environment can be shut down and maintained without impacting the IT operation.

Tier IV – Fault Tolerance (uptime 99.995%, downtime 26.3 minutes/year): the site infrastructure builds on Tier III, adding the concept of fault tolerance to the site infrastructure topology. Fault tolerance means that when individual equipment failures or distribution path interruptions occur, the effects of the events are stopped short of the IT operations.
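The downtime figures in Table 3 follow directly from the uptime percentages applied to an 8,760-hour year. The short Python sketch below reproduces that arithmetic; the tier names and percentages are taken from the table.

# Reproduce the Table 3 downtime-per-year figures from the uptime percentages.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

tiers = {"I": 99.671, "II": 99.749, "III": 99.982, "IV": 99.995}

for tier, uptime_pct in tiers.items():
    downtime_h = (1 - uptime_pct / 100) * HOURS_PER_YEAR
    print(f"Tier {tier}: {downtime_h:.1f} h/year (~{downtime_h * 60:.0f} min)")
# Tier I: 28.8 h; Tier II: 22.0 h; Tier III: 1.6 h; Tier IV: 0.4 h (26.3 min)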
2.2.2 Data Center Infrastructure: main components
a) Rack Cabinet
A data center rack is a physical steel and electronic enclosure designed to house servers, networking devices, cables, and other data center computing equipment. This physical structure provides equipment placement and orchestration within a data center facility. Each rack is generally prefabricated with slots for connecting electrical, networking, and Internet cables. Data center racks follow a systematic design and are classified by their capacity, i.e., the amount of IT equipment they can hold. Two types of power limits can be defined for a rack in a data center [16, 17]. The maximum power of a rack, denoted Rack Max Power, is determined by the rack power supply and generally equals its rated power (kVA or kW). When the total power of all devices stacked inside a rack reaches or exceeds this maximum, the power supply is cut off to protect those devices. Meanwhile, the maximum power of a server, denoted Server Max Power, generally equals the server's rated power; a server's actual power consumption usually does not reach this maximum even when it runs at full capacity, and in practice is far lower than the rated power [16]. When deploying servers, it must be ensured that the total power demand of all servers in a rack does not exceed the maximum power of the rack, to avoid the rack powering off and disrupting the workloads running on it. In practice, the total power demand of servers in a rack is usually calculated from all the servers' maximum power. New generations of high-density servers and networking equipment have increased rack densities and overall facility power requirements: while power density per rack averaged 6 kW in 2006, it climbed to about 8 kW by 2012 and 12 kW per rack by 2014. A data center's capacity is normally rated by its power density (Table 5).
Table 5: Rack cabinet design consideration by power density [16, 17]
Low-density DC: up to 5 kW per rack
Medium-density DC: 6 kW to 10 kW per rack
High-density DC: 11 kW to 17 kW per rack
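To illustrate the deployment rule above, the following minimal Python sketch sums server nameplate powers and checks them against the Rack Max Power; the server names and power figures are hypothetical, not taken from any table in this report.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    max_power_kw: float  # nameplate (rated) power, i.e. Server Max Power

def rack_fits(servers: list[Server], rack_max_power_kw: float) -> bool:
    # Conservative check: sum the Server Max Power values and compare
    # against Rack Max Power, as described in the text above.
    total = sum(s.max_power_kw for s in servers)
    return total <= rack_max_power_kw

servers = [Server("web-1", 0.45), Server("db-1", 0.75), Server("hpc-1", 1.20)]
print(rack_fits(servers, rack_max_power_kw=5.0))  # True: 2.4 kW fits a low-density rack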
b) Power System
Figure 6: An overview of power supply system in the Data Center [18, 19]
1 https://uptimeinstitute.com/tier-certification/design
c) Cooling System
Every kilowatt (kW) of electrical power used by IT devices is eventually released as heat [20, 21]. This heat must be drawn away from the device, the rack cabinet, and the data center area so that operating temperatures are kept constant. Air conditioning systems, which operate in a variety of ways and have different levels of performance, are used to draw the heat away. Providing air conditioning to IT systems is crucial for their availability and security. Servers heat up as they operate; if they reach a critical point, the server components can no longer work properly, or the processor may burn out. Humidity is also important: if the data center is too humid, condensation forms on hard drives and in connecting sockets, quickly leading to damage, corrosion, and eventually equipment failure. When data center air is too dry, electrostatic sparks are easily produced, and the high voltages from static discharge can damage data center components. Temperature and humidity are kept in ideal condition by using Computer Room Air Conditioner (CRAC) units. IT hardware produces an unusual, concentrated heat load and, at the same time, is very sensitive to changes in temperature or humidity. In fact, there are two ways to remove the heat load in a room: standard comfort air conditioning and precision air conditioning. Standard comfort air conditioning is designed neither to handle the heat load concentration and heat load profile of technology rooms nor to provide the precise temperature and humidity set points required for these applications. Precision air systems are designed for close temperature and humidity control. They provide high reliability for year-round operation, with the ease of service, system flexibility, and redundancy necessary to keep the technology room up and running 24 hours a day [20].
As recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)2 [21, 22] and the ANSI/TIA-942 data center standard, the first step toward controlling excess heat is to reconfigure the cabinet layout into a hot aisle/cold aisle arrangement (Figure 7). The ASHRAE allowable thermal envelopes, defined in "Thermal Guidelines for Data Processing Environments", represent the conditions under which IT manufacturers test equipment to ensure functionality. According to the ASHRAE TC 9.9 standards, to ensure an optimal working environment for servers and for IT and telecommunications equipment, cooling systems need to maintain the IT environment as follows: ASHRAE TC 9.9 2015 recommends a temperature of 22°C ± 1°C and a humidity of 50% ± 5% RH. ASHRAE issued its first thermal guidelines for data centers in 2004; the original recommended air-temperature envelope was 20-25°C and 40-55% RH (2004), when reliability and uptime were the primary concerns and energy costs were secondary. Since then, ASHRAE has issued a recommended range of 18-27°C and 35-60% RH (2008) and, in 2011 [23], published classes A3 and A4, which allow a temperature range of 5-45°C and 20-80% RH. The A3 and A4 classes were created to support new energy-saving technologies such as economization. A summary of the ASHRAE recommended range and classes is given in Figure 8. At the intake side of the rack (cold aisle), the air temperature must be between 18°C and 27°C (30-38°C in the hot aisle), with relative humidity (RH) from 40% to 55% depending on the active thermal load (Figure 7).
2 https://www.electronics-cooling.com/2019/09/ashrae-technical-committee-9-9-mission-critical-facilities-data-centers-technology-spaces-and-electronic-equipment/
Figure 8: ASHRAE TC 9.9 - 4th edition 2015 Thermal Guidelines [23]
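To make the envelope concrete, here is a minimal Python sketch that checks a measured cold-aisle reading against the 2008 recommended range quoted above (18-27°C, 35-60% RH); the thresholds come from the text, while the sample sensor readings are hypothetical.

# Recommended envelope from the ASHRAE 2008 guidance cited above.
T_MIN, T_MAX = 18.0, 27.0    # cold-aisle air temperature, deg C
RH_MIN, RH_MAX = 35.0, 60.0  # relative humidity, %

def in_recommended_envelope(temp_c: float, rh_pct: float) -> bool:
    # True when a cold-aisle reading sits inside the recommended envelope.
    return T_MIN <= temp_c <= T_MAX and RH_MIN <= rh_pct <= RH_MAX

print(in_recommended_envelope(22.0, 50.0))  # True: the TC 9.9 2015 set point
print(in_recommended_envelope(30.0, 50.0))  # False: above the recommended range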
d) Raised floor
A raised floor ensures high load support, easy access for the maintenance of underfloor equipment, cleaning, and safety, and offers a flexible, modular space beneath the IT equipment. According to [24], the raised floor was developed and implemented as a system intended to provide the following functions:
- A cold-air distribution system for cooling IT equipment
- Tracks, conduits, or supports for data cabling
- A location for power cabling
- A copper ground grid for grounding of equipment
- A location to run chilled water or other utility piping
Figure 9: Raised floor in a Data Center [24]
2.3 DATA CENTER APPLICATION IN SPACE DOMAIN
As of 31 March 2020, there were 2,666 operating satellites in orbit: 1,327 owned by the United States, 169 by Russia, 363 by China, and 807 by other countries. By orbit type, they comprise 1,918 in low Earth orbit (LEO), 135 in medium Earth orbit (MEO), 59 in elliptical orbits, and 554 in geostationary orbit (GEO) [25]. Space technology has advanced rapidly in recent years, and satellites play an important role in daily life.
Four important satellite applications are communication, navigation, weather and climate monitoring, and Earth and planetary observation. A communications satellite relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. There are about 2,000 communication satellites in Earth orbit, used for private, public, academic, business, and government purposes. The frequency at which the signal is sent into space is called the uplink frequency, while the frequency at which the transponder sends it back down is the downlink frequency (Figure 10).
Figure 10: Transponder of satellite internet
Satellite internet speeds range from 12 Mbps to 100 Mbps [8]. This means about 1 TB of data per day, or 30 TB per month, can be transmitted between a satellite and a ground station. If many satellites are used to transmit data, hundreds of TB of data per month will need to be collected, stored, processed, and distributed. Therefore, a data center is built to manage this large amount of data.
Another satellite application is Earth and planetary observation. The European Space Agency (ESA) operates five Sentinel programs (Sentinel 1, 2, 3, 4, 5P) [26], which include radar and super-spectral imaging for land, ocean, and atmosphere monitoring. For instance, soil moisture can be extracted and studied from Sentinel 1 satellite data (Figure 11); such an image contains about 1.5 GB of data, and on average one image is captured every 30 s. This means about (24 × 3600 / 30) × 1.5 GB = 4.32 TB of data is acquired per day. Together, the five Sentinel programs will produce hundreds of TB of data each month. Therefore, a data center is designed to manage this very large amount of data.
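A minimal Python sketch reproducing the two back-of-the-envelope volume estimates above; the 100 Mbps link rate and the 1.5 GB image every 30 s are the figures given in the text.

SECONDS_PER_DAY = 24 * 3600

# Link-rate estimate: a 100 Mbit/s downlink sustained for a day, in TB.
link_tb_per_day = 100e6 * SECONDS_PER_DAY / 8 / 1e12
print(f"{link_tb_per_day:.2f} TB/day")   # ~1.08 TB/day, i.e. ~30 TB/month

# Image-cadence estimate: one 1.5 GB Sentinel 1 image every 30 seconds.
image_tb_per_day = (SECONDS_PER_DAY / 30) * 1.5e9 / 1e12
print(f"{image_tb_per_day:.2f} TB/day")  # 4.32 TB/day, as computed above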
Figure 11: A Sentinel 1 image of Northeast Vietnam
The Indian Space Science Programme has the primary goal of promoting and establishing space science and technology programmes. The ISSDC is the primary data center for data retrieved from Indian space science missions. This center is responsible for collecting the payload data and related ancillary data of space science missions such as Chandrayaan, Astrosat, and Youthsat. The payload data sets can include a range of information, including satellite images, X-ray spectrometer readings, and other space observations [27]. The Southeast Asia Regional Climate Downscaling (SEACLID) project was established in November 2013 with the objectives of downscaling multiple climate change scenarios for Southeast Asia, building capacity in regional climate simulation, and establishing a data centre for data dissemination [28].
A mini data center of the Department of Space and Applications (Figure 12) at USTH was built to store and process the databases for scientific research activities. This research includes climate modeling and remote sensing projects, which demand high-performance computing. Climate modeling is based on mathematical equations that represent the basic laws of physics, chemistry, and biology governing the behavior of the atmosphere, ocean, land surface, ice, and other parts of the climate system, and the interactions among them. Climate modeling therefore requires massive computing capacity and capability, with a computer performance of about 2 Tflops and storage of about 200 TB of data.
Figure 12: Mini data center at the University of Science and Technology of Hanoi
2.4 OVERVIEW OF DATA CENTER IN VIETNAM
Figure 13: Some data centers in Vietnam certified by the Uptime Institute (updated 2020), https://uptimeinstitute.com/uptime-institute-awards/
III. BASIC CONCEPT DESIGN OF A DATA CENTER IN USTH
leading to great transparency. The standard covers all aspects of the physical data center, including site location, architecture, security, safety, fire suppression, and electrical, mechanical, and telecommunication systems. A summary of the requirements for data center design, divided into four levels, is given in Table 7 below.
Table 7: Summary of Tier requirements (Tier I / Tier II / Tier III / Tier IV)
- Active capacity components to support the IT load: N / N+1 / N+1 / N+N
- Distribution paths: 1 / 1 / 1 active & 1 alternate / 2 (both active)
- Concurrently maintainable: No / No / Yes / Yes
- Continuous cooling: No / No / No if average load < 5 kW, Yes if average load > 5 kW / Yes
In Figure 14, it can be seen that the difference between Tier I and Tier II is the number of generators and UPS units: in Tier II, additional generators and UPS provide backup for the most critical components. The significant difference between Tier II and Tier III is the number of delivery paths: in Tier III, alternative power from a second utility provides parallel power support for the critical IT load in case the primary path fails. However, there is no requirement to install a UPS in the passive path, so a Tier III system remains vulnerable to utility conditions. Tier IV provides a completely redundant system by adding two active power delivery paths, enabling dual systems to run actively in parallel; both power paths contain N+1 UPS and generator sets. The comparison of the different Tier systems is summarized in Table 7, which shows that a higher Tier level provides greater system availability.
The data center cooling system is an important system that removes the heat discharged by IT equipment during operation. The thermal load calculation is based on the power consumption of the IT equipment. Using a precision air-conditioning system is recommended, per the standards described in the previous section. The ASHRAE allowable thermal envelopes, defined in "Thermal Guidelines for Data Processing Environments", represent the conditions under which IT manufacturers test equipment to ensure functionality. Several cooling technologies are currently used in data centers: room-, row-, and rack-based cooling systems. Choosing the right cooling technology is based on experience and on the requirements of the data center design. Figure 15 introduces two data center server cooling technologies: room-based and row-based cooling systems. The closer the cooling system is to the heat source, the more efficiently it operates. Room-based cooling may consist of one or more air conditioners supplying cool air completely unrestricted by ducts, dampers, vents, etc., or the supply and/or return may be partially constrained by a raised-floor system or an overhead return plenum. In a row-based configuration, the CRAC units are associated with a rack row and are assumed to be dedicated to that row for design purposes. Row-based cooling has a number of side benefits beyond cooling performance: the reduction in airflow path length reduces the required CRAC fan power, increasing cooling efficiency.
3.1.2 Facts of the IT infrastructure in the USTH
Figure 16: Data center at the University of Science and Technology of Hanoi, on the 5th floor of the 9-floor building: an IT room of 60 m2 with 03 rack cabinets, 8 CPUs, and other storage
Figure 17: The actual data center at the University of Science and Technology of Hanoi, located on the 5th floor of the USTH building, with comfort air conditioning only: no UPS, no backup generator, no access control, and no raised-floor system
Figure 18: Layout of the current data center at the University of Science and Technology of Hanoi, 5th floor: 03 rack cabinets, 8 CPUs, and other storage
Comments:
- The data center area measures depth × width = 8500 × 6500 mm, on the 5th floor of a 9-floor building. The floor-to-ceiling height of the room is 2700 mm (Figures 16-18).
- It contains 03 rack cabinets of uneven height, width × depth = 600 mm × 1070 mm, of which 01 rack (1991 mm high, 600 mm wide, 1200 mm deep) holds the operating server equipment; the remaining racks hold other equipment. The entire USTH database is stored and processed on 07 CPUs.
- The fire protection system is only suitable for fire prevention and fighting in an office area; it does not comply with data center standards.
- The USTH data center is not equipped with UPS and generator systems, so data loss can occur whenever the building's electricity supply is interrupted. The stability and reliability of IT equipment operation are therefore not guaranteed to data center standards.
- Furthermore, the air conditioners currently in use are not the precision air-cooling systems recommended for data centers. They are standard comfort air-conditioning units with no humidification or dehumidification, so the exact temperature and humidity cannot be controlled across the IT equipment room when the rack system operates, greatly affecting the stability and performance of the IT equipment.
- In addition, the open space of the data center area causes a large loss of cooling air, and thus wasted power consumption and a reduced ability to cool the IT equipment when the data center operates during summer days with very high outside ambient temperatures in Hanoi.
Based on the data center design standards presented in Part II, the main infrastructure design consists of two parts: the electrical infrastructure and the air-conditioning infrastructure.
In practice, the design is based on the needs of the end user (i.e., the power density of the racks), the location where the data center is installed, and the deployment experience of other data centers in Vietnam. In this work, we recommend installing the data center on the 5th floor of the USTH building, complying with the Tier III standard. We divide the space into three rooms (see Figure 19): the IT room, for the racks containing the servers, storage, and network equipment; the power room, for the electrical supply equipment (UPS and power distribution cabinets); and the NOC room, for the IT staff managing and operating the USTH data center.
Figure 19: Layout of the new data center at the University of Science and Technology of Hanoi, 5th floor: 20 rack cabinets
4 http://hdl.com.vn/
Table 8: Power consumption calculation of the IT equipment in the data center (critical equipment supplied by the UPS)
Table 9: UPS power sizing
Total power rating for UPS: (6) = (1) + (3) + (5) = 111.55 kW
UPS rating selected: 115 kVA / 115 kW
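A minimal Python sketch of this sizing step: sum the UPS-backed loads and pick the smallest standard rating that covers the total. The line items (1), (3), and (5) refer to Table 8 entries whose values did not survive extraction, so the individual loads below are hypothetical; only the 111.55 kW total and the 115 kVA/115 kW selection come from the text.

def size_ups(loads_kw: list[float], catalogue_kw: list[float]) -> float:
    # Sum the critical loads and choose the smallest catalogue rating
    # that covers them, mirroring Table 9.
    total = sum(loads_kw)
    return min(r for r in catalogue_kw if r >= total)

loads = [60.0, 40.0, 11.55]              # hypothetical items (1), (3), (5): 111.55 kW total
catalogue = [80.0, 100.0, 115.0, 160.0]  # hypothetical standard UPS sizes, kW
print(size_ups(loads, catalogue))        # 115.0 -> the 115 kVA/115 kW unit selected above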
Figure 20: Proposed topology for the data center at USTH, following the Uptime Institute Tier III for M&E infrastructure [19, 30, 31]
a) Thermal rating of Data Center energy demand – cooling system
Table 11: Power demand calculation for all equipment in the data center
Total: (12) = (10) + (11) = 6 kW
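Since every kilowatt drawn by IT equipment is released again as heat (Part II), the cooling plant can be sized directly from the electrical load. A minimal Python sketch of that conversion, using the 111.55 kW UPS-backed load from Table 9 and a purely illustrative 20% design margin (not a figure from this report):

IT_LOAD_KW = 111.55  # UPS-backed IT load from Table 9
MARGIN = 1.20        # hypothetical design margin

cooling_kw = IT_LOAD_KW * MARGIN
btu_per_hour = cooling_kw * 3412  # 1 kW = 3,412 BTU/h
refrig_tons = cooling_kw / 3.517  # 1 ton of refrigeration = 3.517 kW

print(f"{cooling_kw:.1f} kW -> {btu_per_hour:,.0f} BTU/h -> {refrig_tons:.1f} tons")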
Figure 21: Layout of the data center at USTH according to the Uptime Institute Tier III
In Figure 21, we propose a data center designed to the Tier III standard for the University of Science and Technology of Hanoi. The design allows future expansion as the number of students at the university increases to up to 5,000 per year; the racks are allocated to the university's different departments (SPACE, EN, ICT, ...), prioritizing the storage of student records, faculty records, and other important USTH databases.
IV. CONCLUSIONS
This thesis presented a comprehensive review of previous research work and future trends in data center energy infrastructure. The infrastructure design of a data center is complicated and must be based on the related international design standards, covering the electrical infrastructure and the cold-air supply system used for cooling IT equipment. In practice, the operation of IT equipment (contained inside rack cabinets) requires a 24/7 constant power supply from the UPS system, at 22°C and 50% relative humidity maintained by a precision air-conditioning system. Failing to ensure these recommended operating conditions creates a risk of data loss and security compromise, along with negative effects on the performance of IT equipment, and can thereby become a significant source of economic loss for the business. On that basis, we have proposed a new data center design for USTH based on the Tier III infrastructure standard, which is applied in most major data centers in Vietnam. However, due to time limitations and the lack of an in-depth understanding of all data center components, this report does not cover other internal structures of data centers.
REFERENCES
[1] Cisco, “Cisco Global Cloud Index: Forecast and Methodology, 2015–2020”,
White Paper, 2016
[2] M. Dayarathna, Y. Wen, R. Fan, “Data Center Energy Consumption Modeling: A
Survey”, IEEE Communications Surveys & Tutorials, Vol. 18, No. 1, p. 732-794,
2016
[3] VNEXPRESS, "Vietnam among least competitive data center markets in Asia Pacific: report", https://e.vnexpress.net/news/news/vietnam-among-least-competitive-data-center-markets-in-asia-pacific-report-3970435.html, August 22, 2019
[4] Gemma A. Brady, Nikil Kapur, Jonathan L. Summers, Harvey M. Thompson, “A
case study and critical assessment in calculating power usage effectiveness for a
data centre”, Energy Conversion and Management, Volume 76, 2013, Pages 155-
161, ISSN 0196-8904.
[5] Cisco Systems, Inc: “What Is a Data Center”,
https://www.cisco.com/c/en/us/solutions/data-center-virtualization/what-is-a-data-
center.html#~distributed-network
[6] Rong, H.; Zhang, H.; Xiao, S.; Li, C.; Hu, C, “Optimizing energy consumption for
data centres”, Renew. Sustain. Energy Rev. 2016, 58, 674–691
[7] Oró, E.; Depoorter, V.; Garcia, A.; Salom, J, “Energy efficiency and renewable
energy integration in data centres. Strategies and modelling review”, Renew.
Sustain. Energy Rev. 2015, 42, 429–445.
[8] Beaty, D.L, “Internal IT load profile variability”, ASHRAE J. 2013, 55, 72–74
[9] Patrick Donovan, “Data Center Projects: Advantages of Using a Reference
Design”, White paper 147. Schneider Electric.
[10] Arizton, "Data Center Market – Global Outlook and Forecast 2018-2023", https://www.arizton.com/market-reports/global-data-center-market/snapshots
[11] ICT Price, "Top 10 biggest data centres from around the world", http://ict-price.com/top-10-biggest-data-centres-from-around-the-world/
[12] Tim Day and Nam D. Pham, “Data Centers: Jobs and Opportunities in
Communities Nationwide”, 2017 U.S. Chamber of Commerce Technology
Engagement Center.
[13] Andrea, Mike. 2014. “Data Center Standards: Size and Density,” The Strategic
Directions Group Pty Ltd.
[14] Stansberry, Matt. 2014. “Explaining the Uptime Institute’s Tier Classification
System.” Uptime Institute.
[15] J. Mitchell Jackson, J.G. Koomey, B. Nordman and M. Blazek, “Data center
power requirements: measurements from Silicon Valley”, July 2001: University
of California, Berkeley.
[16] Tom Weber, “Evaluating data center cabinet power densities”, www.align.com
[17] Kevin Brown, Wendy Torell, Victor Avelar, “Choosing the Optimal Data Center
Power Density”, White paper n°156, Schneider Electric.
[18] Xibo Jin, Fa Zhang, Athanasios V. Vasilakos, Zhiyong Liu, “Green Data
Centers: A Survey, Perspectives, and Future Directions”, 2016.
https://arxiv.org/abs/1608.00687v1
[19] S. Chalise et al., "Data center energy systems: Current technology and future
direction," 2015 IEEE Power & Energy Society General Meeting, Denver, CO,
2015, pp. 1-5, doi: 10.1109/PESGM.2015.7286420.
[20] APC: “Why Do I Need Precision Air Conditioning?”, 2001 American Power
Conversion (Schneider Electric).
[21] John Bruschi, Peter Rumsey, Robin Anliker, Larry Chu, and Stuart Gregson,
“FEMP Best Practices Guide for Energy-Efficient Data Center Design”, NREL
report/project number: nrel/br-7a40-47201, 2011.
[22] ASHRAE TC 9.9, Standard 90.4, “Thermal Guidelines for Data Processing”,
https://www.electronics-cooling.com/2019/09/ashrae-technical-committee-9-9-
mission-critical-facilities-data-centers-technology-spaces-and-electronic-
equipment/
[23] ASHRAE, 2015, “Thermal guidelines for Data Processing Environments”, 4th
edition. Atlanta: ASHRAE.
[24] Neil Rasmussen, “Raised Floors vs Hard Floors for Data Center Applications”,
White Paper 19. 2014, Schneider Electric.
[25] The Union of Concerned Scientists, "UCS Satellite Database", published Dec 8, 2005, updated Apr 1, 2020. https://www.ucsusa.org/resources/satellite-database
[26] Copernicus Program, https://en.wikipedia.org/wiki/Copernicus_Programme
[27] Indian Space Science Data Center project,
https://www.re3data.org/repository/r3d100010988?fbclid=IwAR0h-
yga7Pvxp97cOYkDKRhhpTswCCgDYtpJFgP-hTl-nVP1N0Zui8z1-1g
[28] Project N° ARCP2015-04CMY-Tangang, “The Southeast Asia Regional Climate
Downscaling”, 2015
[29] Colocation Vietnam (data accessed on June 2020):
https://www.datacentermap.com/vietnam/
[30] Anixter, "Data Center Infrastructure Resource Guide", https://www.anixter.com/content/dam/Anixter/Guide/12H0013X00-Data-Center-Resource-Guide-EN-US.pdf
[31] Victor Avelar, “Guidelines for Specifying Data Center Criticality / Tier Levels”,
White Paper 122, by APC (Schneider Electric).
[32] Kevin Dunlap and Neil Rasmussen, "Choosing Between Room, Row, and Rack-based Cooling for Data Centers", White Paper 130, APC (Schneider Electric).