Abacus DC DPR
Prime Minister Modi's focus on data security and the protection of digital
assets has instilled confidence among businesses like ours, encouraging us
to invest in the data center sector in India. His dedication to fostering a
digitally inclusive nation has not only benefited businesses but has also
improved the lives of millions of Indians by providing better access to
education, healthcare, and government services.
• Encourage domestic and foreign investments in the sector.
• Promote R&D for manufacturing and development of Data Centre related products and
services for domestic and global markets.
• Promote domestic manufacturing, including non-IT as well as IT components, to
increase domestic value addition and reduce dependence on imported equipment for
Data Centres.
1. Tier 4 Hyper Scale Data Centre (Among Top 5 Data Centres in India)
2. In-campus housing facilities for on-roll staff
3. 40% Green Belt Development (Open for all General Public)
4. 20 MW Solar Plant to reduce stress on the electricity grid
5. 10 MW Wind Turbine Farm
6. In campus Resort & Recreational facilities for International Clients
7. In campus Helipad, Warehousing facilities
8. Exclusive mobility rights for e-vehicles only
9. Architectural Masterpiece which would boost Tourism & Investment
opportunities in the country
10. Startup Incubation Hub
11. Development of Venture Capital Fund to promote incubated startups
12. Subsidised Co-Working, Co-Located Office Spaces for IT Companies.
13. Completely Smart Campus with 3-Tier Security System.
14. In-house Research & Development Wing for innovations in Data Centre Markets.
15. Skill Development under National Skill Development Mission
16. Employment Hub (can generate employment for over 5,000 direct & indirect workers)
17. Industry-First Flexible Product & Service Offerings
18. GPU-based Offerings.
Our portfolio comprises leading-edge products and services. These form the building
blocks of the innovative solutions we construct. The solutions are aimed at putting your
business at ease. With us, you are assured of platinum-grade service, world-class network
infrastructure, global alliances and superior domain expertise.
Our team is a perfect blend of experience, young energy and sharp brains, all dedicated
to making this country a better place to live and work. We have posted over 200% YoY
growth in revenue and subscribers over the past few years and rank among the
fastest-growing ISPs in the country.
The above quote depicts the state of society after the COVID-19 pandemic. In the
aftermath of the pandemic, everyone felt the need to be online, be it businesses,
education, socialisation, meetings and even CAR DELIVERIES. Yes! Even cars have been
delivered online / contact-free in the post-COVID world. The pandemic gave society what
it needed the most: a bridge between traditional businesses and modern technology.
3. VP Aviations Private Limited, Air Ambulance & Charter Flight operator &
Aggregator in India
5. Comwave Media (OPC) Private Limited, VAS Aggregator for ISPs & Enterprise
Networking Products. Developed myCloudCAM, myWiFi, myFileShare products
for various ISP & Enterprise usage.
Our progressive thinking and creative approach is what makes us stand out
from the crowd. It's why we consistently win the trust of our clients and why so many of
them keep coming back to us. We have a clear vision of what we need
to deliver – high-speed internet access with absolutely zero downtime; and
we guide our business using five core values – lead, grow, deliver, sustain
and protect. I'm proud of the work we do and give you my personal
commitment that we will deliver what we promise and do it safely and
sustainably.
My Internet's top priority has always been — and remains — providing high
quality internet services and exceeding customer service expectations.
Abacus Cloud's experienced and professional technical and sales team is
fully dedicated to delivering reliable, cost-effective and flexible solutions to
customers. At Abacus Cloud, we will continue to enable our customers to
expect more from us by enhancing our service offerings and meeting our
customers' evolving needs.
Sincerely,
• The data Centre is the department in an enterprise that houses and maintains back-end
information technology (IT) systems and data stores—its mainframes, servers and
databases.
• For a small organization, the data Centre may be a small closet that houses one
or two servers and a network patch panel.
• There are four functional requirements of a data Centre, which include location (a place
to locate computers, storage and networking devices), power (to maintain the devices),
HVAC (a temperature-controlled environment within the parameters needed) and
structured cabling (connectivity to other devices both inside and outside).
A single mainframe required a great deal of power and had to be cooled to avoid
overheating. Security became important – computers were expensive, and were often
used for military purposes. Basic design-guidelines for controlling access to the computer
room were therefore devised.
During the boom of the microcomputer industry, and especially during the 1980s, users
started to deploy computers everywhere, in many cases with little or no care about
operating requirements. However, as information technology (IT) operations started to
grow in complexity, organizations grew aware of the need to control IT resources. The
availability of inexpensive networking equipment, coupled with new standards for the
network structured cabling, made it possible to use a hierarchical design that put the
servers in a specific room inside the company. The use of the term "data Centre", as
applied to specially designed computer rooms, started to gain popular recognition about
this time.
The boom of data Centres came during the dot-com bubble of 1997–
2000. Companies needed fast Internet connectivity and non-stop operation to deploy
systems and to establish a presence on the Internet. Installing such equipment was not
viable for many smaller companies. Many companies started building very large facilities,
called Internet data Centres (IDCs), which provided enhanced capabilities, such as
crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the
time of outage."
The term cloud data Centres (CDCs) has been used. Data Centres typically cost a lot to
build and to maintain. Increasingly, the division of these terms has almost disappeared
and they are being integrated into the term "data Centre”.
Information security is also a concern, and for this reason, a data Centre has to offer a
secure environment that minimizes the chances of a security breach. A data Centre must,
therefore, keep high standards for assuring the integrity and functionality of its hosted
computer environment.
Industry research company International Data Corporation (IDC) puts the average age of a
data Centre at nine years old. Gartner, another research company, says data Centres
older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one
factor driving the need for data Centres to modernize.
Focus on modernization is not new: concern about obsolete equipment was decried in
2007, and in 2011 Uptime Institute was concerned about the age of the equipment
therein. By 2018 concern had shifted once again, this time to the age of the staff: "data
Centre staff are aging faster than the equipment."
Machine room
The term "Machine Room" is at times used to refer to the large room within a Data Centre
where the actual Central Processing Unit is located; this may be separate from where
high-speed printers are located. Air conditioning is most important in the machine room.
Aside from air-conditioning, there must be monitoring equipment, one type of which is to
detect water prior to flood-level situations.
Raised floor
Although the first raised-floor computer room was made by IBM in 1956, and raised
floors have "been around since the 1960s", it was only in the 1970s that it became
common for computer Centres to use them to allow cool air to circulate more efficiently.
The first purpose of the raised floor was to allow access for wiring.
Lights out
The "lights-out" data Centre, also known as a darkened or a dark data Centre, is a data
Centre that, ideally, has all but eliminated the need for direct access by personnel, except
under extraordinary circumstances. Because of the lack of need for staff to enter the data
Centre, it can be operated without lighting. All of the devices are accessed and managed
by remote systems, with automation programs used to perform unattended operations. In
addition to the energy savings, reduction in staffing costs and the ability to locate the site
further from population Centres, implementing a lights-out data Centre reduces the threat
of malicious attacks upon the infrastructure.
The two organizations in the United States that publish data Centre standards are
the Telecommunications Industry Association (TIA) and the Uptime Institute.
• The Internet is the future. India is a diversified community of more than 1.4 billion
people; 60% of the population in India is connected to the internet, and the average person
uses the internet for 6 hours daily. India is the second-largest nation connected to the internet.
• Internet users are now growing by an average of more than one million new users every
day, with all of the original 'Next Billion Users' now online.
• Our latest internet data – collected and synthesised from a wide variety of reputable
sources – shows that internet users are growing at a rate of more than 11 new users per
second, which results in that impressive total of one million new users each day.
• Since the onset of Covid-19, data consumption globally has surged; in India too, it has
gone up on the back of work from home, tele-medicine, online education and digital
commerce.
• The Indian data centre industry clocked $1.2 billion in revenues in fiscal 2020, and
Crisil expects the industry to log a rapid 25-30 per cent CAGR to $4.5-5 billion by fiscal
2025.
• Data Centres will become the next big segment after warehousing, with nearly ₹3.4 Bn
foreign investments in the coming three-four months
Digital Economy
• When the COVID-19 pandemic hit at the turn of 2020, it challenged the foundations of
social and economic norms around the world. One after the other, countries initiated
lockdown protocols and social distancing procedures for public safety that shifted daily
routines to telecommuting, online education, video calls, and digital banking.
• As the centre point of this 'new normal', Internet infrastructure came under huge
stress. Global peak traffic increased by 47%, compared to a forecasted 28%, with
some services like Facebook video calling (which saw a 100% increase) and Netflix
(which welcomed 16 million new subscribers) driving the change and duration of the
peak traffic patterns. Global Wi-Fi traffic also increased, as PC (personal computer)
uploads to cloud computing platforms and video calls surged by 80%, while Internet
Exchange Point (IXP) traffic in Asia-Pacific grew by 40%.
• Such traffic surges have posed a question mark about the capacity and reliability of the
Internet, while at the same time, reminding us that half of the world is still not online. To
ensure the Internet functioned smoothly, governments and service providers launched
numerous emergency initiatives, including flexible spectrum use, additional spectrum
release, increased international and domestic capacity, subsidized broadband services,
and free access to online resources.
• The industry also stepped forward and offered free data and voice minutes, leniency in
the pay-back period, complimentary access to paid content, and cooperation in relief
efforts and disseminating information on COVID-19 safety measures.
• In general, the Internet remained resilient enough to respond to the traffic spikes.
However, it is clear that this resilience has not been uniform across the world, simply
because countries are at varying levels of digital readiness.
• The size of the digital population in India and the growth trajectory of the digital
economy necessitate strong growth of Data Centres, which have the potential to fulfil
the growing demands of the country.
• Indian Data Centre market has seen tremendous growth in the past decade,
riding on the explosion of data through smartphones, social networking sites,
ecommerce, digital entertainment, digital education, digital payments and many
other digital businesses / services.
• The COVID pandemic has shown the importance of Data Localisation, lower latency &
Blockchain technology.
• These can be achieved only if Data Centres are established hyper-locally.
• Data Centres are known as the next "COAL MINES", as data is the future. All major
business houses of India have dived into these "DATA MINES" in the last one year itself.
• Data Centres have seen exponential growth in occupancy and traffic catered in the
last few years. Capacities are running at peak levels due to the rise in bandwidth
demand.
• The more Data Centres there are, the more bandwidth can be provided to the end
subscriber, thereby increasing speed and efficiency.
• Data Centres have been added under the Essential Services Maintenance Act (Act 59 of
1968).
• Continuous functioning of Data Centres is critical for continued delivery of services and
to maintain the normalcy of day to day activities. Inclusion of Data Centre under the
ESMA will enable seamless continuity of services even during times of calamities or
crisis.
• Data Centres were classified under Essential Services during the COVID-19 pandemic
induced lockdown.
• Being at the epicentre of the country, Madhya Pradesh is located at the strategic centre
point of all major financial hubs of the country - New Delhi, Mumbai, Hyderabad, Lucknow,
Jaipur, Patna, Ahmedabad, Nagpur, Pune & Visakhapatnam.
• Located right at the centre of India, Madhya Pradesh borders five other states
and provides unique access to nearly 50 percent of India's population.
MP also has access to major ports on both east and west coasts within a range
of 1000 km; for example, Indore to JNPT, Jabalpur to Paradip, Indore to Kandla,
Rewa to Haldia, and Jabalpur to Vishakhapatnam.
• In the post-GST regime, where raw material and finished goods can move seamlessly
between all the major cities of India, Madhya Pradesh serves as the perfect location
for manufacturing activities. The state has a massive road network spanning
2,30,000 km, seven inland container depots, an international air cargo facility at
Indore, five commercial airports with over 100 flights, and more than 550 trains
daily.
• Abundant land pockets. The state is known for huge unused land pockets; hence, it
would be best to put them to good use.
• Its tourism industry has seen considerable growth, with the state topping the
National Tourism Awards in 2010–11. In recent years, the state's GDP growth
has been above the national average. In 2019–20, the state's GSDP growth was
recorded at 9.07 per cent.
• The state has a prosperous industry ecosystem and is home to over 300 large
industries from different sectors. Some of the major sectors include textile and
apparel (Trident, Vardhman, Grasim, Pratibha Syntex, Nahar), food processing
(ITC, Parle, Hershey's, Coca Cola, Mondelez), automobiles and manufacturing
(John Deere, Volvo, Eicher, M&M, Force, Tafe), and pharmaceuticals (Cipla,
Lupin, Glenmark, Novartis, Mylan).
• The state is not only rich in natural minerals, coal, and diamonds; it also offers
excellent infrastructural support for industries to thrive. For instance, the state
is power surplus with an installed capacity of 23,000+ megawatts.
• The state government also provides guaranteed 24*7 power and water supply
for units in MPIDC industrial parks. In addition, MP has 26 lakh tons of
warehousing capacity with 7 ICDs across the state for export logistics.
State-of-the-art industrial parks further strengthen the industrial infrastructure
in the state.
• The present state government's constant efforts to bring all compliance online
and reduce procedural complexities have further increased the ease of doing
business in MP. The state's INVEST portal covers the entire life cycle of an
investment proposal; from pre-establishment and pre-operation
approvals to incentive processing, a range of 32 services are already available
online. All these policies and more have led to MP being ranked 7th in the
Business Reform Action Plan (BRAP) in 2018.
• Dynamic and future-oriented government approach under the able leadership
of State Chief Minister Sh. Shivraj Singh Chouhan.
They function like hotels where you rent a room (server) for as long as you need to host
your website. The hotel offers everything you need: housekeeping, room service and
laundry (networking, power, and maintenance), while you pay for the convenience.
Below is a breakdown of how these facilities work and the services they render to
generate income.
a. Infrastructure As A Service
Data centres normally provide equipment to customers who don't want to or cannot invest
in building their own facility for private use. The client then pays for what they use and
has the added benefit of being able to use more hardware as demand increases.
Note that the infrastructure offered normally consists of storage space, hosting services,
servers, firewalls, etc. The service provider is then tasked with the maintenance and
upgrade of all the equipment, letting the customer focus more on developing software or
applications to use that infrastructure.
b. Software As A Service
According to reports, it is often cheaper to purchase software just when you need it than
to buy a lifetime licence, especially if buying for many people, for example in a company.
Presently, you can access software as a service through your web browser without
installing it on your devices, thus allowing people to easily collaborate, as it
allows easy data sharing.
Data centres are renowned for offering software such as word processors or spreadsheet
programs in a similar way. Note that it also does not mean that they have to make the
software themselves, but they can offer discounted packages for existing software like
Google Docs or Office 365, etc.
c. Network As A Service
Aside from just offering equipment, a data centre can also offer network services like
phone services over the internet (VoIP), Virtual Private Networks (VPNs), private telephone
networks for use within a company (Private Branch Exchange) and Unified
Communication.
d. Platform As A Service
Note that this service is mainly for developers, as it offers a sustainable environment for
them to build and deploy applications. The data centre provides a platform that supports
certain programming languages and one that caters for most of the configuration of
servers and networks. This also allows developers to focus on their code, build
quickly and ship early.
Also note that a developer can scale up their application easily as resource demand
increases, especially since they don't have to bother about upgrade costs. The expense of
getting more access to hardware through the different packages offered by this service
tends to be less than if the developer purchased the hardware and set it up for
themselves.
• The data Centre industry has a wide range of customers demanding its services. In
view of this, we have determined that we are in business to cater to the following groups
of existing and potential clients.
Data Protection
Scalability
Speed
Cost
User Friendliness
Customisation
Flexibility
Round the Clock Support
Complimentary service platform for Startups & Incubated Companies.
Colocation Incentives
Investor Relations
A dedicated Investor Relations Officer shall be appointed to address investor queries.
Investor Relations is also responsible for the preparation of interim reports and financial
statement bulletins, in cooperation with Group Accounting, Group Treasury and
the business units, creating investor presentations and planning and implementing
investor communications, as well as daily contact with investors and analysts. Investor
Relations is also responsible for organising events.
• Ease of doing business in the sector, towards attracting investments and accelerating
the existing pace of Data Centre growth in the country.
• Tax Rebates, Single Window clearances from all the necessary departments.
Employment Opportunities
• The proposed Data Centre will create job opportunities for over 2,000 direct workers &
over 3,000 indirect workers, hence impacting a lot of lives
• It will create employment in both Skilled and unskilled segments of labour
• 60% reservation in jobs for state-domicile candidates, thereby promoting the local community
Investment Opportunities
• Data Centre being a hub of data, coupled with strategic support from the Government, will
definitely attract investments from various MNCs, FDI & other businesses.
• It will rocket-boost the economy of the state
• Attract various JV options for the State Government
• It will be a boon for the fast-paced IT industry of the state, currently valued at
over ₹4 billion
• It will localise the dependency for data storage within the country, thereby reducing
latency
• MP, being at the epicentre of the country, will help telecoms reduce network latency,
thereby increasing stability and internet speed in the country
• A GPU-based Data Centre will increase broadcasting & streaming business options in
the country.
• It will help in faster delivery of content with lower latency and higher bandwidth, thereby
improving the experience of subscribers
Environmental Impact
• We strive to use as many renewable sources of energy as possible for the project
• At least 60 acres of the land will be reserved for a Solar Power Plant with a capacity of 20
MW
• Over 40 acres of the land will be developed as a "Green Belt", which will be open to the
general public for recreational purposes.
• We would be using water-cooled technology to cool the servers, which leaves a much
smaller carbon footprint compared to traditional air-conditioning systems
Social Responsibilities
• Development of a state-of-the-art "Green Belt" in the vicinity, which will act as a
recreational zone open to the general public.
• Free education for over 100 children of Below Poverty Line families (who would facilitate
the construction work of the DC)
• Creation of an NGO for supporting women and child development in the region.
• The State Government will get an edge over competitors in the forthcoming
elections
• It will optimise the Government's approach towards the general public.
• Political advertisements, endorsements.
• To make the state digital, following in the footsteps of "DIGITAL INDIA"
• It will provide an edge over opposition parties and help counter anti-incumbency.
This can be done by promoting health care and sanitation in rural areas. This can also be
a contribution to the Swachh Bharat Kosh, which has been set up by the Central
Government. Blood donation camps can also be organised as part of a company's CSR
initiative.
This can be inclusive of providing education to children and essential vocational skill
training that enhances employment, or special education among women, the elderly and the
differently-abled.
This can include the restoration of heritage sites, buildings of historical importance and
works of art. Public libraries can be set up as well.
(vi) Measures can be taken towards the benefit and support of armed
forces veterans, war widows and families.
(vii) Contributions to the Prime Minister's National Relief Fund or any other
fund set up by the central government for the welfare, development and relief
of Scheduled Castes, Scheduled Tribes, other backward classes, women and
minorities.
The detailed process of data centre design appears on the outset to be a purely
mechanical process involving the layout of the area, computations to determine
equipment capacities, and innumerable other engineering details. They are, of course,
essential to the design and creation of a data centre, however, the mechanics alone do
not make a data centre. The use of pure mechanics rarely creates anything that is useful,
except perhaps by chance.
There are, in fact, some philosophical guidelines that should be kept in mind during the
data centre design process. These are based on the relatively short history of designing
and building practical data centres, but are also based on design concepts going way
back. This chapter looks at some of these philosophies.
The idea that technology is relatively new, that it arose within the last fifty to one hundred
years, is a common misconception. There have been great advances, particularly in the
electronic age, but the truth of the matter is that technology has been around since
human beings began bashing rock against rock.
One of the most interesting things about design is that it draws from many sources.
Paintings by Raphael and Botticelli in the Renaissance were dependent on the
mathematics of perspective geometry developed more than a millennium and a half before
either was born. They also drew on the language and form of classical architecture and
Greco-Roman mythology to provide settings for many of their works. Raphael and
Botticelli created works that had never been seen before, but they could not have done
this without the groundwork that had been set down in the previous centuries.
Look back to the most prolific designers and engineers in the history of western
civilization: The Romans. Roman advances in design and technology are still with us
today. If you cross a bridge to get to work, or take the subway, or walk down the street to
get a latte, chances are you are doing so using elements of Roman design and
technology. These elements are the arch and concrete.
When entering the Pantheon in Rome, most people probably don’t remark, “What a great
use of the arch!” and “That dome is a single concrete structure.” However, without the
modular design of the arch and the invention of concrete, the Roman Pantheon could not
have been built.
The Romans understood that the arch, by design, had strength and the ability to transfer
load from its centre down to its base. They had used the arch in modular and linear ways
to build bridges and carry water for their water systems. But in the Pantheon, the
modularity of the arch realized its true potential. Spin an arch at its centre point and you
create a dome. This means that across any point in the span you have the strength of the
arch. Also, they had found that concrete could be used to bond all of these arches
together as a single dome. Concrete allowed this dome structure to scale beyond any
other dome of its time. It would take eighteen centuries for technology to advance to the
point where a larger dome than that of the Pantheon could be built.
What does the architecture of ancient Rome have to do with data centres? The physical
architecture itself has little in common with data centres, but the design philosophy of this
architecture does. In both cases, new ideas on how to construct things were needed. In
both cases, using the existing design philosophies of the time, “post and lintel” for
ancient Rome, and “watts per square foot” for data centres, would not scale to new
requirements. It is this idea, the design philosophy of modular, scalable units, that is
critical to meet the requirements of today’s data centres and, more importantly, the data
centres of the future.
A modern data centre still shares many aspects with ancient architecture, structurally and
in service. The form literally follows the function. The purpose of both the Pantheon and a
data centre is to provide services. To provide services, its requirements for continual
functioning must be met. This is the design team’s primary concern. The design of the
data centre must revolve around the care and feeding of the service providing
equipment.
These functional requirements of the data centre are:
■ A place to locate computer, storage, and networking devices safely and securely
■ To provide the power needed to maintain these devices
■ To provide a temperature-controlled environment within the parameters needed to run
these devices
■ To provide connectivity to other devices both inside and outside the data centre
In the design philosophy of this data Centre, these needs must be met in the most
efficient way possible. The efficiency of the data centre system relies entirely on the
efficiency of the design. The fundamental principles of a data centre philosophy should be
your guiding principles.
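Since a data centre is only as effective as its weakest support system (a point repeated later in this report), a minimal sketch of checking a proposed design against these four functional requirements might look like the following. It is written in Python, and every name and number in it is an illustrative assumption, not a figure from this project.

```python
def weakest_links(required, provided):
    """Return the functional requirements (space, power, cooling, connectivity)
    where the proposed design falls short of what the planned equipment needs."""
    return [name for name, need in required.items() if provided.get(name, 0) < need]

# Hypothetical demand (from the planned equipment) vs. supply (from the proposed room).
required = {"space_racks": 120, "power_kw": 1100, "cooling_kw": 1100, "network_gbps": 400}
provided = {"space_racks": 140, "power_kw": 1000, "cooling_kw": 1200, "network_gbps": 400}

shortfalls = weakest_links(required, provided)
print("Shortfalls:", shortfalls or "none")   # -> Shortfalls: ['power_kw']
```

Any single shortfall, here power, limits the whole facility, which is why all four requirements must be planned and verified together rather than in isolation.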
A design philosophy is the application of structure to the functional requirements of an
object based on a reasoned set of values.
Fundamentals of the Philosophy
There are five core values that are the foundation of a data centre design philosophy:
simplicity, flexibility, scalability, modularity, and sanity. The last one might give you pause,
but if you've had previous experience in designing data centres, it makes perfect sense.
The following are the top ten guidelines selected from a great many other guidelines:
1. Plan ahead. You never want to hear “Oops!” in your data centre.
2. Keep it simple. Simple designs are easier to support, administer, and use. Set things
up so that when a problem occurs, you can fix it quickly.
3. Be flexible. Technology changes. Upgrades happen.
4. Think modular. Look for modularity as you design. This will help keep things simple
and flexible.
5. Use RLUs, not square feet. Move away from the concept of using square footage of
area to determine capacity. Use RLUs to define capacity and make the data centre
scalable (see the sketch following this list).
6. Worry about weight. Servers and storage equipment for data centres are getting
denser and heavier every day. Make sure the load rating for all supporting structures,
particularly for raised floors and ramps, is adequate for current and future loads.
7. Use aluminium tiles in the raised floor system. Cast aluminium tiles are strong and
will handle increasing weight load requirements better than tiles made of other materials.
Even the perforated and grated aluminium tiles maintain their strength and allow the
passage of cold air to the machines.
8. Label everything. Particularly cabling! It is easy to let this one slip when it seems as if
"there are better things to do." The time lost in labelling is time gained when you don't
have to pull up the raised floor system to trace the end of a single cable. And you will
have to trace bad cables!
9. Keep things covered, or bundled, and out of sight. If it can’t be seen, it can’t be
messed with.
10. Hope for the best, plan for the worst. That way, you’re never surprised.
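To make guideline 5 concrete, here is a minimal sketch, in Python, of how rack location units (RLUs) might be tallied into overall capacity figures instead of a watts-per-square-foot estimate. The field names and sample rack values are illustrative assumptions, not measured figures from this report.

```python
from dataclasses import dataclass

@dataclass
class RLU:
    """One rack location unit: what a single rack location must be fed to run."""
    power_kw: float      # electrical load drawn by the equipment at this location
    cooling_kw: float    # heat to be removed (roughly tracks the power drawn)
    weight_kg: float     # load placed on the raised floor or slab
    network_ports: int   # copper/fibre connections required

def total_capacity(rlus):
    """Aggregate per-location requirements into data-centre-level capacities."""
    return {
        "power_kw": sum(r.power_kw for r in rlus),
        "cooling_kw": sum(r.cooling_kw for r in rlus),
        "weight_kg": sum(r.weight_kg for r in rlus),
        "network_ports": sum(r.network_ports for r in rlus),
    }

# Hypothetical floor plan: 40 storage racks and 60 denser server racks.
plan = [RLU(8.0, 8.0, 900.0, 24)] * 40 + [RLU(12.0, 12.0, 1100.0, 48)] * 60
print(total_capacity(plan))
```

Sizing power, cooling, floor loading and cabling from this aggregate, rather than from floor area, is what keeps the design scalable when denser racks arrive.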
“It is an old maxim of mine that when you have excluded the impossible, whatever
remains, however improbable, must be the truth.”
- Sherlock Holmes, by Sir Arthur Conan Doyle
The criteria for a data Centre are the requirements that must be met to provide the system
capacities and availability necessary to run the business. Due to the special
circumstances of each facility, it would be difficult to give a comprehensive list of all
criteria involved in data Centre design. The possibilities are vast, and it isn't the intention
of this project report to give a definitive set of design plans to follow, but rather to guide
you toward your final design by listing and describing the most probable criteria. The goal
of this chapter is to arm you with the knowledge you need to begin the design process.
Project Scope
Most often, it is the project scope that determines the data Centre design. The scope
must be determined based on the company’s data Centre needs (the desired or required
capacities of the system and network infrastructure), as well as the amount of money
available. The scope of the project could be anything from constructing a separate
building in another country with offices and all the necessary utilities, to simply a few
server and storage devices added to an existing data Centre. In either case, those
creating the project specifications should be working closely with those responsible for
the budget.
Budget
Designing a data Centre isn’t just about what the company needs or wants, it’s what
they’re willing to pay for.
Using project scope as a starting point, the criteria for the data Centre can be loosely
determined, and a comparison between how much this will cost and the budget will
determine the viability of the project. Is there too much money or too little? (Okay, in
theory you could get more money for the data Centre than you need, but this rarely
happens.) Then the balancing act begins. If there isn't enough money in the budget to
cover the cost of essential elements, either more money must be allocated, or some
creative modifications must be made to the project scope.
The process for determining a budget, deciding what parts of the data Centre will receive
what portion of it, and putting together a Centre based on designated funds is one of
negotiation, trade-offs, compromises, and creativity. Also, there is probably more than
one budget for the data Centre, and how the money is allocated depends on numerous
factors specific to the company.
Planning a data Centre is part of larger business considerations, and both designers and
those setting the budget must be flexible. Accountants telling the data Centre designers,
“Here’s how much you get. Make a data Centre,” probably won’t work. By the same
token, designers demanding enough money for the ideal data Centre probably won’t
meet with approval by the accountants. When negotiating for funds, the best idea is to
have several alternative plans. Some questions and considerations that must be
examined in the beginning might include:
■ Is there enough money to create an adequate Centre for the company's needs?
■ How much do you actually need to create the Centre?
■ Consider carefully all possible future modifications, upgrades, changes in power needs,
and system additions in the design.
The toughest thing about designing a data Centre is working within the budget. The
budget will force you to make compromises and you must figure out whether or not you
are making the right compromises. You might be able to cut costs by removing the
backup generators from the budget, but you must weigh the risk of such a decision.
There is the possibility that the data Centre power might fail and systems would be out of
action without backup power. Every compromise carries a degree of risk. Do the risks
outweigh the cost? Figuring out how to meet the budget is where your finance people
and risk analysts really come into play. Use their expertise. Here are a few questions you
might work out with your finance and risk team.
■ If cost exceeds budget, can anything be removed or replaced with a less expensive
alternative?
■ Are all redundant systems really necessary?
■ How much will projected failures (downtime) cost compared to initial costs for
redundant systems?
■ Is a separate Command Centre necessary?
■ Can amortization schedules be stretched from, for example, three years to five years so
there is money available for other needs?
■ Can certain areas be expanded or upgraded later?
■ What is the best time to bring the facility online? In India, amortization doesn’t begin
until you occupy the space. Would it be better to take the amortization hit this fiscal year
or the next?
A final point to consider: As with many aspects of data Centre design, the money spent
on planning is invariably money well spent. It costs money to build a data Centre, and
part of that expenditure comes right up front in coming up with a budget. Money spent on
creating an accurate budget can actually save money in the long run.
Location
It would seem that the site you choose for your data Centre would be considered one of
the essential criteria. It’s true that where you choose to locate the data Centre site (region/
building) is important, but this choice is based on many different factors. For example, a
company wants to build a new data Centre near their corporate offices in Noida, Uttar
Pradesh. To meet the project scope on the essential criteria, it is determined that several
crore rupees more are needed just to secure the site location. Suddenly, building in
Noida doesn't seem as critical if a few crore rupees can be saved by locating the building
one hundred and sixty kilometres away, where land prices are much cheaper.
Also, connectivity through the company's network infrastructure has made it possible for
a data Centre to be located wherever it is practical and affordable. A data Centre can
even use multiple locations, if necessary, connecting through the network. In this way,
location is a very flexible and negotiable criterion.
■ Physical capacity. You must have space and weight capacity for equipment, and
therefore, the other three criteria. There must be space for the equipment and the floor
must be able to support the weight. This is a constant.
■ Power. Without power nothing can run. Power is either on or off. Connections to
different parts of the grid and/or utilizing a UPS increases uptime. You must have physical
capacity to have room for power and the equipment that needs power.
■ Cooling. Without cooling nothing will run for long. This is either on or off, though
redundancy increases uptime. You must have physical capacity and power to run
HVACs.
■ Bandwidth. Without connectivity, the data Centre is of little value. The type and amount
of bandwidth is device dependent. You must have physical capacity, power, and cooling
to even consider connectivity.
Unless the data Centre will be used for non-mission-critical operations, the last three
criteria should be designed to be up and running 100 percent of the time.
The use of these elements is non-negotiable, but their values are negotiable. Consider a
decision about power redundancy. A UPS system (batteries that kick in when the power
goes out) is less expensive than creating a power generation plant, but it has a limited run
time. For a mission-critical operation, the 20 minutes of power a UPS might give you
could be insufficient.
Let's say the UPS costs ₹1 crore, and the power generation plant costs ₹3.5 crore. The
track record of the power company shows that they're down an average of 15 minutes
once a year. For your company, a 15-minute power outage equals two hours for the
outage and recovery time. Two hours of downtime costs the company ₹50 lakh. With a
UPS system, there would be no outage, because the 20 minutes afforded by the batteries
would easily cover the 15-minute outage and there would be no recovery time needed.
Therefore, it would take two years to recover the ₹1 crore cost of the UPS,
whereas it would take seven years to recover the cost of the power generation plant. If
the power company has a greater problem with power outages, the generators make
sense. Or relocating to an area with more dependable power might make more sense.
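As a quick sanity check of the trade-off above, the following is a minimal payback calculation in Python using the figures quoted in this section (a ₹1 crore UPS, a ₹3.5 crore generation plant, and one 15-minute outage a year that costs ₹50 lakh including recovery); the helper name is an illustrative assumption, not part of any standard tool.

```python
CRORE = 10_000_000
LAKH = 100_000

def payback_years(capital_cost, avoided_loss_per_year):
    """Years of avoided downtime losses needed to recover the capital outlay."""
    return capital_cost / avoided_loss_per_year

annual_downtime_loss = 50 * LAKH   # one 15-minute outage per year, ~2 hours outage + recovery
ups_cost = 1 * CRORE               # 20-minute battery ride-through
plant_cost = 3.5 * CRORE           # on-site power generation plant

print(f"UPS pays back in   {payback_years(ups_cost, annual_downtime_loss):.1f} years")    # 2.0
print(f"Plant pays back in {payback_years(plant_cost, annual_downtime_loss):.1f} years")  # 7.0
```

The same two-line calculation can be rerun with the local utility's actual outage history to see when the generation plant starts to make sense.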
Secondary Criteria
The essential criteria must be included in the design in whatever values are available.
However, there are invariably other criteria that must be considered, but they are
secondary. The level of importance of secondary criteria is wholly dependent on the
company and project scope. It’s conceivable that the budget could be trimmed, for
example, in fixtures, but it's likely that you'll want to budget in overhead lighting so data
Centre personnel can see what they are doing. Consider the following availability profile for a bank:
■ ATM transactions which are highly utilized (mission-critical) must be available around
the clock. Redundant systems are essential.
■ Security and equities trading must be constantly available during business hours
(mission-critical) and moderately available the remaining parts of the day. Redundant
systems are essential.
■ Home loans are important but some occasional downtime won’t be disastrous.
Redundancy is a good idea, though this is where corners can be cut.
■ The Community Services Web site should be up and running around-the-clock so
people can access the information, but this is a non-critical service and some downtime
won’t hurt. Redundancy is probably not worthwhile.
■ The Community Services email mailers are sent only once a week in the evening and,
though important, it won’t hurt the company if the mailers go out late on occasion. No
redundancy is required.
Risk-assessment analysts are hired to look at each part of the profile to determine the
cost of downtime in each area and help decide the best course of action. They determine
that the servers for ATM transactions and equity trading are mission critical. The cost of
either department going down will cost the bank ₹50 lakh per minute of downtime.
Using the RLU model, the data Centre designer can calculate that these systems require
200kW of electricity. The cost of a 200kW generator is ₹2 crore. The cost of a 20-minute
UPS for 200kW is ₹45 lakh. So, for ₹2.45 crore the bank can provide power to its
configurations. Since all it would take is a 5-minute outage to lose ₹2.5 crore, a
generator and a UPS are considered a viable expenditure.
The servers for the Home Loan portion of the bank require 100kW of power and the risk
analysts determine that an outage to this department will cost ₹50,000 per minute. A
100kW generator would cost ₹1 crore. A 20-minute UPS for 100kW would be
₹30 lakh. The risk analysts also went to the Artaudian Power & Electric Company and got
historical information on power outages in the area during the last five years. This data
shows that they will average 2 outages a year, but the duration of these outages will be
less than ten minutes. Also, the ATM and equity trading groups need a 200kW 20-minute
UPS. This UPS can be upgraded to a 300kW twenty-minute UPS for only ₹15 lakh. At
two 10-minute outages a year, the cost of this UPS upgrade will pay for itself in a year
and a half. This upgrade is deemed viable, but the 100kW generator is not, because it
would take 200 minutes of outages of more than 20 minutes to recoup the expenditure.
The systems that run the Community Services web site and mailers represent no
significant loss of revenue for the bank if they are down for even a few days. It is
determined that no additional cost for increased availability will be approved for these
systems.
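The bank example follows a repeatable pattern: compare the capital cost of a redundancy option against the downtime losses it would avoid over a planning horizon. Below is a hedged sketch of that decision rule in Python, reusing the hypothetical figures from this example; the function name and the default three-year horizon are assumptions made only for illustration.

```python
CRORE = 10_000_000
LAKH = 100_000

def is_viable(capital_cost, loss_per_minute, outage_minutes_avoided_per_year, horizon_years=3.0):
    """True if the downtime losses avoided over the horizon exceed the capital cost."""
    avoided_loss = loss_per_minute * outage_minutes_avoided_per_year * horizon_years
    return avoided_loss >= capital_cost

# ATM / equity trading: ₹50 lakh per minute of downtime; generator + UPS cost ₹2.45 crore.
# A single 5-minute outage already loses ₹2.5 crore, so this clears even a one-year horizon.
print(is_viable(2.45 * CRORE, 50 * LAKH, outage_minutes_avoided_per_year=5, horizon_years=1.0))

# Home loans: ₹50,000 per minute; the ₹1 crore generator only helps for outages longer than
# the 20-minute UPS ride-through, which the utility's history says do not occur.
print(is_viable(1 * CRORE, 50_000, outage_minutes_avoided_per_year=0))
```

Risk analysts supply the loss-per-minute and outage-history inputs; the designer supplies the capital costs for each RLU group.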
The cost of services to increase availability is a continuum. Each step in increasing
availability has a cost. At some point, the cost of the next step might not be worth the
amount of system downtime it avoids. The availability profile of a configuration is
therefore determined by the cost of having that configuration unavailable. It
is not about providing your customers with what they want. They always want it all. It's
about how much money they are willing to spend to get what they want. It's a
cost-effective trade-off.
This chapter describes the most important design decisions that must be made in
planning a data Centre. A few of the topics are described in more detail in later chapters.
This chapter contains the following sections:
■ “Design Process”
■ “Data Centre Structural Layout”
■ “Data Centre Support Systems”
■ “Physical and Logical Security”
■ “System Monitoring”
■ “Remote Systems Management”
■ “Planning for Possible Expansion”
Design Drawings
It should be kept in mind that the design of a data Centre should be structured but fluid,
not only during the design process, but after construction. Computer environments
constantly evolve to accommodate company needs, changes in technology, and the
business landscape. Professional, detailed plans are necessary in the design stages, but
it is important to keep updated working drawings of the data Centre and all support
systems.
Computer Aided Design (CAD) software is typically used. It is more efficient than drawing
by hand, and creates plans that are clearly readable, easily reproduced, and easily
modified. These blueprints allow for the continued updating of architectural, electrical,
mechanical, and computer systems. The drawings can be used in site evaluations and
future planning.
Blueprints are particularly important when the project involves outside contractors. Some
of the primary contractors are:
■ Architectural firms. They might supply actual drawings of the building, showing a wall
here, door there, lobby over there, where carpet will be installed, where concrete will be
used. This represents the physical building.
■ Interior designers. They create the "look" of the place, sometimes matching company
specifications for consistency of styles, from trim to carpet.
■ Structural engineers. They make sure the building will use materials and construction
techniques that will keep the roof from collapsing under the weight of all those cooling
towers.
■ Electrical design firms and engineers. They deal with lighting plans, electrical
distribution, wireways under the floor, breaker subpanels, power transformers, wiring for
the fire detection system, and smoke alarms.
■ HVAC design firms. They determine HVAC unit placement and whether they should be
20-ton or 30-ton, determine proper installation of piping that brings chilled fluids to units,
and where cooling towers, compressors, and heat exchangers will be located.
Some of these tasks, such as electrical and HVAC, might be handled by the same firm. It
could depend on who is available in the area. It is a good idea to employ a project
management firm to coordinate all of these different contractors.
Thanks to the Internet, you can access the drawings electronically (Adobe® PDF format
works well for this). This can reduce the time of the design/review/change process
considerably. The CAD drawings are usually held by the building contractor who helps
coordinate all the other subcontractors. PDFs are good, but, a few times in the cycle, you
will need actual blueprints which are larger in scale than most computer monitors. These
allow you to see very fine details that might be lost in a PDF file. Also, they provide a
place to make notes directly on the drawings for later use.
During the design process, you should also have several dozen pads of Post-It Notes for
temporary comments on the blueprints and to bring certain details to the attention of
others. You should also have a large white board with lots of dry erase markers in a
variety of colors. (Remember to put the caps back on the markers when not in use.)
■ Budget
■ District
■ Insurance and building code
■ Power
■ Cooling
■ Connectivity
■ Site
■ Space
■ Weight
A delicate balancing act must occur between many of the members of the design and
build team to determine the capacities and limitations, and to work with them. With this
knowledge, factors can be juggled to decide how to implement what is available to meet
the project scope. If the limitations are too great, the project scope must change.
Structural Considerations
There are any number of structural issues to consider when designing a data Centre. Here
is a sampling of some actual issues you might face:
■ Building in an area with a subfloor-to-ceiling height of ten feet. By the time you add
two feet for the raised floor, the height is reduced to eight feet. Now add the twelve
inches needed for light fixtures and fire suppression systems, and your space is
reduced to seven feet. The racks that will occupy this space are seven feet tall and
exhaust heat out the top, or rather, they would if there was room. These racks will
overheat very quickly. This is not a realistic space in which to build a data Centre.
■ Building in the basement of a building that overlooks a river. After construction is
complete, you find out that the river overflows its banks every few years and you don't
have any pumps in the basement to get the water out.
■ Building in a space with the restrooms built right in the middle. This really
happened. The space was shaped like a square donut with the rest rooms occupying a
block in the middle. How do you efficiently cool a donut-shaped space? Having toilets
in the middle of your data Centre is not the right way to add humidity to your HVAC
system. If you must live with this type of room shape, you must. But if you have any say
in the matter, look into other locations.
■ Aisles aren’t wide enough for newer or bigger machines. The people who move the
equipment end up ripping massive holes in the walls trying to make the tight turns
required to get from the loading dock to the staging area. Maybe a few dozen light
fixtures along the corridor are taken out as well. Your building maintenance crews will
get very angry when this is done on a weekly basis. Know how much space is needed
to move and turn the racks and design in adequate aisle space. This means anticipating
larger and heavier machines.
■ Not knowing the structural load rating of raised floors and ramps. Imagine this: You
acquire a space with an existing raised floor and ramps. This means a big chunk of the
cost and design process has been taken care of! The day arrives when the storage and
server racks begin moving in. Unfortunately, no one checked into the load rating for the
floor and ramps. While rolling in a heavy rack, a portion of the floor gives way, taking the
rack and several people with it into a big hole. You learn quickly about liability issues.
Know the total weight that will go on the floor and ramps, and make sure existing floors
and ramps meet these specifications.
Raised Floor
A raised floor is an option with very practical benefits. It provides flexibility in electrical
and network cabling, and air conditioning.
A raised floor is not the only solution. Power and network poles can be located on the
floor and air conditioning can be delivered through ducts in the ceiling. Building a data
Centre without a raised floor can address certain requirements in ISP/CoLo locations.
Wire fencing can be installed to create cages that you can rent out. No raised floor allows
these cages to go floor to ceiling and prohibits people from crawling beneath the raised
floor to gain unauthorized access to cages rented by other businesses. Another problem
this eliminates in an ISP/CoLo situation is the loss of cooling to one cage because a cage
closer to the HVAC unit has too many open tiles that are decreasing subfloor pressure.
However, some ISP/CoLo locations have built facilities with raised floor environments,
because the benefits of a raised floor have outweighed the potential problems listed
above.
How aisle space is designed also depends upon airflow requirements and RLUs. When
designing the Centre, remember that the rows of equipment should run parallel to the air
handlers with little or no obstructions to the airflow. This allows for cold air to move to the
machines that need it, and the unobstructed return of heated air back to the air
conditioners.
Be sure to consider adequate aisle space in the initial planning stages. In a
walls-within-walls construction, where the data Centre is sectioned off within a building,
aisle space can get tight, particularly around the perimeter.
Command Centre
Though an optional consideration, for some companies a separate Command Centre
(also called a Command and Control Centre) is useful for controlling access to the
consoles of critical systems. This is just one of the many security devices used in the data
Centre. In disaster recovery scenarios or other critical times, the Command Centre is a
key area. In many corporations where computer technology is at the core of their
business, this Command Centre also serves as a “war room” in times of crisis.
However, with companies moving to geographically distributed work forces, having only
one way to monitor and work on equipment in the data Centre might not be a practical
alternative. Being able to hire from a talent pool on a global scale increases your chances
of getting better people because the pool is larger. This is also useful if you are in an area
prone to bad weather. A person might not be able to get into the Command Centre, but if
the data Centre is remotely accessible and they have power and a phone line, they can
still work.
As more companies move to electronic ways of doing business, Command Centres are
becoming public relations focal points. They can be designed as a glassed in box that
looks into the computer room to give personnel a way to monitor security and
allow visitors a view of the equipment without entering the restricted and environmentally
controlled area. If the data Centre is a key component of the company’s image, the
Command Centre can be designed to look “cool,” an important PR tool. Whether it looks
into the data Centre computer room or not, a modern, high-tech Command Centre room
can make a strong impression on visitors.
■ Locations on the floor that can support the weight of the racks
■ Power to run the racks
■ Cooling to keep the racks from overheating
■ Connectivity to make the devices in the racks available to users
■ Planned redundancies
If any one of these services fails, the system will not run effectively, or at all. These support
systems are how a data Centre supplies its intended services. They are also
interdependent. If you can’t place the server in the data Centre, it won’t run. If you can’t
get enough power to run the server, it won’t run. If you can’t cool the server, it won’t run
for long, a few minutes at best. If you can’t connect the server to the people who need to
use it, what good is it? All of these requirements must be met simultaneously. If one of
them fails, they all might as well fail. Your data Centre can only be as effective as its
weakest support system.
You have to be able to place the servers in the data Centre and, depending on the type of
server, you might need even more space than its physical footprint to cool it. This is the
cooling footprint. Weight is also a major consideration. If you have space for the machine,
but your raised floor can't handle the weight load, it will crash through the raised floor.
The ramps or lift you use to get the machine onto the raised floor must also be able to
handle the weight load of the system.
Power Requirements
It is essential that the data Centre be supplied with a reliable and redundant source of
power. If computers are subjected to frequent power interruptions and fluctuations, the
components will experience a higher failure rate than they would with stable power
sources. To assure that power is up constantly, multiple utility feeds, preferably from
different substations or power utility grids, should be used. Also, the data Centre should
have dedicated power distribution panels. Isolating the data Centre power from other
power in the building protects the data Centre and avoids power risks outside your
control.
Placement of the HVAC (air conditioning) units is highly dependent on the size and shape
of the data Centre room, as well as the availability of connections to support systems.
The primary concern in placement is for optimal effectiveness in dealing with the planned
load.
Airflow must be considered in the layout of the HVAC systems as well. Reducing
obstructions under the floor will provide the best airflow to the areas where the air is
needed. Airflow is also governed by under-floor pressure, so the placement and
distribution of solid and perforated tiles on the raised floor should be carefully considered.
You must maintain higher air pressure under the floor than in the data Centre space above
the floor.
Network Cabling
Network cabling is essential to a data Centre. It must supply not only TCP/IP connectivity,
but connectivity to Storage Area Networks (SAN) as well. Storage systems are becoming
increasingly “network aware” devices. Whether this has to do with managing storage
through TCP/IP networks or with using these devices on SANs, the requirements of the
network cabling must be flexible and scalable.
Most of these requirements can be met using Cat5 copper and multi-mode fibre.
However, some single-mode fibre might also be needed to support WAN requirements.
Understanding what equipment will go where and knowing the cabling requirements of
each piece of equipment is integral to building data Centres. Of all of these support
systems, upgrading or adding more network cabling inside the data Centre is the least
intrusive support system upgrade.
Planned Redundancies
It is important to consider all of the possible resources that will be needed for
redundancy. Particularly, consider redundancy for power and environmental support
equipment. Redundant systems allow for uninterrupted operation of the Centre during
electrical and HVAC upgrades or replacements. A new HVAC unit can be run
simultaneously with the hardware it is replacing rather than swapping the two.
Redundancy assures that power and environmental controls are available in the event of
power or equipment failures.
Plan for at least the minimal amount of redundancy, but also plan for future redundancy
based on projected growth and changes within the Centre. Will the focus of the Centre change over time?
It is important that the intentions for redundancy be maintained as the demands of the
data Centre change and grow. Extra floor space or support systems that were planned for
redundancy should not necessarily be used for expansion if this strategy means
increasing the chances of downtime due to failures. Make sure the blueprints clearly
indicate the intended purpose of the space and systems.
The biggest problem with allocating less redundancy to create more capacity is in the
area of sub-panel and circuit breaker space. You should allocate space for at least one
additional sub-panel and breakers in the mechanical room for each megawatt of power
you have in the data Centre.
Also, consider redundancy for UPS units and emergency power generators. These are
large expenditures, and twice as large if they are fully redundant, but in a mission-critical
data Centre where even one minute of downtime can cost crores of rupees, they could be
a prudent investment. Use the resources of your risk analysts to determine the
cost-effectiveness of these redundant systems.
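The trade-off can be framed as a simple expected-cost comparison. The following Python sketch is only illustrative: the capital cost, avoided outage minutes, and cost-per-minute figures are hypothetical placeholders, and a real decision would come from your risk analysts' own models.

# Hypothetical sketch: is redundant UPS/generator capacity worth its cost?
# All figures are illustrative placeholders, not vendor quotes or site data.

def expected_downtime_cost(outage_minutes_per_year: float,
                           cost_per_minute: float,
                           years: int) -> float:
    """Expected cost of downtime over the evaluation period."""
    return outage_minutes_per_year * cost_per_minute * years

def redundancy_justified(capex_redundant: float,
                         avoided_outage_minutes_per_year: float,
                         cost_per_minute: float,
                         years: int = 10) -> bool:
    """True if the downtime avoided outweighs the extra capital expenditure."""
    return expected_downtime_cost(avoided_outage_minutes_per_year,
                                  cost_per_minute, years) > capex_redundant

if __name__ == "__main__":
    # Assume downtime costs Rs. 1 crore per minute and full redundancy avoids
    # 30 minutes of outage per year; the redundant plant costs Rs. 50 crore.
    print(redundancy_justified(capex_redundant=500_000_000,
                               avoided_outage_minutes_per_year=30,
                               cost_per_minute=10_000_000))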
Two types of security must be addressed in the data Centre design. It is important to limit
access of unauthorized people into the data Centre proper, and to prevent unauthorized
access to the network.
All points of access should be controlled by checkpoints, and coded card readers or
cipher locks. Figure 3-3 shows these two restricted access features for entry into secure
areas.
For added security, cameras can be installed at entry points to be monitored by security
personnel.
The ability to access the physical console of a system over a network has many
advantages, including:
■ The ability to administer machines in a different region, even a different country
■ The ability to work remotely, from home, a hotel, or even a conference.
However, this also means that anyone on the network could gain unauthorized access to
the physical console, so console access must be restricted and controlled just as carefully as physical access.
System Monitoring
Monitoring system status, health, and load is a useful tool for understanding how each
system is working, by itself and in relationship to other connected systems. Whatever
software you use for monitoring should conform to industry standard interfaces like
Simple Network Management Protocol (SNMP). Even HVAC systems and UPS systems can
be connected to the network and run SNMP agents to give useful information on the
health of the data Centre and support systems.
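As a rough illustration, support equipment that exposes an SNMP agent can be polled with standard tooling. The Python sketch below simply shells out to the net-snmp snmpget utility; the hostname, community string, and OID are placeholder assumptions (the OID shown is the standard UPS-MIB battery-charge object), and your equipment's documentation or MIB files will give the real values.

# Minimal sketch: poll a UPS or HVAC unit over SNMP via the net-snmp
# "snmpget" command-line tool. Host, community string, and OID below are
# placeholders -- consult the device's MIB for the objects it actually exposes.
import subprocess

def snmp_get(host: str, community: str, oid: str) -> str:
    """Return the value of a single OID using SNMP v2c."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Hypothetical example: estimated battery charge remaining from a UPS
    # that implements the standard UPS-MIB (RFC 1628).
    print(snmp_get("ups-01.example.net", "public",
                   "1.3.6.1.2.1.33.1.2.4.0"))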
Most data Centres have been able to continue within the same area without having to
take up more real estate. However, power and cooling requirements increase. Even if
you have the physical space to expand, you might not be able to accommodate the
additional power or cooling requirements of expansion. Also, sometimes a direct addition
to an operational data Centre is an even tougher design and construction challenge
than building a new facility. What is more likely is that a future expansion would be treated
as a separate space from the existing data Centre, and you can use the networking
infrastructure of the existing data Centre to “link up” the expansion data Centre with the
existing one.
Using RLUs to determine data Centre capacities is the best method for planning for future
expansion. RLUs will give you the tools to define your space, structural needs, in-feeds
(including power and cooling), etc. and therefore give you a clear picture of remaining
capacities.
Designing a data Centre involves many different variables that include the housing
structure, all of the utility and network feeds necessary to keep the Centre
operational, and the storage and processing power of the hardware. Balancing all of
these variables to design a data Centre that meets the project scope and keeps the
Centre in constant operation can easily become a hit or miss operation if not carefully
planned. Using older methods, such as basing power and cooling needs on square
footage, gives inadequate and incomplete results. A newer method looks more closely at
room and equipment capacities using rack location units (RLUs) to plan the data Centre.
The design of the data Centre is dependent on the balance of two sets of capacities:
■ Data Centre capacities: Power, cooling, physical space, weight load, bandwidth (or
connectivity), and functional capacities
■ Equipment capacities: The various devices (typically equipment in racks) that could
populate the data Centre in various numbers
Depending on the chosen site of the data Centre, one of these sets of capacities will
usually determine the other. For example, if the project scope includes a preferred amount
of equipment capacity for the data Centre, the knowledge of the equipment requirements
can be used to determine the size of the Centre, the amount of power and cooling
needed, the weight load rating of the raised floor, and the cabling needed for connectivity
to the network. In other words, the equipment will determine the necessary data Centre
capacities. On the other hand, if the data Centre will be built in a pre-existing space, and
this space has limitations for square footage, power, etc., this will determine the
supportable equipment capacities. In other words, the data Centre size and in-feeds will
determine how much equipment you can put in the data Centre.
A new method for designing a data Centre based on these capacities uses a calculating
system called RLUs. The actual process of defining RLUs to determine the capacities of a
data Centre boils down to careful planning. RLUs will assist you in turning the critical
design variables of the data Centre into absolutes. The idea is to make sure the needs of
each rack are met as efficiently as possible. RLUs tell you the limits of device
requirements and, therefore, the limits of the data Centre itself. Knowing these limits, no
matter how great or small, gives you complete control over the design elements.
The job of planning the data Centre is one of balancing. You will add equipment, modify
the in-feeds based on the equipment, find the limits to the feeds, reevaluate the
equipment population or configuration, find that the budget has changed, then reevaluate
equipment and resources.
The Rack Location Unit (RLU) system is a completely flexible and scalable system that
can be used to determine the equipment needs for a data Centre of any size, whether 100
or 100,000,000 square feet. The system can be used whether you are designing a data
Centre that will be built to suit, or using a predefined space. The RLU determinations are
a task of the design process and can determine whether or not the space is adequate to
fulfill the company requirements. Regardless of limiting factors, RLUs allow you the
flexibility to design within them.
In a data Centre, most devices are installed in racks. A rack is set up in a specific location
on the data Centre floor, and services such as power, cooling, bandwidth, etc., must be
delivered to this location. This location on the floor where services are delivered for each
rack is generally called a “rack location.” We also use the information on these services
as a way to calculate some or all of the total services needed for the data Centre.
Services delivered to any rack location on the floor are a unit of measure, just like kilos,
meters, or watts. This is how the term “rack location units” was born.
RLUs are defined by the data Centre designer based on very specific device
requirements. These requirements are the specifications that come from the equipment
manufacturers. These requirements are:
■ How much power, cooling, bandwidth, physical space, and floor load support is needed
for the racks, alone, in groups, and in combination with other racks
■ How many racks and of what configurations the data Centre and outside utilities can
support
Unlike other methods, the RLU system works in both directions: determining necessary
resources to accommodate and feed the equipment, and assisting changes in the
quantities and configurations of the equipment to accept any limitation of resources.
In the past, there were mainframes. There was usually only one of them for a company or
a data Centre. The mainframe had a set of criteria: How much power it needed, how
much heat it would give off per hour, how large it was, and how much it weighed. These
criteria were non-negotiable. If you satisfied these criteria, the machine would run. If you
didn’t, it wouldn't run. You had one machine and you had to build a physical environment
it could live in.
Fast forward to the 21st century. Computers have become a lot faster and a lot smaller.
The data Centre that used to house just one machine now holds tens, hundreds, perhaps
thousands of machines. But there is something that hasn't changed. Each of these
machines still has the same set of criteria: power, cooling, physical space, and weight.
So, now you have different types of servers, storage arrays, and network equipment,
typically contained in racks.
How can you determine the criteria for all the different devices from the different vendors?
Also, whether you are building a new data Centre or retrofitting an existing one, there are
likely to be some limits on one or more of the criteria. For example, you might only be
able to get one hundred fifty 30 Amp circuits of power. Or you might only be able to cool
400,000 BTUs per hour. This is an annoying and frequent problem. Creating RLU
definitions will give you numbers to add up to help you decide how many racks you can
support with these limitations.
Until recently, data Centres were populated with equipment based on using a certain
wattage per square foot which yielded an amount of power available to the equipment.
This could also be used to roughly determine the HVAC tonnage needed to cool the
equipment. Unfortunately, using square footage for these decisions assumes power and
cooling loads are equal across the entire room and does not take the other requirements
of the racks, or the number of racks, into consideration. This worked when a single
machine such as a mainframe was involved. The modern data Centre generally uses
multiple machines and often these are different types of devices with different
specifications. There are also different densities of equipment within the different areas of
the data Centre.
Power
The amount of power, number of breakers, and how the Centre is wired are all dependent
on the needs of the equipment planned to occupy the floor space. When you know the
power specifications and requirements of all the devices, you can do the math and begin
designing the power system.
The power draw of a rack is best described in watts. This information should be part of the
manufacturer’s specifications. However, if the specifications don’t tell you how many
watts the device will draw, you can calculate this from the BTUs-per-hour rating of the
rack.
You will also need to know if the rack has redundant power. If so, all watt usage
requirements must be multiplied by this value. If the rack has no redundant power, the
multiplier is one; if it does have redundant power, the multiplier is two. In an RLU
specification, this multiplier is referenced as RM (redundancy multiplier).
Power can be difficult to retrofit, so you should plan carefully for future power needs and
install conduit and wiring adequate for future power upgrades.
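To make the arithmetic concrete, the following Python sketch applies the redundancy multiplier and derives watts from a BTUs-per-hour rating where the specification omits the watt draw. The figures in the example are placeholders; real values come from the manufacturer's specifications.

# Illustrative sketch of the RLU power arithmetic described above.

def watts_from_btu(btu_per_hour: float) -> float:
    """Derive the watt draw from a BTUs-per-hour rating (1 W ~ 3.42 BTU/hr)."""
    return btu_per_hour / 3.42

def rack_feed_watts(watts_per_rack: float, redundant_power: bool) -> float:
    """Apply the redundancy multiplier (RM): 1 without, 2 with redundant power."""
    rm = 2 if redundant_power else 1
    return watts_per_rack * rm

if __name__ == "__main__":
    # Hypothetical rack rated at 17,100 BTU/hr, fitted with redundant power feeds.
    draw = watts_from_btu(17_100)                        # ~5,000 W
    print(rack_feed_watts(draw, redundant_power=True))   # ~10,000 W of feed capacity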
Cooling
A rack of devices produces heat and requires a specific amount of cooling to keep it
running. The HVAC requirements should be carefully planned, because retrofitting the
HVAC system is no easy task.
Cooling requirements are specified as BTUs per hour. This should be part of the
manufacturer’s specifications. If it is not, you can calculate it from the amount of watts the
machine uses.
Watts ⋅ 3.42 = BTUs per hour
At minimum, either the BTUs-per-hour or the watt-usage figure must be available from the
equipment manufacturer. The requirement is to deliver enough conditioned air to the rack to meet the
BTUs per hour requirement. For example, if you have a rack that has a cooling
requirement of 10,000 BTUs per hour, and the HVAC system is only able to deliver
conditioned air to this rack location at 90 percent efficiency, then it must deliver roughly 11,111
BTUs per hour into the plenum to compensate for this inefficiency. Work with your HVAC
contractor to ensure this.
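The conversion and the efficiency adjustment can be captured in a couple of lines. The Python sketch below uses the 3.42 factor and the 90 percent delivery efficiency from the example above; both the rack requirement and the efficiency are illustrative inputs.

# Sketch of the cooling arithmetic: convert watts to BTUs per hour and gross
# up for HVAC delivery efficiency. Values are illustrative.

def btu_per_hour_from_watts(watts: float) -> float:
    return watts * 3.42

def required_delivered_btu(rack_btu_per_hour: float, efficiency: float) -> float:
    """BTUs/hr the HVAC must push into the plenum to net the rack requirement."""
    return rack_btu_per_hour / efficiency

if __name__ == "__main__":
    requirement = 10_000                              # rack cooling need, BTU/hr
    print(required_delivered_btu(requirement, 0.90))  # ~11,111 BTU/hr at 90% efficiency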
The amount of area (square footage) needed on the floor for each rack must take not only
the actual dimensions of the rack into consideration, but also its cooling dimensions. This
is the area outside the rack used to draw air to cool the internal components and exhaust
this heated air out of the rack and back to the return plenum. While newer Sun racks are
usually cooled front-to-back (an efficient use of space because racks can be placed side-
by-side), older Sun racks and racks from other manufacturers might draw or expel air at
the sides. The dimensions you use in determining RLUs should include this cooling area.
These dimensions also indicate the minimum areas that should be left unobstructed by
other equipment to allow for the free flowing of air. Check with the manufacturer for the
actual cooling dimension specifications.
The cooling space required outside the rack can often be used as aisles and free space.
In a front-to-back configuration, the cooling area would be part of the 40 to 50 percent of
the total square footage needed for free space.
Bandwidth
The primary concern with bandwidth (connectivity) is the network and storage cabling
within the data Centre. This is usually done with Category 5 (Cat5 - copper) cables and/or
multi-mode fibre cables. When determining the bandwidth part of the RLU, the concern
will primarily be whether or not there are enough connections for the rack to interface with
other devices.
To effectively plan connectivity outside the data Centre, your ISP service bandwidth
should meet or exceed the total capacity of your data Centre’s inbound and outbound
bandwidth specifications. The cost of bandwidth goes down over time, so it might not be
worth over-provisioning. Putting in the best quality and sufficient quantities of cables for
networking and storage up front is recommended, but it might be more cost-effective to
buy switches and ports as you need them.
Bandwidth within the data Centre is the easiest to retrofit. If you must cut costs in the
design stages, cut internal cabling first. You can always add it later as budget allows.
Cabling to the outside ISP should be done correctly in the beginning because changing
this cable is costly (sometimes involving ripping up walls, floors, digging trenches, etc.).
Each distinct rack has a specified weight. This weight is generally the same for all racks of
the same manufacturer and model, but could change due to additions or subtractions to
the configuration. The exact weight, or the potential weight of the rack, should be used in
the calculations to ensure a floor that can handle the load. There are a few different floor
load capacities to consider:
■ Total floor load. The weight the entire raised floor structure and subfloor can support.
This is particularly important if the subfloor is built on an upper story floor rather than
solid ground. Also, the raised floor structure must be chosen with a rating exceeding
current and future weight demands.
■ Total tile load. The weight a single tile of a specific type can support. There are “solid,”
“perforated,” and “grated” tiles. The amount of load that can be handled by these types
of tiles can vary widely from one manufacturer to the next, and from one type of tile to
the next. Material type and amount of perf are key factors in support strength. Using a
typical filled raised floor tile, a 15 percent pass-through tile (meaning that 15 percent of
the area of the tile is open space) will be able to handle a higher total load than a 25
percent pass-through tile because less material has been removed. However, cast
aluminum tiles can support the same total tile load, sometimes referred to as
concentrated load, whether the tile is solid, perforated, or grated. Grated tiles can have
up to a 55 percent pass-through.
■ Point load of tile. The point load of a tile of a specific type. A tile should be chosen that
will support the worst case point load of all the racks in the room. This is generally a
quarter of the weight of the heaviest rack, but the point load should be multiplied by
two, and should not exceed the total tile load. It would be rare to have more than two
casters from the same rack or a single caster from two racks on a single tile.
Load capacity is probably the most difficult of the criteria to retrofit later. Imagine trying to
keep the data Centre up and running while replacing a raised floor.
Budget is often a major factor in determining what type of raised floor you install. In some
data Centre applications, using the same raised floor throughout makes sense. However,
there are areas, such as high value storage areas, electrical rooms, or areas with lighter
equipment, that might not need such high floor load capacities. For example, the Sun
Netra™ X1 server weighs 6 kg or 13.2 lbs. A single rack with 30 Netra X1s would weigh
less than 500 lbs, and that’s assuming the rack itself weighs 100 lbs. A Sun Fire™ 6800
server weighs 1000 lbs. And the Sun Fire 15K server tips the scales at 2200 lbs (yep,
that’s about one metric ton!). Now, if you know that you’ll have areas with smaller floor loads,
you can use a lower rated floor in that area and save some money on the budget.
However, you have designed in a restriction so that equipment in that area cannot exceed
a specific weight.
If you decide to split up the weight load of your data Centre floor, you must also consider
the pathway to the higher load area. The heavier rated floor should be the one closer to
the entry point. It’s poor planning to construct a higher rated floor on the far side of your
data Centre, and a lower rated floor between that floor and the access point, because
equipment must be transported over this space.
Physical Space
There are essentially three aspects of physical space to consider when determining the
area requirements for a rack: the physical footprint of the rack itself, its cooling footprint,
and the share of free space (aisles and clearances) attributed to it.
Functional Capacity
Functional capacity is required only to determine the quantity and type of RLUs you will
need to meet the project scope. For example, a Sun StorEdge™ T3 array might contain 36
gigabyte or 73 gigabyte drives. A fully configured rack of Sun StorEdge T3 arrays with 36
gigabyte drives has a functional capacity of 2.5 terabytes. A fully configured rack of Sun
StorEdge T3 arrays with 73 gigabyte drives has a 5.2 terabyte functional capacity. So, if
your project scope specifies 100 terabytes of storage, you would need only 20 racks of Sun
StorEdge T3 arrays with 73 gigabyte drives. Forty racks would be needed if 36 gigabyte drives
are used.
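The rack-count arithmetic is a straightforward ceiling division. The Python sketch below mirrors the example above; the per-rack capacities are the illustrative figures from the text.

# Sketch of the functional-capacity calculation: racks needed to meet a
# project-scope storage requirement. Per-rack capacities are illustrative.
import math

def racks_needed(required_tb: float, tb_per_rack: float) -> int:
    return math.ceil(required_tb / tb_per_rack)

if __name__ == "__main__":
    print(racks_needed(100, 5.2))  # racks with 73 GB drives -> 20
    print(racks_needed(100, 2.5))  # racks with 36 GB drives -> 40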
The RLU tells you exactly what criteria need to be met for a rack of equipment to run. It
doesn't matter what the empty space (places where machines do not live, aisles,
pathways between aisles, door entries, etc.) has as criteria (it could be 90 degrees by the
ramp). It also indicates where the physical attributes such as power outlets, cooling air,
fibre connection terminations, etc., need to be located. They need to be located wherever
the RLU will be located in the data Centre.
To determine the bandwidth requirements for any RLU, you need to look at how the racks
will be connected.
An individual Sun StorEdge A5200 array has up to four fibre connections. You can fit six
Sun StorEdge A5200 arrays in a rack. If your environment only requires you to use two of
these four connections, then 2x6 will give you the correct count. However, if you use all
four, the number will be 24. In the case of the Sun Fire 6800 server (RLU-C), the four Cat5
copper connections are necessary for these servers to be connected to two 1000BaseT
production networks, one administrative network, and one connection to the system
processor.
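Counting connections per rack location is simple multiplication, but it is worth writing down explicitly for each RLU. The Python sketch below follows the example above; the device and connection counts are the illustrative ones from the text.

# Sketch of the bandwidth count for a rack location: connections used per
# device times devices per rack.

def rack_connections(devices_per_rack: int, connections_per_device: int) -> int:
    return devices_per_rack * connections_per_device

if __name__ == "__main__":
    print(rack_connections(6, 2))   # 6 arrays using 2 fibre links each -> 12
    print(rack_connections(6, 4))   # 6 arrays using all 4 fibre links -> 24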
Now you have three RLU definitions: RLU-A, RLU-B, and RLU-C. If you have 30 different
racks (all having differing specifications), you would have 30 separate RLUs. This is good,
and each type of rack (having different specifications) should have its own RLU
designation.
Note – In this example, the definition names are alphabetical, but that only gives 26
possibilities (52 if using both upper and lower case). You can design your own
alphanumeric designations. Whatever you choose, keep the designations short.
Notice that the definitions for RLU-A and RLU-B are similar. Power outlets are the same
and watt usage is near identical. Cooling is a difference of only 1020 BTUs per hour.
Physical space is the same. Weight difference is less than 100 kg. The biggest differences
are bandwidth (and that is four fibre connections), and functional capacity at 0.5 terabyte.
Therefore, by taking the worst case for each of the criteria you can create a superset RLU
definition that will meet the requirements of RLU-A and RLU-B. (Keep in mind that a
superset definition can combine as many racks as is practical.) For now, let us call this
example RLU Superset-A.
Note – Using the “superset” name indicates that an RLU type is made up of the
specifications of two or more racks. It is also a good idea to keep a list of the separate
RLUs in each superset.
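Building a superset RLU is a per-criterion worst case (maximum) across the racks it combines. The Python sketch below shows the idea; the criterion names and sample numbers are illustrative placeholders, not actual Sun specifications.

# Sketch of deriving a "superset" RLU by taking the worst case of each
# criterion across two or more rack definitions. Sample figures are invented.

CRITERIA = ("watts", "btu_per_hour", "weight_kg",
            "fibre_connections", "copper_connections")

def superset_rlu(*rlus: dict) -> dict:
    """Worst-case (maximum) of every criterion across the given RLU definitions."""
    return {c: max(rlu[c] for rlu in rlus) for c in CRITERIA}

if __name__ == "__main__":
    rlu_a = dict(watts=4800, btu_per_hour=16400, weight_kg=700,
                 fibre_connections=12, copper_connections=2)
    rlu_b = dict(watts=4900, btu_per_hour=17420, weight_kg=780,
                 fibre_connections=16, copper_connections=2)
    print(superset_rlu(rlu_a, rlu_b))   # RLU Superset-A requirements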
Assume a decision is made to install 60 RLU-A racks and 20 RLU-B racks in your data
Centre. By building 80 RLU Superset-A locations in the data Centre you can support the
full planned population of both rack types.
You now know exactly what you need (power, cooling, etc.) and where you need it for
each rack going into the Centre. Using superset RLUs gives you flexibility in the design if
you need to modify the number of racks later, with no need to retrofit.
There is another benefit: Often most data Centres are not at full capacity when they are
built. By having pre-defined and pre-built RLU locations of given types, you can more
easily track the RLU locations that are not in use. As you need to bring new racks online
you know exactly how many you can install and where.
In-feed capacities are the grand totals of the power, cooling, physical space, weight, and
bandwidth you will need to support a given number of racks. Let’s say you plan to build a
data Centre with 40 Sun Fire 6800 servers (RLU-C). Each Sun Fire 6800 server will have
four Sun StorEdge racks (RLU Superset-A) connected to it. That's 40 RLU-Cs and 160
RLU Superset-As.
Only 40 to 60 percent of the floor space in a data Centre should be used to house
machines, as the rest of the space is needed for aisles, row breaks, ramps, etc. Open
space is also needed to allow cold air from the floor plenum to come up through
perforated tiles to the racks, and for exhaust air to move freely out of the rack and into the
HVAC return plenum.
So, multiply the total equipment square footage by the reciprocal of the usage fraction to get
the total square footage needed for the room. At 40 percent usage, the multiplier is 2.5:
Total physical space = 1,640 sq ft
Usage multiplier = 2.5
Total room space = 4,100 sq ft
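Rolling RLU quantities up into totals works the same way for every criterion. The Python sketch below tallies square footage and watts for the example population; the per-RLU figures are placeholders chosen only so the totals match the example above, not actual equipment specifications.

# Sketch of summing per-RLU figures into in-feed totals and room size.
# Per-RLU values are illustrative placeholders.

def total(quantities: dict, per_rlu: dict, key: str) -> float:
    """Sum one criterion (e.g. 'sq_ft' or 'watts') across all RLU types."""
    return sum(qty * per_rlu[name][key] for name, qty in quantities.items())

if __name__ == "__main__":
    per_rlu = {"RLU-C":          dict(sq_ft=17, watts=9000),
               "RLU Superset-A": dict(sq_ft=6,  watts=5000)}
    quantities = {"RLU-C": 40, "RLU Superset-A": 160}

    equipment_sq_ft = total(quantities, per_rlu, "sq_ft")   # 1,640 sq ft
    room_sq_ft = equipment_sq_ft * 2.5                      # 4,100 sq ft at 40% usage
    print(equipment_sq_ft, room_sq_ft, total(quantities, per_rlu, "watts"))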
The following describes a possible procedure for planning the equipment set-up and
utility feeds for the data Centre.
Based on the project scope (including budget), and working with your capacity planning
information, determine what equipment will be connected into the data Centre. Using
your RLUs and capacity planning information, you now have a basis for determining the
number of racks needed, as well as their space and utility requirements.
Knowing how many RLUs you must accommodate, figure out the following requirements
and specifications:
■ Power (number of outlets/type/watts/amps)
■ Cooling (number of tons of HVAC)
■ Space (square footage needed - see the “Cooling” section)
■ Bandwidth (number of copper, number of fibre connections)
■ Weight of racks
For example, 25 RLU-X racks require a total of 1.2 megawatts of power and only 900
kilowatts are available. To solve this problem, the designer must weigh the limiting factors
and decide how to respond. Possible limiting factors are insufficient power, bandwidth,
space, vertical height, and budget. The questions to ask are:
■ Can the problem be corrected? If so, how much will it cost?
■ Can the scope of the project be modified to accommodate the limitation?
■ If the scope cannot be changed and the limiting factors are not corrected, should the
project be abandoned?
This can be done in many different ways, depending on personal preference. You’ll want
to visualize where and how many racks you can fit in the space. One way to do this is to
get a large plan view of the data Centre space, usually from a blueprint. Then cut out
pieces of paper about the size of your racks and start placing them.
You can also draw directly on the blueprint with colored pens and draw in boxes that
represent the racks you need to place, but you’ll need several blank copies of the
blueprint as you make changes to it. Using a plastic overlay with a grease pencil will make
it easier to make corrections.
When housing a data Centre in an existing building, several design issues must be
considered to choose the best location. Careful planning is essential to assure that a
location will meet not only immediate needs, but future needs as well. In the event that a
building or area must be built to house the data Centre, there are even more
considerations. The build-to-suit option typically offers more flexibility than utilizing an
existing area, but careful planning is still essential. Looking ahead and planning the site
and layout with forethought can save tremendous amounts of time, money, and
aggravation. Poor planning often means costly upgrading, retrofitting, or relocating.
Aside from budget, there are several factors, many of which are described below, that
should be considered when determining the location of a building site. Consider all of the
possible problems with the area. Then, decide which of the problems are necessary evils
that must be tolerated, which can be remedied, and which will involve building or
retrofitting in such a way as to factor them out.
Potential problems in the geographic location might not be obvious. Resource availability
and potential problems, whether natural or man-made, are critical issues and uncovering
them requires careful research.
Natural Hazards
The most obvious of potential natural hazards are flooding, tornados, hurricanes, and
seismic disruptions such as earthquakes and volcanic activity. If you must locate the data
Centre in an area with a history of these phenomena, make sure you retrofit or build with
these hazards in mind. Obviously, a determination must be made whether or not it is
financially worthwhile to locate the Centre in an area with potential hazards. If the site can
be set up in such a way that nullifies the problem (for example, in the case of
earthquakes, using seismic restraints on the equipment), then it might be worthwhile.
Flooding
Consider whether or not the site is at the bottom of a hill that would catch rain or snow
melt. Is the site on a flood plain? Is it near a river that might overflow? Is the site in the
basement area of an existing location? While you are at it, you might as well consider
tsunamis.
Seismic Activity
Anything that shakes the building is bad for equipment. Is the potential site in an area that
has frequent earthquakes, volcanic activity, or gigantic prehistoric lizards stomping
about? What is the seismic history of the area? How often and how severe is the activity?
What precautions can be used against the vibration and possible structural damage that
can be caused by tremors?
High Winds
This might be a concern if you are locating the data Centre in any of the higher floors of a
tall building. Unless the building is built to resist movement in high winds, you should
reconsider the location.
Temperature Extremes
It is important that data Centre equipment stay within a specific operational temperature
range. In areas with extreme levels of heat or cold, it might be necessary to have more
HVAC and insulation. In these areas, humidification is also a problem, and larger
humidification units might be necessary. Larger HVAC systems might be worth the cost.
Fire
Though arson is a concern, fires can also occur naturally or accidentally. Consider the
history of local fire hazards. Is the site near a wooded or grassy area? Are there lightning
storms? Is the building fireproof or fire resistant? Can the building be designed or
retrofitted to be fireproof? Can the Centre be located well away from any facilities where
chemicals might create a combustion problem?
Man-Made Hazards
Nature isn’t the only culprit in compromising the integrity of data Centres. There are also
many hazards created by man to disrupt your hard work. Some of them are described in
the following sections.
Industrial Pollution
If possible, avoid locating the facility near major sources of industrial pollution. Look
carefully at neighboring facilities such as:
■ Factories
■ Manufacturing facilities
■ Sewage treatment plants
■ Farms
If chemicals associated with these facilities migrate into the controlled areas of the data
Centre, they can seriously impact not only the hardware, but the health of personnel. The
chemicals used in the field treatment of agricultural areas can also pose a threat to people
and machines. Though a natural problem, also consider sand and dust that might be
blown into the Centre.
If you must locate in an area with these potential problems, consider this in your design
plans for the Centre. Make sure you use a filtration system robust enough to filter out any
local contaminants.
Electromagnetic Interference
Be aware of any surrounding facilities that might be sources of electromagnetic
interference (EMI) or radio frequency interference (RFI). Telecommunications signal
facilities, airports, electrical railways, and other similar facilities often emit high levels of
EMI or RFI that might interfere with your computer hardware and networks.
If you must locate in an area with sources of EMI or RFI, you might need to factor
shielding of the Centre into your plans.
Vibration
Aside from natural vibration problems caused by the planet, there are man-made
rumblings to consider. Airports, railways, highways, tunnels, mining operations, quarries,
and certain types of industrial plants can generate constant or intermittent vibrations that
could disrupt data Centre operations. Inside the Centre, such vibrations could cause
disruption to data Centre hardware, and outside the Centre, they could cause disruption
of utilities.
If constant vibration is a problem in the area, you should weigh the possibility of
equipment damage over the long term. In the case of occasional tremors, you might
consider seismic stabilizers or bracing kits which primarily keep the racks from tipping
over.
FIGURE 5-1 Data Centre Before the Walls, Raised Floor, and Equipment Are Installed
Security
Not all businesses have a need for high-level security, but most businesses must make
sure their data Centres are secure from vandalism, industrial espionage, and sabotage.
Make sure the potential area is situated so that access can be controlled. In a pre-existing
building, check for problem areas like ventilators, windows, and doorways that lead
directly outside or into an uncontrolled area. Could these openings be a breach to
security? Can they be blocked or can access be controlled in another way? Can motion
detectors and alarm systems be placed to increase security?
Access
Aside from security access considerations, the potential site for the data Centre should
be set up for the loading and unloading of large items such as HVAC units and computer
racks. In the case where the data Centre is not in direct proximity to a loading dock, there
must be a way to get bulky equipment to the site. It might also be necessary for small
vehicles like forklifts and pallet jacks to have access.
Raised Flooring
If the data Centre will have a raised floor, look at the space with some idea of what will be
placed beneath it. Consider the following:
■ How high can the floor be raised?
■ Consider the amount of open plenum necessary to channel air for cooling. Too little
space will cool inadequately, too much space will cool inefficiently.
■ Are there structural items in place that might obstruct the free flow of air below the
floor?
■ How will wiring, cabling, and outlets be run?
■ Is a raised floor a viable option for the available space?
■ With the reduced space between floor and ceiling, is there enough space to get heated
air from equipment back to the returns of the HVAC units?
Risk of Leaks
Liquids pose another serious hazard to data Centre equipment. Despite precautions,
water pipes and water mains can leak or burst. If you plan to locate the data Centre at a
pre-existing site, make sure you know where all water pipes, valves, pumps, and
containments are located. If pipes with flowing liquids are running through the ceiling, you
might want to consider a different site. Also, will the data Centre be under floors occupied
by other tenants who might have facilities with the potential of creating leaks?
If you must locate the Centre where there is a risk of leaks, make sure you design in a way
to move water out of the room. Consider troughs under the pipes that are adequate to
handle the water from a pipe failure and will carry the water out of the room without
overflowing. Also make sure there is an emergency water shut-off valve readily accessible
in the event of a pipe failure.
Environmental Controls
The type of air conditioning system chosen for the data Centre, and the location of the
units, might determine the viability of a location. Chilled water units must be connected to
chillers located in the building or an adjoining support facility, and might require cooling
towers. Due to noise and structural issues, chillers are usually located in a basement,
separate wing of the building, on the roof, in a parking lot, or in a separate fenced-in area.
Direct expansion air conditioners require condenser units located outside the building.
Also, the roof or outside pads should be structurally adequate to support the condensers.
Anticipating future expansion needs can be a challenge since it is difficult to predict future
trends in equipment. As technology advances, it tends to make hardware more space-
efficient (though hungrier for power and cooling). Over time, you might fit more
equipment into less space, avoiding the need for more floor space (though it might
necessitate more power and HVAC capacity, which would need floor space). Also,
networking allows for expansion in a different place inside the building or in a nearby
building. Another separate data Centre can be built, can be connected logically to the
other networks, and therefore to machines in the original data Centre.
If the need for more space is anticipated, consider this in your plans. Try not to land-lock
the Centre. If building an addition to the existing structure will eventually be necessary,
consider how the new area might share the existing support equipment, like chilled water
loops, security, etc. If expansion is likely and budget allows, consider putting in the
addition with raised floors and using the space for temporary offices or storage.
The area is the specific location, the room or rooms, possibly even multiple floors,
that will become the data Centre. Consider the following:
▪ Is the data Centre area protected from weather and seismic problems?
▪ Is the area safe from flooding (not near a river that overflows, in a flood plain, at
the bottom of a hill)?
▪ How will the data Centre be used?
▪ Will it be used for production, testing, information access?
▪ Will equipment or racks be rotated?
▪ How available must the equipment be (how often online)?
▪ What security level must there be for data Centre access?
▪ Will there be a separate Command Centre? Will it be in a separate location than
the data Centre? Where?
▪ What area is available? What is its shape (round, rectangular, square, L-shaped, T-
shaped)?
▪ How will the area be divided? Consider walls, storage, a Command Centre, offices,
other rooms, loading docks, etc.
▪ If built within a multi-level building, what floor or floors will be included and what
parts of them are available?
▪ Is there enough width in the corridors, aisles, doorways, etc. to move large
equipment and vehicles?
▪ Are floors, ramps, etc. strong enough to support heavy equipment and vehicles?
▪ Is there a nearby loading dock? Is it on the same floor?
▪ Is a separate site needed for loading, unloading, and storage?
▪ How much room is left for data Centre equipment?
▪ Are there freight elevators? How many?
▪ Are there passenger elevators? How many?
▪ Is the area safe from seismic activity (earthquakes) and severe weather (hurricanes, high winds)?
▪ Are there any water systems (bathrooms, kitchens) or pipes above the area?
▪ Are there necessary facilities such as restrooms and break rooms available?
▪ Is food available, even if from a vending machine? This is important for people
working late or in emergency situations where leaving the area for long periods of
time is not possible. Consider a small kitchen in a Command Centre.
The purpose of a raised floor is to channel cold air from the HVAC units and direct it up
where it’s needed to cool equipment, act as an out-of-the-way area to route network and
power cables, and act as a framework for equipment grounding. It also provides a sure
foundation for data Centre equipment.
Floor Height
The height of the floor depends on the purpose of the room. Height should be based on
air conditioner design and anticipated subfloor congestion. A typical floor height between
the subfloor and the top of the floor tiles is 24 inches (61 cm), though a minimum height
could be 18 inches (46 cm). The floor height could go as high as 60 inches (152 cm) but,
of course, you would need added HVAC to pressurize such a large plenum. The height of
the floor is also relative to the total height of the floor space. A 14-foot vertical space with
a 5-foot high raised floor leaves only nine feet. This doesn’t allow enough ceiling height
for air return.
Support Grid
The support grid for the floor has several purposes. It creates the open structure below
the floor to allow for the routing of cables, supports the load surface (tiles) and
equipment, and is used for part of the “signal reference grid.” There are many types of
support grids from different manufacturers.
The following figure shows a recommended system that utilizes bolted stringers and
provides maximum rigidity for dynamic loads.
If you intend to use an alternative system, such as snap-on stringers, make sure you
research them carefully to ensure that they meet the necessary load and stability
specifications.
If the data Centre is to be located in a seismically active area, seismic bracing should be
considered for the raised floor system. Verify that the floor manufacturer supplies
supplemental bracing before making the decision to use a particular system. If this is not
an option, bracing systems are available from several manufacturers that could work with
existing equipment.
When determining the type and specifications of the support grid you must anticipate all
the possible weight that could be placed on it at one time. Racks full of equipment, HVAC
units, equipment on dollies, forklifts or floor jacks, a tour of people, etc. The weight
specifications of the floor must exceed this potential weight.
Floor Tiles
A raised floor is typically constructed on a grounded framework, with a load surface
consisting of interchangeable tiles (sometimes called floor panels). The tiles can be solid,
perforated, or grated. There are many different types of floor tiles, designed for different
loads, and to either prohibit airflow or allow specific amounts of airflow through them.
Some tiles have custom cutouts for cable or utility passage. There is a great deal of
flexibility for designing airflow patterns using tiles with specific airflow characteristics.
Solid tiles can be placed to redirect airflow and create subfloor pressure. Perforated tiles
can be placed to redirect airflow while also letting a certain percentage of the air flow up
into the room or directly into equipment racks.
Tile Construction
Floor tiles are typically 24 in. × 24 in. (61 cm × 61 cm). Historically, the tile cores have
been made of compressed wood, concrete, or an open structural metal design. These
tiles usually have a point load of 500 pounds. While there are solid tiles from certain
manufacturers that allow a load higher than 500 pounds, you should make sure your
stretcher system is also rated to handle this type of load. Even if these solid tiles and
stretchers can support higher floor load ratings, perforated tiles might not. The use of
perforated tiles that can handle higher loads might be required for heavy, bottom-cooled
equipment.
Choose tiles based on structural integrity and specific load requirements. Wood or
concrete might not support heavier loads. Sun Microsystems Enterprise Technology
Centres are now installing cast aluminum tiles to handle the floor loads. These tiles can
support a point load of over 1,500 pounds, whether the tiles are solid, perforated, or even
grated tiles with a 55 percent pass-through.
FIGURE 6-2 Perforated Cast Aluminum Floor Tile Set Into the Support Grid
Note – It is best to use tiles with adequate load specifications so they don’t warp or
become damaged. If this happens, replace them immediately. An ill-fitting tile can pose a
safety hazard to people and equipment.
The floor surface must allow for the proper dissipation of electrostatic charges. The floor
tiles and grid systems should provide a safe path to ground through the tile surface, to
the floor substructure and the signal reference grid. The top surface of the floor covering
to understructure resistance should be between a minimum of 1.5 x 10^5 ohms and a
maximum of 2 x 10^10 ohms (as per the NFPA 56A test method). The tile structure (not the
surface laminate) to understructure resistance should be less than 10 ohms.
Never use carpeted tiles. Carpets can harbor contaminants that are agitated every time
someone walks on the tiles. These tiles are more easily damaged by the movement of
hardware, or when removed using specially designed tile lifters that incorporate spikes to
catch the loops of the tiles. Also, carpeted tiles designed with static dissipative properties
can become less effective over time.
Plenum
A plenum (pronounced PLEH-nuhm, from Latin meaning “full”) is a separate space
provided for air circulation, and primarily to route conditioned air to where it is needed in
the data Centre. It is typically provided in the space between the subfloor and raised floor,
and between the structural ceiling and a drop-down ceiling. The plenum space is often
used to house data and power cables. Because some cables can introduce a toxic
hazard in the event of fire, special plenum-rated cables might be required in plenum
areas. This is subject to local fire code.
Cable Trays
To keep cables out of harm’s way, it is normal to run cables under the raised floor. While
many data Centres just drop the cables down into the floor, this causes quite a lot of
clutter and blocks airflow in the plenum.
The use of cable trays under the floor serves as a way to organize cables and limit
blockages under the floor. The cable trays are generally U-shaped wire baskets
(sometimes called “basket wireways”) that run parallel to the wireway that houses the
electrical outlets. In many cases, these trays will be joined to this wireway, either on top of
the wireway or on the opposite side of the outlets. This minimizes air vortices under the
floor that can lead to decreased air pressure.
Note – Make sure you factor in at least one and a half inches of space between the top of
the cable tray and the bottom of the raised floor tiles to keep the cables from getting
crushed. Two inches or more is preferable, but this space could be dependent on the
depth of the plenum.
This placement is critical. Poor planning could set things up so that the rack is on top of a
tile that covers the outlet and the outlet is facing the wrong direction. Good planning can
save you from having to move a machine or crawl under the raised floor to plug in a rack.
This is also a good time to determine where the under-floor cable trays will be installed.
The cable trays help organize the network and storage cables and keep them out of the
plenum where they would block airflow. Excess power cords should also be placed
there.
▪ Layout A: This shows a back-to-back electrical wireway configuration. You could
put the cable tray in the middle. You will still have quite a bit of dangling cable
because the outlets are far from the server. This will work only if your RLU
definitions have very few network and storage cables defined in them.
▪ Layout B: This is probably the most efficient layout. The wireway and outlets are
arranged so they can be accessed by removing a tile from the aisle area. The run
length from the outlet is shorter than Layout A. Excess cable can be placed in a
cable tray, either on the opposite side of the outlet or on the top of the wireway.
▪ Layout C: If you don’t look at these types of details in the design process, you
could find yourself faced with Layout C for every other row of equipment in your
data Centre. Even though you can lift the tile to get access to the wireway, you will
▪ Layout D: This is the worst of the four layouts. The outlet is not only in the wrong
orientation, but it is also under a floor tile that has a rack on top of it. You would
have to move the machine two feet to get the tile up to access the outlet. Why is it
mentioned here? Because this mistake sometimes happens and now you know to
avoid it.
The following are things to consider when planning this layout:
▪ Will there be adequate space between the top of the cable tray and the bottom of
the raised floor tiles? This is important to keep all cabling in the tray from getting
crushed. An absolute minimum of 1.5 inches is recommended between the bottom
of the raised floor tile and the top of the wireway or cable tray, whichever is higher.
▪ Can you get to the cabling below the floor without having to move any racks?
(Moving racks that are in service is not an option.)
▪ What is the best layout so that the excess electrical cable can be placed in the
wireway without spilling over the sides?
It might be a good idea to create a mock-up using whatever materials work best for
you, from coffee stirring sticks and sugar cubes to 2×4s and cardboard boxes, to
figure out the best layout for the wireways.
▪ Network “home run” cabling from points of distribution (PODs) on the floor to
the network room. These cables should be bundled together, and their run to the
network room should be routed above the raised floor. To maximize airflow under
the raised floor, these are usually routed in a separate cable tray in the ceiling
plenum.
The second two types of cabling are installed and removed along with racks on the
floor by data Centre personnel. They are routed along the cable trays under the
raised floor.
▪ Network cables from network PODs to devices. These cables connect devices
to PODs, or connect devices to other devices.
If you know exactly what equipment you’ll be putting on the raised floor and where on the
floor you’ll be placing the equipment, acquiring tiles and the stretcher system with the
correct load capacity is straightforward. Part of the strength of a raised floor is in the fact
that each stretcher is connected to four other stretchers in different directions. If you have
to replace the tiles and stretcher system of a raised floor, the removal of even a portion of
the raised floor would cause weakness in the rest of the floor.
Load capacity won’t be much of an issue for ramps made of poured concrete, but it will
be for raised floors and structural ramps. There are three types of loads you should
consider:
▪ Point load. Most racks sit on four feet or casters. The point load is the weight of a
rack on any one of these four points. For example, a Sun Fire 15K server is
2200 pounds with four casters, so the load distribution is 550 pounds per caster. A
floor tile must have a point load rating higher than 550 pounds, which means that a
1-inch square area on the tile must be able to support 550 pounds without deflecting
more than 2 mm.
▪ Static load. Static load is the additive point loads on a tile. If you have two racks,
each with a 400 pound point load, and each rack has one caster on a tile, this tile
will have a 800 pound static load. The tile must be rated for at least an 800 pound
static load.
▪ Rolling load. Rolling load should be close to static load and is usually only
applicable to perforated tiles. Since it is possible that you might use your cool aisle
to also serve as an aisle to move equipment, the perforated tiles will need to
support the weight of two point loads of a rack as they are rolled along the aisle. If
the perforated tiles cannot accommodate this load, you would have to temporarily
replace them with stronger tiles along the path while the equipment is moved.
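These checks reduce to a little arithmetic per tile. The Python sketch below works through the point and static load checks described above; the tile ratings and rack weights are placeholders taken from the examples in the text.

# Sketch of the floor-load checks: point load per caster, additive static load
# on a tile, and a comparison against the tile ratings. Figures are illustrative.

def point_load(rack_weight_lbs: float, feet: int = 4) -> float:
    """Weight borne by a single caster or foot of a rack."""
    return rack_weight_lbs / feet

def static_load(*point_loads_on_tile: float) -> float:
    """Additive point loads resting on a single tile."""
    return sum(point_loads_on_tile)

def tile_ok(point_rating: float, static_rating: float,
            worst_point: float, worst_static: float) -> bool:
    return point_rating > worst_point and static_rating >= worst_static

if __name__ == "__main__":
    heaviest = point_load(2200)          # Sun Fire 15K: 550 lbs per caster
    print(tile_ok(point_rating=1750, static_rating=3450,
                  worst_point=heaviest,
                  worst_static=static_load(heaviest, heaviest)))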
The load rating of the raised floor will depend on the design and purpose of the room.
Historically, most raised floors were constructed out of concrete-filled, steel-shelled floor
tiles. While solid tiles might be able to support the current and near-future load
requirements, the perforated tiles cannot. The strength of these tiles relies
on the concrete fill, and perforations in the concrete greatly weaken the tile. Sun’s
Enterprise Technology Centres have switched to aluminum floor tile systems. These tiles
can handle a point load of 1,750 pounds even on a perforated grate with 55 percent air
flow. The static load of the same tile is 3,450 pounds.
In a pre-existing building, the structural floor must be assessed to determine whether or
not it will support the predetermined weight. Areas designed for light duty, such as
offices, might not be able to handle the load. This determination should be made by a
qualified structural engineer.
▪ How much cooling is needed per rack. (Is it the same for all racks, or do some
racks need more cooling than others?)
▪ The arrangement of solid and perforated tiles (in different perforation percentages)
to deliver air at the correct pressure to each rack.
Think of water flowing through a hose with a series of holes along its length: the greatest
amount of water pressure is leaked from the first hole, the pressure from the second hole
is less, and the pressure from the third hole is less still.
In the case of air travelling through a plenum and escaping through the holes of the floor
tiles, the same principle applies even if you use only perforated tiles with the same pass-
through percentage. The air escaping through the holes of the tile closest to the source
(HVAC unit) will move at a greater pressure than the air escaping through the holes in
subsequently placed tiles. Therefore, racks directly above the first perforated tile will
receive more cooling than racks above perforated tiles farther down the plenum. The last
rack in the line might not receive enough air for proper cooling.
To regulate the air more efficiently, perforated tiles of different airflow percentages can be
used. The first tiles would have fewer holes, relying on the greater pressure to move the
required volume into the racks. Subsequent tiles would have more holes to allow volume
required volume into the racks. Subsequent tiles would have more holes to allow volume
to move through them despite the drop in pressure.
Solid tiles can also be used to control airflow. Where no air is needed (areas with no
racks above them and in the hot aisles in a back-to-back cooling model), solid tiles
should be used to maintain the optimum pressure. Or perforated tiles can be placed in
locations with no racks if air pressure needs to be reduced, or the room requires more
general cooling.
Pressure Leak Detection
It is important to maintain pressure under the floor and allow airflow only through
perforated tiles in the specific areas where it is needed. This will help to maximize the
efficiency of the HVAC systems. However, rooms are not always perfectly square nor
level, so voids in the raised floor, especially near walls and around pipes and conduits,
occur. These voids allow air to escape from the floor void and decrease pressure.
The raised floor should be inspected routinely and any voids should be filled. Also, the
perforated tiles that were used to direct air to machines that have been moved to a
different location should be replaced with solid tiles. Replacing perforated tiles with solid
tiles should be part of the standard procedure when a machine is removed or relocated.
The power distribution system is the system that includes the main power feed into the
data Centre (or the building), the transformers, power distribution panels with circuit
breakers, wiring, grounding system, power outlets, and any power generators, power
supplies, or other devices that have to do with feeding power to the data Centre
equipment.
A well-designed electrical system for the data Centre ensures adequate and consistent
power to the computer hardware and reduces the risk of failures at every point in the
system. The system should include dedicated electrical distribution panels and enough
redundancy to guarantee constant uptime and minimize unscheduled outages.
Equipment subjected to frequent power interruptions and fluctuations is susceptible to a
higher component failure rate than equipment connected to a stable power source.
Electrical work and installations must comply with local, state, and national electrical
codes.
You can use the rack location units (RLUs) you’ve determined for your design to calculate
how much power you need for equipment. The RLU definitions should include not only
servers and storage equipment, but also network equipment such as switches, routers,
and terminal servers. Add to this the power requirements of your HVAC units, fire control
systems, monitoring systems, card access readers, and overhead lighting systems.
From your RLU definitions, you know that you’ll need 800 30Amp 208V L6-30R outlets to
power all of your racks. However, most circuit breakers will trip when they reach 80
percent of their rated capacity (this is sometimes referred to as a 0.8 diversity factor). A
30Amp breaker will really only allow a maximum of 24Amps through it before it trips and
shuts down the circuit. Each circuit can handle about 5000 watts (24 amps × 208 volts =
4992 Watts) or 5KVA so the worst case electrical draw per outlet is 5KVA × 800 outlets =
4000KVA. No problem, because this is well within the 7000KVA you have allocated.
However, most of the watts that these racks consume go into producing heat, and it will
take quite a bit more electricity (for HVAC) to remove that heat.
A good rule of thumb is to take your total equipment power and add 70 percent for the
HVAC system. The electrical usage will vary depending on the system and climatic
conditions. Your HVAC and electrical designers should be able to give you a more precise
multiplier once the HVAC system specifics are known.
4000KVA × 1.7 = 6800KVA, and that is within the 7000KVA you have been allocated. So,
now you know that you have a large enough power in-feed to meet your electrical
requirements.
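The whole in-feed check condenses to a few lines. The Python sketch below follows the worked numbers above: derate each breaker to 80 percent, total the outlet draw, then apply the 1.7 rule-of-thumb HVAC allowance. The multiplier and the 7,000 kVA allocation are the figures from the text; your HVAC and electrical designers will refine them.

# Sketch of the power in-feed arithmetic described above.

def circuit_kva(amps: float, volts: float, diversity: float = 0.8) -> float:
    """Usable power per circuit after the breaker's 80 percent derating."""
    return amps * diversity * volts / 1000.0      # 30 A x 0.8 x 208 V ~ 5 kVA

def total_in_feed_kva(outlets: int, amps: float, volts: float,
                      hvac_multiplier: float = 1.7) -> float:
    """Equipment draw across all outlets plus the HVAC allowance."""
    return outlets * circuit_kva(amps, volts) * hvac_multiplier

if __name__ == "__main__":
    need = total_in_feed_kva(outlets=800, amps=30, volts=208)   # ~6,800 kVA
    print(round(need), need <= 7000)    # fits within the 7,000 kVA allocation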
Finally, consider all possible future modifications, upgrades, and changes in power needs.
For example, installing 50Amp wiring when only 30Amp is currently needed might be
worth the extra cost if it is likely, within a few years, that machines will be added that need
40 to 50Amp wiring. The initial cost could be insignificant compared to the cost of
dismantling part of the data Centre to lay new wire.
▪ Will power sources be shared with areas outside the data Centre?
▪ Will the data Centre need single-phase or three-phase power (or both)?
▪ If the existing site is wired with single-phase, can it be retrofitted for three-phase?
▪ If you intend to use single-phase, will you eventually need to upgrade to three-
phase?
■ Can you use three-phase wire for single-phase outlets, then change circuit breakers and
outlets later when three-phase is needed?
▪ Where will the transformers and power panels be located? Is there a separate
space or room for this?
▪ Which RLUs and their quantities need two independent power sources for
redundancy?
▪ If there is only one external power feed, can half the power go to a UPS?
The availability profile of the data Centre could be the determining factor in calculating
power redundancy. Ideally, multiple utility feeds should be provided from separate
substations or power grids to ensure constant system uptime. However, those designing
the Centre must determine whether the added cost of this redundancy is necessary for
the role of the data Centre. It will be related to the cost of downtime and whatever other
power delivery precautions you are taking. If you have a requirement for your own power
generation as backup for data Centre power, then the additional costs of multiple utility
feeds might not be cost effective. You should get historical data from your power supplier
on the durations of outages in your area. This can be valuable information when making
these decisions.
You might size your UPS to accommodate the actual power draw rather than the total
power draw. For example, a machine might use 1500 watts for “normal” load. However,
the same machine might draw a higher peak load (2200 watts in the example below), so
be clear about which figure your UPS capacity is based on.
Sharing Breakers
Though it is sometimes a necessary evil, sharing breakers is not recommended. As
described in the earlier sections, machines don’t use all of the capacity of their resident
circuits. You have a normal load and a peak load. Two machines, each with a normal load
of 1500 watts and a peak load of 2200 watts, could share the same 5KVA 30Amp circuit.
However, if the configuration of these devices is changed over time, for example, if more
memory is added, this might change the normal and peak loads, over the amount that the
circuit could handle. While you might be forced to do this, you must be very careful and
accurate in your power usage calculations for any circuit that you share.
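A minimal sketch of the shared-breaker check described above, assuming the same 30Amp 208V circuit and the 1500/2200 watt example loads used in this section; measured values should always replace these illustrative numbers.

def circuit_capacity_watts(breaker_amps=30, volts=208, diversity=0.8):
    # Usable capacity of a circuit after the breaker derating.
    return breaker_amps * diversity * volts

def can_share(peak_loads_watts, breaker_amps=30, volts=208):
    # True if the combined peak loads fit on one derated circuit.
    return sum(peak_loads_watts) <= circuit_capacity_watts(breaker_amps, volts)

# Two machines, 2200 W peak each, on a 30Amp 208V circuit (4992 W usable):
print(can_share([2200, 2200]))    # True  -> fits today
# After an upgrade pushes each peak to 2600 W:
print(can_share([2600, 2600]))    # False -> the shared circuit is overcommitted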
Maintenance Bypass
The power system design should provide the means for bypassing and isolating any point
of the system to allow for maintenance, repair, or modification without disrupting data
Centre operations. The system should be designed to avoid all single points of failure.
The common point of grounding can be connected to any number of sources at the
service entrance (main power feed), for example:
▪ Buried grid
▪ Building steel
▪ Water pipes
Whatever the sources, the ground should be carried through the entire system from these
sources. Ideally, the central point of grounding at the service entrance will be connected
to redundant ground sources such as building steel, buried grid, and cold water piping. A
single source sets up the potential for a broken ground. A water pipe might be disjointed.
Building steel could accumulate resistance over several floors. By tying into multiple
grounds, ground loops are avoided, disruptions are minimized, and redundancy is
achieved.
A university on the eastern seaboard lost all power from a problem with poorly grounded
generators on the main power line. In the postmortem, it was found that there really was a
postmortem. A raccoon seeking warmth had climbed into the generator housing and
shorted out the circuit, creating a grounding loop, and knocking out the power. When
everything was finally back online, another raccoon climbed into the generator and self-
immolated, taking the power with it. After that, chicken wire was installed around the
generator.
The data Centre must have its own grounding plan which will tie into the earth ground for
the rest of the building. The system should have sufficiently low resistance to allow circuit
breakers, surge protectors, and power sequencers to respond to an overcurrent condition
very quickly. This resistance should be in the 1 to 5 ohm range. In India, a 25-ohm
maximum resistance value is the minimum standard for most “normal” grounding
systems, according to the NEC. While this level of resistance is acceptable in a normal
office environment, data Centres should use the 5 ohms of resistance as the acceptable
maximum resistance level for their grounding system.
The NEC and local codes require electronic equipment to be grounded through the
equipment grounding conductor and bonded to the grounding electrode system at the
power source. The impedance of the equipment grounding conductor from the electronic
equipment back to the source neutral-ground bonding point is a measure of the quality of
the fault return path. Poor quality connections in the grounding system will give a high
impedance measurement. Properly installed equipment grounding conductors will give
very low impedance levels. Equipment grounding conductors should have levels meeting
code requirements with a value of less than 0.25 ohm.
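As a convenience, the acceptance thresholds quoted above (5 ohms maximum for the data Centre grounding system, and less than 0.25 ohm for an equipment grounding conductor) can be folded into a simple check. The function below is only a sketch; the measured values passed to it are hypothetical.

DATA_CENTRE_MAX_OHMS = 5.0   # acceptable maximum for the grounding system
EGC_MAX_OHMS = 0.25          # equipment grounding conductor should be below this

def grounding_findings(system_ohms, egc_ohms):
    # Return a list of findings for measured grounding resistances.
    findings = []
    if system_ohms > DATA_CENTRE_MAX_OHMS:
        findings.append(f"Grounding system at {system_ohms} ohm exceeds {DATA_CENTRE_MAX_OHMS} ohm")
    if egc_ohms >= EGC_MAX_OHMS:
        findings.append(f"Equipment grounding conductor at {egc_ohms} ohm is not below {EGC_MAX_OHMS} ohm")
    return findings or ["Measured values are within the recommended limits"]

print(grounding_findings(system_ohms=3.2, egc_ohms=0.12))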
Recommended Practices
The following is a list of recommended practices for an SRG. This information should be
well understood by the electrical engineer/contractor but should be used only as a
reference because electrical codes in your area might be subject to different
requirements.
5. Route #3/0 from equipment grounding bus bar to grounding bus bar in main
electrical room.
6. Bond HVAC units to perimeter ground or medium frequency ground loop via #6CU
conductor.
8. Complete documentation.
Documentation should be complete in all details, including the proper grounding
and bonding of heating, ventilation, and air-conditioning equipment, piping,
raceways, and similar items. The responsible engineer should not expect the
installers to complete the design.
Harmonic Content
Harmonics problems can be caused by the interaction of data Centre equipment with the
power loads or by switching power supplies. Harmonic distortion, load imbalance, high
neutral current, and low power factor can result in decreases in equipment efficiency and
reliability. Eliminating harmonics problems is difficult, because the computer hardware
contributes to them, and any changes in the room load or configuration to fix the problem
can create new problems.
Sun Microsystems equipment has been designed to address the problems of harmonic
distortion and is generally compatible with similar modern equipment. Equipment that
does not have the advantages of modern harmonic-correction features should be isolated
on separate circuits.
Lightning Protection
The potentially damaging effects of lightning on computer systems can be direct or
indirect. It might be on the utility power feed, directly on the equipment, or through high-
frequency electromagnetic interference or surge currents. Lightning surges cannot be
stopped, but they can be diverted. The plans for the data Centre should be reviewed to
identify any paths for surge entry, and surge arrestors that provide a path to ground
should be included to provide protection against lightning damage. Protection should be
placed on both the primary and secondary sides of the service transformer.
FIGURE 7-4 Emergency Power Disconnect and Manual Fire Alarm Pull Station
Protective covers can be placed over the buttons to avoid accidental contact, but access
cannot be locked out. The switch, or switches, should disconnect power to all computer
systems, HVAC, UPS, and batteries. If the UPS is located within the data Centre, the
disconnect should also remove power from the UPS output.
Though not required by code, it is recommended that all power sources in the room be
controlled by the disconnect to provide the highest degree of safety to personnel and
equipment.
Data Centres undergo frequent modifications, so any obsolete cabling should be removed
to avoid airflow obstructions and minimize confusion and possible disconnection of the
wrong cables during modification.
Note – Temporary extension cords should not be used in the subfloor void. If used above
the raised floor, precautions should be taken to ensure that they don’t pose a tripping
hazard, and that they are not damaged.
Consider this scenario: Currently all of your RLU definitions use only single-phase power,
L6-30R 30 Amp outlets. If you were to use the standard wire gauge for these outlets it will
be fine. You can reuse this wire if you move to a three-phase outlet, provided that it still
uses 30 Amps. However, if you were to use a standard wire gauge for 50 Amps, then this
wire gauge would meet or exceed code requirements for the L6-30R 30 Amp outlets.
Basically, you can use a larger gauge wire than is standard, but, not a smaller gauge wire.
If you think you will need to change or upgrade power in the future, putting in the larger
gauge wire for future use is a good idea. With this larger gauge wire in place, if you need
to change some outlets to run at a higher amperage, you already have the wire ready and
waiting under the floor.
The wire gauge in the wireway can also support three-phase power as well as the current
single-phase L6-30R existing outlets, since they are both running at 30 Amps. You can
see four cutouts on the left side. These are already in the wireway so that, should three-
phase power be needed later, six of the L6-30R outlets can be removed and the wiring
used for four three-phase outlets. You can also see the labels for each outlet’s circuit
breaker. Six of these breakers can be removed at the sub-panel and replaced by four
three-phase breakers.
There is another way to solve this problem: Power Distribution Units (PDUs).
This gives a lot of flexibility in your electrical system. However, there are a few downsides
to this design. The first concern is that it might not meet code. Different countries and
different local jurisdictions might have different interpretations of electrical code requirements.
The other downside to the PDU design is availability. Currently, PDUs are custom-made
devices and can be subject to lead times of weeks or months to manufacture. This is not
a problem if you have a lot of lead time before you make changes to your electrical
outlets. However, in the real world, this luxury isn’t always available. With foresight, PDUs
of whatever type you anticipate the need for can be pre-manufactured, but this involves
additional cost for the materials (box, circuit breakers, wiring, and outlets) and labor to
make, and the additional cost of storing them.
PDUs can offer a great deal of flexibility to your electrical design. However, your first
requirement will be to find out if they will be acceptable to your local code requirements.
And even if they are, they might not be the most cost effective model for your data
Centre.
Electromagnetic Compatibility
Electromagnetic interference (EMI) and radio frequency interference (RFI) are radiated and
conducted energy from electrical devices that produce electromagnetic fields. The
electrical noise currents associated with these can interfere with the signals carried by the
electronic components and the cabling of equipment.
Sources of EMI and RFI can be inside or outside the data Centre environment. Common
external sources are airports, telecommunications or satellite Centres, and similar
facilities. Internal sources include the hardware itself. Sun equipment is tolerant of most
common EMI/RFI levels. If high levels are suspected, a study should be conducted to
determine whether shielding or other remedial actions are necessary.
Electrostatic Discharge
Electrostatic discharge (ESD) is the rapid discharge of static electricity between bodies at
different electrical potentials and can damage electronic components. ESD can change
the electrical characteristics of a semiconductor device, degrading or destroying it. It
might also upset the normal operation of an electronic system, causing equipment to
malfunction or fail.
There are numerous ways to control static generation and ESD. The following list
describes some of the control techniques.
▪ Never use paper clips to press reset buttons! A good idea is to tape a few wooden
toothpicks to the inside of the rack doors for use as reset button depressors.
▪ Keep covers on equipment racks closed. Covers should be opened only by trained
personnel using proper grounding when inspections, repairs, or reconfigurations are
necessary.
▪ The raised floor system should be properly grounded with static dissipative tile
surfaces to provide a proper path to ground.
▪ Use only appropriate cleaning agents on floor tiles to maintain the static dissipative
properties of the floor.
▪ The soundness of the power distribution (wiring) and grounding systems supplying
power to the equipment
The control and maintenance of heating, ventilation, and air conditioning (HVAC), as well
as relative humidity (RH) levels, is essential in the data Centre. Computer hardware
requires a balanced and appropriate environment for continuous system operation.
Temperatures and relative humidity levels outside of the specified operating ranges, or
extreme swings in conditions, can lead to unreliable components or system failures.
Control of these environmental factors also has an effect on the control of electrostatic
discharge and corrosion of system components.
▪ “Temperature Requirements”
▪ “Relative Humidity”
▪ “Electrostatic Discharge”
▪ “Air Distribution”
▪ Cooling must be delivered where needed. The heat load varies across the area of
the computer room. To achieve a balanced psychrometric profile, the air
conditioning system must address the needs of particular heat-producing
equipment.
▪ Data Centres need precise cooling. Electronic equipment radiates a drier heat
than the human body. Therefore, precision data Centre cooling systems require a
higher sensible heat ratio (SHR) than office areas. Ideally, the cooling system should
have an SHR of 1:1 (100 percent sensible cooling). Most precision systems have
sensible cooling between 85 and 100 percent, while comfort systems normally rate
much lower.
▪ Controls must be adaptable to changes. The data Centre heat load will change
with the addition or reconfiguration of hardware. Also, exterior temperature and
humidity can vary widely in many places around the world. Both of these conditions
will affect cooling capacities. Data Centre air conditioning systems must be chosen
for their ability to adapt to these changes.
Temperature Requirements
In general, an ambient temperature range in the data Centre of 70 to 74 F (21 to 23 C) is
optimal for system reliability and operator comfort. Most computer equipment can
operate within a wide psychrometric range, but a temperature level near 72 F (22 C) is
best because it is easier to maintain safe associated relative humidity levels. Standards
for individual manufacturers and components vary, so check the manufacturer’s
specifications for appropriate temperature ranges.
Another reason for keeping the temperature ranges maintained as close to the optimal
temperature as possible is to give the greatest buffer against problems and activities that
can change the temperature profile. With the Centre kept at the optimal temperature,
such influences have less of an overall effect on equipment.
Relative Humidity
Relative humidity (RH) is the amount of moisture in a given sample of air at a given
temperature in relation to the maximum amount of moisture that the sample could contain
at the same temperature. If the air is holding all the moisture it can for a specific set of
conditions, it is said to be saturated (100 percent RH). Since air is a gas, it expands as it
is heated, and as it gets warmer the amount of moisture it can hold increases, causing its
relative humidity to decrease. Therefore, in a system using subfloor air distribution, the
ambient relative humidity will always be lower than in the subfloor.
Ambient levels between 45 and 50 percent RH are optimal for system reliability. Most data
processing equipment can operate within a fairly wide RH range (20 to 80 percent), but
the 45 to 50 percent range is preferred for several reasons:
▪ Operating time buffer. This humidity range provides the longest operating time
buffer in the event of environmental control system failure.
Although the temperature and humidity ranges for most hardware are wide,
conditions should always be maintained near the optimal levels. The reliability and
the life expectancy of the data Centre equipment can be enhanced by keeping RH
levels within the optimal ranges.
Certain extremes (swings) within this range can be damaging to equipment. If, for
example, very high temperatures are maintained along with very high percentages
of RH, moisture condensation can occur. Or, if very low temperatures are
maintained along with very low percentages of RH, even a slight rise in temperature
can lead to unacceptably low RH levels.
Corrosion
Excessive humidity in the air increases the corrosive potential of gases and should be
avoided in the data Centre environment. Gases can be carried in the moisture in the air
and transferred to equipment in the data Centre.
Drastic temperature changes should also be avoided. These can cause latent heat
changes leading to the formation of condensation. This usually happens in areas where
hot and cold air meet, and this can cause a number of hardware problems.
▪ Water can react with metals to form corrosion.
▪ Water can electrochemically form conductive solutions and cause short circuits.
▪ Water can form a reactive combination with gases present in the air, and the
resultant compounds can corrode hardware.
▪ The ability of the system to get the conditioned air to the units of hardware that
need cooling
The HVAC unit will have set points for both ideal temperature and humidity levels, like the
thermostat in a house. Sensors located within the data Centre track both the temperature
and humidity of the air. This information is fed back to the HVAC unit and the unit adjusts
its heat transfer and the humidifier moisture level to meet its set points.
Consider the airflow patterns of the storage and server equipment to be installed in the
data Centre.
▪ Is the heated air exhausted from the back or the top or the side of the rack?
▪ Does the air flow through the equipment from side-to-side, front-to-back, front-to-
top, or bottom-to-top?
▪ Do all of the units in a rack have the same airflow patterns or are some different?
Since the equipment from different manufacturers can have different airflow patterns, you
must be careful that the different units don’t have conflicting patterns, for example, that
the hot exhaust from one unit doesn’t enter the intake of another unit. Sun equipment is
usually cooled front-to-back or bottom-to-top. Bottom-to-top is the most efficient way to
cool equipment, drawing directly from the supply plenum and exhausting to the return
plenum in the ceiling. It also creates a more economical use of floor space since no open
area to the sides of the equipment is needed for free cooling space.
These systems work by drawing air into the top of the HVAC unit, either from the room or
from the return plenum (return air), where it is cleaned by air filter banks, and passed over
a cooling coil. The conditioned air (supply air) is then pushed by large fans at the bottom
of the unit into the raised floor plenum.
The downward flow air conditioning system used in data Centres is typically incorporated
with a raised floor system. The raised floor should be 24 inches (60 cm) above the
subfloor to allow space to run network and power cables, and for the passage of air. The
modular tile design makes it easy to reconfigure hardware and air distribution patterns.
When hardware is added, solid and perforated tiles can be positioned to deliver
conditioned air to the hardware intakes.
The majority of the hardware in most data Centres takes in air for cooling at the front or
bottom of the unit and exhausts it out the back or top. Introducing conditioned air from
the ceiling causes turbulence when the conditioned air meets the hot exhaust. A higher
cooling load is needed to address this inefficiency.
Centralized systems, using a single large air handling unit, should be avoided. The
problems with centralized systems are:
▪ Lack of the degree of control you will get with multiple units
■ In such systems, temperature and RH are regulated by a single sensor set in the
ambient space of the room or the return air duct. It is unlikely that conditions in all areas
of the room are the same as they are at this single sensor point, so an inaccurate
presentation of room conditions will be given.
If the room is a long thin rectangle, you can probably place the HVAC units along the
perimeter of this room and get enough cold air volume to the Centre area. If the room is a
large square, you can place units at the perimeter and down the Centre as well, creating
in effect two long thin rectangles within the room. This creates zones of cold air at the
required pressure for a given area to meet its cooling requirements.
Additionally, software for simulations of airflow and heat transfer is available. “Flovent”
software from Flomerics uses Computational Fluid Dynamics (CFD) techniques to allow
for HVAC simulation of data Centre environments. These models can include raised floor
height, obstructions under the floor, placement and percentage of perforated tiles,
servers, storage systems, and the placement of HVAC units.
Most HVAC systems require some liquid like water or coolant to exchange the heat from
the air as it goes through the unit, and this liquid is moved outside of the room (and
probably the building) to expel the heat it has absorbed. Pipes containing this liquid will
be within, or quite close to, your data Centre. As you know, water and electricity are a
nasty combination. If you put HVAC units on the floor, you must ensure that these pipes
have troughs or channels to redirect the fluid out of the data Centre in the case of a pipe
failure. One way to do this is to locate as many HVAC units as possible, perhaps all of
them, outside the data Centre.
All the pipes needed for these units can be located outside the data Centre, as well. There
should be a 4-inch barrier at the perimeter of the data Centre to prevent liquid from the
pipes from entering the data Centre if a pipe were to fail. This also gives a clean access to
the base of the HVAC unit to pump cold air under the floor with minimal obstruction.
Since these units are outside the walls of the data Centre, the raised floor and dropped
ceiling voids can be used as supply and return plenums, respectively.
Humidification Systems
Humidification can take place within the air conditioners, or by stand-alone units. In some
data Centres, it might be better to introduce moisture directly into the room where it will
mix easily with the ambient temperatures. This can be done with individual humidifiers,
separate from the HVAC units. These should be units designed to keep the psychrometric
rates of change to a narrow margin, monitor the room conditions, and adapt to the
current room and equipment demands. Separate units throughout the room increase the
amount of control over humidification and offer redundancy.
HVAC units are available with the capability of adding moisture to the airflow, but they
might not be the best solution due to the way they do this. Possible problems with
introducing moisture directly to air within the HVAC units are:
▪ Condensation can form within the process coolers and cause corrosion, reducing
the operational life of the units.
▪ HVAC units that introduce cooling air into the subfloor mix moisture with cold air
that might be near saturation. This can cause condensation and corrosion within the
subfloor system.
However, separate humidifier and HVAC systems will be more expensive than containing
the humidifier in the HVAC unit itself. Separate units will also add to labor costs. The
placement of RH systems out under the raised floor will require water, in either pipes or
bottled form, to be within the data Centre so the same precautions must be taken as with
pipes that are in the data Centre space. As you can see, there is no right answer. Each
approach has its advantages and disadvantages. You must determine what is the correct
solution for your data Centre.
Monitoring System
A comprehensive monitoring system is an added expense to the design and maintenance
of the facility, but it provides an invaluable tool for diagnosing and correcting problems,
collecting historical data for system evaluations, and for day-to-day verification of room
conditions. The following should be considered in the design of the data Centre and
monitoring system:
▪ The room condition feedback should not be based on one sensor in one part of the
room. A single sensor might tell that conditions are perfect, but in truth, they are
only perfect in that particular part of the room. Sensors should be placed in specific
areas of the Centre and near critical configurations. These sensors usually sense
both temperature and relative humidity.
▪ The monitoring system should have historical trend capabilities. The historical
psychrometric data can be used to analyze seasonal changes and other outside
influences.
▪ The monitoring system should have critical alarm capabilities. At the very least, the
system should be set to notify when conditions move outside the set parameters. It
might also be necessary to have a system that performs automatic functions such
as switching to a backup chiller if a primary chiller fails.
▪ Ideally, the monitoring system should be integrated with a tracking system for all
parts of the Centre. This would include not only the in-room air conditioners and
humidifiers, but the cooling support systems, power backup, fire detection and
suppression, water detection, security, and any other infrastructure and life-safety
systems.
▪ The HVAC system configuration and monitoring data should be periodically
examined and evaluated by trained personnel to ensure that temperature and RH
profiles are appropriate for the current room demands.
▪ These monitoring systems can use the SNMP protocol to integrate into overall data
Centre management systems; a minimal alarm-threshold sketch follows this list.
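As a rough illustration of the alarm behaviour described in this list, the sketch below checks hypothetical sensor readings against the set points discussed in this chapter (roughly 70 to 74 F and 45 to 50 percent RH). The sensor names and the reading source are placeholders; a real system would poll the building monitoring system or the units’ SNMP agents.

from dataclasses import dataclass

TEMP_RANGE_F = (70.0, 74.0)    # optimal ambient temperature band
RH_RANGE_PCT = (45.0, 50.0)    # optimal relative humidity band

@dataclass
class Reading:
    sensor_id: str
    temp_f: float
    rh_pct: float

def out_of_range(value, low, high):
    return not (low <= value <= high)

def check(readings):
    # Yield an alarm string for any sensor outside the set points.
    for r in readings:
        if out_of_range(r.temp_f, *TEMP_RANGE_F):
            yield f"{r.sensor_id}: temperature {r.temp_f} F outside {TEMP_RANGE_F}"
        if out_of_range(r.rh_pct, *RH_RANGE_PCT):
            yield f"{r.sensor_id}: RH {r.rh_pct}% outside {RH_RANGE_PCT}"

# Hypothetical readings from two sensors in the room.
sample = [Reading("pod-03-front", 72.1, 47.0), Reading("pod-07-rear", 76.4, 41.5)]
for alarm in check(sample):
    print(alarm)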
When designing the data Centre, the support system must be taken into consideration.
Design concerns include:
▪ Adequate space. There must be adequate space to house large equipment such
as chillers, cooling towers, condensers, and the requisite piping system.
▪ Climate. The climate of the area might partially determine the types of systems
used. For example, using cooling towers that rely on simple heat transfer to the
outside air will be less efficient in a hot, humid city like Mumbai than in a cooler, drier
location, since the normal ambient air temperature is higher. Local codes might have
restrictions on what types of systems you must use.
▪ Flexibility. If there are plans to expand the data Centre in the future, expansion of
the support system should also be considered.
▪ Redundancy. There must be enough built-in redundancy so that the loss of any one
component of the system will not significantly impact the system as a whole. The
system should be designed to allow for repairs or upgrades while the Centre is
online.
▪ Monitor the system. The mechanical support systems must be connected to the
building monitoring system. Status and critical alarms must be recorded and
reported to Maintenance and IT.
Air Distribution
If you think of the controlled environment of a data Centre as a cocoon, it’s easier to
imagine how forced currents of air are heated and cooled in a continuous cycle. The basic
principle of convection is what makes the system work: Cold air drops, hot air rises.
The cycle of airflow in the room follows this basic pattern:
▪ Conditioned air is forced into the raised floor void and directed up into the room
and into equipment racks by means of tiles with perforations or cutouts.
▪ Heated air from components is forced out of the racks and rises toward the
ceiling.
▪ Warm air is drawn back into the HVAC units where it is cooled and forced back into
the raised floor to continue the cooling cycle.
▪ Tile placement. Air distribution tiles should be positioned to deliver conditioned air
to the intake of each rack. Solid tiles should be positioned to redirect airflow to the
perforated tiles.
▪ Subfloor pressure. The subfloor pressure differential enables the efficient
distribution of conditioned air. The pressure under the floor should be at least
5 percent greater than the pressure above the floor. Also, it might be necessary to
adjust the number and position of perforated tiles to maintain appropriate pressure
levels. It is the difference in pressure under the raised floor and above the raised
floor that allows the cold air to flow correctly. Therefore, the pressure under the
raised floor must be greater than the air pressure above the raised floor. You can
use perforated tiles to modify this pressure change. Consider an example with two
racks (see the sketch after this list): The first, Rack A, is 20 feet from the HVAC unit.
The second, Rack B, is 40 feet from the HVAC unit. The pressure under the floor at
Rack A is x, and a 25 percent perforated tile provides an acceptable pressure
differential to allow enough cold air to properly cool the machine. By the time the
remaining air gets to Rack B, 20 feet further away, the pressure is x/2. To get the
same amount of air to the machine when the air pressure is half, you need a
50 percent perforated tile, or two 25 percent perforated tiles right next to each other.
▪ Avoid air leaks. Unnecessary air leaks often occur through oversized cable cutouts
or poorly cut partial tiles (against a wall, around a pillar, etc.). These breaches
compromise the subfloor pressure and overall cooling efficiency. Fillers or protective
trim should be fitted to these tiles to create a seal.
▪ Avoid cooling short cycles. Cooling short cycles occur when cold air from the air
conditioner returns to the air conditioner before it has cycled through the heat-
producing equipment. This happens when perforated tiles are placed between an
air conditioner and the nearest unit of heat-producing hardware, allowing the
conditioned air to return before it reaches that hardware.
Since cool air is being cycled back to the air conditioner, the regulating sensors at the
return air intake will register a cooler room condition than is accurate. This will make the
unit cycle out of cooling mode while the cooling needs of the equipment have not been
addressed. This affects both temperature and relative humidity.
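The Rack A / Rack B reasoning above can be expressed as a simple proportion: if the usable subfloor pressure at a rack is halved, roughly twice the tile free area is needed to deliver the same volume of air. The sketch below uses that linear simplification from the example; it is not a substitute for a CFD study.

def required_perforation(base_perf_pct, base_pressure, local_pressure):
    # Perforation percentage needed to match the airflow of the base case.
    return base_perf_pct * (base_pressure / local_pressure)

x = 1.0   # subfloor pressure at Rack A (arbitrary units)
print(required_perforation(25.0, x, x))        # 25.0 -> one 25 percent tile at Rack A
print(required_perforation(25.0, x, x / 2))    # 50.0 -> one 50 percent tile, or two 25s, at Rack B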
▪ Cooling requirements
Cooling requirements are typically the most common restricting factor.
The heat load of small individual servers or storage arrays is generally low, but the density
increases dramatically when the devices are stacked in racks. Also, newer technologies
tend to condense the geometry of the electronics, which thereby increases the density of
the heat load.
The majority of Sun servers and storage arrays are designed to take in conditioned supply
air at the front, pass it over the heat loads of the internal components, and exhaust it at
the rear. Sun racks can house a wide variety of devices with differing airflow patterns.
Some devices move air bottom to top, some from front to back, others from one side to
the other.
The front-to-back airflow pattern suggests a front-to-front (and back-to-back) row and
aisle configuration. With this configuration, direct transfer of the hot exhaust from one
rack into the intake of another rack is eliminated.
The amount of aisle space necessary will depend on the efficiency of the cooling system.
If the cooling system and the possibilities for efficient air distribution are less than optimal,
more aisle space will be needed between rows.
▪ Aisle widths might be different depending on the size of the racks. Both the
standard Sun storage rack and the Sun Fire 6800 server rack are two feet by four
feet and would require a minimum of four feet (1.2 m) of aisle space. These widths
could be different depending on tile and cutout placement.
▪ There must be breaks within the equipment rows to allow operators access
between rows and to the backs of the racks.
▪ The air conditioning returns should be placed so that the warm air from the
equipment has a clear path into them. In the case of a low ceiling, this is
problematic as the warm air must build up until it can be drawn into the air
conditioner intakes. A much better design implements a dropped ceiling with vents
to allow warm air to rise up into the return plenum. From there, the hot air can be
drawn efficiently back to the HVAC units.
▪ The amount of air forced into the plenum (number of HVAC units and velocity).
▪ The distance the air must travel to get to the equipment it is meant to cool.
▪ Other breaches in the supply plenum, such as cable cutouts or missing tiles.
The pressurization level must be adequate to move the right amount of cool air to the
right parts of the data Centre. This pressure is regulated by the velocity of the air out of
the HVAC units and the distribution and percentages of perforated tiles used.
The pressurization levels in the plenum should be regularly monitored. This is especially
important when any subfloor work must be done, because removing floor tiles will
degrade subfloor pressure. Each 2 ft × 2 ft solid tile represents 4 square feet of floor area,
equivalent in free area to four perforated tiles with 25 percent perforation. If many floor
tiles must be removed for subfloor work, it might be necessary to compensate for lost pressure.
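A quick way to see why lifting solid tiles matters is to budget the open area, using the figures above (a 2 ft × 2 ft tile, and about 1 square foot of free area per 25 percent perforated tile). The sketch below is illustrative only.

TILE_AREA_SQFT = 4.0     # a 2 ft x 2 ft floor tile
PERF_FRACTION = 0.25     # typical perforated tile

def open_area_sqft(perforated_tiles, lifted_solid_tiles):
    # Total open area in the raised floor for a given tile situation.
    return (perforated_tiles * TILE_AREA_SQFT * PERF_FRACTION
            + lifted_solid_tiles * TILE_AREA_SQFT)

# Lifting three solid tiles for cabling work opens as much area as
# twelve 25 percent perforated tiles, which will noticeably drop plenum pressure.
print(open_area_sqft(perforated_tiles=0, lifted_solid_tiles=3))     # 12.0 sq ft
print(open_area_sqft(perforated_tiles=12, lifted_solid_tiles=0))    # 12.0 sq ft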
To maintain the integrity of the supply air plenum, avoid the following:
▪ Too many air distribution tiles. The number of perforated tiles should be carefully
determined to maintain proper pressurization. A typical 25 percent perforation tile
represents one square foot of free area. Higher perforation percentage tiles should
be used with caution, because they limit air distribution adjustment.
▪ Oversized cutouts. Custom cutouts in tiles are typically for cable passage, to fit
around support columns, and for other oddly shaped corners. Partial tiles are
sometimes created to fill in around perimeter walls. The number of cutouts should
be limited and carefully made. Oversized cutouts should be fitted with appropriate
sealing trim or filled with closed-cell foam.
▪ Poor fitting tiles. Only tiles that accurately fit the support grid should be used.
Replace any tiles that allow air to escape around the edges. Loose fitting partial
tiles along any perimeter walls should be replaced or fit with trim to seal the gaps.
▪ Perimeter penetrations. Check for holes in the subfloor perimeter walls. These
could be passages for cabling, conduit, or pipes and can constitute major leaks. Fill
them with appropriate materials such as closed-cell foam. Seal any cracks or joints
in perimeter walls and subfloor deck. Do not use any materials that might hinder the
functioning of expansion joints. Fix any gaps between the perimeter walls and the
structural deck or roof.
▪ Cable chases. Cable chases in PODs and into adjacent rooms can compromise air
pressure in the subfloor. Holes in columns that route cable between subfloor
plenum and ceiling plenum are a concern. The columns can act as chimneys
depleting subfloor pressure and pressurizing the ceiling void. A pressurized ceiling
void creates convection problems, diminishing the efficiency of the cooling system.
A vapor barrier is any form of protection against uncontrolled migration of moisture into
the data Centre. It could be simply a matter of plugging holes, or it could mean retrofitting
the structure of the data Centre to encapsulate the room. The added expense involved in
creating an effective vapor barrier will be returned in greater efficiencies in the
environmental support equipment.
▪ Avoid unnecessary openings. Open access windows, mail slots, etc., should not
be a part of the data Centre design. These allow exposure to more loosely
controlled surrounding areas.
▪ Seal perimeter breaches. All penetrations leading out into uncontrolled areas
should be blocked and sealed.
▪ Seal doorways. Doors and doorways should be sealed against unnecessary air and
vapor leaks. Place high-efficiency gaskets and sweeps on all perimeter doors.
▪ Paint perimeter walls. Paint all perimeter walls from the structural deck to the
structural ceiling to limit the migration of moisture through the building material
surfaces.
▪ Seal subfloor area. Seal the subfloor to eliminate moisture penetration and surface
degradation. The normal hardeners that are used in most construction will probably
not be adequate to seal the subfloor. The procedure and additional materials for this
process should be included in the building blueprints.
The network cabling infrastructure consists of all the devices and cabling that must be
configured for the data Centre to be connected to its networks, as well as the cabling
required to connect one device to another within a configuration (for example, connecting
disk devices to servers).
▪ “Points of Distribution”
▪ “Avoiding Spaghetti”
▪ “Verification”
Now, suppose one of those cables goes bad and you need to replace it. If it’s not labeled,
you need to physically trace the cable under the floor. The probability is that the cable you
need to trace is wrapped up in a bunch of other cables and will be difficult and time-
consuming to trace, confirm as the correct cable, and replace. There is a better way
to solve this problem. By knowing your connectivity requirements you can create a
modular design using points of distribution (PODs) which minimize unnecessary cabling
under the floor.
The connectivity requirements will be based on the type of connections the device has
(Cat5 or fibre) and how many of these connections you need for each device. For
example, a Sun StorEdge T3 array has one fibre connection and two Cat5 connections,
one for network connection and one for the physical console. You need the fibre
connection to transfer data to and from the Sun StorEdge T3 array. To configure,
administer, and monitor the Sun StorEdge T3 array through the network, you need a
connection to the network port through its Cat5 interface. If you want access to the
physical console as well, this is again through a Cat5 cable. Let’s say you want network
connectivity but not the physical console. For each Sun StorEdge T3 array you need one
multi-mode fibre cable and one Cat5 cable. With eight Sun StorEdge T3 arrays in a rack,
the connectivity requirement is eight multi-mode fibre and eight Cat5.
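The bookkeeping in this example can be tabulated per rack. The sketch below follows the Sun StorEdge T3 counts given above (one fibre plus one Cat5 when the console connection is omitted); the second device entry is a hypothetical server added only to show the pattern.

from collections import Counter

# (fibre, cat5) connections needed per device type -- example values only.
CONNECTIONS = {
    "storedge_t3": (1, 1),   # fibre data path plus Cat5 network management
    "server": (2, 2),        # hypothetical dual-pathed server, for illustration
}

def rack_requirements(devices):
    # Total fibre and Cat5 runs needed for the devices in one rack or RLU.
    totals = Counter()
    for device, qty in devices.items():
        fibre, cat5 = CONNECTIONS[device]
        totals["fibre"] += fibre * qty
        totals["cat5"] += cat5 * qty
    return totals

# Eight T3 arrays in one rack -> 8 fibre and 8 Cat5, as in the example above.
print(rack_requirements({"storedge_t3": 8}))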
Modular Design
In the past, when the cabling requirements for machines were less (maybe one or two per
machine), you could run the cables to one central point, usually the network room.
However, as you can see from the previous example, the number of connections has
increased by orders of magnitude. You can still run 2,680 cables back to the network
room, but the data Centre design philosophy dictates that you keep the design as simple
as possible.
Since we have segmented the floor into a given number of RLUs of particular types, we
can define an area on the floor that contains a certain number of RLUs which will
determine how many Cat5 and fibre connections the area will need. Repeat this process
for all areas of the floor. Each of these clusters of RLUs, and more specifically, their
network cabling requirements, can be looked at as a module. This also allows us to build
in some fudge factor. It is as likely as not that, over time, some RLUs will be over their
initial cabling requirements and others will be below. By grouping some of them together
we have the flexibility (another part of the design philosophy) to allocate an extra
connection from an RLU that is not in use to one that needs it. We can also locate
support devices, switches, terminal servers, and Cat5 and fibre patch panels for this
module somewhere within this cluster of RLUs.
You might need to connect a storage device on one side of the data Centre to a server on
the opposite side. There are two ways to do this. You can use the logic contained in
switches to move data from one device to another, or you can use the patch panels to
cross-connect one patch panel port to another. This basic design allows you to keep
connections local to an area for greater simplicity, but gives you the flexibility to connect
(logically or physically) from one module to another.
Points of Distribution
A Point Of Distribution (POD) is a rack of devices and patches that manages a certain
number of RLUs (which you can think of as a group of devices). PODs allow you to
distribute both the physical and logical networking cables and networking equipment into
modular and more manageable groups, and allow you to centralize any necessary cross-
patching. All of the cabling from a group of devices can connect to the network through
the POD. A data Centre might have dozens or hundreds of groups of devices, and each
group is served by its own POD.
The use of this modular, hierarchical, POD design, and having a POD every 16 to 24 RLUs
on the floor, allows you to have shorter cable runs from the machines and makes the
cables easier to trace. It also avoids tangled cables (“spaghetti”) under the floor.
Note – The components of a POD are contained in a rack of a given size, usually
speci ed in terms of rack units (U). 1U is equal to 1.75 inches in height. A typical
7 foot rack contains about 6.5 feet of usable rack space, making it 45U tall
(1.75" x 45 = 78.75"). When you calculate how many devices you can fit in your rack, you
will need to know the number of Us of each device.
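The rack-unit arithmetic in the note can be checked the same way. In the sketch below, the patch panel and cable management heights match the figures given later in this chapter, while the sub-switch and terminal server heights are assumed values for illustration.

U_INCHES = 1.75
USABLE_U = 45          # usable space in a typical 7 ft rack

def fits_in_rack(device_heights_u, usable_u=USABLE_U):
    # True if the listed devices fit in the usable rack space.
    return sum(device_heights_u) <= usable_u

# Example POD rack: fibre patch panel (5U), Cat5 patch panel (2U),
# two cable managers (2U each), four sub-switches (assumed 2U each),
# one network terminal server (assumed 1U).
pod = [5, 2, 2, 2, 2, 2, 2, 2, 1]
print(sum(pod), "U used of", USABLE_U, "->", fits_in_rack(pod))
print("Patch panel group height:", (5 + 2 + 2 + 2) * U_INCHES, "inches")   # 19.25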
▪ Network sub-switches
It is not necessary for the device to be on the network to be connected to the NTS, but
within a functioning data Centre, the devices probably will be on the network. Having the
console on the network can be a potential security problem. However, there are ways to
protect yourself. Most NTSs have an authentication system to help restrict access. Also,
the NTSs would be on your administrative network, and one or more forms of
authentication should be required to gain access to that network.
Cross-Patch Ports
The Cat5 and fibre ports allow cross-patching when needed. These cross-patches are
significantly fewer in number than if you were to run all the needed cables to a single
central point. This increases ease of manageability and decreases cost.
The patches from each POD terminate in the network room. Also, each of the patches is
uniquely identified with the same identifier (label) at both ends, in the POD and in the
network room. They should also be tested to verify that they meet the specification you
are using. There are devices, commonly called cable testers, that are attached to each
end of the cable. Then a series of data streams are sent that verify that the cable meets
its specification, and the results are compared against what the specifications should be.
To meet specifications, the results must be within certain tolerances. Specifications for
both Cat5 and multi-mode fibre are available from the IEEE.
The network equipment in a POD is more likely to change over time than the cross-patch
ports. To design for this flexibility, the highest density patch panels should be used to
minimize the space they take up in each POD. The highest density for Cat5 and fibre
patch panels, as of this writing, is 48 ports of fibre in 5U and 48 ports of Cat5 in 2U. If you
need 48 ports of each, that’s 96 cables! You need a way to keep all those cables
organized. Cable management units for each of the two patch panels are 2U. The patch
panel setup for a POD that contains 1 fibre patch panel, 1 Cat5 patch panel, and 2 cable
management units is 11U (19.25 in.). The wires that go from the patch panels in the
PODs to the network room should be bundled together and run to the network room
above the raised floor, usually in a separate cable tray in the ceiling plenum, to maximize
airflow under the raised floor.
Sub-Switches
Let’s say that you will have four networks in the data Centre. Three of these networks are
for production and one is the administrative network. Each POD must have a sub-switch
on the administrative network. You determine that you need connectivity to all production
networks from each POD. So, for production and administrative network connectivity you
need four sub-switches per POD. Each of these sub-switches is connected to a master
switch for that network in the network room. Remember that you can only transfer data
through the network hierarchy at the maximum rate of the narrowest device. If you have
100BaseT Ethernet feeding your servers on the production networks, and only a
100BaseT interface connecting that sub-switch to the master switch, one server could
take up all the bandwidth to the master switch. In this case, it would be better to use a
1000BaseT interface to connect the sub-switches to their master switch.
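The bandwidth reasoning above amounts to an oversubscription ratio: total downstream port bandwidth divided by uplink bandwidth. The sketch below uses an assumed 24-port sub-switch to show why a 1000BaseT uplink gives far more headroom than a 100BaseT uplink.

def oversubscription(ports, port_mbps, uplink_mbps):
    # Ratio of total downstream bandwidth to uplink bandwidth.
    return (ports * port_mbps) / uplink_mbps

# 24 servers at 100BaseT behind a single 100BaseT uplink:
print(oversubscription(24, 100, 100))     # 24.0 -> one busy server can saturate the uplink
# The same servers behind a 1000BaseT uplink:
print(oversubscription(24, 100, 1000))    # 2.4  -> far more headroom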
Cable Connectors
The RJ-45 connector is the de facto standard for Cat5 copper wiring. However, in fibre
cabling you have several options: LC, SC, and ST type connectors. SC is currently the
most common because it is the standard connector type for most current Gigabit
Interface Converters (GBIC) used in fibre networking and SAN applications. The LC
connector is half the size of an SC connector, and it is likely, since space is always at a
premium, that LC will eventually surpass SC as the most common fibre connector type. In
trying to design for future requirements, you should install fibre with LC connectors in
your PODs. If you need to convert from LC to SC, you can use a device called a dongle. If
necessary, you can use a similar type of dongle to convert the much older ST type
connector to SC or LC.
FIGURE: LC, SC, and RJ-45 connectors, and an LC-to-SC dongle
Avoiding Spaghetti
Spaghetti is great on a plate with a nice Bolognese sauce. It isn’t good in a data Centre. It
is all too common for the network cabling in data Centres to get tangled up on top of and
under the floor due to bad or non-existent cabling schemes. Keep the following
suggestions in mind:
▪ Use the correct length of Cat5 or fibre cables to go from point to point. This avoids
the need to coil or otherwise bundle excess cable.
▪ Route cables, whenever possible, under the tiles of the raised floors, preferably in
cable trays. Don’t lay cable on the ground where it can block airflow and create
dust traps.
▪ Label each cable at both ends so that the floor doesn’t need to be raised to follow
cable routing.
▪ Avoid messy cable routing on the floor as shown in the following figure. This creates
several hazards and liability issues.
These labels, just like labels for the patch panels, power outlets, and circuit breakers,
need to be uniquely identified. Over the life of a data Centre you could go through a lot of
cables. If you used a 2-character, 3-digit scheme (for example, AS257), you would have
675,324 usable, unique labels (26 × 26 × 999 = 675,324). That should be enough.
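The size of that label space is easy to verify, and a generator in the same scheme can be used to pre-print labels. The sketch below assumes labels run from AA001 to ZZ999.

import itertools
import string

def label_space():
    # Two letters (AA-ZZ) followed by a three-digit number (001-999).
    return 26 * 26 * 999                      # 675,324 usable labels

def generate_labels():
    # Yield labels in order: AA001, AA002, ... ZZ999.
    for a, b in itertools.product(string.ascii_uppercase, repeat=2):
        for n in range(1, 1000):
            yield f"{a}{b}{n:03d}"

print(label_space())                                   # 675324
print(list(itertools.islice(generate_labels(), 5)))    # ['AA001', ..., 'AA005']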
Color coding is also useful as an identifier. In the above scenario, you would need five
colors: one for the administrative network, three for the production networks, and one for
the NTSs. Using yellow cables, for example, for the administrative network implies a
warning. These cables must be plugged only into the administrative network. This makes
it easier to identify which sub-switch is on which network. You should have a label on the
switch, but somebody might forget to check the label. It’s much harder to miss plugging a
purple cable into the sub-switch with all the purple cables. If you can’t use different
colored cables, consider using color coded labels on the cables.
Verification
Each patch panel port should be verified and certified by the installer as part of the
contract. You should also have cable testers, both Cat5 and fibre, available in the data
Centre. With these you can verify that the patch-panel ports were done correctly and, if
you have questionable cables, you can find out whether they are good or not. This helps
to eliminate doubt.
In the data Centre design, a shipping, receiving, and staging area is an important
consideration, particularly if the equipment will involve many reconfigurations in the
lifetime of the Centre. Often, shipping and receiving take place in one area, usually near a
loading dock. Staging can happen in the same area, or it could be in a separate location
(recommended). Finally, storage facilities must be considered.
▪ “Staging Area”
▪ “Storage”
Some important factors should be kept in mind during the planning stages:
▪ Safety. Safety should be the primary concern of loading dock design. Loading,
unloading, warehousing, and distribution are rated among the most hazardous of
industries. A single accident can cost thousands to crores of rupees in insurance,
downtime, and liability costs. Consider safety systems carefully. Good lighting, good
drainage, good ventilation, vehicle restraints, dock bumpers, striping, indicator
lights, wheel chocks, safety barriers, and hydraulic dock levelers are just a few of
these considerations.
▪ Flexibility. Advances occur in trucking and material handling which can dramatically
affect the design of the docking facilities. Future trends must be taken into
consideration. Loading docks must be equipped with features that ensure
workability and safety throughout their lifetime.
▪ Durability. Loading docks take a lot of abuse. The effort and expense of using
quality materials and durable designs will pay for itself in the long run.
FIGURE 10-1 Loading Docks With a Large Area in Which Trucks Can Easily Maneuver
▪ Bigger trucks. Trucks are getting longer and wider. Many trucks are now 102 inches
wide and can be 80 feet long, or longer. If such large-capacity trucks will be used,
the docking area and the maneuvering area must be designed to accommodate
them.
▪ Truck access. Some truck trailer floors are as low as 36 inches to increase ceiling
height. To accommodate these trucks, the dock must have portable ramps, truck
levelers, dock levelers, or some other way to equalize the distance between dock
floor and trailer floor.
▪ Climate control. Dock seals and shelters help to maintain the internal climate,
protect merchandise, create security, save energy, and keep the area safe from rain,
snow, and wind that pose a threat to human safety.
▪ Use specialists. Every loading dock has its own special requirements. Consult with
qualified loading dock specialists during the design stages.
▪ Area for maneuverability of heavy equipment and vehicles. This must take the
turning radius of large vehicles into consideration. Also consider ventilation areas for
exhaust fumes.
▪ The path from receiving to the data Centre should be unobstructed, have wide
enough access, and have ramps available at different levels.
Staging Area
At least one dedicated staging area should be part of the data Centre design. Staging is
an area between the loading dock and the equipment’s final destination, and is often used
for equipment configuration. Equipment coming from receiving on its way to the data
Centre, as well as equipment moving from the data Centre out to storage or shipping, will
usually be processed in the staging area.
This area should be outside the data Centre, but should be maintained within similar
parameters. Contamination will be generated by packing, unpacking, and component
handling and this must be isolated from the operational equipment. The staging area also
involves a lot more human and machine traffic that can add to and stir up contaminants.
▪ The packing and unpacking of equipment can create a lot of contaminants, so this
should always be done in the staging area.
▪ Equipment should be stored, if even for a short time, in the staging area. The same
security measures that limit and monitor physical access should be used in the
staging area just as they would be used in the data Centre itself.
One of the things often overlooked in a staging area is the space required to pack and
unpack equipment. A Sun Fire 15000 server requires a minimum of 18 linear feet to
unpack the machine from its shipping material. Just to pack or unpack this machine, you
need a clear area 18 feet long by 10 feet wide (180 sq ft). It’s better to have too much
space than not enough, so consider allowing 20 feet by 10 feet (200 sq ft) for this
process.
This area must also be able to handle the weight requirements of all the equipment.
Consider the amount of packing and unpacking you might do in parallel. There is usually
more than one rack for a single configuration in the data Centre, and these racks often
arrive at the loading dock at the same time. Realistically, if you only have one area of 200
sq ft, you can only unpack one of these racks at a time.
Storage
It is often necessary to retain packing materials in case something must be shipped back
to the vendor, for example, in the event of a component failure. Since this material can
create contaminants, it should be stored in an area with no running computer equipment.
Packing materials can also take up a lot of space, so using expensive raised floor space,
or even office space, is probably not a cost-effective solution. You might also need to find
economic storage for large quantities of inexpensive equipment, like network cable. On
the other hand, expensive equipment and critical spare parts should be stored in the data
Centre or staging area, because restricting access to this type of equipment is prudent.
Consider the following:
▪ Will the storage area be close to the data Centre? If not, how far away?
▪ Document how equipment was packed for ease in repacking. Label everything!
Potential hazards in a data Centre can range from mildly inconvenient to devastating.
Some are difficult to avoid, but knowing what the potential hazards are in the data Centre
area is the first step in preparing to avoid or combat them.
▪ “Types of Hazards”
▪ “Fire”
▪ “Flooding”
▪ “Earthquakes”
▪ “Miscellaneous Disasters”
▪ “Security Problems”
▪ “Noise Problems”
▪ Earthquakes
▪ High winds
▪ Hurricanes
▪ Tornados
Manual controls for various data Centre support systems should be conveniently located.
Controls for fire, HVAC, power, abort or silence, and an independent phone line should be
grouped by appropriate doorways. All controls should be clearly labeled, and concise
operating instructions should be available at each station.
Keep the following human safety guidelines in mind when planning the data Centre.
▪ Ensure that personnel are able to exit the room or building efficiently
▪ Avoid blockages and doors that won’t open easily from the inside
▪ Clearly mark fire extinguishers and position them at regular intervals in the room
▪ Clearly mark first aid kits and position them at regular intervals in the room
Fire
Fire can occur in a data Centre by either mechanical failure, intentional arson, or by
natural causes, though the most common sources of fires are from electrical systems or
hardware. Whether fire is measured in its threat to human life, damage to equipment, or
loss of business due to disruption of services, the costs of a fire can be staggering. The
replacement cost for the devastation caused by a fire can number in the tens or hundreds
of crores of rupees.
A fire can create catastrophic effects on the operations of the room. A large-scale fire can
damage electronic equipment and the building structure beyond repair. Contamination
from smoke and cinder from a smoldering fire can also damage hardware and incur heavy
costs in cosmetic repairs. Even if the actual fire is avoided, discharge of the fire
suppression medium could possibly damage hardware.
Fire Prevention
Several steps should be taken to avoid fires. Compliance with NFPA 75 will greatly
increase the fire safety in the Centre. The following precautions should be taken in the
design and maintenance of the data Centre and support areas:
▪ No smoking. Smoking should never be allowed in the data Centre. Signs should be
posted at entryways and inside. If you think this could be a problem, designing in a
nearby smoking area for breaks will reduce or eliminate smoking in the data Centre.
▪ Check HVAC reheat coils. Check the reheat coils on the air conditioner units
periodically. If left unused for a while, they can collect dust that will smolder and
ignite when they are heated up.
▪ Preserve the data Centre “cocoon.” Periodically inspect the data Centre perimeter
for breaches into more loosely controlled areas. Block any penetrations. An alarm or
suppression system discharge caused by conditions outside the Centre is
unacceptable.
Physical Barriers
The first line of fire defense and containment is the actual building structure. The rooms of
the data Centre (and storage rooms) must be isolated by fire-resistant walls that extend
from the concrete subfloor deck to the structural ceiling. The floor and ceiling must also
be constructed of noncombustible or limited combustible materials able to resist the fire
for at least an hour. Appropriately controlled firebreaks must also be present.
The HVAC system should be dedicated to the controlled area of the data Centre. If this is
not possible, appropriately rated fire dampers must be placed in all common ducts or
plenums.
When data Centre fires occur, they are commonly due to the electrical system or
hardware components. Short circuits can generate heat, melt components, and start a
fire. Computer room fires are often small and smoldering with little effect on the room
temperatures.
The early warning fire detection system should have the following features:
▪ Since it can get very noisy in the data Centre, a visual alert, usually a red flashing
siren light, should also be included in the system.
Modern gas systems are friendlier to hardware and, if the fire is stopped before it can do
any serious damage, the data Centre might be able to continue operations. Water
sprinklers are sometimes a viable alternative if saving the building is more important than
saving the equipment (a water system will probably cause irreparable damage to the
hardware). Gas systems are effective, but are also shorter lived. Once the gas is
discharged, there is no second chance, whereas a water system can continue until the
fire is out. Water systems are highly recommended in areas that contain a lot of
combustible materials such as storerooms.
These decisions must be weighed, but in the end it could be local ordinance, the
insurance company, or the building owner who will determine what suppression system
must be installed. There is no reason why multiple systems can’t be used, if budget
allows.
Following are descriptions of a few different suppression systems. Note that the last two
are not recommended, but are described in the event that such legacy systems exist in
the facility. If either or both of these are in place, they should be changed out for safer
systems.
▪ FM200. This is the recommended suppression system. The FM200 uses the gas
heptafluoropropane, which is quickly dispersed around the equipment. It works by
literally removing heat energy from the fire to the extent that the combustion
reaction cannot be sustained. It works quickly, is safe for people, doesn’t damage
hardware, won’t interrupt electrical circuits, and requires no post-discharge cleanup.
With this system there is the possibility that the data Centre will be back in business
almost immediately after a fire.
▪ Dry pipe sprinkler. Dry pipe sprinkler systems are similar to wet pipe systems with
the exception that the pipes are not flooded with water until detection of a fire
threat. The advantage is less likelihood of leaks. The disadvantages are the longer
amount of time before discharge and the possibility of ruining equipment. If this
system is used, a mechanism should be installed that will deactivate all power,
including power from UPSs and generators, before the system activates.
▪ Wet pipe sprinkler. Wet pipe sprinkler systems use pipes that are full at all times,
allowing the system to discharge immediately upon the detection of a fire threat.
The advantage is speed in addressing the fire. The disadvantages are the possibility
of leaks and of ruining equipment. If this system is used, a mechanism should be
installed that will deactivate all power, including power from UPSs and generators,
before the system activates.
▪ Halon 1301. Not recommended. Halon is an ozone-depleting gas that has been
replaced in favor of the more environmentally friendly FM200. Halon 1301 systems
are no longer in production as of January 1994, and legacy systems can only be
recharged with existing supplies.
Manual means of fire suppression should also be on hand in the event that automatic
systems fail. Following are descriptions of the two backup systems:
Flooding
Like fire, flooding can be caused by either equipment failure or by natural causes.
Consider the following:
▪ How often, if ever, does flooding occur around the data Centre area?
▪ Can the data Centre be located in a higher area, safe from flooding?
▪ Troughs to channel water out of the data Centre should be installed underneath
pipes. These troughs should have the same or greater flow rate as the pipes
themselves.
▪ It is possible to have a pipe within a pipe. If the interior pipe develops a leak, the
water would be contained in the outer pipe.
▪ Water detection sensors should be placed along the runs of the pipes and at
plumbing joints where most leaks are likely to start.
▪ In cold climates and near HVAC units, insulate the pipe to prevent freezing.
Earthquakes
▪ Can the data Centre be located on lower floors where there would be less sway?
▪ Can racks be secured to the floor and ceiling as a means of seismic restraint?
▪ Other steps required to ensure the safety of personnel should be outlined in local
building codes.
Miscellaneous Disasters
There are many possible disasters to consider, and the effects of most will fall into
categories that cause one or more problems.
▪ Wind-based (hurricanes and tornados). Use the same appropriate guidelines for
earthquakes, as these events will cause the building to shake or vibrate.
▪ Water-based (severe storms and tsunamis). Use the appropriate guidelines for
water penetration and leak detection on doors and windows.
Security Problems
The security of the data Centre is critical. Data Centres not only contain valuable
computer hardware, but the data in the machines is usually worth exponentially more
than the tens or hundreds of crores of rupees that the equipment costs.
Access should be restricted to only authorized and trained personnel. Several levels of
barriers should be in place. The use of “traps” (a space between two doors) is a good
idea for security as well as preventing the infiltration of particulate matter. People enter
the exterior door and the interior door cannot be opened until the exterior door is closed.
The data Centre should be positioned so that it does not use an exterior wall. Avoid
exterior windows in your data Centre. If your data Centre does use an exterior wall, place
barriers on the outside of the wall to slow down vehicles that might try to smash through.
(This might sound ridiculous, but it has happened.)
For many corporations, their information is their business. If it sounds like you are
fortifying this thing to be a mini Fort Knox, you are on the right path. Consider the
following:
▪ Are there windows or doors that could prove to be a security risk? Can they be
blocked?
▪ Where will the Command Centre be located? Will it have a separate entrance?
▪ Will the data Centre only be accessible through the Command Centre?
▪ Will people be able to remotely access the data Centre from anywhere? Will there
be access restrictions to certain portions?
Noise Problems
With processors getting faster and disks getting more dense, the cooling requirements in
data Centres are rising. This means more fans and blowers to move more conditioned air.
Noise can be a big problem in some data Centres. The use of Command Centres, and
devices like network terminal servers that allow remote access to a machine, allow users
to work in a less noisy environment. However, you will need to have people in your data
Centre some of the time.
Ear protection should be used in particularly noisy rooms, and might even be required.
The installation of noise cancelling equipment is useful but expensive. If people are
working remotely most of the time, it might not be worth the cost. Ear protection might be
adequate. If you do have people in the data Centre quite often, the investment in noise
cancellation equipment might be worthwhile.
Particles, gasses, and other contaminants can impact the sustained operations of the
computer hardware in a data Centre. These contaminants can take many forms, some
foreseeable and some not. The list of possible contaminants could be localized to the
district (local factory pollutants, airborne dusts, etc.), or they could be generated more
locally somewhere at the site. Airborne dust, gasses, and vapors should be kept within
defined limits to minimize their impact on people and hardware.
▪ “Effects of Contaminants”
▪ “Avoiding Contamination”
Contaminants that affect people and equipment are typically airborne, so, obviously, it is
important to limit the amount of potential contaminants that cycle through the data Centre
air supply to prolong the life of all electronic devices. Potential contaminants can also be
settled, making them harder to measure. Care must be taken that these aren’t agitated by
people or mechanical processes.
Gaseous Contaminants
Excessive concentrations of certain gasses can cause corrosion and failure in electronic
components. Gasses are of particular concern because of the recirculating airflow pattern
of the data Centre. The data Centre’s isolation from outside influences can multiply the
detrimental influences of any gasses in the air, because they are continually cycled
through equipment for repeated attack. Other factors, such as the moisture content of the
air, can influence environmental corrosivity and gaseous contaminant transfer even at
lower concentration levels; higher concentrations should be a concern.
Note – In the absence of appropriate hardware exposure limits, health exposure limits
should be used.
Particulate Contaminants
The most harmful contaminants are often overlooked because they are so small. Most
particles smaller than 10 microns are not usually visible to the naked eye, and these are
the ones most likely to migrate into areas where they can do damage. Particulates as big
as 1,000 microns can become airborne, but their active life is short and they are typically
arrested by most filtration systems. Submicronic particles are more dangerous to the data
Centre environment because they remain airborne much longer and can bypass filters.
Some of the most harmful dust particle sizes are 0.3 microns and smaller. These often
exist in large quantities, and can easily clog the internal filters of components. They have
the ability to agglomerate into large masses, and to absorb corrosive agents under certain
psychrometric conditions. This poses a threat to moving parts and sensitive contacts. It
also creates the possibility of component corrosion.
The removal of airborne particulate matter should be done with a filtering system, and the
filters should be replaced as part of the regular maintenance of the data Centre.
Human Movement
Human movement within the data Centre space is probably the single greatest source of
contamination. Normal movement can dislodge tissue fragments, dander, hair, or fabric
fibers from clothing. The opening and closing of drawers or hardware panels, or any
similar activity, can release additional particles into the air.
All unnecessary activity and processes should be avoided in the data Centre, and access
should be limited only to trained personnel. All personnel working in the room, including
temporary employees and janitorial staff, should be trained in the basic sensitivities of the
hardware and to avoid unnecessary contact. Tours of the facility are sometimes
necessary, but these should be limited and traffic should be restricted to avoid accidental
contact with equipment.
The best solution to keeping human activity to a minimum in the data Centre is to design
in a Command Centre with a view into the data Centre room. Almost all operations of the
Centre will take place here, and those visiting the facilities can see the equipment from
there. The data Centre should never be situated in such a way that people must go
through the equipment room to get to unrelated parts of the building.
Subfloor Work
Hardware installation and reconfiguration involves a lot of subfloor activity, and settled
contaminants can be disturbed, forcing them up into the equipment cooling airstreams.
This is a particular problem if the subfloor deck has settled contaminants or has not been
sealed. Unsealed concrete sheds fine dust particles and is also susceptible to
efflorescence (mineral salts brought to the surface of the deck through evaporation or
hydrostatic pressure). It is important to properly seal the subfloor deck and to clean out
settled contaminants on a regular basis.
Stored Items
The storage and handling of hardware, supplies, and packing materials can be a major
source of contamination. Cardboard boxes and wooden skids or pallets lose fibers when
moved and handled. Particles of these have been found in the examination of sample
subfloor deposits. The moving and handling of stored items also agitates settled
contaminants already in the room. Also, many of these materials are flammable and pose
a fire hazard. All of these are good arguments for making a staging area for packing and
unpacking an important design criterion.
FIGURE 12-1 and FIGURE 12-2 show unnecessary clutter and particulate matter in a data
Centre room.
Effects of Contaminants
Destructive interactions between airborne particulate and electronic equipment can
happen in many ways, some of which are outlined in the following subsections.
Physical Interference
Corrosive Failure
Component failures can occur from the corrosion of electrical contacts caused by certain
types of particulate. Some particulates absorb water vapor and gaseous contaminants
which adversely affect electrical components. Salts can grow in size by absorbing water
vapor (nucleating). If the area is sufficiently moist, salts can grow large enough to
physically interfere with a mechanism, or cause damage by forming corrosive salt
solutions.
Short Circuits
The accumulation of certain types of particles on circuit boards and other components
can create conductive pathways, thus creating short circuits. Many types of particulate
are not inherently conductive, but can become conductive by absorbing moisture from
the air. When this happens, the problems can range from intermittent malfunctions to
component failures. To avoid this problem, care should be taken with both the proper
filtration of air and careful control of humidification.
Thermal Failure
Thermal failures occur when cooling air cannot reach the components. Clogging of
filtering devices can cause restricted airflow, resulting in overheating of components.
Heavy layers of accumulated dust on hardware components can form an insulative layer
that can lead to heat-related failures. Regular replacement of air filters and cleaning of
components will help to avoid this problem.
Avoiding Contamination
All surfaces within the controlled zone of the data Centre should be kept clean. This
should be done by:
▪ Keeping contaminants out. Keeping contaminants from entering the data Centre
should be done by minimizing traffic through the room, adequate air filtering,
avoidance of improper chemical use, and positive pressurization of the room. Also,
a properly constructed data Centre uses only non-shedding and non-gassing
materials. If the data Centre is a retrofit of an existing structure, it might be
necessary to change out or seal some existing construction materials.
Exposure Points
Breaches in the controlled zone of the data Centre must be controlled and monitored. All
doors must fit snugly in their frames and be sealed with gaskets and sweeps. Automatic
doors should be carefully controlled to avoid accidental triggering, especially by people
without proper security clearance. A remote door trigger might be necessary so that
personnel pushing carts can easily open the doors. In highly sensitive areas, a design with
double sets of doors and a buffer in between will limit direct exposure to outside
contamination.
Subfloor Void
The subfloor void in a downward-flow air conditioning system functions as the supply air
plenum. This area is pressurized by forced conditioned air, which is then introduced to the
data Centre room through perforated tiles. Since all air moving into the room must travel
through the subfloor void, it is critical that this area be kept at a high level of cleanliness.
Contaminant sources can include degrading building materials, operator activity, or
infiltration from areas outside the controlled zone.
Clutter in the subfloor plenum should be avoided. Tangled cables or stored materials can
form “air dams” that allow particulate matter to settle and accumulate. When these items
are moved, the particulate is stirred up and reintroduced to the supply airstream. Store
supplies in outside storage areas, and keep all subfloor cabling organized in wire basket
cable trays.
All surfaces of the subfloor area, particularly the concrete deck and the perimeter walls,
should be properly sealed, ideally before the raised floor is installed. Unsealed concrete,
masonry, and similar materials degrade over time. Sealants and hardeners used in normal
construction are not meant for the surfaces of a supply air plenum. Only appropriate
materials and methodologies should be used in the encapsulation process. Here are
some guidelines:
▪ Spray applications should never be used in an online data Centre. The spraying
process forces sealant particulate into the supply airstream. Spray applications
could be appropriate if used in the early stages of construction.
▪ The encapsulant must have a high flexibility and low porosity to effectively cover the
irregular surface textures and to minimize moisture migration and water damage.
In data Centres with multiple rooms, the most sensitive areas should be the most highly
pressurized.
Filtration
Warm air from the data Centre hardware returns to the HVAC units where it is cooled and
reintroduced to the room to continue the cooling cycle. The air change rate in a data
Centre is much greater than a typical office environment and proper filtration is essential
to arresting airborne particulate. Without high-efficiency filtration, particulate matter will be
drawn into computers with the probability of clogging airflow, gumming up components,
causing shorts, blocking the function of moving parts, and causing components to
overheat.
The following figure shows the filters placed in the top of an HVAC unit.
The filters installed in recirculating air conditioners should have a minimum efficiency of
40 percent Atmospheric Dust-Spot Efficiency (ASHRAE Standard 52.1). Air from outside
the building should be filtered with High Efficiency Particulate Air (HEPA) filters rated at
99.97 percent efficiency (DOP Efficiency MIL-STD-282) or greater. To prolong their life, the
expensive high-efficiency filters should be protected by multiple layers of lower grade
prefilters that are changed more frequently. The first line of defense should be low-grade
20 percent ASHRAE Atmospheric Dust-Spot Efficiency filters. The next level of filtration
should consist of pleated or bag type filters with efficiencies between 60 and 80 percent.
All of these filters should fit properly in the air handlers; gaps around the filter panels
allow unfiltered air to bypass the filtration and reach the hardware.
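As a rough illustration of how these staged ratings combine, the short sketch below treats each nominal efficiency as a single-pass arrest rate and computes the fraction of particulate that penetrates all three stages. The 70 percent figure assumed for the middle stage and the single-pass treatment are simplifications, not a description of how ASHRAE dust-spot ratings are actually measured.

# Rough illustration only: treat each nominal efficiency as a single-pass
# arrest rate and compute the particulate fraction that penetrates the chain.
stages = {
    "20% ASHRAE pre-filter": 0.20,
    "60-80% pleated/bag filter": 0.70,   # assumed mid-range value
    "99.97% HEPA (DOP)": 0.9997,
}

penetration = 1.0
for name, efficiency in stages.items():
    penetration *= (1.0 - efficiency)
    print(f"after {name}: {penetration:.6%} of incoming particulate remains")

Even in this simplified form, the numbers show why the inexpensive prefilters earn their keep: they take most of the coarse load so the expensive HEPA stage lasts longer.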
Another, possibly less obvious reason for maintaining a clean data Centre has to do with
psychology. Operators working in a clean and organized data Centre will be more inclined
to respect the room and keep it clean and organized, thus maintaining its efficiency.
Visitors to the data Centre will show similar respect and interpret the overall appearance
of the room as a commitment to quality and excellence.
When designing the data Centre, keep regularly scheduled decontaminations in mind. A
well-designed data Centre is easy to maintain.
Introduction
Big Data is currently becoming a critical research focus along with the growth of Internet
of Things (IoT) and Internet of Everything (IoE) technologies. The European Commission
defines Big Data as “large amounts of data produced very quickly by many different
sources such as people, machines or sensors”. The Big Data phenomenon emerged
with the rapid growth of social networks, machine to machine (M2M) communication, and
the Internet of Things paradigm. Continuously advancing IoT and IoE
technologies face challenges in managing, storing and processing Big Data to
deliver flawless services, especially for latency-sensitive applications such as video
streaming or health management systems. Relying on current technologies such as cloud
computing is not efficient for addressing the requirements of Big Data management.
Hence, new technologies are needed to reduce the complexity, ease management and
boost the processing of Big Data. In this paradigm, Big Data is complex and
multidimensional and involved with multifaceted relations between generators, end users
and intermediate intelligent processing units. Thus, it requires appropriate technologies
and infrastructures for managing, storing and processing. The pre-processing of raw
sensory data is one of the most efficient ways to reduce the load of Big Data in cloud
computing. To this end, a fog computing platform at the edge of the network is
introduced to reduce the processing load from the cloud by delegating some simple and
frequent tasks to the fog (preprocessing). A virtualized and hierarchical architecture of fog
computing provides a distributed computing and storage platform near to the edge
sensors for local and latency sensitive applications. In this model, the raw data generated
in edge sensors is pre-processed in the local fog, and more meaningful and efficient data
with reduced volume and velocity are sent to the cloud for further and global processing
and storage. Expected benefits of utilizing the fog computing technology in the IoT
architecture include: local and hence faster processing and storage for geo-distributed
and latency sensitive applications, reduced communication overhead, and reduced
volume and velocity of Big Data before sending it to the cloud.
It is evident that recently the research trend for dealing with Big Data has focused on
technological advancement in computing platforms and architectural designs for IoT
based systems. Yet, relying merely on the advancement of computing technologies for
Big Data processing and management will not completely address the involved issues. In
this report, approaching the problem from a different perspective, we aim to reshape the
existing raw, passive and unstructured form of data in IoT to an intelligent and active
form, while preserving and enhancing many other important parameters such as energy
efficiency, scalability, throughput, quality of service as well as privacy and security. For
this, we will introduce a new efficient, intelligent, self-managed and lightweight data
structure that we call Smart Data. We believe that Smart Data will revolutionize the current
perspective of data and will open many potential research directions to tackle emerging
Big Data issues. The Smart Data is a package of encapsulated structured or semi-
structured data generated by IoT sensors, a set of metadata, and a virtual machine. It is
controlled and managed through the metadata, which accommodates a set of rules that
govern how the data is accessed, aggregated and processed.
Big Data
Big Data in general is characterized by five major features, namely Volume, Velocity,
Value, Veracity and Variety, which are known as the 5 Vs of Big Data. Volume in Big Data
refers to the size, scale or amount of the data that is required to be processed. Velocity of
Big Data represents the speed of data generation. It becomes a problematic issue if data
is generated faster than it can be analyzed and stored. Variety of Big Data expresses the
complexity of data as structured, semi-structured, unstructured or mixed data. Value of the
data reflects the added value of the data to the underlying process. In fact, the added
value of the data represents the gap between the business demand and technological
solutions for managing the Big Data. Veracity of Big Data refers to the consistency and
trustworthiness of the data being processed. Data veracity ensures the integrity,
availability, and accountability of the data. In addition to these general characteristics, a
new feature, geo-distribution, is also introduced by emerging IoT applications. In IoT-
based applications, sensors in different geographical locations, which have to be
managed as a coherent whole, are generating the data. It is evident that centralized
approaches such as cloud computing are not efficient for such a highly distributed
infrastructure because of the low latency requirements of these applications. One efficient
approach to deal with this issue is devising local computing/storage units for each
geographical location to respond to local application requests that require fast interaction
times, and larger computing/storage units (i.e. cloud) for global processing and storage for
processes that affect an IoT-based system as a whole.
Fog Computing
In a typical IoT infrastructure, raw data are generated through hundreds of thousands of
sensors. Cloud computing has been widely adopted in this paradigm because it can
absorb the immense processing load caused by the collected data, reduce service
delivery cost and achieve better interoperability with other cooperating systems. Cloud
computing is being recognized as a success factor for IoT, providing ubiquity, reliability,
high performance and scalability. However, due to its geographically centralized nature as
well as communication implications, cloud computing-based IoT fails in applications that
require a very low and predictable latency, are geographically distributed, involve fast-moving
mobile devices, or are large-scale distributed control systems. Fog computing is a promising technology
proposed and developed by Cisco, complementing the cloud computing services by
extending the computing paradigm to the edge of the network. Bringing the
computational intelligence geographically near to the end users will provide new or better
services for latency-sensitive, location-aware and geo-distributed applications that, due to
their characteristics, are not feasible through cloud computing alone. In this paradigm,
smart devices and communication components with both computation and storage
capabilities, i.e., intelligent routers, bridges, gateways, and smart devices such as tablets
and mobile phones compose the fog computing platform near to the edge of the network.
IoT Architecture
A typical IoT-based system is composed of a set of sensors and/or actuators with built-in
communication capabilities, connected to a gateway through wireless technologies such
as Wi-Fi, Bluetooth or ZigBee. As we are dealing with resource-constrained devices,
lightweight protocols, such as CoAP, 6LoWPAN, and MQTT, are used for communication
at this level of the system. The gateways, on the other hand, are usually devices with
higher processing power and storage capacity. They are responsible for harvesting data
from sensors and redirecting data to processing units (typically cloud), or the other way
around, from processing units to actuators. In some IoT systems, the gateways also carry
out some basic pre-processing on raw sensory data. Communication between gateways
and cloud platforms is accomplished through high performance communication
technologies such as 3G and broadband technologies over Internet protocols such as
TCP/IP, IPv4, and IPv6. At the other side of the network, end user systems and
applications access processed data and visualize results of various analyses (Figure 1).
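As a rough sketch of the sensor-to-gateway hop described above, the snippet below publishes a single reading over MQTT with the paho-mqtt client. The broker address, client ID and topic are placeholders chosen for illustration, and the constructor call assumes a paho-mqtt 1.x style client.

# Illustrative only: one sensor reading published to an assumed gateway broker.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="room1-temp-sensor")   # hypothetical sensor ID
client.connect("gateway.local", 1883)                  # assumed gateway broker address

reading = {
    "sensor": "room1-temp-sensor",
    "value_c": 22.4,
    "timestamp": time.time(),
}
# QoS 1 gives at-least-once delivery, a common choice for telemetry.
client.publish("site/room1/temperature", json.dumps(reading), qos=1)
client.disconnect()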
Fog computing introduces an intermediate layer between the edge network or the end
nodes and the traditional cloud computing layer. The fog layer can be implemented using
almost the same components and resources and can utilize many of the same
mechanisms and attributes as the cloud layer. Fog computing extends the computing
intelligence and storage to the edge of the network. Furthermore, it also extends the
computing and storage capability of gateways in conventional IoT systems to a broader
and faster distributed, hierarchical computing platform. Fog computing does not
replace the cloud’s components and services; instead, it aims to provide
a computing and storage platform physically closer to the end nodes, provisioning a new
breed of applications and services with an effective interplay with the cloud layer. The
expected benefit is faster computation times for requests that require low latency. This
plays a crucial role in the promotion of the Internet of Things (IoT). Utilizing fog computing
reduces the overhead of communication with the cloud through the Internet and provides
a faster response for applications that require lower latency. This is made possible by
locally computing IoT data in the fog layer and forwarding only the data that does not
require real-time computation, or that requires higher processing power, to the cloud layer. For
example, in the case of a smart community, where homes in a neighborhood are
connected to provide community services, lower latency can be expected for making
urgent decisions, and so data is sent to geographically closer computation units
instead of a fairly distant cloud node.
In addition to the requirement of low latency, fog computing is a promising technology for
dealing with Big Data generated from IoT-based systems with a vast number of nodes
spread across a large area. IoT-based systems introduce a new dimension for
characterizing Big Data, namely geo-distribution, along with its generally known
characteristics. Cloud based IoT systems dealing with Big Data have to process a large
amount of data at any time. Fog computing, as a middleware, can preprocess raw data
coming from the edge nodes before sending them to the cloud layer. As a result, the fog
layer not only reduces the amount of work needed in the cloud layer by generating
meaningful results from raw data, but also reduces the monetary cost of computing in the
cloud layer. Figure 2 illustrates the architecture of an IoT system which uses fog
computing.
Smart Data
Management of Big Data is one of the most important issues in emerging IoT
technologies. The massive amount of data generated through M2M communication will
require new data management solutions. In fact, at a low level of a large IoT system, the
data generated by sensors has raw and passive nature with a high volume, velocity and
variety. Conventional methods for managing (computing and storing) such data do not
provide sufficient solutions. Hence, new and efficient technologies are required to ease
the management efforts and to reduce computational and network overheads. Typically,
cloud computing is recognized as a scalable and robust approach for processing and
storage of data in IoT systems. Pre-processing of raw sensory data is one of the most
efficient ways to reduce the load of Big Data on the cloud. To this end, a fog computing
platform at the edge of the network is introduced to reduce the processing load from the
cloud by delegating some simple and frequent tasks from the cloud to the fog (pre-
processing).
In order to reduce the management efforts, computing load and communication overhead
imposed by Big Data, and to improve efficiency of Big Data computing in general, we
approach the problem from a different perspective. We aim to reshape the existing raw,
passive and unstructured form of data in IoT to an intelligent and active form, while
preserving and enhancing many other important attributes such as energy efficiency,
scalability, throughput, quality of service, and privacy and security. We refer to our new
data structure as “Smart Data”. Smart Data are first generated by sensors in a very basic
form and then they evolve through their lifecycle by being processed and converted to
meaningful information as well as complemented and amended with new features such
as security and privacy. A fog computing platform is the main enabler for implementing
our Smart Data concept.
Generally, the Smart Data is a standalone unit that through the resources provided by the
underlying hierarchical fog computing platform undergoes a series of pre-processing
steps, evolving by getting more attributes, such as security and privacy aspects, and
involved rules. IoT sensors generate a basic and lightweight version of Smart Data. It
evolves (grows) when it travels through the hierarchical fog computing system towards
the cloud, merging with other cells. The process is the opposite when data moves from
the cloud towards the actuators, i.e., data are transformed stepwise into a distributed set
of elementary cells. Figure 5 illustrates the general structure of a Smart Data cell. The
Smart Data is composed of three main parts: payload data, metadata and virtual
machine. In IoT based scenarios, where data is generated and transferred continuously in
a resource-constrained environment, communication activities can consume a
considerable amount of energy. In our Smart Data, data generated by each sensor are
encapsulated into Smart Data bundles and are communicated to their gateways in
specific intervals. The main objective of encapsulating a set of data already at the sensor
level, instead of constantly sending discrete data, is to reduce the communication
overheads in a very resource-constrained environment as well as to reduce the data
velocity in the Big Data context. The payload component of the Smart Data undergoes a
series of processing or pre-processing steps and is thereby converted into more
meaningful information. Processing or pre-processing of data includes different
operations such as aggregation, filtering, compression, and encryption.
The metadata part of Smart Data contains key information such as the source of data
(sensors), destination of data, the physical entity which data belongs to, timestamps,
current status and logs as well as rules for accessing, fusing or diffusing, and processing
data, for example. In addition, the metadata part stores information extracted by
processing the payload data. Such information obtains more accurate values when the
data is processed and aggregated with other data from different sensors or the same
sensor over a longer period of time.
The virtual machine part, in turn, acts as a platform that enables and manages the
execution of the rules specified in the metadata part. The VM at the very beginning stage
contains only basic application code. Then, it evolves by adding other code modules of
the application as new functionality is required.
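The three-part cell described above can be pictured with the minimal sketch below; the class and field names are illustrative assumptions, and the virtual machine part is reduced to a dictionary of loadable code modules purely for clarity.

# A minimal sketch of a Smart Data cell: payload, metadata, and a "VM" modelled
# here as a set of loadable code modules. Field names are illustrative only.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class SmartDataCell:
    payload: List[Any]                  # encapsulated raw or pre-processed readings
    metadata: Dict[str, Any]            # source, destination, timestamps, rules, logs
    modules: Dict[str, Callable] = field(default_factory=dict)

    def load_module(self, name: str, func: Callable) -> None:
        """Attach a code module (e.g. aggregation, encryption) fetched from a repository."""
        self.modules[name] = func

    def apply(self, name: str) -> None:
        """Run a loaded module on the payload and log the event in the metadata."""
        self.payload = self.modules[name](self.payload)
        self.metadata.setdefault("log", []).append(f"applied:{name}")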
To clarify the idea, let us consider a simple room temperature control system using IoT
sensors and actuators as an example. Each sensor senses the temperature within its
designated area with a frequency of one measurement every 2 seconds. It is obvious that
continuously transmitting every single sensor reading will cause high network traffic. The
Smart Data reduces this overhead by encapsulating a series of raw sensor readings,
storing the data in a Smart Data bundle and transmitting it at specific time intervals. A
basic form of Smart Data is built in each sensor in each interval and is transmitted to the
gateways. The data at this stage is raw and has a very basic structure, but it is pre-
processed with operations such as filtering, aggregation and compression during its
lifecycle. The metadata part of the Smart Data includes information that describes the
data. In this example, the average temperature in each sensing interval is calculated and
stored in the metadata already at the sensors. Once the Smart Data is transmitted to the
gateways, it is aggregated with the previously transmitted Smart Data and hence, an
average temperature over a longer period will be stored in the metadata. Moreover, the
Smart Data also can be aggregated with other Smart Data collected from other sensors,
for example temperature sensors of other rooms, and the average temperature of the
whole house in specific time slots as well as the average temperature of the whole house
in a longer period could be calculated and stored in the metadata.
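A minimal sketch of this temperature example is given below, using plain dictionaries for the bundles; the field names, interval lengths and sample values are illustrative assumptions rather than a prescribed format.

# Sensor side: encapsulate one interval of readings into a bundle with an average
# in its metadata. Gateway side: merge bundles into a longer-period average.
from statistics import mean


def build_bundle(sensor_id, readings, t_start, t_end):
    """Encapsulate one interval of raw readings (e.g. one sample every 2 s)."""
    return {
        "payload": list(readings),
        "metadata": {
            "source": sensor_id,
            "interval": (t_start, t_end),
            "avg_temp_c": mean(readings),
        },
    }


def aggregate(bundles):
    """Gateway-side merge: combine several bundles into one summative bundle."""
    all_readings = [r for b in bundles for r in b["payload"]]
    return {
        "payload": all_readings,
        "metadata": {
            "sources": sorted({b["metadata"]["source"] for b in bundles}),
            "avg_temp_c": mean(all_readings),
        },
    }


room1 = build_bundle("room1", [21.8, 22.0, 22.1], 0, 6)
room2 = build_bundle("room2", [23.4, 23.1, 23.0], 0, 6)
print(aggregate([room1, room2])["metadata"]["avg_temp_c"])   # whole-house average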
In order to avoid the overhead of integrating the whole application code into each Smart
Data, at an early stage Smart Data includes only basic application code that provides some
basic functionalities such as communication and lightweight encryption. A new module of
the application code is integrated into the base code whenever there is a need for new
functionality. Each module contains a set of program code that provides a certain
service or accomplishes a certain operation on the data. For example, there could be an
aggregation module, an encryption module, and compression modules.
A lifecycle of Smart Data is a time span in which a data cell is generated, processed,
stored, used and destroyed. Figure 6 illustrates the Smart Data life cycle. The Smart Data
technology facilitates efficient monitoring and controlling of data during its lifecycle. The
idea is that throughout its lifecycle data is traceable, linkable and accountable. This is
made possible by logging the events that Smart Data has undergone within the integrated
metadata part of a data cell. From the data flow perspective (Figure 7) a basic form of
Smart Data is first generated in sensors. At this point, Smart Data is considered
immature. Immature Smart Data contains a semi-structured and encapsulated set of raw
data in its payload. The data is stored in the payload of the Smart Data in a very basic
format already in the sensors. This semi-structured data needs to be re-formatted and
structured according to the application’s context to make complex analysis and access
possible for the application. Furthermore, some basic information, such as the source of
data, destination of data, and time stamps are specified in the metadata part of each
smart data cell already in the sensors. The payload and metadata of the Smart Data are
encrypted using lightweight symmetric ciphers such as Tiny Encryption Algorithm (TEA),
PRESENT Cipher and HIGHT Cipher in the sensors to ensure their integrity and
confidentiality during inter-node transfers.
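For illustration, a minimal single-block version of TEA, one of the lightweight ciphers named above, is sketched below. It is shown only to convey why such ciphers suit constrained sensors; a real deployment would use a vetted cryptographic library and a proper mode of operation rather than this bare block function.

# Tiny Encryption Algorithm (TEA) on one 64-bit block (two 32-bit words) with a
# 128-bit key (four 32-bit words). Illustrative sketch, not production crypto.
MASK32 = 0xFFFFFFFF
DELTA = 0x9E3779B9


def tea_encrypt_block(v, key, rounds=32):
    v0, v1 = v
    s = 0
    for _ in range(rounds):
        s = (s + DELTA) & MASK32
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK32
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK32
    return v0, v1


def tea_decrypt_block(v, key, rounds=32):
    v0, v1 = v
    s = (DELTA * rounds) & MASK32
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK32
        v0 = (v0 - (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK32
        s = (s - DELTA) & MASK32
    return v0, v1


key = (0x0123, 0x4567, 0x89AB, 0xCDEF)        # illustrative key only
cipher = tea_encrypt_block((42, 7), key)
assert tea_decrypt_block(cipher, key) == (42, 7)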
Once the Smart Data has been transmitted from the sensors to their corresponding
gateways, they are decrypted and then aggregated with other Smart Data collected from
the neighbor sensors. The objective of the data aggregation is to combine the Smart Data
from different sensors of an IoT application and transform them into a single Smart Data
cell that is smaller in size. Then, the aggregated Smart Data is encrypted and access
control policies for its payload are set in its metadata at this point. For this purpose, the
Smart Data cell communicates with the code repository unit to fetch the required code
modules and rules for the access control and aggregation. The data aggregation in Smart
Data is done in an intelligent way. Instead of the gateways performing the aggregation,
Smart Data cells have the ability to aggregate with each other using the pre-specified
rules which are set in their metadata parts and required intelligence provided by the code
modules downloaded from the code repository.
The gateways in IoT systems are usually resource unconstrained devices and responsible
for managing the sensors and their data. In the case of Smart Data, the gateways provide
computing and storage resources as well as communication interfaces to the Smart Data.
In the gateways, after aggregation, stronger encryption mechanisms such as
Advanced Encryption Standard (AES) could be applied to Smart Data cells. At this point,
Smart Data becomes semi-mature, having aggregated data and developed its metadata
to a higher level of complexity.
The semi-mature Smart Data then undergoes a series of further pre-processing tasks,
such as filtering and compression, through the next higher levels of the fog computing
hierarchy and becomes mature. Furthermore, the Smart Data might be aggregated with
other Smart Data collected from the neighbor networks, updating their metadata and VM
parts if required. Filtering involves elimination of noise from the data. Noise refers to
useless or unwanted records in the data that degrade the quality of data. Spurious
readings, measurement errors, and background data are three main causes of noise in the
data. Filtering makes further analysis and processing of the data easier and more
accurate. In some applications, in order to significantly decrease the amount of data and
to preserve the system’s energy efficiency, the filtering process eliminates normal features
of monitored parameters and only reports any abnormal trend of those parameters. Such
filtering rules are also set within the metadata part of the Smart Data. For example, in the
case of a video streaming application whose aim is to detect motion in the
environment, filtering out the part of a video stream that contains no movements
and keeping and processing only the part that contains movements of an object will
significantly reduce the size of the data that needs to be processed. This will also
considerably reduce the network traffic. Clustering techniques provide an efficient way to
filter the data.
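A minimal sketch of such an "abnormal readings only" rule is shown below, using a moving-average baseline; the window size and deviation threshold are illustrative assumptions of the kind that could be stored in the Smart Data metadata.

# Forward only readings that deviate from the recent average by more than a threshold.
from collections import deque
from statistics import mean


def filter_abnormal(readings, window=10, threshold=2.0):
    """Yield only readings that stray from the moving baseline by > threshold."""
    history = deque(maxlen=window)
    for value in readings:
        if len(history) == window and abs(value - mean(history)) > threshold:
            yield value
        history.append(value)


stream = [22.0] * 15 + [22.1, 27.5, 22.0, 22.2]   # one abnormal spike
print(list(filter_abnormal(stream)))               # -> [27.5]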
In addition to combining the Smart Data collected from different sensors or from the same
sensor in different time periods, the aggregation process also involves transforming data
from different sources into a single summative data. Such summative data will have a
smaller size compared to the sum of the original data before the aggregation, which results
in lower network traffic and improved performance in processing the data. The
aggregation of sensory data will also reduce the variety of Big Data in Smart Data.
Depending on the application, filtering and aggregation might be applied on the smart
data in the fog computing network hierarchy until the Smart Data reaches a desired level
of quality. So, defining an optimal fog computing cluster in which the Smart Data is
processed with the least energy consumption and time, and with the best values for the
five Vs of the data, is a challenging research problem in designing the Smart Data.
The mature Smart Data will have significantly less volume and velocity and higher value
and veracity. During its lifecycle, semi-mature or mature smart data would be compressed
and stored in the local memory systems provided by the fog nodes, so that it would be
swiftly available for local applications and processes. On the other hand, mature data
from each local fog node in multiple geographical areas is sent to the cloud for global
processing and stored for long-term purposes. Also, specific rules for destroying Smart
Data can be set in the metadata part to destroy the payload part in the case of detected
security breaches, or if a given validity period expires. Table 1 briefly compares immature,
semi-mature and mature Smart Data.
Figure 7: Data Flow Diagram for Smart Data in an IoT based system.
In our model, utilizing the virtualized architecture of fog computing, we integrate Smart
Data as virtual fog computing nodes in an existing fog computing platform (Figure 3&4). In
this model, we bring fog computing not only to the nearest point (gateways) to the edge
of the network (sensors) but also the sensor nodes themselves are involved and are
organic parts of fog computing. Smart Data is generated by sensors and sent to the
gateways managing the sensor nodes. Then it is (pre-)processed and stored in the local
fog networks. The sensor nodes have built-in communication capability and are able to
communicate with their neighbors. This has two main advantages. First, in case a sensor
node has lost its connection with a gateway, it can redirect the data to another node in its
vicinity that has connectivity with the gateway. This is especially crucial and effective for
mobile sensors. Second, the sensor nodes can collaborate with each other for very swift
decision-making that does not require heavy processing.
The sensors are the lowest layer of our fog computing hierarchy (Figure 4). Since the
sensor nodes in our model are not primarily meant for actual data processing, we
consider the sensors as the level zero (L0) of the computing hierarchy. Once Smart Data
is generated by the sensor nodes, a set of encapsulated data is sent to the gateways
within a particular time frame. The gateways in this model can also be mobile. The
nearest available gateway which receives Smart Data directly from a set of sensors is
considered the current managing fog node, or the gateway fog node, for these underlying
sensors. Moreover, as it is the nearest point to the edge sensors, it is considered the level
one (L1) fog node for this set of the sensor nodes. Each fog node is capable of
communicating with other fog nodes in its neighborhood.
A gateway fog node has the primary responsibility for managing Smart Data received
from its underlying reporting sensors, initiating and managing fog computing-based
processing and storage for this data and also forwarding actuation commands to specific
actuators within its area. The gateway fog nodes, once having received Smart Data from
multiple sensors, supervise aggregation of this data and assign required computing and
memory resources for the involved processing. They are also responsible for setting
access control policies for smart data according to predetermined rules. Stronger
encryption of data is also applied at this point. Depending on the underlying IoT service,
the gateway nodes also establish distributed local storage within the local fog computing
platform or (pre-)process the contents of the Smart Data in the fog computing platform.
Evolution of Smart Data involves procedures such as aggregation, filtering, compression,
encryption, and access control.
The fog computing platform in our model is hierarchical, which enables stepwise
evolution of Smart Data at each level of hierarchy. At the level one (L1) of this hierarchy,
the gateway nodes are connected directly to the sensor nodes (L0) from one side and to
their upper fog level (L2) from the other side. The nodes at each level are able to
communicate with each other (typically with their neighbors at the same level) and
accomplish distributed tasks. Once the tasks have been completed the nodes pass data
to their parent node at the next higher level of hierarchy. This stepwise process is
continued until data reaches a desired degree of maturity at the highest hierarchy level of
the fog computing platform. Consequently, the architectural model is scalable and
extendable both vertically and horizontally. The identification of fog nodes is done
according to their level and position within a specific level. For example, Fnm identifies a
fog node at the level “n” and the position “m” within the level “n”.
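The naming scheme can be sketched as below, where each node carries its level n, its position m within that level, and a reference to its parent; the class and method names are illustrative assumptions, not part of any Smart Data specification.

# Fnm-style identifiers for a hierarchical fog platform.
from dataclasses import dataclass
from typing import Optional


@dataclass
class FogNode:
    level: int                       # n: 0 = sensors, 1 = gateways, higher = regional nodes
    position: int                    # m: position within the level
    parent: Optional["FogNode"] = None

    @property
    def node_id(self) -> str:
        return f"F{self.level}{self.position}"

    def path_upwards(self):
        """List the node IDs data would traverse on its way up the hierarchy."""
        node, path = self, []
        while node is not None:
            path.append(node.node_id)
            node = node.parent
        return path


regional = FogNode(level=2, position=1)
gateway = FogNode(level=1, position=3, parent=regional)
print(gateway.path_upwards())        # -> ['F13', 'F21']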
Generally, the tasks at the lower levels of hierarchy are more detailed and concern
relatively small amounts of data, while at the higher levels of hierarchy more general tasks
on larger amounts of data are carried out. The data collected from the lower levels is
aggregated at the higher levels. Moreover, a higher hierarchy level deals with data from a
larger geographical region than a lower hierarchy level. From the real-time performance
point of view, task allocation is done in such a way that processes with low/tight latency
requirements are executed at lower hierarchy levels, closer to users. This enables real-time
tasks to execute and deliver results very fast, improving user experience. Such processes
have some specific characteristics. First, they typically use data related to fewer sensors
(corresponding to smaller geographical areas), and therefore a relatively small amount of
data is involved. Second, generally at lower levels, a larger number of individual
computing devices are involved in processing compared to higher levels. Tasks with no or
loose latency requirements can be executed at higher levels of hierarchy. Smart Data
processed in the fog computing platform eventually becomes mature and refined and is
stored either in the distributed fog storage or sent to the cloud utilizing high performance
communication technologies such as 3G and broadband connections. In case a local
decision is made within the fog computing platform, the resulting commands will be sent
to corresponding actuators through the involved gateways, without disturbing the cloud.
Therefore, by pre-processing data locally in the fog computing platform, the workload of
cloud computing can be significantly reduced.
With wide application of IoT/IoE technologies, new techniques will be required to deal
with Big Data. In the Big Data paradigm, the data generated by sensors has a passive
and raw form that requires a significant amount of processing and storage resources.
Consequently, these new technologies are facing challenges in regard to management,
storage and processing of Big Data. In this report, by reshaping the current raw and
passive structure of data into an intelligent and active form, we introduced the Smart Data
concept. This concept, combined with fog computing technology, has the potential to
revolutionize the current perspective of data in IoT and will open many potential research
opportunities to tackle emerging Big Data issues. Smart Data takes advantage of
hierarchical and virtualized model of fog computing and provides better means for pre-
processing Big Data originated from IoT sensors. As the next phase of this project, our
purpose is to implement the Smart Data model in a realistic scenario and to design and
develop efficient solutions for relevant operations such as filtering, aggregation,
compression and encryption based on the proposed Smart Data concept.
When deployed strategically and paired with adept human oversight, artificial intelligence
can generate a host of new efficiencies for next-generation data Centres.
Whether they maintain their own in-house data Centres or rely exclusively on offsite data
Centres, IT professionals need to ensure their servers are equipped to handle the
increased demands generated by a wide range of emerging technologies that promise to
reshape the corporate landscape in the years to come. Companies that fail to incorporate
the revolutionary potential of these technologies, from cloud computing to big data to
artificial intelligence (AI), into their data Centre infrastructure may soon find themselves
well behind the competitive curve.
In fact, Gartner predicts that more than 30 percent of data Centres that fail to sufficiently
prepare for AI will no longer be operationally or economically viable by 2020. In light of
this stark reality, it’s incumbent upon companies and third-party vendors alike to invest in
solutions that will help them make the most of these cutting-edge technologies.
Data Centres consume an enormous amount of power, even in handling the computing
needs of a single mid-sized company. While some of this consumption stems directly
from servers’ compute and storage operations, much of it stems from data Centres’
cooling functions. It’s absolutely essential for companies to keep their servers cool in
order to guarantee their proper operation (that’s Data Centre 101), but at the industrial
data Centre scale, this energy usage can quickly become a major financial burden. As
such, any tool or technique that can help a company improve its data Centre cooling
efficiency represents an immense value add.
In pursuit of better data Centre energy efficiency, Google and DeepMind recently
experimented with using AI to optimize their cooling activities. According to the Alphabet-
owned tech pioneers, the idea was that an AI-powered recommendation system, even
one that only makes minor improvements across a wide network of data Centres, could
cut down on energy usage, slash costs, and make facilities more environmentally
sustainable.
Thus far, the project has been an overwhelming success: the application of DeepMind’s
machine learning algorithms in Google’s data Centres has reduced the energy used for
cooling by as much as 40 percent, without compromising server performance.
Unfortunately, the number of qualified candidates for these positions has remained fairly
stagnant. As such, data Centre management teams are facing a severe staffing
shortage that may one day threaten companies’ ability to adequately maintain their digital
assets. In order to keep pace with the growing demand being placed on data Centres,
corporate stakeholders must now make a choice to either fight tooth and nail for limited
talent or invest in solutions that allow data Centres to thrive in the absence of extensive
human oversight.
Thankfully, AI technology offers just such a solution, assisting with a range of server
functions without automating IT management entirely. AI platforms can autonomously
perform routine tasks like systems updating, security patching, and file backups while
leaving more nuanced, qualitative tasks to IT personnel. Without the burden of handling
each and every user request or incident alert, IT professionals can assume oversight roles
over tasks that previously required their painstaking attention, affording them more time
to focus on bigger picture management challenges.
For both individual companies and third-party data Centre vendors, this partnership-
based approach provides a happy medium between outright automation and chronic
understaffing. In five or ten years’ time, this “hybrid” management model is likely to be the
norm throughout the data Centre industry. Machines are not going to replace human
workers, at least not anytime soon, but they can help overworked IT teams do
everything that needs to be done to keep a data Centre running smoothly.
Improving security
Data Centres are prone to different kinds of cyber threats. Cybercriminals are always
finding new ways to obtain data from data Centres. For this purpose, hackers regularly
develop more advanced malware strains and plan cyber attacks that can stealthily
infiltrate organizations' networks. With such malware, hackers can gain access to
confidential data of millions of users. For instance, a security researcher recently reported
a massive data breach that exposed 773 million emails and 21 million passwords. This
data breach can be extremely dangerous as the breach has accumulated data from
various sources, resulting in 1.6 billion unique combinations of email addresses and
passwords. Such data breaches are a common occurrence for data-driven businesses.
Hence, every organization hires cybersecurity professionals to analyze new cyber attacks
and create prevention and mitigation strategies. However, finding and analyzing cyber
attacks is extremely labor intensive for cybersecurity experts.
Organizations can deploy AI in the data Centre for data security. For this purpose, AI can
learn normal network behavior and detect cyber threats based on deviation from that
behavior. AI deployed in the data Centre can also detect malware and
identify security loopholes in data Centre systems. Moreover, AI-based cybersecurity
can thoroughly screen and analyze incoming and outgoing data for security threats.
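A deliberately simplified sketch of this deviation-based idea is shown below: a baseline of "normal" traffic volume per interval is learned from past samples, and new intervals are flagged when they stray too far from it. The feature, the sample values and the threshold are assumptions; a production system would use far richer features and models.

# Flag traffic intervals that deviate from the learned baseline by > 3 standard deviations.
from statistics import mean, stdev


def build_baseline(samples):
    return mean(samples), stdev(samples)


def is_anomalous(value, baseline, z_threshold=3.0):
    mu, sigma = baseline
    return sigma > 0 and abs(value - mu) / sigma > z_threshold


normal_mb_per_min = [120, 115, 130, 118, 125, 122, 119, 128]   # training window
baseline = build_baseline(normal_mb_per_min)

for observed in [121, 126, 410]:                               # 410 MB/min: possible exfiltration
    print(observed, "anomalous" if is_anomalous(observed, baseline) else "normal")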
Conserving energy
Running a data Centre can consume large amounts of electricity. A significant portion of
the energy is utilized on cooling systems for data Centres. In the US alone, data Centres
consume more than 90 billion kilowatt-hours of electricity in a year. On a global scale,
data Centres use around 416 terawatt-hours of electricity. Hence, energy consumption is a
significant problem for data Centres. Additionally, electricity consumption will double
every four years as global data traffic increases. To conserve energy, organizations are
continually looking for new solutions.
Tech giants are using AI in the data Centre for conserving energy. For instance, Google
has deployed AI to utilize energy in its data Centres efficiently. As a result, Google
reduced its data Centres' cooling system energy consumption by 40%. Even
40% of savings can be equivalent to millions of dollars' worth of energy savings for an
industry giant like Google. Likewise, every data-driven business can deploy AI in the data
Centre for energy savings. AI can learn and analyze temperature set points, test flow
rates, and evaluate cooling equipment. Organizations can also train their AI by collecting
critical data with the help of smart sensors. With this approach, AI can identify sources of
energy inefficiency and recommend corrective adjustments.
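The underlying idea can be pictured with a toy sketch: fit a simple model of cooling power and inlet temperature against the supply-air setpoint from logged sensor data, then choose the setpoint with the lowest predicted power that still respects a safe temperature limit. The data, the linear models and the 25 degree limit below are all illustrative assumptions, not figures from any real deployment.

# Toy setpoint optimization from hypothetical logged data.
import numpy as np

setpoints = np.array([18.0, 19.0, 20.0, 21.0, 22.0])        # supply-air setpoint (C)
cooling_kw = np.array([520.0, 480.0, 450.0, 425.0, 405.0])  # measured cooling load (kW)
inlet_c = np.array([21.0, 22.0, 23.0, 24.0, 25.0])          # resulting server inlet temp (C)

power_model = np.poly1d(np.polyfit(setpoints, cooling_kw, 1))
inlet_model = np.poly1d(np.polyfit(setpoints, inlet_c, 1))

candidates = np.arange(18.0, 24.1, 0.5)
feasible = [s for s in candidates if inlet_model(s) <= 25.0]   # assumed safe inlet limit
best = min(feasible, key=power_model)
print(f"recommended setpoint: {best:.1f} C, predicted cooling load: {power_model(best):.0f} kW")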
Reducing downtime
Data outages in data Centres can lead to significant downtimes. Therefore, organizations
hire skilled professionals for monitoring and predicting data outages. However, manually
predicting data outages can be a complicated task. The staff in data Centres have to
decode and analyze multiple issues to find the root cause of different problems. The
implementation of AI in the data Centre can be a feasible solution for this problem.
Artificial intelligence can monitor server performance, network congestion, and disk
utilization to detect and predict data outages. With the help of AI, organizations can
leverage advanced predictive analytics to track power levels and identify potential
defective areas in the systems. For instance, an artificial intelligence-based predictive
engine can be deployed in an organization to predict and identify data outages in the data
Centre, and built-in signatures can recognize users that might be affected. Then, AI
systems can autonomously implement mitigation strategies to help the data Centre
recover from the data outage.
Deploying AI in the data Centre can also help distribute workloads across servers with the
help of predictive analytics. AI-powered load-balancing algorithms can learn from past
data to distribute workloads efficiently, and AI-based server optimization can help find
possible flaws in data Centres, reduce processing times, and resolve risk factors more
quickly than traditional approaches. With this approach, organizations can maximize
server optimization and performance.
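
One hedged way to picture prediction-driven load balancing is sketched below: each server's near-term load is forecast from its recent history, and new work is routed to the server with the lowest predicted load. The forecasting model, server names and data are illustrative.

    # Minimal sketch of prediction-driven load balancing (scikit-learn assumed).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)

    def forecast_next_load(recent_load):
        """Fit a small trend model to recent load samples and extrapolate one step ahead."""
        t = np.arange(len(recent_load)).reshape(-1, 1)
        model = LinearRegression().fit(t, recent_load)
        return float(model.predict([[len(recent_load)]])[0])

    # Recent CPU-load history (%) for three hypothetical servers.
    history = {
        "server-a": 50 + np.cumsum(rng.normal(0.5, 1.0, 30)),   # trending upward
        "server-b": 70 + np.cumsum(rng.normal(-0.3, 1.0, 30)),  # trending downward
        "server-c": 60 + rng.normal(0, 1.0, 30),                # roughly flat
    }

    predictions = {name: forecast_next_load(load) for name, load in history.items()}
    target = min(predictions, key=predictions.get)
    print("Predicted load:", {k: round(v, 1) for k, v in predictions.items()})
    print("Route next workload to:", target)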
Monitoring equipment
Data Centre engineers need to monitor equipment regularly to detect flaws and the need
for repairs. However, there is always a possibility that engineers will miss some
deficiencies in the system, leading to equipment failures. Such failures can be expensive
for organizations, which may need to repair or even replace equipment, and they can
cause downtime, resulting in low productivity and poor delivery of services to customers.
Equipment failures are a common occurrence in data Centres because they process
increasing volumes of data daily; such high processing loads heat the entire facility and
put constant stress on data Centre equipment. If a cooling system develops an undetected
fault, that heat can quickly escalate into widespread equipment damage.
Organizations can leverage AI in the data Centre for active equipment monitoring.
Artificial intelligence can identify defects in data Centre equipment using pattern-based
learning, drawing on smart sensors installed in the equipment. If the AI system finds
excessive or unusually low vibrations or unwanted sounds, it notifies data Centre
engineers about possible defects. With this approach, implementing AI in the data Centre
can predict potential equipment failures and help avoid downtime.
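
A very small sketch of the sensor-based idea above is shown here: vibration readings that deviate sharply from a unit's own recent baseline are flagged for the engineering team. The sensor values, window size and threshold are made up for the example.

    # Illustrative vibration-monitoring sketch; data and thresholds are hypothetical.
    import numpy as np

    def flag_anomalies(readings, window=50, z_threshold=4.0):
        """Return indices where a reading deviates more than z_threshold sigmas from the trailing window."""
        flagged = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mean, std = baseline.mean(), baseline.std()
            if std > 0 and abs(readings[i] - mean) / std > z_threshold:
                flagged.append(i)
        return flagged

    rng = np.random.default_rng(4)
    vibration = rng.normal(0.5, 0.02, 1_000)   # normal fan-bearing vibration (mm/s)
    vibration[800:] += 0.3                     # simulate a bearing starting to fail

    for i in flag_anomalies(vibration):
        print(f"Sample {i}: vibration {vibration[i]:.2f} mm/s -- notify engineers")
        break  # one notification is enough for the sketch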
The advent of AI in the data Centre is showing promising potential across industry
sectors. AI is likely to play an ever larger role for data Centres and colocation service
providers, proactively helping with disaster recovery and regulatory compliance. Hence,
data Centres and colocation service providers should utilize AI to keep up with emerging
technology trends and gain a competitive edge.
Born to satisfy the demands of gamers, graphics processing units (GPUs) are finding
a new niche in enterprise data Centres. Their parallel architecture makes them well suited
to traditional workloads that can benefit from GPU-accelerated computing. GPUs can
enhance performance for data Centre applications that involve complex math functions
and large data sets, such as parallel processing, SQL database calculation, image
recognition, machine learning and big data analysis.
Chip designers originally developed GPUs to process graphics algorithms for the gaming
industry because central processing units (CPUs) weren't equipped to render 3D images
on a 2D display or to render special effects. The idea was to install GPU hardware as a
specialized chip -- often deployed on an expansion card such as a PCI Express card -- to
offload graphics from an application. The GPU performs the rendering, applies the
required effects to craft each frame of the image and then sends the image to a display
attached directly to a port on the GPU card.
Because a GPU has thousands of cores and was designed to handle split-second
movement of graphics on a large screen, if you point it at rows and columns of data, it's
incredibly fast at doing analytics processing.
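
A hedged sketch of what "pointing a GPU at rows and columns of data" can look like is shown below, using CuPy (a NumPy-compatible GPU array library) to run the same column-wise aggregation on CPU and GPU. It assumes an NVIDIA GPU with CUDA and the cupy package installed; the table size is arbitrary and the observed speed-up will vary with hardware.

    # Same column aggregation on CPU (NumPy) and GPU (CuPy); CUDA-capable GPU assumed.
    import time
    import numpy as np
    import cupy as cp

    rows, cols = 10_000_000, 8
    cpu_table = np.random.rand(rows, cols)      # "rows and columns of data"
    gpu_table = cp.asarray(cpu_table)           # copy the table into GPU memory

    start = time.perf_counter()
    cpu_result = cpu_table.sum(axis=0)          # aggregate every column on the CPU
    cpu_time = time.perf_counter() - start

    start = time.perf_counter()
    gpu_result = gpu_table.sum(axis=0)          # same aggregation across thousands of GPU cores
    cp.cuda.Stream.null.synchronize()           # wait for the GPU kernel before timing
    gpu_time = time.perf_counter() - start

    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
    assert np.allclose(cpu_result, cp.asnumpy(gpu_result))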
When CPU clock speeds doubled every two years, there was little demand for the kind of
speed GPU-accelerated computing enables. But as improvements in clock speed tapered
off, users with high compute demands began exploring GPUs as an alternative. GPUs offer
slower clock speeds than CPUs, but they can process thousands of threads
simultaneously. Early adopters included scientific computing and real-world
modeling applications, which must simultaneously process the influence of multiple
variables.
GPUs, however, can also handle much of the complex math needed for non-gaming
applications. For suitable workloads, GPU hardware can perform roughly 2.5 times faster
than a CPU, which means an application can potentially use the GPU to do more than
double the work, or to deliver a calculation in a fraction of the time, compared with a
general-purpose CPU.
The only caveat is that the application must have the proper code to support GPU
hardware. Today, software vendors embed GPU code in tools for deep learning, container
orchestration, and cluster management and monitoring. Not all software supports GPU
processing; admins should check legacy, proprietary and mainframe applications before
using GPUs.
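
A common defensive pattern that follows from this caveat is to check for GPU support at run time and fall back to the CPU when none is present. The sketch below shows that pattern using PyTorch as one example of a GPU-aware framework; the matrix sizes are arbitrary.

    # Check-before-you-rely-on-it pattern with PyTorch (torch assumed installed).
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("Running on:", device)

    # The same tensor math runs on either device; only the placement changes.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b          # executed on the GPU when one is present, else on the CPU
    print(c.shape)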
GPU hardware works well as workload accelerators in the data Centre. For example,
admins can use GPUs in virtual desktop infrastructure setups to support graphics and
rendering tasks for virtual desktops.
Many major cloud providers now offer instances for GPU-accelerated computing, which
can lower the cost of entry for a company experimenting with an analytics or visualization
project. However, a number of factors could slow the growth of GPU-powered analytics.
With systems as large, complex and expensive as enterprise database platforms, there is
understandable hesitance to rip and replace, and most of the companies offering
GPU-accelerated database systems are small startups that don't have the name
recognition, sales force or support infrastructure of the traditional database giants. Even
so, for companies looking to quickly analyze large data sets, the raw performance gains
will be impossible to ignore: GPU-accelerated database systems are capable of replacing
entire server clusters for some workloads.
GPUs are not a silver bullet for all workloads. An application must be refactored or
specifically written to run on a GPU. Some processes are obviously parallel and benefit
the most from GPUs; other processes are highly sequential and run best on CPUs.
A smarter approach
The ability of a single GPU to process simultaneous threads faster than a cluster of CPU-
based servers makes GPUs ideal for another type of emerging workload. In the last few
years, we've seen a big bang of artificial intelligence and machine learning, which happens
to be the ideal parallel workload.
There are several approaches to artificial intelligence, but one that is growing rapidly --
thanks to the emergence of GPUs for general-purpose computing -- is deep learning.
The deep learning angle on AI aims to emulate the human brain's array of neurons with a
virtual neural network of computer nodes. A network is made up of several layers, with
each node performing a specific function; the output of each node is then weighted to
deduce a probable result. The concept is decades old, but, until recently, only very large
clusters and supercomputers were capable of creating a robust neural net.
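
As a hedged illustration of the layered network of weighted nodes described above, the sketch below defines and trains a tiny neural network with PyTorch; the layer sizes, labels and training data are invented for the example, and real deep-learning workloads of this shape are exactly what GPUs accelerate.

    # Minimal layered neural network sketch (PyTorch assumed; data is synthetic).
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Three layers of nodes; each layer's outputs feed, weighted, into the next.
    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 2),            # two output nodes -> score for each class
    ).to(device)

    inputs = torch.randn(256, 16, device=device)           # a batch of 256 samples
    targets = torch.randint(0, 2, (256,), device=device)   # illustrative labels

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):          # tiny training loop, enough for the sketch
        optimiser.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimiser.step()

    print("final training loss:", float(loss))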
While the horizon is promising for the GPU, that doesn't necessarily mean it will be the
only technology to supplant CPUs for these workloads. Intel is betting big -- specifically
the $16.7 billion it spent to acquire specialized chipmaker Altera -- not on GPUs, but on
field-programmable gate arrays (FPGAs). Like GPUs, FPGAs can perform certain
calculations faster than CPUs. Rather than serve as general-purpose processors, FPGAs
can be programmed to execute a specific set of instructions more efficiently.
Admins can also deploy GPUs in big data and scientific computing scenarios to run
weather models, find correlations in huge data sets, develop machine learning
applications and perform facial recognition from video images.