
UNIT 2 Virtual Machines and Virtualization of Clusters and Data Centers
Implementation Levels of Virtualization
Virtualization is a technique that allows a single physical instance of an
application or resource to be shared among multiple organizations or tenants
(customers). It does so by assigning a logical name to a physical resource and
providing a pointer to that physical resource on demand.
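As a minimal illustration of the "logical name plus pointer" idea described above, the following Python sketch maps logical resource names to physical resources and hands out a reference on demand. All class and resource names here are hypothetical and are not part of any real virtualization product.

```python
# Minimal sketch of the "logical name -> physical resource" idea.
# All names below are hypothetical illustrations, not a real API.

class VirtualResourceCatalog:
    def __init__(self):
        self._catalog = {}          # logical name -> physical resource descriptor

    def register(self, logical_name, physical_resource):
        """Assign a logical name to a physical resource."""
        self._catalog[logical_name] = physical_resource

    def resolve(self, logical_name):
        """Return a pointer (reference) to the physical resource on demand."""
        return self._catalog[logical_name]

# Two tenants share the same physical disk under different logical names.
catalog = VirtualResourceCatalog()
catalog.register("tenant-a/data-volume", {"device": "/dev/sdb", "offset_gb": 0})
catalog.register("tenant-b/data-volume", {"device": "/dev/sdb", "offset_gb": 100})

print(catalog.resolve("tenant-a/data-volume"))
```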

Virtualization Concept

Creating a virtual machine on top of an existing operating system and hardware is
referred to as hardware virtualization. Virtual machines provide an environment that
is logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the host machine,
and the virtual machine is referred to as the guest machine. The virtual machine is
managed by software or firmware known as a hypervisor.

Hypervisor
The hypervisor is firmware or a low-level program that acts as a virtual machine
manager. There are two types of hypervisor:
Type 1 hypervisors execute on a bare system. LynxSecure, RTS Hypervisor, Oracle VM,
Sun xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors. The following
diagram shows the Type 1 hypervisor.

A Type 1 hypervisor does not have a host operating system because it is installed
directly on the bare system.
Type 2 hypervisors are software interfaces that emulate the devices with which a
system normally interacts and run on top of a host operating system. Containers, KVM,
Microsoft Hyper-V, VMware Fusion, Virtual Server 2005 R2, Windows Virtual PC, and
VMware Workstation 6.0 are examples of Type 2 hypervisors. The following diagram
shows the Type 2 hypervisor.
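The hypervisor's role as virtual machine manager is easiest to see when you query one directly. The hedged sketch below uses the libvirt Python bindings (the libvirt-python package) to connect to a local QEMU/KVM hypervisor and list its guest machines; it assumes libvirt is installed and the qemu:///system URI is reachable, and it is only an illustration, not part of the text above.

```python
# Hedged sketch: querying a hypervisor for its guest machines via libvirt.
# Assumes the libvirt-python package is installed and a QEMU/KVM hypervisor
# is reachable at qemu:///system; adjust the URI for your environment.
import libvirt

conn = libvirt.open("qemu:///system")    # connect to the hypervisor
try:
    for dom in conn.listAllDomains():    # every defined guest (virtual machine)
        state = "running" if dom.isActive() else "shut off"
        print(f"guest: {dom.name():20s} state: {state}")
finally:
    conn.close()
```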

Types of Hardware Virtualization

Here are the three types of hardware virtualization:

o Full Virtualization
o Emulation Virtualization
o Paravirtualization

Full Virtualization
In full virtualization, the underlying hardware is completely simulated. Guest software
does not require any modification to run.

Emulation Virtualization
In emulation virtualization, the virtual machine simulates the hardware and hence
becomes independent of it. In this case too, the guest operating system does not
require modification.
Paravirtualization
In paravirtualization, the hardware is not simulated. Instead, the guest software runs
in its own isolated domain.

VMware vSphere is a highly developed infrastructure that offers a management
framework for virtualization. It virtualizes the system, storage, and networking
hardware.

Virtualization Structures/ Tools and Mechanisms

Virtualization is the "creation of a virtual (rather than actual) version of something,


such as a server, a desktop, a storage device, an operating system or network
resources".
In other words, Virtualization is a technique, which allows to share a single physical
instance of a resource or an application among multiple customers and
organizations. It does by assigning a logical name to a physical storage and
providing a pointer to that physical resource when demanded.

What is the concept behind Virtualization?

Creation of a virtual machine over an existing operating system and hardware is known
as hardware virtualization. A virtual machine provides an environment that is
logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the host machine, and
that virtual machine is referred to as the guest machine.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, it is known as hardware virtualization.

The main job of the hypervisor is to control and monitor the processor, memory, and
other hardware resources.

After virtualization of the hardware system, we can install different operating
systems on it and run different applications on those operating systems.

Usage:

Hardware virtualization is mainly done for server platforms, because controlling
virtual machines is much easier than controlling physical servers.

2) Operating System Virtualization:


When the virtual machine software or virtual machine manager (VMM) is installed on
the host operating system instead of directly on the hardware system, it is known as
operating system virtualization.


Usage:

Operating system virtualization is mainly used for testing applications on different
OS platforms.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided into
multiple servers on demand and for load balancing.

4) Storage Virtualization:
Storage virtualization is the process of grouping physical storage from multiple
network storage devices so that it appears to be a single storage device.

Storage virtualization is also implemented by using software applications.
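To make the idea of "grouping physical storage so it looks like one device" concrete, here is a small, purely illustrative Python sketch (not any real storage product's API) that concatenates several backing devices into one logical address space. The device names and sizes are invented for the example.

```python
# Illustrative sketch of storage virtualization: several physical devices
# are presented as one logical block device. Device names and sizes are
# hypothetical; real storage virtualization is done by software such as
# LVM or a SAN controller, not by code like this.

class PooledVolume:
    def __init__(self, devices):
        # devices: list of (device_name, size_in_blocks)
        self.devices = devices
        self.total_blocks = sum(size for _, size in devices)

    def locate(self, logical_block):
        """Translate a logical block number into (device, physical block)."""
        offset = logical_block
        for name, size in self.devices:
            if offset < size:
                return name, offset
            offset -= size
        raise IndexError("logical block beyond end of pooled volume")

pool = PooledVolume([("disk0", 1000), ("disk1", 2000), ("disk2", 500)])
print(pool.total_blocks)        # 3500 blocks appear as one device
print(pool.locate(1500))        # ('disk1', 500)
```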

Usage:

Storage virtualization is mainly done for back-up and recovery purposes.

How does virtualization work in cloud computing?


Virtualization plays a very important role in cloud computing technology. Normally in
cloud computing, users share the data present in the cloud, such as applications, but
with the help of virtualization users actually share the infrastructure.

The main use of virtualization technology is to provide applications in their
standard versions to cloud users. Suppose the next version of an application is
released; the cloud provider then has to provide the latest version to its cloud
users, which in practice is difficult because it is expensive.

To overcome this problem, we use virtualization technology. With virtualization, all
the servers and software applications required by the cloud providers are maintained
by third parties, and the cloud providers pay for them on a monthly or annual basis.
Conclusion
Mainly, virtualization means running multiple operating systems on a single machine
while sharing all of the hardware resources. It helps us provide a pool of IT
resources so that we can share these IT resources and gain benefits in the business.

Virtualization of CPU
Virtualization is used to create a virtual version of an underlying service. With the
help of virtualization, multiple operating systems and applications can run on the
same machine and the same hardware at the same time, increasing the utilization and
flexibility of the hardware. It was initially developed during the mainframe era.
It is one of the main cost-effective, hardware-reducing, and energy-saving techniques used by
cloud providers. Virtualization allows sharing of a single physical instance of a resource or an
application among multiple customers and organizations at one time. It does this by assigning
a logical name to physical storage and providing a pointer to that physical resource on
demand. The term virtualization is often synonymous with hardware virtualization, which
plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions
for cloud computing. Moreover, virtualization technologies provide a virtual environment for
not only executing applications but also for storage, memory, and networking.
Memory and I/O Devices

Several memory chips and I/O devices are connected to a microprocessor.

The following figure shows a schematic diagram to interface memory chips and I/O
devices to a microprocessor.

Memory Interfacing
When any instruction is executed, the address of the memory location or I/O device is
sent out by the microprocessor, and the corresponding memory chip or I/O device is
selected by a decoding circuit.

Memory requires certain signals to read from and write to its registers, and the
microprocessor transmits these signals for reading or writing data.
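As a hedged illustration of the decoding circuit mentioned above, the following Python sketch uses the upper bits of a 16-bit address to select one chip out of four. The address map is invented purely for the example and does not describe any particular system.

```python
# Illustrative address decoder: the two most significant bits of a 16-bit
# address select one of four chips. The address map below is hypothetical,
# chosen only to show how a decoding circuit picks a memory chip or I/O device.

CHIP_SELECT = {
    0b00: "ROM  (0x0000-0x3FFF)",
    0b01: "RAM  (0x4000-0x7FFF)",
    0b10: "RAM  (0x8000-0xBFFF)",
    0b11: "I/O  (0xC000-0xFFFF)",
}

def decode(address):
    """Return which chip is selected for a 16-bit address."""
    return CHIP_SELECT[(address >> 14) & 0b11]

for addr in (0x1234, 0x4000, 0x9ABC, 0xFFFF):
    print(f"0x{addr:04X} -> {decode(addr)}")
```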

I/O interfacing
As we know, keyboards and displays are used as communication channels with the
outside world. Therefore, it is necessary to interface the keyboard and display with
the microprocessor. This is called I/O interfacing. For this type of interfacing, we
use latches and buffers to connect the keyboards and displays to the microprocessor.

The main drawback of this interfacing is that the microprocessor can perform only one
function at a time.

8279 Programmable Keyboard


The Intel 8279 is a programmable keyboard interfacing device. Data input and display
are integral parts of microprocessor kits and microprocessor-based systems.

The 8279 has been designed for 8-bit Intel microprocessors.

The 8279 has two sections, namely the keyboard section and the display section.

The function of the keyboard section is to interface the keyboard, which is used as
an input device for the microprocessor. It can also interface toggle or thumb
switches. The purpose of the display section is to drive alphanumeric displays or
indicator lights. It is directly connected to the microprocessor bus.

The microprocessor is thus relieved from the burden of scanning the keyboard or
refreshing the display.

Some important Features are:

o Simultaneous keyboard and display operations
o Scanned sensor mode
o Scanned keyboard mode
o 8-character keyboard FIFO
o Strobed input entry mode
o 2-key lockout or N-key rollover with contact debounce
o Single 16-character display
o Dual 8- or 16-numerical display
o Interrupt output on key entry
o Programmable scan timing and mode programmable from the CPU

8257 DMA Controller


Data transfer from fast I/O devices to the memory, or from the memory to I/O devices,
through the accumulator is a time-consuming process. For this situation, the Direct
Memory Access (DMA) technique is preferred. In the DMA data transfer scheme, data is
transferred directly from an I/O device to RAM or from RAM to an I/O device.

Using a DMA controller, the device requests the CPU to hand over its address, data,
and control buses, so the device is free to transfer data directly to/from the
memory. The DMA data transfer is initiated only after receiving the HLDA signal from
the CPU.

How are DMA operations performed?

The following operations are performed by a DMA controller (see the sketch below):

o Initially, the device sends a DMA request (DRQ) to the DMA controller to transfer
data between the device and the memory.
o The DMA controller sends a Hold request (HRQ) to the CPU and waits for the CPU to
respond with HLDA.
o When the CPU is ready, it gives up control of the buses and acknowledges the HOLD
request through the HLDA signal.
o Now the CPU is in the HOLD state, and the DMA controller manages the operations
over the buses between the memory and the I/O devices.
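The handshake described above (DRQ, then HRQ, then HLDA, then the transfer) can be sketched as a small Python simulation. The signal names follow the text, but the code itself is only an illustration of the sequence, not how a real controller is programmed.

```python
# Illustrative simulation of the DMA handshake described above:
# device asserts DRQ -> controller asserts HRQ -> CPU responds with HLDA ->
# controller moves data directly between the device and memory.

def dma_transfer(device_data, memory, start_address):
    drq = True                                   # device requests DMA
    hrq = drq                                    # controller forwards the Hold request
    hlda = hrq                                   # CPU grants the buses (HOLD state)
    if hlda:
        # The DMA controller now drives the buses: copy data straight into memory,
        # without the CPU moving bytes through the accumulator.
        for i, byte in enumerate(device_data):
            memory[start_address + i] = byte
    return memory

ram = [0] * 16
print(dma_transfer([0xDE, 0xAD, 0xBE, 0xEF], ram, start_address=4))
```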
Intel 8257
o The Intel 8257 is a programmable DMA controller.
o It is a 4-channel programmable Direct Memory Access (DMA) controller.
o It is a 40-pin IC package and requires a +5 V supply for its operation.
o It can perform three operations, namely read, write, and verify.
o Each channel incorporates two 16-bit registers, namely the DMA address register
and the byte count register.
o Each channel can transfer data up to 64 KB and can be programmed independently.
o It operates in two modes: master mode and slave mode.
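Because each channel has a 16-bit address register and a 16-bit count register, a single programmed transfer is limited to 64 KB. The hedged sketch below only illustrates that arithmetic (splitting 16-bit values into the low and high bytes that would be written to an 8-bit controller port); it does not reproduce the 8257's exact register bit layout.

```python
# Hedged illustration of why a 16-bit DMA address/count register caps a
# single transfer at 64 KB, and how a 16-bit value is split into the low
# and high bytes written to an 8-bit controller port. This is arithmetic
# only; it does not model the 8257's real register layout.

MAX_TRANSFER = 1 << 16          # 65,536 bytes = 64 KB per channel

def split_16bit(value):
    """Split a 16-bit value into (low_byte, high_byte)."""
    if not 0 <= value < MAX_TRANSFER:
        raise ValueError("value does not fit in a 16-bit register")
    return value & 0xFF, (value >> 8) & 0xFF

start_address = 0x2000
byte_count = 0x0400              # a 1 KB transfer

print("address bytes:", [hex(b) for b in split_16bit(start_address)])
print("count bytes:  ", [hex(b) for b in split_16bit(byte_count)])
print("max bytes per programmed transfer:", MAX_TRANSFER)
```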

8257 Architecture
The following diagram is the architecture of Intel 8257:

8257 Pin Description

DRQ0 - DRQ3: These are DMA request lines. An I/O device sends the DMA request on
one of these lines. On the line, a HIGH status generates a DMA request.

DACK0 - DACK3 : These are DMA acknowledge lines. The Intel 8257 sends an
acknowledge signal through one of these lines informing an I/O device that it has
been selected for DMA data transfer. On the line, a LOW acknowledges the I/O
device.

A0 - A7: These are address lines. A0 - A3 are bidirectional lines. These lines carry
the 4 LSBs of the 16-bit memory address generated by the 8257 in master mode. In
slave mode, these lines are input lines; the inputs select one of the registers to be
read or programmed. The A4 - A7 lines give tri-stated outputs in master mode, which
carry bits 4 through 7 of the 16-bit memory address generated by the Intel 8257.

D0 - D7: These are data lines. These are bidirectional three state lines. While
programming the controller the CPU sends data for the DMA address register, the
byte count register and the mode set register through these data lines.

AEN: Address latch enable.

ADSTB: A HIGH on this line latches the 8 MSBs of the address, which are sent on the
data bus, into an Intel 8212 latch connected for this purpose.

CS: It is chip select.

(I/OR): I/O read. It is a bidirectional line. In output mode it is used to access data
from the I/O device during the DMA write cycle.

(I/OW): I/O write. It is a bidirectional line. In output mode it allows the transfer of
data to the I/O device during the DMA read cycle. Data is transferred from the
memory.

MEMR: Memory read

MEMW: Memory write

TC: Byte count (Terminal count).

MARK: Modulo 128 Mark.

CLK: Clock

HRQ: Hold request

HLDA: Hold acknowledge

Virtual Clusters and Resource Management

Cluster computing refers to several computers linked on a network and operating as a
single entity. Each computer that is linked to the network is known as a node.

Cluster computing provides solutions to difficult problems by offering faster
computational speed and enhanced data integrity. The connected computers execute
operations together, thus creating the impression of a single system (a virtual
device). This property is referred to as the transparency of the system.
Advantages of Cluster Computing

The advantages of cluster computing are as follows −

o Cost-Effectiveness − Cluster computing is considered to be much more
cost-effective than mainframe computer systems while providing better performance.
o Processing Speed − The processing speed of a cluster is comparable to that of
mainframe systems and other supercomputers around the globe.
o Increased Resource Availability − Availability plays an important role in
cluster computing systems. If some connected active nodes fail, their work can simply
be shifted to other active nodes in the cluster, providing high availability.
o Improved Flexibility − In cluster computing, the configuration can be updated
and improved by adding new nodes to the existing cluster.

Types of Cluster Computing

The types of cluster computing are as follows −

High Availability (HA) and Failover Clusters

These cluster models provide availability of services and resources in an
uninterrupted manner by using the system's implicit redundancy. The basic idea of
this cluster is that if a node fails, applications and services can be made available
on other nodes. These kinds of clusters serve as the basis for mission-critical
applications, mail, document, and application servers.

Load Balancing Clusters

This cluster distributes all incoming traffic/requests for resources across nodes
that run the same programs and machine configuration. In this cluster model, some
nodes are responsible for tracking requests, and if a node fails, the requests are
redistributed among the remaining available nodes. Such a solution is generally used
on web server farms.
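A minimal sketch of the load-balancing idea (a round-robin dispatcher over equivalent nodes, with failed nodes skipped) might look like the following. The node names and the health flags are invented for illustration; a real web farm would use a dedicated balancer rather than code like this.

```python
# Illustrative round-robin load balancer for a cluster of equivalent nodes.
# Node names and "healthy" flags are hypothetical.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, nodes):
        self.health = {node: True for node in nodes}
        self._ring = cycle(nodes)

    def mark_down(self, node):
        """Simulate a node failure: its share is spread over the others."""
        self.health[node] = False

    def next_node(self):
        for _ in range(len(self.health)):
            node = next(self._ring)
            if self.health[node]:
                return node
        raise RuntimeError("no healthy nodes available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")
print([lb.next_node() for _ in range(6)])   # requests go only to web1 and web3
```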

HA & Load Balancing Clusters

This cluster model combines both cluster features, resulting in increased
availability and scalability of services and resources. This kind of cluster is
generally used for email, web, news, and FTP servers.

Distributed & Parallel Processing Clusters

This cluster model improves availability and performance for applications that have
large computational tasks. A large computational task is divided into smaller tasks
that are distributed across the nodes. Such clusters are generally used for numerical
computing or financial analysis that needs high processing power.

Virtualization for Data-Center Automation

A data center (also written as datacenter or data centre) is a facility made up of
networked computers, storage systems, and computing infrastructure that businesses
and other organizations use to organize, process, store, and disseminate large
amounts of data. A business typically relies heavily on the applications, services,
and data within a data center, making it a focal point and critical asset for
everyday operations.

Enterprise data centers increasingly incorporate cloud computing resources and
facilities for securing and protecting in-house, onsite resources. As enterprises
increasingly turn to cloud computing, the boundaries between cloud providers' data
centers and enterprise data centers become less clear.

How do Data Centers work?


A data center facility enables an organization to assemble its resources and
infrastructure for data processing, storage, and communication, including:

o systems for storing, sharing, accessing, and processing data across the
organization;
o physical infrastructure to support data processing and data communication; and
o utilities such as cooling, electricity, network access, and uninterruptible power
supplies (UPS).

Gathering all these resources in one data center enables the organization
to:

o protect proprietary systems and data;
o centralize IT and data processing employees, contractors, and vendors;
o enforce information security controls on proprietary systems and data; and
o realize economies of scale by consolidating sensitive systems in one place.

Why are data centers important?


Data centers support almost all enterprise computing, storage, and business
applications. To the extent that the business of a modern enterprise runs on
computers, the data center is the business.

Data centers enable organizations to concentrate their processing power, which in
turn enables the organization to focus its attention on:
o IT and data processing personnel;
o computing and network connectivity infrastructure; and
o computing facility security.

What are the main components of Data Centers?


Elements of a data center are generally divided into three categories:

1. Compute
2. Enterprise data storage
3. Networking

A modern data center concentrates an organization's data systems in a
well-protected physical infrastructure, which includes:

o servers;
o storage subsystems;
o networking switches, routers, and firewalls;
o cabling; and
o physical racks for organizing and interconnecting IT equipment.

Datacenter Resources typically include:

o power distribution and supplementary power subsystems;
o electrical switching;
o UPS;
o backup generators;
o ventilation and data center cooling systems, such as in-row cooling
configurations and computer room air conditioners; and
o adequate provision for network carrier (telecom) connectivity.

It demands a physical facility with physical security access controls and sufficient
square footage to hold the entire collection of infrastructure and equipment.

How are Datacenters managed?


Datacenter management is required to administer many different topics related to
the data center, including:

o Facilities Management. Management of a physical data center facility may include
duties related to the facility's real estate, utilities, access control, and
personnel.
o Datacenter inventory or asset management. Datacenter assets include hardware
assets as well as software licensing and release management.
o Datacenter Infrastructure Management. DCIM lies at the intersection of
IT and facility management and is typically accomplished by monitoring data
center performance to optimize energy, equipment, and floor use.
o Technical support. The data center provides technical services to the
organization, and as such, it should also provide technical support to the end-
users of the enterprise.
o Datacenter management includes the day-to-day processes and services
provided by the data center.

The image shows an IT professional installing and maintaining a high-capacity
rack-mounted system in a data center.

Datacenter Infrastructure Management and Monitoring


Modern data centers make extensive use of monitoring and management software.
Software, including DCIM tools, allows remote IT data center administrators to
monitor facility and equipment, measure performance, detect failures and
implement a wide range of corrective actions without ever physically entering the
data center room.

The development of virtualization has added another important dimension to data
center infrastructure management. Virtualization now supports the abstraction of
servers, networks, and storage, allowing each computing resource to be organized
into pools regardless of their physical location.

Network, storage, and server virtualization can be implemented through software,
giving the software-defined data center traction. Administrators can then provision
workloads, storage instances, and even network configurations from those common
resource pools. When administrators no longer need those resources, they can return
them to the pool for reuse.
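The "provision from a common pool, return when no longer needed" workflow described above can be sketched as follows. The pool sizes and workload names are hypothetical, and the sketch is only an illustration of the idea, not a real data-center automation API.

```python
# Illustrative sketch of pooled, software-defined resources: administrators
# provision capacity from a shared pool and return it for reuse.
# Pool sizes and workload names are hypothetical.

class ResourcePool:
    def __init__(self, vcpus, memory_gb, storage_tb):
        self.free = {"vcpus": vcpus, "memory_gb": memory_gb, "storage_tb": storage_tb}
        self.allocations = {}

    def provision(self, name, **request):
        """Carve a workload's resources out of the shared pool."""
        if any(self.free[k] < v for k, v in request.items()):
            raise RuntimeError(f"pool cannot satisfy request for {name}")
        for k, v in request.items():
            self.free[k] -= v
        self.allocations[name] = request

    def release(self, name):
        """Return a workload's resources to the pool for reuse."""
        for k, v in self.allocations.pop(name).items():
            self.free[k] += v

pool = ResourcePool(vcpus=128, memory_gb=512, storage_tb=100)
pool.provision("web-tier", vcpus=16, memory_gb=64, storage_tb=2)
pool.provision("analytics", vcpus=32, memory_gb=256, storage_tb=20)
pool.release("web-tier")
print(pool.free)
```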

Energy Consumption and Efficiency


Datacenter designs also recognize the importance of energy efficiency. A simple
data center may require only a few kilowatts of energy, but enterprise data centers
may require more than 100 megawatts. Today, green data centers with minimal
environmental impact through low-emission building materials, catalytic converters,
and alternative energy technologies are growing in popularity.

Data centers can maximize efficiency through physical layouts known as hot aisle
and cold aisle layouts. The server racks are lined up in alternating rows, with cold
air intakes on one side and hot air exhausts on the other. The result is alternating
hot and cold aisles, with the exhausts forming a hot aisle and the intakes forming a
cold aisle. The exhausts are pointed toward the air conditioning equipment. The
equipment is often placed between the server cabinets in the row or aisle and
distributes the cold air back into the cold aisle. This configuration of air
conditioning equipment is known as in-row cooling.
Organizations often measure data center energy efficiency through power usage
effectiveness (PUE), which represents the ratio of the total power entering the data
center divided by the power used by IT equipment.
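Since PUE is just the ratio defined above, a one-line calculation makes it concrete. The wattage figures below are made up purely for the example.

```python
# PUE = total facility power / power used by IT equipment (a ratio >= 1.0).
# The wattage figures are invented purely for illustration.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(round(pue(total_facility_kw=1800, it_equipment_kw=1200), 2))  # 1.5
```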

However, the subsequent rise of virtualization has allowed for more productive use
of IT equipment, resulting in much higher efficiency, lower energy usage, and
reduced energy costs. Metrics such as PUE are no longer central to energy efficiency
goals, but organizations can still assess PUE and use comprehensive power and
cooling analysis to better understand and manage energy efficiency.

Datacenter Level
Data centers are not defined by their physical size or style. Small businesses can
operate successfully with multiple servers and storage arrays networked within a
closet or small room. At the same time, major computing organizations -- such as
Facebook, Amazon, or Google -- can fill a vast warehouse space with data center
equipment and infrastructure.

In other cases, data centers may be assembled into mobile installations, such as
shipping containers, also known as data centers in a box, that can be moved and
deployed.

However, data centers can be defined by different levels of reliability or
flexibility, sometimes referred to as data center tiers.

In 2005, the American National Standards Institute (ANSI) and the
Telecommunications Industry Association (TIA) published the standard ANSI/TIA-942,
"Telecommunications Infrastructure Standards for Data Centers", which defined four
levels of data center design and implementation guidelines.

Each subsequent level aims to provide greater flexibility, security, and reliability
than the previous level. For example, a Tier I data center is little more than a server
room, while a Tier IV data center provides redundant subsystems and higher
security.

Levels can be differentiated by available resources, data center capabilities, or
uptime guarantees. The Uptime Institute defines data center tiers as:

o Tier I. These are the most basic types of data centers, including a UPS. Tier I
data centers do not provide redundant systems but must guarantee at least
99.671% uptime.
o Tier II. These data centers include system, power, and cooling redundancy and
guarantee at least 99.741% uptime.
o Tier III. These data centers offer partial fault tolerance, 72-hour outage
protection, full redundancy, and a 99.982% uptime guarantee.
o Tier IV. These data centers guarantee 99.995% uptime - or no more than
26.3 minutes of downtime per year - as well as full fault tolerance, system
redundancy, and 96 hours of outage protection.
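The uptime percentages above translate directly into allowable downtime per year. The short sketch below does that conversion and reproduces, for example, the roughly 26.3 minutes quoted for Tier IV.

```python
# Convert a yearly uptime guarantee into allowed downtime per year.
# 99.995% uptime over a 365-day year works out to about 26.3 minutes.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(uptime_percent):
    return MINUTES_PER_YEAR * (100.0 - uptime_percent) / 100.0

for tier, uptime in [("Tier I", 99.671), ("Tier II", 99.741),
                     ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {downtime_minutes(uptime):7.1f} minutes of downtime per year")
```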

Datacenter Architecture and Design


Although almost any suitable location can serve as a data center, a data center's
deliberate design and implementation require careful consideration. Beyond the
basic issues of cost and taxes, sites are selected based on several criteria:
geographic location, seismic and meteorological stability, access to roads and
airports, availability of energy and telecommunications, and even the prevailing
political environment.

Once the site is secured, the data center architecture can be designed to focus on
the structure and layout of mechanical and electrical infrastructure and IT
equipment. These issues are guided by the availability and efficiency goals of the
desired data center tier.

Datacenter Security

Datacenter designs must also implement sound safety and security practices. For
example, security is often reflected in the layout of doors and access corridors,
which must accommodate the movement of large, cumbersome IT equipment and allow
employees to access and repair the infrastructure.

Fire fighting is another major safety area, and the widespread use of sensitive,
high-energy electrical and electronic equipment precludes common sprinklers. Instead,
data centers often use environmentally friendly chemical fire suppression systems,
which effectively starve fires of oxygen while minimizing collateral damage to
equipment.
Comprehensive security measures and access controls are needed as the data
center is also a core business asset. These may include:

o badge access;
o biometric access control; and
o video surveillance.

These security measures can help detect and prevent employee, contractor, and
intruder misconduct.
What is Data Center Consolidation?
A business is not limited to a single data center. Modern businesses can use two or
more data center installations in multiple locations for greater flexibility and
better application performance, reducing latency by locating workloads closer to
users.

Conversely, a business with multiple data centers may choose to consolidate data
centers, reducing the number of locations in order to reduce the cost of IT
operations. Consolidation typically occurs during mergers and acquisitions, when the
combined business no longer needs the data centers owned by the subordinate business.

What is Data Center Colocation?


Organizations may also pay a fee to rent server space in a colocation facility.
Colocation is an attractive option for organizations that want to avoid the large
capital expenditure associated with building and maintaining their own data centers.

Today, colocation providers are expanding their offerings to include managed
services, such as interconnectivity, allowing customers to connect to the public
cloud.

Because many service providers today offer managed services alongside their
colocation facilities, the definition of managed services becomes hazy, as vendors
market the term slightly differently. The important distinction to make is:

o Colocation. The organization pays a vendor to place its hardware in a facility.
The customer is paying for the space alone.
o Managed services. The organization pays the vendor to actively maintain or
monitor the hardware through performance reports, interconnectivity,
technical support, or disaster recovery.

What is the difference between Data Center vs. Cloud?


Cloud computing vendors offer similar features to enterprise data centers. The
biggest difference between a cloud data center and a typical enterprise data center
is scale. Because cloud data centers serve many different organizations, they can
become very large. And cloud computing vendors offer these services through their
data centers.

Large enterprises such as Google may require very large data centers, such as the
Google data center in Douglas County, Ga.

Because enterprise data centers increasingly implement private cloud software, they
increasingly offer end users services like those provided by commercial cloud
providers.

Private cloud software builds on virtualization to deliver cloud-like services,
including:

o system automation;
o user self-service; and
o billing/chargeback for data center administration.
The goal is to allow individual users to provision workloads and other computing
resources on demand, without IT administrative intervention.

Further blurring the lines between the enterprise data center and cloud computing is
the development of hybrid cloud environments. As enterprises increasingly rely on
public cloud providers, they must incorporate connectivity between their data
centers and cloud providers.

For example, platforms such as Microsoft Azure emphasize hybrid use of local data
centers with Azure or other public cloud resources. The result is not the elimination
of data centers but the creation of a dynamic environment that allows organizations
to run workloads locally or in the cloud or move those instances to or from the cloud
as desired.

Evolution of Data Centers


The origins of the first data centers can be traced back to the 1940s and the
existence of early computer systems such as the Electronic Numerical Integrator and
Computer (ENIAC). These early machines were complicated to maintain and operate
and had cables connecting all the necessary components. They were also in use by
the military - meaning special computer rooms with racks, cable trays, cooling
mechanisms, and access restrictions were necessary to accommodate all equipment
and implement appropriate safety measures.

However, it was not until the 1990s, when IT operations began to gain complexity
and cheap networking equipment became available, that the term data center first
came into use. It became possible to store all the necessary servers in one room
within the company. These specialized computer rooms gained traction, dubbed data
centers within organizations.

At the time of the dot-com bubble in the late 1990s, companies' need for fast
Internet connectivity and a constant Internet presence required large amounts of
networking equipment, which in turn required large facilities. At this point, data
centers became popular and began to look similar to those described above.

In the history of computing, as computers get smaller and networks get bigger, the
data center has evolved and shifted to accommodate the necessary technology of
the day.

Difference between Cloud and Data Center


Most organizations rely heavily on data for their day-to-day operations, irrespective
of the industry or the nature of the data. This data can be used for making business
decisions, identifying patterns, improving the services provided, or analyzing weak
links in a workflow.

Cloud
Cloud is a term used to describe a group of services, either a global or individual
network of servers, that have a unique function. The cloud is not a physical entity;
rather, it is a group or network of remote servers linked together to operate as a
single unit for an assigned task.
Physically, the cloud is backed by facilities containing many computer systems. We
access the cloud through the Internet because cloud providers provide the cloud as a
service.

A common confusion is whether the cloud is the same as cloud computing. The answer is
no. Cloud services, such as compute, run in the cloud. The compute service offered by
the cloud lets users 'rent' computer systems in a data center over the Internet.

Another example of a cloud service is storage. AWS says, "Cloud computing is the
on-demand delivery of IT resources over the Internet with pay-as-you-go pricing.
Instead of buying, owning, and maintaining physical data centers and servers, you
can access technology services, such as computing power, storage, and databases,
from a cloud provider such as Amazon Web Services (AWS)."

Types of Cloud:

Businesses use cloud resources in different ways. There are mainly four of them:

o Public Cloud: This cloud model is open to anyone with Internet access on a
pay-per-use basis.
o Private Cloud: This is a cloud model used by organizations to make their
data centers accessible only with the organization's permission.
o Hybrid Cloud: This is a cloud model that combines public and private clouds.
It caters to an organization's varied needs for its services.
o Community Cloud: This is a cloud model that provides services to an
organization or a group of people within a single community.

Data Center

A data center can be described as a facility/location of networked computers and
associated components (such as telecommunications and storage) that help businesses
and organizations handle large amounts of data. These data centers allow data to be
organized, processed, stored, and transmitted across the applications used by
businesses.

Types of Data Center:

Businesses use different types of data centers, including:

o Telecom Data Center: This is a type of data center operated by telecommunications
or service providers. It requires high-speed connectivity to function.
o Enterprise Data Center: This is a type of data center built and owned by a
company, which may or may not be onsite.
o Colocation Data Center: This type of data center consists of a single data center
owner's location that provides space, power, and cooling to multiple enterprise and
hyperscale customers.
o Hyper-Scale Data Center: This is a type of data center owned and operated by the
company itself.

Difference between Cloud and Data Center:


1. Cloud: A virtual resource that helps businesses store, organize, and operate data
efficiently.
   Data Center: A physical resource that helps businesses store, organize, and
operate data efficiently.

2. Cloud: Scaling requires a smaller amount of investment.
   Data Center: Scaling requires a huge investment compared to the cloud.

3. Cloud: Maintenance cost is lower, as maintenance is done by the service provider.
   Data Center: Maintenance cost is high because the organization's own developers do
the maintenance.

4. Cloud: The organization needs to rely on third parties to store its data.
   Data Center: The organization's own developers are trusted with the data stored in
the data centers.

5. Cloud: The performance is high compared to the investment.
   Data Center: The performance is lower compared to the investment.

6. Cloud: It requires a plan for optimizing the cloud.
   Data Center: It is easily customizable without any hard planning.

7. Cloud: It requires a stable Internet connection to provide the service.
   Data Center: It may or may not require an Internet connection.

8. Cloud: It is easy to operate and is considered a viable option.
   Data Center: It requires experienced developers to operate and is not considered
as viable an option.
