UNIT 2 Virtual Machines and Virtualization of Clusters and Data Centers
Virtualization Concept
Creating a virtual machine over an existing operating system and hardware is referred
to as hardware virtualization. Virtual machines provide an environment that is logically
separated from the underlying hardware.
Hypervisor
A hypervisor is firmware or a low-level program that acts as a Virtual Machine
Manager (VMM). There are two types of hypervisor:
A Type 1 hypervisor executes on a bare system. LynxSecure, RTS Hypervisor, Oracle VM,
Sun xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors.
A Type 1 hypervisor does not have any host operating system because it is
installed directly on the bare system.
A Type 2 hypervisor is a software interface that emulates the devices with which a
system normally interacts, and it runs on top of a host operating system. KVM,
Microsoft Hyper-V, VMware Fusion, Virtual Server 2005 R2, Windows Virtual PC, and
VMware Workstation 6.0 are examples of Type 2 hypervisors.
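As a quick practical aside (not part of the original notes), a guest operating system can often detect that it is running under a hypervisor of either type. The Python sketch below assumes a Linux system, where the CPU's hypervisor feature bit is exposed via /proc/cpuinfo.

# Minimal sketch: detect whether this Linux system is running under a hypervisor.
# Assumes Linux, where CPUID's hypervisor bit appears in the "flags" line.

def running_under_hypervisor(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            # The "flags" line lists CPU features; "hypervisor" is set for guests.
            if line.startswith("flags") and "hypervisor" in line.split():
                return True
    return False

if __name__ == "__main__":
    print("Guest VM" if running_under_hypervisor() else "Bare metal (or flag hidden)")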
Hypervisors support three approaches to virtualization:
Full Virtualization
Emulation Virtualization
Paravirtualization
Full Virtualization
In full virtualization, the underlying hardware is completely simulated. Guest software
does not require any modification to run.
Emulation Virtualization
In emulation virtualization, the virtual machine simulates the hardware and hence
becomes independent of it. Here, the guest operating system does not require modification.
Paravirtualization
In paravirtualization, the hardware is not simulated. The guest software runs in its
own isolated domain and is typically modified to cooperate with the hypervisor.
The machine on which the virtual machine is created is known as the Host
Machine, and that virtual machine is referred to as the Guest Machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and
other hardware resources.
Usage:
Hardware virtualization is mainly done for server platforms because controlling
virtual machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on
the host operating system instead of directly on the hardware system, it is known as
operating system virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different
platforms or versions of an operating system.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into
multiple servers on demand and for balancing the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device.
Usage:
Storage virtualization is mainly done for backup and recovery purposes.
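To make the idea concrete, the following Python sketch models storage virtualization as a pool that presents several physical disks as one flat logical block space. All names here (PhysicalDisk, StoragePool) are invented for illustration; real systems such as LVM or SAN controllers are far more involved.

# Illustrative sketch: pool several physical "disks" into one logical device.

class PhysicalDisk:
    def __init__(self, name, size_blocks):
        self.name = name
        self.blocks = [0] * size_blocks

class StoragePool:
    """Presents many physical disks as a single flat logical block space."""
    def __init__(self, disks):
        self.disks = disks

    def _locate(self, logical_block):
        # Map a logical block number onto (disk, local block) by concatenation.
        for disk in self.disks:
            if logical_block < len(disk.blocks):
                return disk, logical_block
            logical_block -= len(disk.blocks)
        raise IndexError("logical block out of range")

    def write(self, logical_block, value):
        disk, local = self._locate(logical_block)
        disk.blocks[local] = value

pool = StoragePool([PhysicalDisk("d0", 100), PhysicalDisk("d1", 200)])
pool.write(150, 42)  # lands on disk d1, block 50, invisibly to the caller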
Virtualization of CPU, Memory, and I/O Devices
Virtualization is used to create a virtual version of an underlying service. With the help
of virtualization, multiple operating systems and applications can run on the same
machine and the same hardware at the same time, increasing the utilization and
flexibility of the hardware. It was initially developed during the mainframe era.
It is one of the main cost-effective, hardware-reducing, and energy-saving techniques used by
cloud providers. Virtualization allows sharing of a single physical instance of a resource or an
application among multiple customers and organizations at one time. It does this by assigning
a logical name to physical storage and providing a pointer to that physical resource on
demand. The term virtualization is often synonymous with hardware virtualization, which
plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions
for cloud computing. Moreover, virtualization technologies provide a virtual environment for
not only executing applications but also for storage, memory, and networking.
Memory and I/O Devices
Memory chips and I/O devices are interfaced to a microprocessor through address
decoding and control signals, as described below.
Memory Interfacing
When any instruction is executed, the microprocessor sends out the address of the
memory location or I/O device involved. The corresponding memory chip or I/O
device is then selected by a decoding circuit.
Memory requires control signals for reading from and writing to its registers, and
the microprocessor transmits these signals when reading or writing data.
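To illustrate what the decoding circuit does, here is a small Python model; the memory map below is invented for the example. The address range determines which chip-select line becomes active.

# Sketch of an address-decoding circuit: the address range determines which
# memory chip or I/O device is enabled. The memory map is hypothetical.

MEMORY_MAP = [
    (0x0000, 0x3FFF, "ROM chip select"),
    (0x4000, 0x7FFF, "RAM chip select"),
    (0x8000, 0x80FF, "I/O device select"),
]

def decode(address):
    for start, end, select_line in MEMORY_MAP:
        if start <= address <= end:
            return select_line  # only this chip's select line goes active
    return "no device selected"

print(decode(0x4010))  # -> "RAM chip select"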
I/O interfacing
As we know, the keyboard and display are used as communication channels with the
outside world. Therefore, it is necessary to interface the keyboard and display with
the microprocessor. This is called I/O interfacing, and latches and buffers are used
for it.
The main drawback of this simple interfacing is that it keeps the microprocessor
occupied, so the processor can effectively perform only this one function.
The 8279 has been designed for interfacing keyboards and displays with 8-bit Intel
microprocessors. It has two sections, namely the keyboard section and the display section.
The function of the keyboard section is to interface the keyboard, which is used as
an input device for the microprocessor. It can also interface toggle or thumb switches.
The purpose of the display section is to drive alphanumeric displays or indicator
lights. It is directly connected to the microprocessor bus.
Direct Memory Access (DMA)
Using a DMA controller, a device requests that the CPU release its address, data, and
control buses, so the device is free to transfer data directly to/from the memory. The
DMA data transfer is initiated only after the HLDA signal is received from the CPU.
o Initially, the device sends a DMA request (DRQ) to the DMA controller for
transferring data between the device and the memory.
o The DMA controller sends a Hold request (HRQ) to the CPU and waits for the
CPU to respond with HLDA.
o When the CPU receives the HOLD request, it completes its current machine
cycle, gives up control of the buses, and acknowledges the request through
the HLDA signal.
o Now the CPU is in the HOLD state, and the DMA controller manages the
operations over the buses between the memory and the I/O devices, as
modeled in the sketch after this list.
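The sketch below is a toy Python model of that handshake, not real controller behavior; the class names and the dictionary standing in for memory are invented for illustration.

# Toy model of the DMA handshake above (DRQ -> HRQ -> HOLD -> HLDA).

class CPU:
    def __init__(self):
        self.owns_bus = True

    def receive_hold_request(self):
        # The CPU finishes its current machine cycle, releases the buses,
        # and then acknowledges with HLDA.
        self.owns_bus = False
        return "HLDA"

class DMAController:
    def __init__(self, cpu):
        self.cpu = cpu

    def device_request(self, memory, data, address):
        hlda = self.cpu.receive_hold_request()  # send HRQ, wait for HLDA
        assert hlda == "HLDA" and not self.cpu.owns_bus
        memory[address] = data                  # transfer without CPU involvement
        self.cpu.owns_bus = True                # drop HOLD when the transfer ends

memory = {}
dma = DMAController(CPU())
dma.device_request(memory, data=0xAB, address=0x2000)  # DRQ from a device
print(hex(memory[0x2000]))  # -> 0xab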
Intel 8257
o The Intel 8257 is a programmable DMA controller.
o It is a 4-channel programmable Direct Memory Access (DMA) controller.
o It is a 40-pin IC package and requires a +5V supply for its operation.
o It can perform three operations, namely read, write, and verify.
o Each channel incorporates two 16-bit registers, namely a DMA address register
and a byte count register (see the sketch after this list).
o Each channel can transfer data blocks of up to 64 KB and can be programmed
independently.
o It operates in two modes: Master mode and Slave mode.
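Since the CPU's data bus is only 8 bits wide, each 16-bit register is loaded with two successive byte writes, low byte first; the 8257 tracks which half comes next with an internal first/last flip-flop. The Python sketch below models just this loading sequence (the class and method names are invented):

# Sketch: loading a 16-bit DMA address register over an 8-bit bus.

class Channel:
    def __init__(self):
        self.address = 0
        self._low_byte_next = True  # models the first/last flip-flop

    def write_address_byte(self, byte):
        if self._low_byte_next:
            self.address = (self.address & 0xFF00) | (byte & 0xFF)
        else:
            self.address = (self.address & 0x00FF) | ((byte & 0xFF) << 8)
        self._low_byte_next = not self._low_byte_next

ch0 = Channel()
ch0.write_address_byte(0x34)  # low byte first
ch0.write_address_byte(0x12)  # then high byte
print(hex(ch0.address))       # -> 0x1234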
8257 Architecture
The main signals of the Intel 8257 are described below:
DRQ0 - DRQ3: These are the DMA request lines. An I/O device sends a DMA request on
one of these lines; a HIGH on the line generates a DMA request.
DACK0 - DACK3: These are the DMA acknowledge lines. The Intel 8257 sends an
acknowledge signal through one of these lines, informing an I/O device that it has
been selected for DMA data transfer; a LOW on the line acknowledges the I/O
device.
A0 - A7: These are the address lines. A0 - A3 are bidirectional lines that carry the 4
LSBs of the 16-bit memory address generated by the 8257 in the master mode. In
the slave mode, these lines are input lines; the inputs select one of the registers to
be read or programmed. A4 - A7 are tristate output lines in the master mode,
carrying bits 4 through 7 of the 16-bit memory address generated by the Intel 8257.
D0 - D7: These are the data lines. They are bidirectional tristate lines. While
programming the controller, the CPU sends data for the DMA address register, the
byte count register, and the mode set register through these lines.
ADSTB: A HIGH on this line latches the 8 MSBs of the address, which are sent on the
data bus, into the Intel 8212 connected for this purpose.
I/OR: I/O read. It is a bidirectional line. In the output mode, it is used to read data
from the I/O device during the DMA write cycle.
I/OW: I/O write. It is a bidirectional line. In the output mode, it allows the transfer of
data to the I/O device during the DMA read cycle; the data is transferred from the
memory.
CLK: Clock input, used for the internal timing of the 8257.
Cluster Computing
Cluster computing refers to several computers linked on a network and operating as
a single entity. Each computer that is linked to the network is known as a node.
The load-balancing cluster model distributes all incoming traffic and requests for
resources across nodes that run the same programs and use the same machinery. In
this model, some nodes are responsible for tracking requests, and if a node fails, the
requests are redistributed amongst the remaining available nodes. Such a solution is
generally used on web server farms.
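A minimal Python sketch of this load-balancing model follows; the node names and the failure flag are invented for illustration. Requests rotate round-robin over the nodes, and a failed node is simply skipped.

# Sketch: round-robin load balancing across cluster nodes, skipping failed ones.
from itertools import cycle

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, request):
        return f"{self.name} handled {request}"

nodes = [Node("web1"), Node("web2"), Node("web3")]
rotation = cycle(nodes)

def dispatch(request):
    for _ in range(len(nodes)):  # try each node at most once
        node = next(rotation)
        if node.alive:
            return node.handle(request)
    raise RuntimeError("no nodes available")

nodes[1].alive = False              # web2 fails; traffic is redistributed
print(dispatch("GET /index.html"))  # web1 or web3 responds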
The combined cluster model joins both features, load balancing and high availability,
resulting in increased availability and scalability of services and resources. This kind
of cluster is generally used for email, web, news, and FTP servers.
The high-performance cluster model boosts availability and performance for
applications with large computational tasks. A large computational task is divided
into smaller tasks that are distributed across the nodes. Such clusters are generally
used for numerical computing or financial analysis that needs high processing power.
Data Center
A data center (also written as datacenter or data centre) is a facility made up of
networked computers, storage systems, and computing infrastructure that
businesses and other organizations use to organize, process, store, and disseminate
large amounts of data. A business typically relies heavily on the applications,
services, and data within a data center, making it a focal point and critical asset for
everyday operations.
A data center comprises:
o systems for storing, sharing, accessing, and processing data across the
organization;
o physical infrastructure to support data processing and data communication;
and
o utilities such as cooling, electricity, network access, and uninterruptible power
supplies (UPS).
Gathering all these resources in one data center enables the organization to
concentrate its core functions in one place:
1. Computation
2. Enterprise data storage
3. Networking
The IT equipment in a data center includes:
o servers;
o storage subsystems;
o networking switches, routers, and firewalls;
o cabling; and
o physical racks for organizing and interconnecting the IT equipment.
All of this demands a physical facility with physical security access controls and
sufficient square footage to hold the entire collection of infrastructure and equipment.
Data centers can maximize cooling efficiency through physical layouts known as hot
aisle and cold aisle layouts. The server racks are lined up in alternating rows, with
cold air intakes on one side and hot air exhausts on the other. The result is
alternating hot and cold aisles, with the exhausts forming a hot aisle and the intakes
forming a cold aisle. The exhausts are pointed toward the air conditioning
equipment, which is often placed between the server cabinets in the row or aisle and
distributes the cold air back into the cold aisle. This configuration of air conditioning
equipment is known as in-row cooling.
Organizations often measure data center energy efficiency through power usage
effectiveness (PUE), which represents the ratio of the total power entering the data
center divided by the power used by IT equipment.
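As a worked example with invented figures: a facility drawing 1,500 kW in total while its IT equipment consumes 1,000 kW has a PUE of 1.5, meaning half a watt of overhead (cooling, lighting, power distribution) for every watt of IT load. An ideal PUE is 1.0.

# Worked example of the PUE ratio (the power figures are hypothetical).
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(1500, 1000))  # -> 1.5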
However, the subsequent rise of virtualization has allowed for more productive use
of IT equipment, resulting in much higher efficiency, lower energy usage, and
reduced energy costs. Metrics such as PUE are no longer central to energy efficiency
goals. However, organizations can still assess PUE and use comprehensive power
and cooling analysis to better understand and manage energy efficiency.
Datacenter Level
Data centers are not defined by their physical size or style. Small businesses can
operate successfully with multiple servers and storage arrays networked within a
closet or small room. At the same time, major computing organizations -- such as
Facebook, Amazon, or Google -- can fill a vast warehouse space with data center
equipment and infrastructure.
In other cases, data centers may be assembled into mobile installations, such as
shipping containers, also known as data centers in a box, that can be moved and
deployed.
Data centers are also classified into four tiers. Each subsequent tier aims to provide
greater flexibility, security, and reliability than the previous tier. For example, a Tier I
data center is little more than a server room, while a Tier IV data center provides
redundant subsystems and higher security.
o Tier I. These are the most basic types of data centers, including a UPS. Tier I
data centers do not provide redundant systems but must guarantee at least
99.671% uptime.
o Tier II. These data centers include system, power, and cooling redundancy and
guarantee at least 99.741% uptime.
o Tier III. These data centers offer partial fault tolerance, 72-hour outage
protection, full redundancy, and a 99.982% uptime guarantee.
o Tier IV. These data centers guarantee 99.995% uptime (no more than
26.3 minutes of downtime per year) as well as full fault tolerance, system
redundancy, and 96 hours of outage protection.
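The downtime figures follow directly from the uptime percentages; this quick check (assuming a 365-day year) reproduces the 26.3 minutes quoted for Tier IV above.

# Convert a tier's uptime guarantee into maximum downtime per year.
def max_downtime_minutes_per_year(uptime_percent):
    return (1 - uptime_percent / 100) * 365 * 24 * 60

for tier, uptime in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {max_downtime_minutes_per_year(uptime):.1f} min/year")
# Tier IV: ~26.3 minutes, matching the figure above.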
Most data center outages can be attributed to a few general categories of failure.
Once the site is secured, the data center architecture can be designed to focus on
the structure and layout of mechanical and electrical infrastructure and IT
equipment. These issues are guided by the availability and efficiency goals of the
desired data center tier.
Datacenter Security
ADVERTISEMENT
Data center designs must also implement sound safety and security practices. For
example, safety is often reflected in the layout of doors and access corridors,
which must accommodate the movement of large, cumbersome IT equipment and
allow employees to access and repair the infrastructure.
Fire fighting is another major safety area, and the widespread use of sensitive, high-
energy electrical and electronic equipment precludes common sprinklers. Instead,
data centers often use environmentally friendly chemical fire suppression systems,
which effectively starve fires of oxygen while minimizing collateral damage to equipment.
Comprehensive security measures and access controls are needed as the data
center is also a core business asset. These may include:
o badge access;
o biometric access control; and
o video surveillance.
These security measures can help detect and prevent employee, contractor, and
intruder misconduct.
What is Data Center Consolidation?
A business is not limited to a single data center. Modern businesses can use two or
more data center installations in multiple locations for greater flexibility and better
application performance, reducing latency by locating workloads closer to users.
Conversely, a business with multiple data centers may choose to consolidate data
centers while reducing the number of locations to reduce the cost of IT operations.
Consolidation typically occurs during mergers and acquisitions, when the parent
business no longer needs the data centers owned by the acquired business.
Because many service providers today offer managed services alongside their
colocation facilities, the definition of managed services becomes hazy, as all vendors
market the term slightly differently. The important distinction to make is whether the
provider merely rents out space and power (colocation) or also manages the
customer's equipment (managed services).
Large enterprises such as Google may require very large data centers, such as the
Google data center in Douglas County, Ga.
Because enterprise data centers increasingly implement private cloud software, they
increasingly look to end users like the services provided by commercial cloud
providers. Private cloud software adds:
o system automation;
o user self-service; and
o billing/chargeback for data center administration.
The goal is to allow individual users to provision workloads and other computing
resources on demand, without IT administrative intervention.
Further blurring the lines between the enterprise data center and cloud computing is
the development of hybrid cloud environments. As enterprises increasingly rely on
public cloud providers, they must incorporate connectivity between their data
centers and cloud providers.
For example, platforms such as Microsoft Azure emphasize hybrid use of local data
centers with Azure or other public cloud resources. The result is not the elimination
of data centers but the creation of a dynamic environment that allows organizations
to run workloads locally or in the cloud or move those instances to or from the cloud
as desired.
However, it was not until the 1990s, when IT operations began to gain complexity
and cheap networking equipment became available, that the term data center first
came into use. It became possible to store all the necessary servers in one room
within the company. These specialized computer rooms gained traction, dubbed data
centers within organizations.
During the dot-com bubble of the late 1990s, companies' need for fast Internet
connectivity and a constant Internet presence required large amounts of networking
equipment, which in turn required large facilities. At this point, data centers became
popular and began to look similar to those described above.
In the history of computing, as computers get smaller and networks get bigger, the
data center has evolved and shifted to accommodate the necessary technology of
the day.
Cloud
Cloud is a term used to describe a group of services, either a global or an individual
network of servers, that have a unique function. A cloud is not a physical entity; it is
a group or network of remote servers working together to operate as a single unit
for an assigned task.
In short, a cloud is a building containing many computer systems. We access the
cloud through the Internet because cloud providers offer the cloud as a service.
A common confusion is whether the cloud is the same as cloud computing. It is not:
cloud services such as compute run in the cloud. The computing service offered by
the cloud lets users 'rent' computer systems in a data center over the Internet.
Another example of a cloud service is storage. AWS says, "Cloud computing is the
on-demand delivery of IT resources over the Internet with pay-as-you-go pricing.
Instead of buying, owning, and maintaining physical data centers and servers, you
can access technology services, such as computing power, storage, and databases,
from a cloud provider such as Amazon Web Services (AWS)."
Types of Cloud:
Businesses use cloud resources in different ways. There are mainly four of them:
o Public Cloud: This cloud model is open to all users over the Internet on a
pay-per-use basis.
o Private Cloud: This is a cloud model used by organizations to make their
data centers accessible only with the organization's permission.
o Hybrid Cloud: This is a cloud model that combines public and private clouds.
It caters to an organization's varied needs for its services.
o Community Cloud: This is a cloud model that provides services to an
organization or a group of people within a single community.