Unit 3
In the VDI deployment model, the operating system runs on a virtual machine (VM)
hosted on a server in a data center. The desktop image travels over the network to
the end user’s device, where the end user can interact with the desktop (and the
underlying applications and operating system) as if they were local.
VDI gives each user his or her own dedicated VM running its own operating system.
The operating system resources—drivers, CPUs, memory, etc.—operate from a
software layer called a hypervisor that mimics their output, manages the resource
allocation to multiple VMs, and allows them to run side by side on the same server.
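The hypervisor's role in slicing one server's physical resources among VMs can be sketched in a few lines. This is a toy model with hypothetical names, not a real hypervisor API: it simply tracks spare CPU and memory and refuses allocations that would oversubscribe the host.

```python
# Toy sketch of a hypervisor allocating physical resources to VMs.
# All class and method names are illustrative.

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_mem = memory_gb
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        # Refuse the allocation if the host lacks spare capacity.
        if cpus > self.free_cpus or memory_gb > self.free_mem:
            return False
        self.free_cpus -= cpus
        self.free_mem -= memory_gb
        self.vms[name] = (cpus, memory_gb)
        return True

host = Hypervisor(cpus=16, memory_gb=64)
host.create_vm("vdi-user-1", cpus=4, memory_gb=8)   # succeeds
host.create_vm("vdi-user-2", cpus=4, memory_gb=8)   # succeeds
host.create_vm("too-big", cpus=32, memory_gb=8)     # refused: host is out of CPUs
```

A real hypervisor also schedules the VMs onto physical cores over time; this sketch only shows the static capacity bookkeeping.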
A key benefit of VDI is that it can deliver the Windows 10 desktop and operating
system to the end user’s devices. However, because VDI supports only one user per
Windows 10 instance, it requires a separate VM for each Windows 10 user.
From the end user’s perspective, RDS and VDI are identical. But because one instance
of Windows Server can support as many simultaneous users as the server hardware
can handle, RDS can be a more cost-effective desktop virtualization option. It’s also
worth noting that applications tested or certified to run on Windows 10 may not be
tested or certified to run on the Windows Server OS.
Desktop-as-a-Service (DaaS)
Like other types of cloud desktop virtualization, DaaS shares many of the
general benefits of cloud computing, including support for fluctuating workloads and
changing storage demands, usage-based pricing, and the ability to make applications
and data accessible from almost any internet-connected device. The chief drawback
to DaaS is that features and configurations are not always as customizable as required.
Choosing a model
VDI is a popular choice because it offers a virtualized version of a familiar computing
model—physical desktop computing. But implementing VDI requires you to manage
all aspects of the infrastructure yourself, including the hardware, operating systems
and applications, and hypervisor and associated software. This can be challenging if
your VDI experience and expertise is limited. Purchasing all infrastructure components
can require a larger upfront investment.
RDS/RDSH can be a solid choice if it supports the specific applications you need to run
and your end users only need access to those applications, not full Windows desktops.
RDS offers greater end-user density per server than VDI, and systems are usually
cheaper and more scalable than full VDI environments. Your staff does need the
requisite skill set and experience to administer and manage RDS/RDSH technology,
however.
DaaS is currently gaining in popularity as IT teams grow more comfortable with shared
desktops and shared applications. Overall, it tends to be the most cost-effective
option. It’s also the easiest to administer, requiring little in-house expertise in
managing infrastructure or VDI. It’s readily scalable and involves operating
expenditures rather than capital expenditures, a more affordable cost structure for
many businesses.
Benefits of desktop virtualization
Virtualizing desktops provides many potential benefits that can vary depending upon
the deployment model you choose.
Cost savings. Many virtual desktop solutions allow you to shift more of your IT
budget from capital expenditures to operating expenditures. Because compute-
intensive applications require less processing power when they’re delivered via VMs
hosted on a data center server, desktop virtualization can extend the life of older or
less powerful end-user devices. On-premise virtual desktop solutions may require a
significant initial investment in server hardware, hypervisor software, and other
infrastructure, making cloud-based DaaS—wherein you simply pay a regular usage-
based charge—a more attractive option.
Improved productivity. Desktop virtualization makes it easier for employees to
access enterprise computing resources. They can work anytime, anywhere, from any
supported device with an Internet connection.
Support for a broad variety of device types. Virtual desktops can support remote
desktop access from a wide variety of devices, including laptop and desktop
computers, thin clients, zero clients, tablets, and even some mobile phones. You can
use virtual desktops to deliver workstation-like experiences and access to the full
desktop anywhere, anytime, regardless of the operating system native to the end user
device.
Agility and scalability. It’s quick and easy to deploy new VMs or serve new
applications whenever necessary, and it is just as easy to delete them when they’re
no longer needed.
The software required for delivering virtual desktops depends on the virtualization
method you choose.
With virtual desktop infrastructure (VDI), the desktop operating system (most
commonly Microsoft Windows) runs and is managed in the data center. Hypervisor
software runs on the host server, delivering access to a VM to each end user over
the network. Connection broker software is required to authenticate users, connect
each to a virtual machine, monitor activity levels, and reassign the VM when the
connection is terminated. Connection brokers may be bundled with, or purchased
separately from, the hypervisor.
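The connection broker's duties described above can be sketched as a small class. This is a hypothetical, simplified model (real brokers also handle load balancing, session persistence, and directory-service authentication): it authenticates a user, hands out an idle VM, and reclaims the VM when the session ends.

```python
# Minimal sketch of a VDI connection broker. Names and the toy
# password check are illustrative assumptions, not a real product API.

class ConnectionBroker:
    def __init__(self, vm_pool, credentials):
        self.idle = list(vm_pool)        # VMs waiting for a user
        self.active = {}                 # user -> assigned VM
        self.credentials = credentials   # user -> password (toy auth store)

    def connect(self, user, password):
        # Authenticate, then assign the next idle VM (if any).
        if self.credentials.get(user) != password or not self.idle:
            return None
        vm = self.idle.pop(0)
        self.active[user] = vm
        return vm

    def disconnect(self, user):
        # On session termination, return the VM to the pool for reassignment.
        vm = self.active.pop(user, None)
        if vm is not None:
            self.idle.append(vm)

broker = ConnectionBroker(["vm-01", "vm-02"], {"alice": "s3cret"})
vm = broker.connect("alice", "s3cret")   # alice gets "vm-01"
broker.disconnect("alice")               # "vm-01" is back in the idle pool
```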
Remote desktop services (RDS/RDSH) can be implemented using utilities that are
bundled with the Microsoft Windows Server operating system.
Storage virtualization in cloud computing is fast and efficient. It hides the actual
complexity of the Storage Area Network (SAN) from the user, and it can be applied
at all levels of the SAN.
Why should Storage Virtualization be implemented?
The following are the reasons to implement storage virtualization in cloud
computing:
ii. Kernel-Level Virtualization
Kernel-level virtualization runs a separate version of the Linux kernel, allowing
multiple virtual servers to run on a single host. It uses a device driver to
communicate between the main Linux kernel and the virtual machines. It is a special
form of server virtualization.
iii. Hypervisor Virtualization
A hypervisor is a layer between the operating system and the hardware. With the
help of a hypervisor, multiple operating systems can run on the same machine.
Moreover, it provides the features and necessary services that help an OS work
properly.
iv. Para-Virtualization
Para-virtualization is based on a hypervisor, but the guest operating system is
modified before it is installed on the machine. The modified guest communicates
directly with the hypervisor instead of relying on emulation and trapping, which
improves performance.
v. Full Virtualization
In full virtualization, the hypervisor completely emulates the underlying hardware,
so an unmodified guest operating system can run without any changes.
Risks of Storage Virtualization
i. Limited Adoption
In a Computer Economics survey, only about one-third of enterprises reported that
they are increasing funding for storage virtualization. Adoption rates, return on
investment, and total cost of ownership are still not well understood.
ii. Naming
Previously only a few VMs were in use, but the rapid growth in their number now
makes it difficult to distinguish the important VMs from the unimportant ones. To
make the environment more future-proof, build a naming system and share it with
all involved parties.
iii. Failure
Failures cause downtime and data loss. A VMware installation that hosts crucial
services becomes a single point of failure, so protecting virtual machine data should
be a top priority.
Methods of Storage Virtualization
i. File-Based Storage Virtualization
File-based storage virtualization is used for specific purposes and can be applied to
network-attached storage (NAS) systems. It breaks the dependency between the
data being accessed and the location of the physical memory. It also handles file
migration in the background, which improves performance.
ii. Block-Based Storage Virtualization
Block-based virtual storage is more widely used than file-based virtual storage,
which tends to serve specific purposes. A block-based virtual storage system uses
logical storage, such as a drive partition, abstracted from the physical memory in a
storage device (for example, a hard disk drive or solid-state memory device). This
abstraction lets the virtualization management software discover the capacity of the
available devices and split them into shared resources to assign.
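The logical-to-physical mapping at the heart of block-based virtualization can be sketched directly. This is an illustrative model (device names and the extent layout are invented): a virtual volume is a list of extents on physical devices, and the virtualization layer translates each logical block number to its physical location.

```python
# Sketch of block-level virtualization: a virtual volume maps each logical
# block to a (device, physical block) pair, so one volume can span disks.

class VirtualVolume:
    def __init__(self, extents):
        # extents: list of (device_name, start_block, n_blocks)
        self.mapping = []
        for device, start, count in extents:
            for offset in range(count):
                self.mapping.append((device, start + offset))

    def locate(self, logical_block):
        # Translate a logical block number to its physical location.
        return self.mapping[logical_block]

# A volume whose first four blocks live on an SSD and next four on an HDD.
vol = VirtualVolume([("ssd0", 0, 4), ("hdd1", 100, 4)])
vol.locate(0)   # ("ssd0", 0)   -- first extent
vol.locate(5)   # ("hdd1", 101) -- spills onto the second device
```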
Advantages of Storage Virtualization
i. Performs Tasks
Storage virtualization appliances are responsible for several tasks, such as
heterogeneous replication and federation. These devices sit in front of the arrays
and create a common interface for the hosts, which allows the administrator to mix
and match the protocols and arrays behind the appliances.
ii. Reduces WAN Traffic
It avoids sending multiple copies of the same data over the WAN. A WAN accelerator
is used to cache the data and deliver it at LAN speed without degrading WAN
performance.
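The WAN-accelerator caching idea can be sketched as a deduplicating sender. This is a toy model (the class and its bookkeeping are invented for illustration): each chunk is identified by its hash, and only chunks not already in the cache actually cross the WAN link.

```python
# Toy sketch of WAN-accelerator deduplication: a chunk already seen is
# never resent over the WAN; only its short hash identifies it.

import hashlib

class WanAccelerator:
    def __init__(self):
        self.cache = {}           # hash -> chunk already transferred
        self.bytes_sent = 0       # bytes that actually crossed the WAN

    def send(self, chunk: bytes) -> str:
        key = hashlib.sha256(chunk).hexdigest()
        if key not in self.cache:
            self.cache[key] = chunk
            self.bytes_sent += len(chunk)   # only new data crosses the link
        return key                          # duplicates need only the hash

wan = WanAccelerator()
wan.send(b"backup-block-A")
wan.send(b"backup-block-A")   # duplicate: nothing new crosses the WAN
wan.send(b"backup-block-B")
print(wan.bytes_sent)         # 28, not 42: the duplicate cost nothing
```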
Storage virtualization in cloud computing can increase disk utilization and is flexible,
which improves disaster recovery and business continuity.
Storage tiering is a technique that monitors data usage, placing the most frequently
used data on the highest-performing storage pool and the least used data on the
weakest-performing pool. Because this works as an automated storage management
system, the customer won’t face any issues regarding storage.
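The tiering policy just described, rank data by access frequency and promote the hottest items to the fast pool, can be sketched in a few lines. The function name, tier labels, and capacity model are illustrative assumptions.

```python
# Sketch of automated storage tiering: the most-accessed items go to the
# fast ("ssd") tier, everything else to the slow ("hdd") tier.

from collections import Counter

def assign_tiers(access_log, fast_capacity):
    """Place the `fast_capacity` hottest items on 'ssd', the rest on 'hdd'."""
    counts = Counter(access_log)
    ranked = [item for item, _ in counts.most_common()]
    fast = set(ranked[:fast_capacity])
    return {item: ("ssd" if item in fast else "hdd") for item in counts}

log = ["db", "db", "db", "logs", "db", "archive", "logs"]
tiers = assign_tiers(log, fast_capacity=1)
print(tiers["db"])       # 'ssd' -- hottest data on the fastest pool
print(tiers["archive"])  # 'hdd' -- least-used data on the slow pool
```

A production tiering engine would run this kind of decision continuously and migrate blocks in the background; the sketch shows only the placement decision.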
iii. Security
In storage virtualization, the data is stored in different places and secured with
maximum security. If a disaster takes place, the data can be retrieved from another
location without affecting the customer. Storage can also be provisioned to meet
actual utilization requirements rather than over-allocating additional storage.
The following are the different ways storage virtualization can be applied:
Host-Based
Network-Based
Array-Based
i. Host-Based Storage Virtualization
Here, all virtualization and management is done at the host level with the help of
software; the physical storage can be any device or array. A host, or a cluster of
hosts, presents virtual drives from a configured set to the guest machines, whether
they are VMs in an enterprise or PCs.
ii. Network-Based Storage Virtualization
Network-based storage virtualization is the most common form in use today. A
device such as a smart switch or purpose-built server connects to all the storage
devices in a Fibre Channel storage network and presents the storage as a virtual
pool.
iii. Array-Based Storage Virtualization
Here, the storage array provides different types of physical storage that are used as
storage tiers. Software on the array manages the tiers, which can be made up of
solid-state drives and hard disk drives.
Storage virtualization is now common among users because of its benefits. With
storage virtualization in cloud computing, all the drives can be combined into a
single, centrally managed resource. Moreover, it allows modifications and changes
to be made without downtime, giving the customer flexibility through flexible data
migration.
Operating System-Level Virtualization
1. The host operating system employs CPU, memory, and other hardware IT
resources.
2. Hardware-related calls from guest operating systems need to traverse
numerous layers to and from the hardware, which degrades overall
performance.
3. Licenses are frequently essential for host operating systems, in addition
to individual licenses for each of their guest operating systems.
Advantages of Operating System-Based Virtualization:
Resource Efficiency: Operating system-based virtualization allows for
greater resource efficiency as containers do not need to emulate a
complete hardware environment, which reduces resource overhead.
High Scalability: Containers can be quickly and easily scaled up or
down depending on the demand, which makes it easy to respond to
changes in the workload.
Easy Management: Containers are easy to manage as they can be
controlled through simple commands, which makes it easy to deploy and
maintain large numbers of containers.
Reduced Costs: Operating system-based virtualization can significantly
reduce costs, as it requires fewer resources and infrastructure than
traditional virtual machines.
Faster Deployment: Containers can be deployed quickly, reducing the
time required to launch new applications or update existing ones.
Portability: Containers are highly portable, making it easy to move
them from one environment to another without requiring changes to the
underlying application.
Disadvantages of Operating System-Based Virtualization:
Security: Operating system-based virtualization may pose security risks
as containers share the same host operating system, which means that a
security breach in one container could potentially affect all other
containers running on the same system.
Limited Isolation: Containers may not provide complete isolation
between applications, which can lead to performance degradation or
resource contention.
Complexity: Operating system-based virtualization can be complex to
set up and manage, requiring specialized skills and knowledge.
Dependency Issues: Containers may have dependency issues with
other containers or the host operating system, which can lead to
compatibility issues and hinder deployment.
Limited Hardware Access: Containers may have limited access to
hardware resources, which can limit their ability to perform certain tasks
or applications that require direct hardware access.
This involves the following:
There are common installations for most users or applications, such as
operating systems or user-level programming libraries.
These software packages can be preinstalled as templates (called
template VMs).
With these templates, users can build their own software stacks.
New OS instances can be copied from the template VM.
User-specific components such as programming libraries and
applications can be installed to those instances.
Three physical clusters are shown on the left side of Figure 3.18.
Four virtual clusters are created on the right, over the physical clusters.
The physical machines are also called host systems.
In contrast, the VMs are guest systems.
The host and guest systems may run different operating systems.
Each VM can be installed on a remote server or replicated on multiple
servers belonging to the same or different physical clusters.
The boundary of a virtual cluster can change as VM nodes are added,
removed, or migrated dynamically over time.
Approaches
o Focus on saving the energy cost of components in individual
workstations.
o Apply cluster-wide energy-efficient techniques on homogeneous
workstations and specific applications.
In a cluster built with mixed nodes of host and guest systems, the normal
method of operation is to run everything on the physical machine.
Virtual clusters can be applied in computational grids, cloud platforms,
and high-performance computing (HPC) systems.
Virtual clustering provides dynamic resources that can be quickly put
together upon user demand or after a node failure.
In particular, virtual clustering plays a key role in cloud computing.
There are four ways to manage a virtual cluster.
o The cluster manager resides on a guest system.
o The cluster manager resides on the host systems. The host-based
manager supervises the guest systems and can restart a guest
system on another physical machine.
o Use an independent cluster manager on both the host and guest
systems; the issue is that this makes infrastructure management
more complex.
o Use an integrated cluster manager on the guest and host systems.
This means the manager must be designed to distinguish between
virtualized resources and physical resources.
Solution 1
Solution 2
Solution 3
Solution 5 – proactive state transfer solution – predict new location
A migrating VM should maintain all open network connections without
relying on forwarding mechanisms on the original host or on support
from mobility or redirection mechanisms.
In general, a migrating VM includes all the protocol states and carries
its IP address with it.
If the source and destination machines of a VM migration are typically
connected to a single switched LAN, an unsolicited ARP reply from the
migrating host is provided, advertising that the IP has moved to a new
location.
This solves the open network connection problem by reconfiguring all
the peers to send future packets to the new location.
Although a few packets that have already been transmitted might be
lost, there are no other problems with this mechanism.
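The effect of that unsolicited (gratuitous) ARP reply can be simulated in a few lines. This is a pure toy simulation, no real packets are sent, and the IPs, MACs, and class names are invented; it uses the simplification that each peer keeps an ARP table mapping the VM's IP to where frames should be sent, and the broadcast updates every table at once.

```python
# Toy simulation of a gratuitous ARP after VM migration: the VM keeps
# its IP, broadcasts its new binding, and every peer on the switched
# LAN updates its ARP table so future packets reach the new location.

class Peer:
    def __init__(self):
        self.arp_table = {}    # ip -> mac

    def receive_gratuitous_arp(self, ip, mac):
        self.arp_table[ip] = mac

peers = [Peer(), Peer()]
for p in peers:
    p.arp_table["10.0.0.5"] = "aa:aa:aa:aa:aa:aa"   # binding on the old host

# After migration, the VM advertises its binding on the destination host.
for p in peers:
    p.receive_gratuitous_arp("10.0.0.5", "bb:bb:bb:bb:bb:bb")

print(peers[0].arp_table["10.0.0.5"])   # peers now send to the new location
```

In-flight packets sent before the broadcast may still be lost, matching the caveat in the text above.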
Here, all memory pages are transferred only once during the whole
migration process and the baseline total migration time is reduced.
But the downtime is much higher than that of precopy due to the
latency of fetching pages from the source node before the VM can be
resumed on the target.
Virtual Machine:
It runs on top of emulating software called the hypervisor, which sits between
the hardware and the virtual machine. The hypervisor is the key to enabling
virtualization: it manages the sharing of physical resources among virtual
machines. Each virtual machine runs its own guest operating system. VMs are
less agile and have lower portability than containers.
Container:
It sits on the top of a physical server and its host operating system. They
share a common operating system that requires care and feeding for bug
fixes and patches. They are more agile and have higher portability than virtual
machines.
Let’s see the difference between Virtual machines and Containers.
SNo. | Virtual Machines (VM) | Containers
1. | Each VM runs its own guest operating system on a hypervisor. | Containers share the host machine’s operating system.
2. | Less agile, with lower portability. | More agile, with higher portability.
Python3 (a minimal script to containerize; the print line is illustrative):
#!/usr/bin/env python3
print("Hello from Docker!")
2. Click on the “Create Repository” button, put the name of the file, and click
on “Create”.
3. Now we will tag our image and push it to the Docker Hub repository which
we just created.
Image ID is used to tag the image. The syntax to tag the image is:
docker tag <image-id> <your dockerhub username>/python-test:latest
$ docker tag c7857f97ebbd afrozchakure/python-test:latest
1. To remove all versions of a particular image from our local system, we use
the Image ID for it.
$ docker rmi -f af939ee31fdc
2. Now run the image, it will fetch the image from the docker hub if it doesn’t
exist on your local machine.
$ docker run afrozchakure/python-test
Docker overview
Docker is an open platform for developing, shipping, and running applications.
Docker enables you to separate your applications from your infrastructure so you
can deliver software quickly. With Docker, you can manage your infrastructure in the
same ways you manage your applications. By taking advantage of Docker's
methodologies for shipping, testing, and deploying code, you can significantly reduce
the delay between writing code and running it in production.
Docker provides tooling and a platform to manage the lifecycle of your containers:
Your developers write code locally and share their work with their colleagues using
Docker containers.
They use Docker to push their applications into a test environment and run
automated and manual tests.
When developers find bugs, they can fix them in the development environment and
redeploy them to the test environment for testing and validation.
When testing is complete, getting the fix to the customer is as simple as pushing the
updated image to the production environment.
Docker's portability and lightweight nature also make it easy to dynamically manage
workloads, scaling up or tearing down applications and services as business needs
dictate, in near real time.
Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker
daemon, which does the heavy lifting of building, running, and distributing your
Docker containers. The Docker client and daemon can run on the same system, or
you can connect a Docker client to a remote Docker daemon. The Docker client and
daemon communicate using a REST API, over UNIX sockets or a network interface.
Another Docker client is Docker Compose, which lets you work with applications
consisting of a set of containers.
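Since the client and daemon speak REST over a UNIX socket by default, the client/daemon split can be illustrated by building the raw HTTP request a client would send to dockerd's `/version` endpoint. The request-building part always runs; actually sending it requires a running daemon at `/var/run/docker.sock`, so that part is guarded. This is a sketch, not how the real `docker` CLI is implemented.

```python
# Sketch: the docker CLI is essentially an HTTP client talking to
# dockerd's REST API over the UNIX socket /var/run/docker.sock.

import os
import socket

SOCKET_PATH = "/var/run/docker.sock"

def version_request() -> bytes:
    # Minimal HTTP/1.1 request for the daemon's "version" endpoint.
    return (b"GET /version HTTP/1.1\r\n"
            b"Host: docker\r\n"
            b"Connection: close\r\n\r\n")

if os.path.exists(SOCKET_PATH):
    # A daemon socket is present: try to send the request for real.
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(SOCKET_PATH)
            s.sendall(version_request())
            print(s.recv(4096).split(b"\r\n")[0])   # HTTP status line
    except OSError:
        print("daemon socket present but not reachable")
else:
    print("no Docker daemon socket found")
```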
The Docker daemon
The Docker daemon ( dockerd ) listens for Docker API requests and manages Docker
objects such as images, containers, networks, and volumes. A daemon can also
communicate with other daemons to manage Docker services.
The Docker client ( docker ) is the primary way that many Docker users interact with
Docker. When you use commands such as docker run , the client sends these
commands to dockerd , which carries them out. The docker command uses the Docker
API. The Docker client can communicate with more than one daemon.
Docker Desktop
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone
can use, and Docker looks for images on Docker Hub by default. You can even run
your own private registry.
When you use the docker pull or docker run commands, Docker pulls the required
images from your configured registry. When you use the docker push command,
Docker pushes your image to your configured registry.
Docker objects
When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of
those objects.
Images
An image is a read-only template with instructions for creating a Docker container.
Often, an image is based on another image, with some additional customization. For
example, you may build an image which is based on the ubuntu image, but installs
the Apache web server and your application, as well as the configuration details
needed to make your application run.
You might create your own images or you might only use those created by others
and published in a registry. To build your own image, you create a Dockerfile with a
simple syntax for defining the steps needed to create the image and run it. Each
instruction in a Dockerfile creates a layer in the image. When you change the
Dockerfile and rebuild the image, only those layers which have changed are rebuilt.
This is part of what makes images so lightweight, small, and fast, when compared to
other virtualization technologies.
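The layer-caching behavior described above, only layers after the first changed instruction are rebuilt, can be modeled with a chained hash: each layer's identity depends on its instruction text plus everything before it. This is a toy model of the caching idea, not Docker's actual content-addressing scheme.

```python
# Toy model of Dockerfile layer caching: each layer's ID is derived from
# its instruction plus all preceding ones, so editing one line only
# invalidates that layer and the layers after it.

import hashlib

def layer_ids(instructions):
    ids, parent = [], ""
    for line in instructions:
        parent = hashlib.sha256((parent + line).encode()).hexdigest()[:12]
        ids.append(parent)
    return ids

v1 = layer_ids(["FROM ubuntu", "COPY app /app", "RUN make"])
v2 = layer_ids(["FROM ubuntu", "COPY app /app", "RUN make -j4"])

print(v1[0] == v2[0])   # True  -- unchanged leading layers are reused
print(v1[2] == v2[2])   # False -- only the edited layer must rebuild
```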
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or
delete a container using the Docker API or CLI. You can connect a container to one
or more networks, attach storage to it, or even create a new image based on its
current state.
By default, a container is relatively well isolated from other containers and its host
machine. You can control how isolated a container's network, storage, or other
underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide
to it when you create or start it. When a container is removed, any changes to its
state that aren't stored in persistent storage disappear.
The example command below runs an ubuntu container, attaches interactively to
your local command-line session, and runs /bin/bash:
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the
default registry configuration):
1. If you don't have the ubuntu image locally, Docker pulls it from your
configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container
create command manually.
3. Docker allocates a read-write filesystem to the container as its final layer,
which lets the running container create or modify files and directories.
4. Docker creates a network interface to connect the container to the default
network, since you didn't specify any networking options.
5. Docker starts the container and executes /bin/bash . Because the container is
running interactively and attached to your terminal (due to the -i and -
t flags), you can provide input using your keyboard while Docker logs the
output to your terminal.
6. When you run exit to terminate the /bin/bash command, the container stops
but isn't removed. You can start it again or remove it.
The underlying technology
Docker is written in the Go programming language and takes
advantage of several features of the Linux kernel to deliver its functionality. Docker
uses a technology called namespaces to provide the isolated workspace called the
container. When you run a container, Docker creates a set of namespaces for that
container.
A Docker registry is a service that stores and manages Docker images. A registry
can be hosted by a third party, as a public or private registry. Some examples of
Docker registries are as follows:
Docker Hub
GitLab
AWS Container Registry
Google Container Registry
Docker – Private Registries
Docker Repository
A Docker repository is a collection of different Docker images with the same
name but different tags. A tag is basically the identifier of an image within a
repository.
In this tutorial, we will use Docker Hub to host our repositories, which is free
for public use.
Steps
Step 1. Creating An Account On Docker Hub. Go to DockerHub and create a
new account or log in to your existing account.
$ npx express-generator -e
Now, create a Dockerfile for the application and copy the content as shown
below:
$ touch Dockerfile
FROM node:16
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "./bin/www"]
Step 3. Build the Image:
$ docker build -t rhythmshandlya/express-app .
One thing to notice: since we did not specify a tag name, the image will be
given the :latest tag.
Step 4. Run This Image Locally
$ docker run -p 3000:3000 rhythmshandlya/express-app
Step 5. Push Image to docker hub. To push a local Image to the docker hub
we will need to log in to the docker hub with our terminal.
$ docker login
Output:
Now that we have hosted our image publicly, anyone can pull it and run it on
their own machine.
$ docker pull rhythmshandlya/express-app:latest
OR