UNIT III Notes
Desktop virtualization is a software technology that separates the desktop environment and
associated application software from the physical client device that is used to access it.
Desktop virtualization can be used in conjunction with application virtualization and user
profile management systems, now termed user virtualization, to provide a comprehensive
desktop environment management system. In this mode, all the components of the desktop
are virtualized, which allows for a highly flexible and much more secure desktop delivery
model. In addition, this approach supports a more complete desktop disaster recovery strategy
as all components are essentially saved in the data center and backed up through traditional
redundant maintenance systems. If a user's device or hardware is lost, the restore is
straightforward and simple, because the components will be present at login from another
device. In addition, because no data are saved to the user's device, if that device is lost, there
is much less chance that any critical data can be retrieved and compromised.
System architectures
Desktop virtualization implementations are classified based on whether the virtual
desktop runs remotely or locally, on whether the access is required to be constant or is
designed to be intermittent, and on whether or not the virtual desktop persists between
sessions. Typically, software products that deliver desktop virtualization solutions can
combine local and remote implementations into a single product to provide the most
appropriate support specific to requirements. The degree of independent functionality of the client device is necessarily interdependent with the server location and access strategy. Virtualization is not strictly required for remote control to exist; it is employed to present independent instances to multiple users, and it requires a strategic segmentation of the host server and presentation at some layer of the host's architecture. The enabling layer, usually application software, is called a hypervisor.[1]
Remote desktop virtualization can also provide a means of resource sharing, to distribute
low-cost desktop computing services in environments where providing every user with a
dedicated desktop PC is either too expensive or otherwise unnecessary.
For IT administrators, this means a more centralized, efficient client environment that is
easier to maintain and able to respond more quickly to the changing needs of the user and
business.[3][4]
Presentation virtualization
Remote desktop software allows a user to access applications and data on a remote
computer over a network using a remote-display protocol. A VDI service provides individual
desktop operating system instances (e.g., Windows XP, 7, 8.1, 10, etc.) for each user, whereas
remote desktop sessions run in a single shared-server operating system. Both session collections and virtual machines support full desktop-based sessions and remote application deployment.
The use of a single shared-server operating system instead of individual desktop operating
system instances consumes significantly fewer resources than the same number of VDI
sessions. At the same time, VDI licensing is both more expensive and less flexible than equivalent remote desktop licenses. Together, these factors can make session-based remote desktop virtualization more attractive than VDI.
VDI implementations allow a personalized workspace to be delivered back to a user, retaining all of the user's customizations. There are several methods to accomplish this.
Application virtualization
Application virtualization improves delivery and compatibility of applications by
encapsulating them from the underlying operating system on which they are executed. A fully
virtualized application is not installed on hardware in the traditional sense. Instead, a virtualization layer intercepts the application's operations; at runtime the application behaves as if it is interfacing with the original operating system and all the resources managed by it, when in reality it is not.
User virtualization
User virtualization separates all of the software aspects that define a user’s personality on a
device from the operating system and applications to be managed independently and applied
to a desktop as needed without the need for scripting, group policies, or use of roaming
profiles. The term "user virtualization" sounds misleading; this technology is not limited to
virtual desktops. User virtualization can be used regardless of platform – physical, virtual,
cloud, etc. The major desktop virtualization platform vendors, Citrix, Microsoft and VMware,
all offer a form of basic user virtualization in their platforms.
Layering
Desktop layering is a method of desktop virtualization that divides a disk image into logical
parts to be managed individually. For example, if all members of a user group use the same OS, then the core OS only needs to be backed up once for all users who share this layer. Layering can be applied to local physical disk images, client-based virtual machines, or host-based desktops. Windows operating systems are not designed for layering; therefore, each vendor must engineer its own proprietary solution.
Desktop as a service
Remote desktop virtualization can also be provided via cloud computing similar to that
provided using a software as a service model. This approach is usually referred to as
cloud-hosted virtual desktops. Cloud-hosted virtual desktops are divided into two technologies: managed VDI, which is based on VDI technology provided as an outsourced managed service, and desktop-as-a-service (DaaS), which provides a higher level of automation and real multi-tenancy.
Local desktop virtualization is well suited for environments where continuous network
connectivity cannot be assumed and where application resource requirements can be better
met by using local system resources. However, local desktop virtualization implementations
do not always allow applications developed for one system architecture to run on another. For
example, it is possible to use local desktop virtualization to run Windows 7 on top of OS
X on an Intel-based Apple Mac, using a hypervisor, as both use the same x86 architecture.
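As an illustration of this idea, here is a minimal command-line sketch using VirtualBox as the hypervisor; VirtualBox, the VM name, and the resource sizes are assumptions for illustration, not something these notes prescribe.

# create and register a Windows 7 guest VM (names and sizes are illustrative)
VBoxManage createvm --name Win7 --ostype Windows7_64 --register
# give the VM virtual CPU, RAM, and a NIC carved from the host's resources
VBoxManage modifyvm Win7 --memory 2048 --cpus 2 --nic1 nat
# boot the guest
VBoxManage startvm Win7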
NETWORK VIRTUALIZATION
Network virtualization is the process of logically grouping physical networks and making them operate as one or more independent networks called virtual networks.
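As a small concrete example, Linux can split one physical NIC into independent virtual networks using VLAN sub-interfaces. A minimal sketch with the standard iproute2 tools follows; the interface name eth0, the VLAN ID, and the address are illustrative assumptions.

# create a virtual network interface tagged with VLAN ID 10 on the physical NIC eth0
ip link add link eth0 name eth0.10 type vlan id 10
# give the virtual network its own address range and bring it up
ip addr add 192.168.10.1/24 dev eth0.10
ip link set eth0.10 up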
Server virtualization is one of the most important parts of cloud computing. The term cloud computing is composed of two words: cloud, meaning the Internet, and computing, meaning solving problems with the help of computers. In the digital world, computing relates to CPU and RAM. Now consider a situation: you are using macOS on your machine, but a particular application for your project can run only on Windows. You can either buy a new machine running Windows or create a virtual environment in which Windows can be installed and used. The second option is better because of its lower cost and easier implementation. This scenario is called virtualization. In it, a virtual CPU, RAM, NIC, and other resources are provided to the OS, which it needs in order to run. These resources are virtually provided and controlled by an application called a hypervisor. The new OS running on the virtual hardware resources is collectively called a virtual machine (VM).
Figure – Virtualization on local machine
Now migrate this concept to data centers, where many servers (machines with fast CPUs, large RAM, and enormous storage) are available. The enterprise owning the data center provides the resources that customers request as per their needs. Data centers hold all these resources, and on a user's request a particular amount of CPU, RAM, NIC, and storage with the preferred OS is provided. This concept of virtualization, in which services are requested and provided over the Internet, is called server virtualization.
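To make the hypervisor's role concrete, here is a minimal sketch of handing virtual CPU, RAM, NIC, and disk to a guest OS, using QEMU/KVM as an example hypervisor; QEMU and all the names and sizes here are illustrative assumptions, not part of the notes above.

# create a 20 GB virtual disk for the guest
qemu-img create -f qcow2 guest.qcow2 20G
# start a VM with 2 virtual CPUs, 4 GB RAM, a virtual NIC, and the virtual disk
qemu-system-x86_64 -enable-kvm -smp 2 -m 4096 \
  -netdev user,id=net0 -device e1000,netdev=net0 \
  -drive file=guest.qcow2,format=qcow2 \
  -cdrom windows.iso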
Storage Virtualization
Storage virtualization, in the form of functional RAID levels and controllers, is an important component of storage servers. Applications and operating systems on the device can directly access the discs for writing. Local storage is configured by the controllers into RAID groups, and the operating system sees the storage based on that configuration. Because the storage is abstracted, however, the controller is in charge of figuring out how to write or retrieve the data that the operating system requests.
Types of Storage Virtualization
Below are some types of Storage Virtualization.
● Kernel-level virtualization: In kernel-level virtualization, a separate version of the Linux kernel runs; this kernel-level approach lets one host execute several virtual servers.
● Hypervisor virtualization: A layer known as a hypervisor is installed between the operating system and the hardware. It enables several operating systems to run effectively.
● Hardware-assisted virtualization: Hardware-assisted virtualization is similar to full para-virtualization; however, it requires hardware support.
● Para-virtualization: The foundation of para-virtualization is a hypervisor, which handles software emulation and trapping.
Methods of Storage Virtualization
● Network-based storage virtualization: The most popular type of virtualization used by
businesses is network-based storage virtualization. All of the storage devices in an FC or
iSCSI SAN are connected to a network device, such as a smart switch or specially
designed server, which displays the network’s storage as a single virtual pool.
● Host-based storage virtualization: Host-based storage virtualization is software-based
and most often seen in HCI systems and cloud storage. In this type of virtualization, the
host, or a hyper-converged system made up of multiple hosts, presents virtual drives of
varying capacity to the guest machines, whether they are VMs in an enterprise
environment, physical servers or computers accessing file shares or cloud storage.
● Array-based storage virtualization: The most common arrangement here is for a storage array to serve as the main storage controller, equipped with virtualization software. This allows the array to share storage resources with other arrays and to present various physical storage types that can be used as storage tiers.
How Storage Virtualization Works
● Physical storage hardware is replicated in a virtual volume during storage virtualization.
● A single server is utilized to aggregate several physical discs into a grouping that creates
a basic virtual storage system.
● Operating systems and programs can access and use the storage because a virtualization
layer separates the physical discs from the virtual volume.
● The physical discs are divided into objects called logical volumes (LVs), logical unit numbers (LUNs), or RAID groups, which are collections of small data blocks (a command-line sketch follows this list).
● In a more complex setting, RAID arrays can serve as virtual storage: many physical drives simulate a single storage device that stripes data and copies it to several discs in the background.
● The virtualization program has to take an extra step in order to access data from the
physical discs.
● Block-level and file-level storage environments can both be used to create virtual storage.
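The block-level side of this can be seen with Linux LVM, where physical discs are pooled and carved into logical volumes. The following is a minimal sketch; the device names and sizes are illustrative assumptions.

# mark two physical discs for use by the volume manager
pvcreate /dev/sdb /dev/sdc
# aggregate them into one virtual pool (a volume group)
vgcreate vg0 /dev/sdb /dev/sdc
# carve a 10 GB logical volume out of the pool and put a filesystem on it
lvcreate -L 10G -n data vg0
mkfs.ext4 /dev/vg0/data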
Advantages of Storage Virtualization
Below are some Advantages of Storage Virtualization.
● Advanced features like redundancy, replication, and disaster recovery are all possible
with the storage devices.
● It enables organizations to pursue their own business prospects.
● Data is stored in more convenient locations, away from the specific host, so the data is not necessarily compromised in the event of a host failure.
● IT operations may now provision, divide, and secure storage in a more flexible way by
abstracting the storage layer.
Disadvantages of Storage Virtualization
Below are some Disadvantages of Storage Virtualization.
● Storage Virtualization still has limitations which must be considered.
● Data security is still a problem. Virtual environments can draw new types of cyberattacks,
despite the fact that some may contend that virtual computers and servers are more secure
than physical ones.
● The deployment of storage virtualization is not always easy, and it still faces technological obstacles such as scalability.
● Virtualization breaks the end-to-end view of your data, so the virtualized storage solution must be integrated with existing tools and systems.
Introduction to Docker
● Docker is a platform designed to help developers build, share, and run container
applications.
● Docker is a tool that allows developers, sysadmins, etc. to easily deploy their applications in a sandbox (called a container) to run on the host operating system, e.g. Linux.
Feature: Isolation
Virtual machine: Provides complete isolation from the host operating system and other VMs. This is useful when a strong security boundary is critical, such as hosting apps from competing companies on the same server or cluster.
Container: Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM. (You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM.)

Feature: Operating system
Virtual machine: Runs a complete operating system including the kernel, thus requiring more system resources (CPU, memory, and storage).
Container: Runs the user mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.

Feature: Guest compatibility
Virtual machine: Runs just about any operating system inside the virtual machine.
Container: Runs on the same operating system version as the host. (Hyper-V isolation enables you to run earlier versions of the same OS in a lightweight VM environment.)

Feature: Deployment
Virtual machine: Deploy individual VMs by using Windows Admin Center or Hyper-V Manager; deploy multiple VMs by using PowerShell or System Center Virtual Machine Manager.
Container: Deploy individual containers by using Docker via the command line; deploy multiple containers by using an orchestrator such as Azure Kubernetes Service.

Feature: Operating system updates and upgrades
Virtual machine: Download and install operating system updates on each VM. Installing a new operating system version requires upgrading or often just creating an entirely new VM. This can be time-consuming, especially if you have a lot of VMs...
Container: Updating or upgrading the operating system files within a container is the same:
1. Edit your container image's build file (known as a Dockerfile) to point to the latest version of the Windows base image.
2. Rebuild your container image with this new base image.
3. Push the container image to your container registry.
4. Redeploy using an orchestrator. The orchestrator provides powerful automation for doing this at scale. For details, see Tutorial: Update an application in Azure Kubernetes Service.

Feature: Persistent storage
Virtual machine: Use a virtual hard disk (VHD) for local storage for a single VM, or an SMB file share for storage shared by multiple servers.
Container: Use Azure Disks for local storage for a single node, or Azure Files (SMB shares) for storage shared by multiple nodes or servers.

Feature: Load balancing
Virtual machine: Virtual machine load balancing moves running VMs to other servers in a failover cluster.
Container: Containers themselves don't move; instead an orchestrator can automatically start or stop containers on cluster nodes to manage changes in load and availability.

Feature: Fault tolerance
Virtual machine: VMs can fail over to another server in a cluster, with the VM's operating system restarting on the new server.
Container: If a cluster node fails, any containers running on it are rapidly recreated by the orchestrator on another cluster node. (Orchestration is the process of automating the management and networking of containers to deploy applications at scale.)

Feature: Networking
Virtual machine: Uses virtual network adapters.
Container: Uses an isolated view of a virtual network adapter, providing a little less virtualization (the host's firewall is shared with containers) while using fewer resources. For more, see Windows container networking.
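As a hedged sketch of the four container-update steps in the table above: the image names, registry, and deployment below are illustrative assumptions, and `kubectl set image` stands in for "redeploy using an orchestrator".

# 1. edit the Dockerfile so its FROM line points at the newer base image, e.g.
#    FROM mcr.microsoft.com/windows/servercore:ltsc2022
# 2. rebuild the image against the new base
docker build -t registry.example.com/myapp:2.0 .
# 3. push the rebuilt image to the container registry
docker push registry.example.com/myapp:2.0
# 4. redeploy through the orchestrator (here, a Kubernetes deployment)
kubectl set image deployment/myapp myapp=registry.example.com/myapp:2.0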
Docker Architecture
Before learning the Docker architecture, you should first know about the Docker daemon. The Docker daemon runs on the host operating system. It is responsible for running containers and managing Docker services. The Docker daemon can communicate with other daemons. It manages various Docker objects such as images, containers, networks, and storage.
Figure – Docker architecture
Docker follows Client-Server architecture, which includes the three main components that
are Docker Client, Docker Host, and Docker Registry.
1. Docker Client
Note: The Docker client has the ability to communicate with more than one Docker daemon.
The Docker client uses the Command Line Interface (CLI) to run commands such as the following -
docker build
docker pull
docker run
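For illustration, typical invocations of these three commands look like the following; the image names, tags, and ports are assumptions, not fixed by these notes.

# download an image from a registry
docker pull nginx:latest
# build an image from the Dockerfile in the current directory and tag it
docker build -t myapp:1.0 .
# start a container in the background, mapping host port 8080 to container port 80
docker run -d -p 8080:80 nginx:latest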
2. Docker Host
Docker Host is used to provide an environment to execute and run applications. It contains
the docker daemon, images, containers, networks, and storage.
3. Docker Registry
The Docker registry stores and distributes Docker images. Docker Hub is the default public registry; enterprises can also run private registries.
Docker Objects
Docker Images
Docker images are read-only binary templates used to create Docker containers. A private container registry can be used to share container images within the enterprise, and a public container registry to share them with the whole world. Docker images also use metadata to describe a container's abilities. In short, a Docker image is a template with instructions for creating containers.
Docker Containers
Containers are the structural units of Docker, used to hold the entire package needed to run an application. The advantage of containers is that they require very few resources.
In other words, we can say that the image is a template, and the container is a copy of that
template.
Docker Networking
Docker networking allows isolated containers to communicate with each other. Docker provides the following network drivers (a short command sketch follows the list) -
o Bridge - Bridge is the default network driver for containers. It is used when multiple containers communicate on the same Docker host.
o Host - It is used when we don't need network isolation between the container and the host.
o None - It disables all networking.
o Overlay - Overlay allows Swarm services to communicate with each other. It enables containers to run on different Docker hosts.
o Macvlan - Macvlan is used when we want to assign MAC addresses to containers.
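A brief sketch of these drivers in use; the network and container names are illustrative assumptions.

# create a user-defined bridge network
docker network create -d bridge mynet
# attach two containers to it so they can communicate by name
docker run -d --name web --network mynet nginx
docker run -d --name cache --network mynet redis
# list networks and their drivers (bridge, host, none, overlay, macvlan)
docker network ls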
Docker Storage
Docker storage is used to persist data for containers. Docker offers the following options for storage (a short command sketch follows the list) -
o Data Volume - Data volumes provide the ability to create persistent storage. They also allow us to name volumes, list volumes, and list the containers associated with a volume.
o Directory Mounts - It is one of the best options for docker storage. It mounts a host's
directory into a container.
o Storage Plugins - It provides an ability to connect to external storage platforms.
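A brief sketch of the volume and directory-mount options; the volume, path, and image names are illustrative assumptions.

# create a named data volume for persistent storage
docker volume create appdata
# attach the volume to a container at /var/lib/data
docker run -d -v appdata:/var/lib/data nginx
# directory (bind) mount: expose a host directory inside the container
docker run -d -v /srv/site:/usr/share/nginx/html nginx
# list volumes
docker volume ls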
The use of Docker is widespread across many industries. Docker is used by businesses of all kinds, from startups to established corporations and IT giants like Google and Netflix. Here are a few examples −
● E-commerce − Docker is perfect for e-commerce platforms that manage large levels
of traffic since it can scale quickly and meet scalability requirements.
● Media & Entertainment − Businesses in this industry use Docker to handle workflows
related to media processing and content delivery networks.
Different Container Orchestration Tools
Tools to manage, scale, and maintain containerized applications are called orchestrators. The
most common examples include Kubernetes, Docker Swarm, and Apache Mesos.
Kubernetes
Kubernetes has become an ideal platform for hosting cloud-native apps that require rapid scaling and deployment. Kubernetes also provides portability and load-balancing services, letting users move applications across different platforms without redesigning them.
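A minimal sketch of this scaling and load-balancing workflow with the kubectl CLI; the deployment name, image, and replica count are illustrative assumptions.

# declare a deployment from a container image
kubectl create deployment web --image=nginx
# scale it rapidly to five replicas
kubectl scale deployment web --replicas=5
# put a load-balancer service in front of the replicas
kubectl expose deployment web --port=80 --type=LoadBalancer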
Docker Swarm
Docker swarm is also a container orchestration tool, meaning that it allows the user to
manage multiple containers deployed across multiple host machines.
Docker Swarm's main benefits include a high level of availability for applications. Like Kubernetes, Docker Swarm has several worker nodes and a manager node that handles the worker nodes' resources and ensures that the cluster operates efficiently.
Even though Kubernetes dominates container orchestration, Docker still offers Docker Swarm. It is a fully integrated container orchestration tool, but slightly less extensible and complex than Kubernetes. Docker Swarm is a good choice for Docker enthusiasts who want an easier and faster path to container deployments.
In fact, Docker bundles both Swarm and Kubernetes in its enterprise edition in hopes of
making them complementary tools.
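A minimal sketch of standing up a Swarm cluster and a replicated service; the service name and replica count are illustrative, and the join token is printed by `docker swarm init`.

# turn the current host into a Swarm manager node
docker swarm init
# on each worker machine, join using the token printed by the manager:
# docker swarm join --token <token> <manager-ip>:2377
# schedule a service with three replicas across the cluster
docker service create --name web --replicas 3 nginx
# inspect manager and worker nodes
docker node ls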
Amazon Elastic Kubernetes Service (Amazon EKS)
Amazon EKS offers Kubernetes as a service, making it easy to run Kubernetes on AWS. By using Amazon EKS, users don't have to maintain a Kubernetes control plane on their own.
It helps automate the deployment, scaling, and maintenance of containerized applications. EKS works with almost all major operating systems. Through EKS, organizations can run Kubernetes without installing it on their local systems, and can operate a Kubernetes control plane or worker nodes easily and effectively. In other words, EKS is a managed containers-as-a-service (CaaS) offering that drastically simplifies Kubernetes deployment on AWS.
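As an illustrative sketch, an EKS cluster can be created with the eksctl CLI; the cluster name, region, and node count are assumptions.

# create a managed EKS cluster with two worker nodes
eksctl create cluster --name demo --region us-east-1 --nodes 2
# verify that the nodes have joined the cluster
kubectl get nodes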
Google Kubernetes Engine
Google Kubernetes Engine (GKE) is a container orchestration platform provided by Google: a fully managed Kubernetes service for deploying, managing, and scaling containerized applications on Google Cloud.
The GKE environment consists of multiple machines (specifically, Compute Engine
instances) grouped together to form a cluster.
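A minimal sketch with the gcloud CLI; the cluster name, zone, and node count are illustrative assumptions.

# create a three-node GKE cluster (a group of Compute Engine instances)
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
# fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials demo-cluster --zone us-central1-a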