
UNIT III VIRTUALIZATION INFRASTRUCTURE AND DOCKER

Desktop Virtualization – Network Virtualization – Storage Virtualization – Server Virtualization – Operating System Virtualization – Containers vs. Virtual Machines – Introduction to Docker – Docker Architecture – Process of Container Orchestration – Container Orchestration Tools.

DESKTOP VIRTUALIZATION

Desktop virtualization is a software technology that separates the desktop environment and
associated application software from the physical client device that is used to access it.

Desktop virtualization can be used in conjunction with application virtualization and user
profile management systems, now termed user virtualization, to provide a comprehensive
desktop environment management system. In this mode, all the components of the desktop
are virtualized, which allows for a highly flexible and much more secure desktop delivery
model. In addition, this approach supports a more complete desktop disaster recovery strategy
as all components are essentially saved in the data center and backed up through traditional
redundant maintenance systems. If a user's device or hardware is lost, the restore is
straightforward and simple, because the components will be present at login from another
device. In addition, because no data are saved to the user's device, if that device is lost, there
is much less chance that any critical data can be retrieved and compromised.

System architectures
Desktop virtualization implementations are classified based on whether the virtual
desktop runs remotely or locally, on whether the access is required to be constant or is
designed to be intermittent, and on whether or not the virtual desktop persists between
sessions. Typically, software products that deliver desktop virtualization solutions can
combine local and remote implementations into a single product to provide the most
appropriate support specific to requirements. The degree of independent functionality of the
client device is necessarily interdependent with the server location and access strategy.
Virtualization is not strictly required for remote control to exist. Virtualization is employed to
present independent instances to multiple users and requires a strategic segmentation of the
host server and presentation at some layer of the host's architecture. The enabling
layer, usually application software, is called a hypervisor.[1]

Remote desktop virtualization


Remote desktop virtualization implementations operate in a client/server computing
environment. Application execution takes place on a remote operating system which
communicates with the local client device over a network using a remote display protocol
through which the user interacts with applications. All applications and data used remain on
the remote system with only display, keyboard, and mouse information communicated with
the local client device, which may be a conventional PC/laptop, a thin client device, a tablet,
or even a smartphone. A common implementation of this approach involves hosting multiple
desktop operating system instances on a server hardware platform running a hypervisor. Its
latest iteration is generally referred to as Virtual Desktop Infrastructure, or "VDI" (note
that "VDI" is often used incorrectly to refer to any desktop virtualization implementation[2]).

Remote desktop virtualization is frequently used in the following scenarios:

● in distributed environments with high availability requirements and where desk-side


technical support is not readily available, such as branch office and retail environments.
● in environments where high network latency degrades the performance of
conventional client/server applications
● in environments where remote access and data security requirements create conflicting
requirements that can be addressed by retaining all (application) data within the data
center – with only display, keyboard, and mouse information communicated with the
remote client.
It is also used as a means of providing access to Windows applications on non-Windows
endpoints (including tablets, smartphones, and non-Windows-based desktop PCs and
laptops).

Remote desktop virtualization can also provide a means of resource sharing, to distribute
low-cost desktop computing services in environments where providing every user with a
dedicated desktop PC is either too expensive or otherwise unnecessary.

For IT administrators, this means a more centralized, efficient client environment that is
easier to maintain and able to respond more quickly to the changing needs of the user and
business.[3][4]

Presentation virtualization
Remote desktop software allows a user to access applications and data on a remote
computer over a network using a remote-display protocol. A VDI service provides individual
desktop operating system instances (e.g., Windows XP, 7, 8.1, 10, etc.) for each user, whereas
remote desktop sessions run in a single shared-server operating system. Both session
collections and virtual machines support full desktop-based sessions and remote application
deployment.

The use of a single shared-server operating system instead of individual desktop operating
system instances consumes significantly fewer resources than the same number of VDI
sessions. At the same time, VDI licensing is both more expensive and less flexible than
equivalent remote desktop licenses. Together, these factors can combine to make remote
desktop-based remote desktop virtualization more attractive than VDI.

VDI implementations allow for delivering personalized workspace back to a user, which
retains all the user's customizations. There are several methods to accomplish this.

Application virtualization
Application virtualization improves delivery and compatibility of applications by
encapsulating them from the underlying operating system on which they are executed. A fully
virtualized application is not installed on hardware in the traditional sense. Instead, a
hypervisor layer intercepts the application, which at runtime acts as if it is interfacing with
the original operating system and all the resources managed by it when in reality it is not.

User virtualization
User virtualization separates all of the software aspects that define a user’s personality on a
device from the operating system and applications to be managed independently and applied
to a desktop as needed without the need for scripting, group policies, or use of roaming
profiles. The term "user virtualization" sounds misleading; this technology is not limited to
virtual desktops. User virtualization can be used regardless of platform – physical, virtual,
cloud, etc. The major desktop virtualization platform vendors, Citrix, Microsoft and VMware,
all offer a form of basic user virtualization in their platforms.

Layering
Desktop layering is a method of desktop virtualization that divides a disk image into logical
parts to be managed individually. For example, if all members of a user group use the same
OS, then the core OS only needs to be backed up once for the entire environment that shares
this layer. Layering can be applied to local physical disk images, client-based virtual
machines, or host-based desktops. Windows operating systems are not designed for layering;
therefore, each vendor must engineer its own proprietary solution.

Desktop as a service
Remote desktop virtualization can also be provided via cloud computing similar to that
provided using a software as a service model. This approach is usually referred to as
cloud-hosted virtual desktops. Cloud-hosted virtual desktops are divided into two
technologies:

1. Managed VDI, which is based on VDI technology provided as an outsourced
managed service, and
2. Desktop as a service (DaaS), which provides a higher level of automation and real
multi-tenancy, reducing the cost of the technology. The DaaS provider typically takes
full responsibility for hosting and maintaining the compute, storage, and access
infrastructure, as well as applications and application software licenses needed to
provide the desktop service, in return for a fixed monthly fee.
Cloud-hosted virtual desktops can be implemented using both VDI and Remote Desktop
Services-based systems and can be provided through the public cloud, private cloud
infrastructure, and hybrid cloud platforms. Private cloud implementations are commonly
referred to as "managed VDI". Public cloud offerings tend to be based on
desktop-as-a-service technology.

Local desktop virtualization


Local desktop virtualization implementations run the desktop environment on the client
device using hardware virtualization or emulation. For hardware virtualization, depending on
the implementation, both Type I and Type II hypervisors may be used.[7]

Local desktop virtualization is well suited for environments where continuous network
connectivity cannot be assumed and where application resource requirements can be better
met by using local system resources. However, local desktop virtualization implementations
do not always allow applications developed for one system architecture to run on another. For
example, it is possible to use local desktop virtualization to run Windows 7 on top of OS
X on an Intel-based Apple Mac, using a hypervisor, as both use the same x86 architecture.

NETWORK VIRTUALIZATION

Network Virtualization is a process of logically grouping physical networks and making them
operate as single or multiple independent networks called Virtual Networks.

Figure – General Architecture of Network Virtualization

Tools for Network Virtualization :


1. Physical switch OS –
The operating system of the physical switch must itself provide network virtualization
functionality.
2. Hypervisor –
The hypervisor provides network virtualization through its built-in networking
functionality or through third-party software.
The basic function of the OS is to provide the application or the executing process with a
simple set of instructions. System calls that are generated by the OS and executed through the
libc library are comparable to the service primitives given at the interface between the
application and the network through the SAP (Service Access Point).
The hypervisor is used to create a virtual switch and to configure virtual networks on it.
Third-party software can be installed onto the hypervisor to replace its native networking
functionality. A hypervisor allows us to have various VMs all working
optimally on a single piece of computer hardware.
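
To make the virtual switch concrete, here is a minimal sketch using the standard Linux iproute2 tools; the interface name eth0 and VLAN ID 100 are illustrative assumptions, not taken from these notes.

ip link add br0 type bridge                            # create a software (virtual) switch
ip link set br0 up
ip link add link eth0 name eth0.100 type vlan id 100   # tag traffic for VLAN 100
ip link set eth0.100 master br0                        # attach the VLAN interface to the switch

Hypervisor platforms expose the same idea through their own virtual switch management tools.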
Functions of Network Virtualization :
● It enables the functional grouping of nodes in a virtual network.
● It enables the virtual network to share network resources.
● It allows communication between nodes in a virtual network without routing of frames.
● It restricts management traffic.
● It enforces routing for communication between virtual networks.

Network Virtualization in Virtual Data Center :


1. Physical Network
● Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
● Grants connectivity among physical servers running a hypervisor, between physical
servers and storage systems and between physical servers and clients.
2. VM Network
● Consists of virtual switches.
● Provides connectivity to hypervisor kernel.
● Connects to the physical network.
● Resides inside the physical server.

Figure – Network Virtualization in a VDC


Advantages of Network Virtualization :
Improves manageability –
● Grouping and regrouping of nodes are eased.
● Configuration of VM is allowed from a centralized management workstation using
management software.
Reduces CAPEX –
● The requirement to set up separate physical networks for different node groups is
reduced.
Improves utilization –
● Multiple VMs are enabled to share the same physical network which enhances the
utilization of network resource.
Enhances performance –
● Network broadcast is restricted and VM performance is improved.
Enhances security –
● Sensitive data on one VM is isolated from other VMs.
● Access to nodes in one VM is restricted from other VMs.
Disadvantages of Network Virtualization :
● IT resources must be managed in the abstract rather than as physical devices.
● Virtual networks need to coexist with physical devices in a cloud-integrated hybrid
environment.
● Increased complexity.
● Upfront cost.
● Possible learning curve.

Examples of Network Virtualization :


Virtual LAN (VLAN) –
● The performance and speed of busy networks can be improved by VLAN.
● VLAN can simplify additions or any changes to the network.
Network Overlays –
● VXLAN, an encapsulation protocol, provides a framework for overlaying virtualized
layer 2 networks over layer 3 networks.
● The Generic Network Virtualization Encapsulation protocol (GENEVE) provides a new
approach to encapsulation, designed to provide control-plane independence between the
endpoints of the tunnel.
Network Virtualization Platform: VMware NSX –
● VMware NSX Data Center delivers networking and security components such as
switching, firewalling, and routing that are defined and consumed in software.
● It reproduces the operational model of a virtual machine (VM) for the network.
Applications of Network Virtualization :
● Network virtualization may be used in the development of application testing to mimic
real-world hardware and system software.
● It helps us to integrate several physical networks into a single network, or separate a
single physical network into multiple logical networks.
● In the field of application performance engineering, network virtualization allows the
simulation of connections between applications, services, dependencies, and end-users for
software testing.
● It helps us to deploy applications in a quicker time frame, thereby supporting a faster
go-to-market.
● Network virtualization helps the software testing teams to derive actual results with
expected instances and congestion issues in a networked environment.

SERVER VIRTUALIZATION

Server virtualization is a very important part of cloud computing. The term cloud computing
is composed of two words: cloud, meaning the Internet, and computing, meaning solving
problems with the help of computers; in the digital world, computing relates to CPU and
RAM. Now consider a situation: you are using macOS on your machine, but a particular
application for your project can run only on Windows. You can either buy a new machine
running Windows or create a virtual environment in which Windows can be installed and
used. The second option is better because of its lower cost and easier implementation. This
scenario is called virtualization. In it, a virtual CPU, RAM, NIC, and other resources are
provided to the OS, which it needs in order to run. These resources are virtually provided and
controlled by an application called a hypervisor. The new OS running on the virtual hardware
resources is collectively called a Virtual Machine (VM).
Figure – Virtualization on local machine

Now extend this concept to data centers, where many servers (machines with fast CPUs, large
RAM, and enormous storage) are available. The enterprise that owns the data center provides
the resources requested by customers as per their need: on a user's request, a particular
amount of CPU, RAM, NIC capacity, and storage with the preferred OS is provided. This
concept of virtualization, in which server resources are requested and provided over the
Internet, is called server virtualization.

Figure – Server Virtualization


To implement server virtualization, a hypervisor is installed on the server; it manages and
allocates host hardware resources to each virtual machine. The hypervisor sits above the
server hardware and regulates the resources of each VM. A user can increase or decrease
resources, or delete an entire VM, as needed. Servers with VMs created on them constitute
server virtualization, and the concept of users controlling these VMs through the Internet is
called cloud computing.
Advantages of Server Virtualization:
● Each server in server virtualization can be restarted separately without affecting the
operation of other virtual servers.
● Server virtualization lowers the cost of hardware by dividing a single server into several
virtual private servers.
● One of the major benefits of server virtualization is disaster recovery. In server
virtualization, data may be stored and retrieved from any location and moved rapidly and
simply from one server to another.
● It enables users to keep their private information in the data centers.
Disadvantages of Server Virtualization:
● The major drawback of server virtualization is that all websites hosted by the server
will cease to exist if the server goes offline.
● The performance of virtualized environments is difficult to measure.
● It consumes a significant amount of RAM.
● Setup and ongoing maintenance are challenging.
● Many essential databases and applications do not support virtualization.

Operating system based Virtualization


Operating system-based virtualization refers to an operating system feature in which the
kernel enables the existence of multiple isolated user-space instances. The term also refers to
installing virtualization software on top of a pre-existing operating system, which is then
called the host operating system. In this form of virtualization, a user installs the
virtualization software in the operating system like any other program and uses this
application to create and operate various virtual machines. Here, the virtualization software
gives the user direct access to any of the created virtual machines. Because the host OS must
provide the mandatory support for hardware devices, operating system virtualization may
suffer hardware compatibility issues when a hardware driver is not available to the
virtualization software. Virtualization software is able to convert hardware IT resources that
require unique software for operation into virtualized IT resources. Because the host OS is a
complete operating system in itself, many OS-based services are available, and
organizational management and administration tools can be utilized for virtualization host
management.

Some major operating system-based services are mentioned below:


1. Backup and Recovery.
2. Security Management.
3. Integration with Directory Services.
A computer program running on an ordinary operating system can see all the resources of that computer, for example:
1. Hardware capabilities can be employed, such as the network connection and CPU.
2. Connected peripherals with which it can interact, such as a webcam, printer, keyboard, or
scanner.
3. Data that can be read or written, such as files, folders, and network shares.
The operating system may have the capability to allow or deny access to such resources
based on which program requests them and the user account in whose context it runs. The OS
may also hide these resources, so that when a computer program enumerates them, they do
not appear in the enumeration results. Nevertheless, from a programming perspective, the
computer program has interacted with those resources and the operating system has mediated
that interaction.
With operating-system virtualization, or containerization, it is possible to run programs
within containers, to which only parts of these resources are allocated. A program that
expects to see the whole computer, once run inside a container, can see only the allocated
resources and believes them to be all that is available. Several containers can be created on
each operating system, and a subset of the computer's resources is allocated to each of them.
Each container may include many computer programs. These programs may run in parallel
or separately, and may even interact with each other.
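
To illustrate partial resource allocation, here is a hedged sketch using the Docker CLI; the image name and limit values are arbitrary examples. A process inside this container perceives only the allocated share.

# cap the CPU time, RAM, and number of processes the container may use
docker run -d --name limited --cpus="1.5" --memory="512m" --pids-limit=100 nginx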

Features of operating system-based virtualization:

● Resource isolation: Operating system-based virtualization provides a high level of
resource isolation, which allows each container to have its own set of resources, including
CPU, memory, and I/O bandwidth.
● Lightweight: Containers are lightweight compared to traditional virtual machines as they
share the same host operating system, resulting in faster startup and lower resource usage.
● Portability: Containers are highly portable, making it easy to move them from one
environment to another without needing to modify the underlying application.
● Scalability: Containers can be easily scaled up or down based on the application
requirements, allowing applications to be highly responsive to changes in demand.
● Security: Containers provide a high level of security by isolating the containerized
application from the host operating system and other containers running on the same
system.
● Reduced Overhead: Containers incur less overhead than traditional virtual machines, as
they do not need to emulate a full hardware environment.
● Easy Management: Containers are easy to manage, as they can be started, stopped, and
monitored using simple commands.
Operating system-based virtualization can raise demands and problems related to
performance overhead, such as:
1. The host operating system consumes CPU, memory, and other hardware IT resources.
2. Hardware-related calls from guest operating systems need to traverse numerous layers to
and from the hardware, which reduces overall performance.
3. Licenses are frequently required for host operating systems, in addition to individual
licenses for each of their guest operating systems.
Advantages of Operating System-Based Virtualization:
● Resource Efficiency: Operating system-based virtualization allows for greater resource
efficiency as containers do not need to emulate a complete hardware environment, which
reduces resource overhead.
● High Scalability: Containers can be quickly and easily scaled up or down based on
demand, which makes it easy to respond to changes in the workload.
● Easy Management: Containers are easy to manage, as they can be managed through
simple commands, which makes it easy to deploy and maintain large numbers of
containers.
● Reduced Costs: Operating system-based virtualization can significantly reduce costs, as
it requires fewer resources and infrastructure than traditional virtual machines.
● Faster Deployment: Containers can be deployed quickly, reducing the time required to
launch new applications or update existing ones.
● Portability: Containers are highly portable, making it easy to move them from one
environment to another without requiring changes to the underlying application.
Disadvantages of Operating System-Based Virtualization:
● Security: Operating system-based virtualization may pose security risks as containers
share the same host operating system, which means that a security breach in one container
could potentially affect all other containers running on the same system.
● Limited Isolation: Containers may not provide complete isolation between applications,
which can lead to performance degradation or resource contention.
● Complexity: Operating system-based virtualization can be complex to set up and
manage, requiring specialized skills and knowledge.
● Dependency Issues: Containers may have dependency issues with other containers or the
host operating system, which can lead to compatibility issues and hinder deployment.
● Limited Hardware Access: Containers may have limited access to hardware resources,
which can limit their ability to perform certain tasks or applications that require direct
hardware access.

Storage Virtualization

Storage virtualization abstracts physical storage so that functional RAID levels and
controllers, an important component of storage servers, can be presented in the desired way.
Applications and operating systems on the device can directly access the discs for writing.
The controllers configure local storage in RAID groups, and the operating system sees the
storage according to that configuration. Because the storage is abstracted, however, the
controller is in charge of figuring out how to write or retrieve the data that the operating
system requests.
Types of Storage Virtualization
Below are some types of Storage Virtualization.
● Kernel-level virtualization: A separate version of the Linux kernel runs alongside the
main one, allowing a single host to execute several virtual servers.
● Hypervisor virtualization: A layer known as a hypervisor is installed between the
operating system and the hardware. It enables several operating systems to run
effectively.
● Hardware-assisted virtualization: Similar to full virtualization and para-virtualization,
except that it requires hardware support.
● Para-virtualization: Based on a hypervisor, which handles software emulation and
trapping.
Methods of Storage Virtualization
● Network-based storage virtualization: The most popular type of virtualization used by
businesses is network-based storage virtualization. All of the storage devices in an FC or
iSCSI SAN are connected to a network device, such as a smart switch or specially
designed server, which displays the network’s storage as a single virtual pool.
● Host-based storage virtualization: Host-based storage virtualization is software-based
and most often seen in HCI systems and cloud storage. In this type of virtualization, the
host, or a hyper-converged system made up of multiple hosts, presents virtual drives of
varying capacity to the guest machines, whether they are VMs in an enterprise
environment, physical servers or computers accessing file shares or cloud storage.
● Array-based storage virtualization: The most popular use of this type of virtualization
is when a storage array serves as the main storage controller and is equipped with
virtualization software. This allows the array to share storage resources with other arrays
and present various physical storage types that can be used as storage tiers.
How Storage Virtualization Works?
● Physical storage hardware is replicated in a virtual volume during storage virtualization.
● A single server is utilized to aggregate several physical discs into a grouping that creates
a basic virtual storage system.
● Operating systems and programs can access and use the storage because a virtualization
layer separates the physical discs from the virtual volume.
● The physical discs are separated into objects called logical volumes (LV), logical unit
numbers (LUNs), or RAID groups, which are collections of tiny data blocks.
● In a more complex setting, RAID arrays can serve as virtual storage: many physical
drives simulate a single storage device that copies data to several discs in the background
while striping it.
● The virtualization program has to take an extra step in order to access data from the
physical discs.
● Block-level and file-level storage environments can both be used to create virtual storage
(a worked example follows).
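
As a concrete illustration of pooling physical discs into logical volumes (LVs), here is a hedged sketch using Linux LVM; the device names are illustrative assumptions.

pvcreate /dev/sdb /dev/sdc         # mark physical discs for pooling
vgcreate pool /dev/sdb /dev/sdc    # aggregate them into one storage pool
lvcreate -L 10G -n data pool       # carve a logical volume (LV) out of the pool
mkfs.ext4 /dev/pool/data           # the OS now sees a single virtual block device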
Advantages of Storage Virtualization
Below are some Advantages of Storage Virtualization.
● Advanced features like redundancy, replication, and disaster recovery are all possible
with the storage devices.
● It enables organizations to establish their own business prospects.
● Data is kept in more practical locations, away from the particular host, so the data is not
necessarily compromised in the event of a host failure.
● IT operations may now provision, divide, and secure storage in a more flexible way by
abstracting the storage layer.
Disadvantages of Storage Virtualization
Below are some Disadvantages of Storage Virtualization.
● Storage virtualization still has limitations which must be considered.
● Data security is still a problem. Virtual environments can attract new types of
cyberattacks, despite the fact that some may contend that virtual computers and servers
are more secure than physical ones.
● The deployment of storage virtualization is not always easy, and there are technological
obstacles to overcome, including scalability.
● Virtualization breaks the end-to-end view of your data; the virtualized storage solution
must be integrated with existing tools and systems.
Introduction to Docker
● Docker is a platform designed to help developers build, share, and run container
applications.
● Docker is a tool that allows developers, sysadmins, and others to easily deploy their
applications in a sandbox (called a container) to run on the host operating system,
i.e. Linux; a minimal first run follows.
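
A hedged first example, assuming Docker is already installed on a Linux host:

docker run hello-world    # pulls a tiny test image and runs it in a container
docker ps -a              # list containers, including the one that just exited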

Containers vs. virtual machines


The following table shows some of the similarities and differences of these complementary
technologies.

Feature: Isolation
Virtual machine: Provides complete isolation from the host operating system and other VMs.
This is useful when a strong security boundary is critical, such as hosting apps from
competing companies on the same server or cluster.
Container: Typically provides lightweight isolation from the host and other containers, but
doesn't provide as strong a security boundary as a VM. (You can increase the security by
using Hyper-V isolation mode to isolate each container in a lightweight VM.)

Feature: Operating system
Virtual machine: Runs a complete operating system including the kernel, thus requiring more
system resources (CPU, memory, and storage).
Container: Runs the user-mode portion of an operating system, and can be tailored to contain
just the needed services for your app, using fewer system resources.

Feature: Guest compatibility
Virtual machine: Runs just about any operating system inside the virtual machine.
Container: Runs on the same operating system version as the host (Hyper-V isolation enables
you to run earlier versions of the same OS in a lightweight VM environment).

Feature: Deployment
Virtual machine: Deploy individual VMs by using Windows Admin Center or Hyper-V
Manager; deploy multiple VMs by using PowerShell or System Center Virtual Machine
Manager.
Container: Deploy individual containers by using Docker via the command line; deploy
multiple containers by using an orchestrator such as Azure Kubernetes Service.

Feature: Operating system updates and upgrades
Virtual machine: Download and install operating system updates on each VM. Installing a
new operating system version requires upgrading or often just creating an entirely new VM.
This can be time-consuming, especially if you have a lot of VMs.
Container: Updating or upgrading the operating system files within a container follows the
same steps each time (see the sketch after this table):
1. Edit your container image's build file (known as a Dockerfile) to point to the latest
version of the Windows base image.
2. Rebuild your container image with this new base image.
3. Push the container image to your container registry.
4. Redeploy using an orchestrator. The orchestrator provides powerful automation for
doing this at scale. For details, see Tutorial: Update an application in Azure Kubernetes
Service.

Feature: Persistent storage
Virtual machine: Use a virtual hard disk (VHD) for local storage for a single VM, or an SMB
file share for storage shared by multiple servers.
Container: Use Azure Disks for local storage for a single node, or Azure Files (SMB shares)
for storage shared by multiple nodes or servers.

Feature: Load balancing
Virtual machine: Virtual machine load balancing moves running VMs to other servers in a
failover cluster.
Container: Containers themselves don't move; instead, an orchestrator can automatically start
or stop containers on cluster nodes to manage changes in load and availability.

Feature: Fault tolerance
Virtual machine: VMs can fail over to another server in a cluster, with the VM's operating
system restarting on the new server.
Container: If a cluster node fails, any containers running on it are rapidly recreated by the
orchestrator on another cluster node.

Feature: Networking
Virtual machine: Uses virtual network adapters.
Container: Uses an isolated view of a virtual network adapter, providing a little less
virtualization (the host's firewall is shared with containers) while using fewer resources. For
more, see Windows container networking.

Container orchestration, mentioned throughout this table, is the process of automating the
management and networking of containers to deploy applications at scale.
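
The following is a hedged sketch of the four-step update cycle described in the table; the image, registry, and deployment names are hypothetical, and kubectl stands in for whichever orchestrator is in use.

docker build -t registry.example.com/myapp:2.0 .    # rebuild the image on the new base image
docker push registry.example.com/myapp:2.0          # push it to the container registry
kubectl set image deployment/myapp myapp=registry.example.com/myapp:2.0   # redeploy via the orchestrator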
Docker Architecture

Before learning the Docker architecture, you should first know about the Docker daemon.

What is the Docker daemon?

The Docker daemon runs on the host operating system. It is responsible for running
containers and managing Docker services. A Docker daemon can communicate with other
daemons. It manages various Docker objects such as images, containers, networks, and
storage.

Docker architecture

Docker follows a client-server architecture, which includes three main components: the
Docker Client, the Docker Host, and the Docker Registry.
1. Docker Client

The Docker client uses commands and REST APIs (Representational State Transfer) to
communicate with the Docker daemon (server). When a client runs a docker command in the
docker client terminal, the terminal sends the command to the Docker daemon. The Docker
daemon receives these instructions from the docker client in the form of commands and
REST API requests.

Note: The Docker client has the ability to communicate with more than one Docker daemon.

The Docker client uses the Command Line Interface (CLI) to run the following commands
(illustrated after the list) -

docker build

docker pull

docker run
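
Hedged examples of these three commands; the image names, tags, and ports are illustrative.

docker build -t myapp:1.0 .      # build an image from the Dockerfile in the current directory
docker pull ubuntu:22.04         # download an image from a registry (Docker Hub by default)
docker run -d -p 8080:80 nginx   # start a container, mapping host port 8080 to container port 80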

2. Docker Host

Docker Host is used to provide an environment to execute and run applications. It contains
the docker daemon, images, containers, networks, and storage.

3. Docker Registry

Docker Registry manages and stores the Docker images.


There are two types of registries in Docker -

Public Registry - A public registry is also called Docker Hub.

Private Registry - It is used to share images within the enterprise.

Docker Objects

There are the following Docker Objects -

Docker Images

Docker images are read-only binary templates used to create Docker containers. An
enterprise can use a private container registry to share container images internally, and a
public container registry to share them with the whole world. Docker images also carry
metadata describing the container's abilities.

1. A Docker image is a template with instructions, used for creating containers.
2. A Docker image is built using a file called a Dockerfile (a sketch follows the list).
3. Docker images are available ready-made in a Docker registry.
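
As a sketch of point 2, here is a minimal illustrative Dockerfile, assuming a trivial Python application; the file name app.py and the base image are hypothetical examples.

# base image pulled from a registry
FROM python:3.12-slim
# working directory inside the image
WORKDIR /app
# copy application code into the image as a new layer
COPY app.py .
# command the container executes when it starts
CMD ["python", "app.py"]

Running docker build -t myapp . against this file produces an image from which containers can be started with docker run myapp.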

Docker Containers

Containers are the structural units of Docker, used to hold the entire package needed to run
an application. The advantage of containers is that they require very few resources.

In other words, we can say that the image is a template, and the container is a copy of that
template.

Docker Networking

Docker networking allows isolated containers to communicate with each other. Docker
provides the following network drivers (command examples follow the list) -
o Bridge - Bridge is the default network driver for containers. It is used when multiple
containers communicate on the same Docker host.
o Host - It is used when we don't need network isolation between the container and
the host.
o None - It disables all networking.
o Overlay - Overlay allows Swarm services to communicate with each other. It enables
containers to run on different Docker hosts.
o Macvlan - Macvlan is used when we want to assign MAC addresses to the
containers.
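
Hedged command examples for the drivers above; the network and container names are illustrative.

docker network create --driver bridge appnet      # user-defined bridge network
docker run -d --network appnet --name web nginx   # attach a container to it
docker run --rm --network host alpine ip addr     # host driver: share the host's network stack
docker network ls                                 # list networks (bridge, host, none, ...)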

Docker Storage

Docker storage is used to store data in containers. Docker offers the following options for
storage (command examples follow the list) -

o Data Volumes - Data volumes provide the ability to create persistent storage. They also
allow us to name volumes, list volumes, and list the containers associated with a volume.
o Directory Mounts - One of the best options for Docker storage; a host's directory is
mounted into a container.
o Storage Plugins - They provide the ability to connect to external storage platforms.
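
Hedged command examples for the storage options above; the volume names, paths, and images are illustrative.

docker volume create appdata                      # create a named data volume
docker run -d --name db -v appdata:/data redis    # data in /data persists across container restarts
docker run -d -v /host/logs:/logs nginx           # directory (bind) mount from the host
docker volume ls                                  # list volumes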

Who Uses Docker?

The use of Docker is widespread in many industries. Docker is used by businesses of all
kinds, from startups to established corporations to tech giants like Google and Netflix. Here
are a few examples −

● Technology Companies − Docker integration is offered by cloud providers such as
Microsoft Azure, Google Cloud Platform (GCP), Amazon Web Services (AWS), and
others, which makes it an ideal choice for businesses developing cloud-native
applications.
● FinTech − Due to Docker's security and dependability when developing financial
apps, financial institutions are using it more and more.
● E-commerce − Docker is perfect for e-commerce platforms that manage high levels
of traffic, since it can scale quickly to meet scalability requirements.
● Media & Entertainment − Businesses in this industry use Docker to handle workflows
related to media processing and content delivery networks.
Different Container Orchestration Tools
Tools to manage, scale, and maintain containerized applications are called orchestrators. The
most common examples include Kubernetes, Docker Swarm, and Apache Mesos.
Kubernetes

Kubernetes is an open-source container orchestration tool, or orchestrator, originally
developed by Google. Google donated the Kubernetes project to the newly formed Cloud
Native Computing Foundation in 2015.
Kubernetes allows us to build application services that span multiple containers, schedule
those containers across the cluster, scale them, and manage their lifecycle. It automates the
process by eliminating many of the manual steps involved in deploying and scaling
containerized applications. Kubernetes provides a platform to manage clusters easily and
efficiently.

Kubernetes has become an ideal platform for hosting cloud-native apps that require rapid
scaling and deployment. Kubernetes also provides portability and load-balancing services,
enabling teams to move applications across different platforms without redesigning them.
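
A hedged sketch of these capabilities using kubectl; the deployment name, image, and replica count are illustrative.

kubectl create deployment web --image=nginx                   # schedule containers across the cluster
kubectl scale deployment web --replicas=3                     # scale the container set up or down
kubectl expose deployment web --port=80 --type=LoadBalancer   # load-balanced access to the pods
kubectl get pods                                              # observe the managed lifecycle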
Docker Swarm

Docker swarm is also a container orchestration tool, meaning that it allows the user to
manage multiple containers deployed across multiple host machines.

A main benefit of Docker Swarm is that it offers a high level of availability for applications.
Like Kubernetes, Docker Swarm has several worker nodes and manager nodes, which handle
the worker nodes' resources and ensure that the cluster operates efficiently.
Even with Kubernetes available as a container orchestrator, Docker still offers Docker
Swarm. It is a fully integrated container orchestration tool, but it is slightly less extensible
and less complex than Kubernetes. Docker Swarm is a good choice for Docker enthusiasts
who want an easier and faster path to container deployments. In fact, Docker bundles both
Swarm and Kubernetes in its enterprise edition in hopes of making them complementary
tools.
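
A hedged sketch of Docker Swarm in action; the service name and replica count are illustrative.

docker swarm init                                     # turn the current node into a swarm manager
docker service create --name web --replicas 3 nginx   # replicate a service across the cluster's nodes
docker service ls                                     # verify the service is running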
Apache Mesos

Apache Mesos is slightly older than Kubernetes. It is an open-source software project
originally developed at the University of California at Berkeley, and now widely adopted in
organizations like Twitter, Uber, and PayPal. Mesos’ lightweight interface lets it scale easily
up to 10,000 nodes (or more) and allows frameworks that run on top of it to evolve
independently. Its APIs support popular languages like Java, C++, and Python, and it also
supports out-of-the-box high availability. Unlike Swarm or Kubernetes, however, Mesos only
provides management of the cluster, so a number of frameworks have been built on top of
Mesos, including Marathon, a “production-grade” container orchestration platform.
Container orchestration platforms
With the enormous growth of container usage, container orchestration solutions are greatly
increasing in popularity. Containers can be supported in practically any type of environment,
from on-premises servers to the cloud. In the cloud, the most common examples
are Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), and
Google Kubernetes Engine (GKE).
Azure Kubernetes Service (AKS)

A Fully Managed Kubernetes Cluster


Azure Kubernetes Service (AKS) is a managed Kubernetes service. AKS manages the
master nodes, and users have to manage only the worker nodes. Users can use AKS to deploy,
scale, and manage Docker containers and container-based applications across a cluster of
container hosts. As a managed Kubernetes service, AKS is free – you pay only for the worker
nodes within your clusters, not for the masters. You can create an AKS cluster in the Azure
portal, with the Azure CLI, or with template-driven deployment options such as Resource
Manager templates and Terraform.
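
A hedged sketch of creating an AKS cluster with the Azure CLI; the resource group and cluster names are hypothetical.

az group create --name myRG --location eastus
az aks create --resource-group myRG --name myCluster --node-count 2   # only worker nodes are billed
az aks get-credentials --resource-group myRG --name myCluster         # point kubectl at the cluster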
Oracle Kubernetes Engine (OKE)

Oracle Cloud Infrastructure Container Engine for Kubernetes is a container orchestration
platform. It is a fully managed, scalable, and highly available service that we can use to
deploy our containerized applications to the cloud. Container Engine for Kubernetes uses
Kubernetes, the open-source system for automating the deployment, scaling, and
management of containerized applications across clusters of hosts.
Amazon Elastic Kubernetes Service (Amazon EKS)

Amazon EKS offers Kubernetes as a service, making it easy to run Kubernetes on AWS.
By using Amazon EKS, users don’t have to maintain a Kubernetes control plane on their
own. It helps in automating the deployment, scaling, and maintenance of containerized
applications. EKS works with almost all operating systems, and through EKS, organizations
can run Kubernetes without installing it on their local systems and can operate a Kubernetes
control plane or worker nodes easily and effectively. We can also say that EKS is a
managed containers-as-a-service (CaaS) offering that drastically simplifies Kubernetes
deployment on AWS.
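
A hedged sketch assuming the eksctl CLI; the cluster name, region, and node count are hypothetical.

eksctl create cluster --name demo --region us-east-1 --nodes 2   # EKS provisions the managed control plane
kubectl get nodes                                                # worker nodes joined to the cluster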
Google Cloud Kubernetes Engine

Google Kubernetes Engine (GKE) is also a container orchestration platform, provided by
Google. GKE is a fully managed Kubernetes service for deploying, managing, and scaling
containerized applications on Google Cloud.
The GKE environment consists of multiple machines (specifically, Compute Engine
instances) grouped together to form a cluster.
