Chapter: Virtualization
Virtualization is a technology that enables the creation of multiple
virtual versions of hardware and software resources, such as servers,
operating systems, and storage devices, on a single physical
machine. It allows users to run multiple operating systems and
applications on a single physical machine, without having to
purchase additional hardware or software resources.
Types of virtualization
Server Virtualization
Desktop Virtualization
Storage Virtualization
Network Virtualization
Hypervisor types
There are two types of hypervisors:
Type 1 Hypervisor
Runs directly on the physical hardware
Provides direct access to physical hardware resources
Typically used in server virtualization scenarios
Offers higher performance and better security than Type 2 hypervisors
Examples include VMware ESXi, Microsoft Hyper-V, and Citrix XenServer

Type 2 Hypervisor
Runs on top of an existing operating system
Uses the host operating system to access hardware resources
Typically used in desktop virtualization scenarios
Offers lower performance and security than Type 1 hypervisors
Examples include Oracle VirtualBox, VMware Workstation, and Parallels Desktop
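As a quick sketch, the command below shows one way to check, from inside a Linux guest, which hypervisor it is running under; it assumes a Linux system with systemd, so availability depends on the distribution.

# Detect which hypervisor (if any) this Linux system is running under;
# typical outputs are kvm, vmware, microsoft, oracle, or none on bare metal.
systemd-detect-virt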
Examples
AWS Lightsail
AWS Lightsail is a simplified, easy-to-use cloud computing solution
offered by Amazon Web Services (AWS). It provides users with a
pre-configured virtual private server (VPS), storage, and networking
capabilities, as well as a range of other features, all at an affordable
price point.
Features
AWS Lightsail offers a range of features to help users easily deploy and manage their applications and websites.
Benefits
AWS Lightsail offers several benefits to users.
https://www.ibm.com/topics/virtualization
What is virtualization?
Virtualization is a process that allows for more efficient utilization of physical computer
hardware and is the foundation of cloud computing.
Virtualization uses software to create an abstraction layer over computer hardware that allows
the hardware elements of a single computer—processors, memory, storage and more—to be
divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though it
is running on just a portion of the actual underlying computer hardware.
It follows that virtualization enables more efficient utilization of physical computer hardware
and allows a greater return on an organization’s hardware investment.
Benefits of virtualization
Virtualization brings several benefits to data center operators and service providers:
Resource efficiency: Before virtualization, each application server required its own dedicated
physical CPU—IT staff would purchase and configure a separate server for each application
they wanted to run. (IT preferred one application and one operating system (OS) per computer
for reliability reasons.) Invariably, each physical server would be underused. In contrast,
server virtualization lets you run several applications—each on its own VM with its own OS
—on a single physical computer (typically an x86 server) without sacrificing reliability. This
enables maximum utilization of the physical hardware’s computing capacity.
Minimal downtime: OS and application crashes can cause downtime and disrupt user
productivity. Admins can run multiple redundant virtual machines alongside each other and
failover between them when problems arise. Running multiple redundant physical servers is
more expensive.
Several companies offer virtualization solutions covering specific data center tasks or end-user-focused desktop virtualization scenarios. Better-known examples include VMware,
which specializes in server, desktop, network, and storage virtualization; Citrix, which has a
niche in application virtualization but also offers server virtualization and virtual desktop
solutions; and Microsoft, whose Hyper-V virtualization solution ships with Windows and
focuses on virtual versions of server and desktop computers.
Virtual machines (VMs)
Virtual machines (VMs) are virtual environments that simulate a physical computer in
software form. They normally comprise several files containing the VM’s configuration, the
storage for the virtual hard drive, and some snapshots of the VM that preserve its state at a
particular point in time.
For a complete overview of VMs, see "What is a Virtual Machine?"
Hypervisors
A hypervisor is the software layer that coordinates VMs. It serves as an interface between the
VM and the underlying physical hardware, ensuring that each has access to the physical
resources it needs to execute. It also ensures that the VMs don’t interfere with each other by
impinging on each other’s memory space or compute cycles.
To this point we’ve discussed server virtualization, but many other IT infrastructure elements
can be virtualized to deliver significant advantages to IT managers (in particular) and the
enterprise as a whole. In this section, we'll cover the following types of virtualization:
Desktop virtualization
Network virtualization
Storage virtualization
Data virtualization
Application virtualization
Data center virtualization
CPU virtualization
GPU virtualization
Linux virtualization
Cloud virtualization
Desktop virtualization
Desktop virtualization lets you run multiple desktop operating systems, each in its own VM
on the same computer.
Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and
streams them to users who log in on thin client devices. In this way, VDI lets an organization
provide its users with access to a variety of OSs from any device, without installing an OS on any device. See "What is Virtual Desktop Infrastructure (VDI)?" for a more in-depth explanation.
Local desktop virtualization runs a hypervisor on a local computer, enabling the user to run
one or more additional OSs on that computer and switch from one OS to another as needed
without changing anything about the primary OS.
For more information on virtual desktops, see “Desktop-as-a-Service (DaaS).”
Network virtualization
Network virtualization uses software to create a “view” of the network that an administrator
can use to manage the network from a single console. It abstracts hardware elements and functions (e.g., connections, switches, and routers) into software running on a hypervisor. The network administrator can modify and control these elements without
touching the underlying physical components, which dramatically simplifies network
management.
Data virtualization
Modern enterprises store data from multiple applications, using multiple file formats, in multiple locations, ranging from the cloud to on-premises hardware and software systems.
Data virtualization lets any application access all of that data—irrespective of source, format,
or location.
Data virtualization tools create a software layer between the applications accessing the data
and the systems storing it. The layer translates an application’s data request or query as
needed and returns results that can span multiple systems. Data virtualization can help break
down data silos when other types of integration aren’t feasible, desirable, or affordable.
Application virtualization
Application virtualization runs application software without installing it directly on the user’s
OS. This differs from complete desktop virtualization (mentioned above) because only the
application runs in a virtual environment—the OS on the end user’s device runs as usual.
There are three types of application virtualization:
Local application virtualization: The entire application runs on the endpoint device, but in a runtime environment instead of directly on the native hardware.
Application streaming: The application lives on a server which sends small components of
the software to run on the end user's device when needed.
Server-based application virtualization: The application runs entirely on a server that sends
only its user interface to the client device.
Data center virtualization
Data center virtualization abstracts most of a data center’s hardware into software, effectively
enabling an administrator to divide a single physical data center into multiple virtual data
centers for different clients.
Each client can access its own infrastructure as a service (IaaS), which would run on the same
underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-based
computing, letting a company quickly set up a complete data center environment without
purchasing infrastructure hardware.
CPU virtualization
CPU (central processing unit) virtualization is the fundamental technology that makes
hypervisors, virtual machines, and operating systems possible. It allows a single CPU to be
divided into multiple virtual CPUs for use by multiple VMs.
At first, CPU virtualization was entirely software-defined, but many of today’s processors
include extended instruction sets that support CPU virtualization, which improves VM
performance.
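On a Linux host, you can check whether the CPU exposes these hardware virtualization extensions; the sketch below assumes standard Linux tools (grep and lscpu) are available.

# Count /proc/cpuinfo lines advertising Intel VT-x (vmx) or AMD-V (svm);
# a non-zero result means hardware-assisted virtualization is supported.
grep -Ec 'vmx|svm' /proc/cpuinfo

# lscpu also reports the virtualization technology, if any
lscpu | grep -i virtualization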
GPU virtualization
A GPU (graphics processing unit) is a special multi-core processor that improves overall
computing performance by taking over heavy-duty graphic or mathematical processing. GPU
virtualization lets multiple VMs use all or some of a single GPU’s processing power for faster
video, artificial intelligence (AI), and other graphic- or math-intensive applications.
Linux virtualization
Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which
supports Intel and AMD’s virtualization processor extensions so you can create x86-based
VMs from within a Linux host OS.
As an open source OS, Linux is highly customizable. You can create VMs running versions of
Linux tailored for specific workloads or security-hardened versions for more sensitive
applications.
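As a quick sketch, the commands below check whether KVM is usable on a Linux host; they assume the standard lsmod utility and, for the last check, the optional kvm-ok tool from the cpu-checker package, which is not installed by default on every distribution.

# Check that the KVM kernel modules are loaded
lsmod | grep kvm

# The /dev/kvm device must exist for VMs to use hardware acceleration
ls -l /dev/kvm

# Optional: kvm-ok summarizes whether KVM acceleration can be used
kvm-ok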
Cloud virtualization
Secondary storage
A virtual private server can provide secondary storage for data files. For example, it can act as a
file, image, or email server by creating secure, accessible, and centralized storage for a group of
users.
Shared hosting
In a shared hosting solution, all websites share the same physical server and compete for critical
resources like internal memory, hard disk space, and processing power. The downside of this
web hosting service is that other websites that share the hardware can affect your website's
performance.
Using a shared hosting service is comparable to watching a movie while sitting on a couch with a
group of friends. Sometimes one friend stretches out and takes up more space, forcing the others to sit uncomfortably until that friend settles back again.
Dedicated hosting
In dedicated hosting, you can rent the entire physical hardware for yourself. The web hosting
provider gives you exclusive access to the entire physical server. Your website performance is
not impacted by the behaviour of any other websites.
Dedicated hosting is like having the whole couch to yourself. It's comfortable but expensive—and
you don’t really need all that extra space.
VPS hosting
A VPS hosting service is like hiring a first-class cabin on a luxury flight. While there may be other
passengers on the flight, no one can share your cabin. Even better, your cabin can expand or
shrink with your requirements, so you pay for exactly what you need!
Customize applications
Compared to shared hosting, VPS hosting gives you more control over your web server
environment. You can install custom software and custom configurations. Integrations with other
software, like bookkeeping or CRM systems, also work better with VPS hosting. You can install
custom security measures and firewalls for your system as well.
Core managed hosting differs from fully managed hosting in that core doesn’t include virus and
spam protection, external migrations, full control panel support, or control panel upgrades and
patches.
For example, a distributed denial of service (DDoS) attack attempts to bring down a website by
overwhelming the server with thousands of requests at the same time. In a shared hosting
environment, even if the DDoS is directed at another website, it will cause your system to crash
too. This is because both websites share the same underlying resources.
You receive updated best practices and new technologies in VPS hosting.
You get round-the-clock support to reduce downtime.
The VPS hosting provider optimizes your environment for performance and security.
Your IT team can focus on your web application without having to worry about VPS hosting.
The VPS hosting provider can troubleshoot and fix common issues very quickly.
You can click to launch a simple operating system, a preconfigured application, or a development
stack on your virtual server instance.
You can store your static content, such as images, videos, or HTML files, in object storage for data
backup.
You can manage web traffic across your VPS servers so that your websites and applications can
accommodate variations in traffic and be better protected from outages.
Docker
https://edisciplinas.usp.br/pluginfile.php/318402/course/section/93668/thesis_3.pdf
https://www.researchgate.net/publication/
343599681_On_Security_Measures_for_Containerized_Applications_Imaged_
with_Docker
https://www.geeksforgeeks.org/containerization-using-docker/
Docker is a containerization platform used to package your application and all its dependencies together in the form of containers, so that your application works seamlessly in any environment, whether development, testing, or production. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Docker is the world's leading software container platform. It was launched in 2013 by a company called dotCloud, Inc., which was later renamed Docker, Inc. It is written in the Go language. In the years since its launch, communities have rapidly shifted to Docker from VMs. Docker is designed to benefit both developers and system administrators, making it a part of many DevOps toolchains. Developers can write code without worrying about the testing and production environments. Sysadmins need not worry about infrastructure, as Docker can easily scale the number of systems up and down. Docker comes into play at the deployment stage of the software development cycle.
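As a minimal sketch (assuming Docker is already installed), the commands below verify the installation by pulling the small hello-world image from Docker Hub and running it as a container.

# Print the installed Docker version
docker --version

# Pull the hello-world image from Docker Hub and run it in a container
docker run hello-world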
Containerization
Containerization is OS-level virtualization that creates multiple virtual units in user space, known as containers. Containers share the same host kernel but are isolated from each other through private namespaces and resource control mechanisms at the OS level. Container-based virtualization provides a different level of abstraction in terms of virtualization and isolation when compared with hypervisors. Hypervisors virtualize the hardware itself, which introduces overhead from emulated hardware and virtual device drivers, and a full operating system (e.g., Linux or Windows) runs on top of this virtualized hardware in each virtual machine instance.
Containers, in contrast, implement isolation of processes at the operating system level, avoiding such overhead. Containers run on top of the same shared operating system kernel of the underlying host machine, and one or more processes can be run within each container. With containers you don't have to pre-allocate any RAM; it is allocated dynamically as containers are created and run, whereas with VMs you must first allocate the memory and then create the virtual machine. Containerization offers better resource utilization than VMs and a shorter boot-up process. It is the next evolution in virtualization.
Containers can run virtually anywhere, greatly easing development and
deployment: on Linux, Windows, and Mac operating systems; on virtual
machines or bare metal, on a developer’s machine or in data centers on-
premises; and of course, in the public cloud. Containers virtualize CPU,
memory, storage, and network resources at the OS level, providing
developers with a sandboxed view of the OS logically isolated from other
applications. Docker is the most popular open-source container format
available and is supported on Google Cloud Platform and by Google
Kubernetes Engine.
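A minimal sketch of the shared-kernel point above, assuming Docker is installed on a Linux host (the alpine image is just an example): the kernel version reported inside a container matches the host's, while the user space differs.

# Kernel version on the host
uname -r

# Same kernel version, reported from inside an Alpine container
docker run --rm alpine uname -r

# ...but a different user space (distribution) inside the container
docker run --rm alpine cat /etc/os-release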
Docker Architecture
Docker architecture consists of the Docker client, the Docker daemon running on the Docker host, and the Docker Hub registry. Docker has a client-server architecture in which the client communicates with the Docker daemon running on the Docker host, using a REST API carried over a UNIX socket or a TCP connection. To build a Docker image, we use the client to send the build command to the Docker daemon; the daemon then builds an image from the given inputs and saves it in the Docker registry. If we don't want to create an image, we simply execute the pull command from the client, and the Docker daemon pulls the image from Docker Hub. Finally, if we want to run the image, we execute the run command from the client, which creates the container.
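The sketch below walks through that pull-then-run flow from the client side, assuming Docker is installed; the nginx image and the container name web1 are just illustrative examples.

# Ask the daemon to pull an image from Docker Hub
docker pull nginx:latest

# List the images the daemon now stores locally
docker images

# Ask the daemon to create and start a container from that image
docker run -d --name web1 -p 8080:80 nginx:latest

# List running containers
docker ps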
Components of Docker
The main components of Docker include Docker clients and servers, Docker images, the Dockerfile, Docker registries, and Docker containers. These components are explained in detail below:
1. Docker Clients and Servers – Docker has a client-server architecture. The Docker daemon/server hosts and manages all containers. It receives requests from the Docker client through the CLI or REST API and processes them accordingly. The Docker client and daemon can be present on the same host or on different hosts.
2. Docker Images – Docker images are read-only templates used to build Docker containers. The foundation of every image is a base image, e.g., ubuntu:14.04 LTS or Fedora 20. A base image can also be created from scratch, and required applications can then be added by modifying it; this process of creating a new image is called "committing the change".
3. Dockerfile – A Dockerfile is a text file that contains a series of instructions on how to build your Docker image. The resulting image contains all the project code and its dependencies. The same Docker image can be used to spin up any number of containers, each with its own modifications on top of the underlying image. The final image can be uploaded to Docker Hub and shared among various collaborators for testing and deployment. The instructions you can use in your Dockerfile include FROM, CMD, ENTRYPOINT, VOLUME, ENV, and many more (see the example build after this list).
4. Docker Registries – A Docker registry is a storage component for Docker images. We can store images in either public or private repositories so that multiple users can collaborate in building an application. Docker Hub is Docker's cloud-hosted registry; it is a public registry where anyone can pull available images and push their own images without building everything from scratch.
5. Docker Containers – Docker containers are runtime instances of Docker images. A container holds the whole kit required for an application, so the application can run in an isolated way. For example, suppose there is an image of Ubuntu with an NGINX server: when this image is run with the docker run command, a container is created and the NGINX server runs on Ubuntu.
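A minimal sketch of that Ubuntu-plus-NGINX example, assuming Docker is installed; the file contents, the image tag ubuntu-nginx, and the container name web-demo are illustrative choices, not a canonical recipe.

# Write a small Dockerfile that installs NGINX on an Ubuntu base image
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build an image from the Dockerfile and run a container from it
docker build -t ubuntu-nginx .
docker run -d --name web-demo -p 8080:80 ubuntu-nginx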
Docker Compose
Docker Compose is a tool with which we can create a multi-container application. It makes it easier to configure and run applications made up of multiple containers. For example, suppose you had an application that required WordPress and MySQL; you could create one file that starts both containers as a service, without the need to start each one separately. We define the multi-container application in a YAML file. With the docker-compose up command, we can start the application in the foreground. docker-compose looks for a docker-compose.yaml file in the current folder to start the application. By adding the -d option to the docker-compose up command, we can start the application in the background. Here is a docker-compose.yaml file for a WordPress application:
#cat docker-compose.yaml
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
In this docker-compose.yaml file, the ports section for the WordPress container maps the host's port 8000 to the container's port 80, so that the host can access the application using its IP address and that port number.
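A short usage sketch, assuming Docker and docker-compose are installed and the file above is saved as docker-compose.yaml in the current directory:

# Start the WordPress and MySQL containers in the background
docker-compose up -d

# Check that both services are running
docker-compose ps

# The WordPress site is then reachable at http://localhost:8000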
Docker Networks
When we create and run a container, Docker automatically assigns it an IP address by default. Often, though, we need to create and deploy Docker networks to suit our own requirements, and Docker lets us design the network accordingly. There are three types of Docker networks: default networks, user-defined networks, and overlay networks.
To get a list of all the default networks that Docker creates, we run the command shown below:
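# List the default networks Docker creates (bridge, host, and none)
docker network ls

A user-defined network is created and used in much the same way; the network name my-app-net and the nginx container below are just illustrative examples.

# Create a user-defined bridge network and attach a container to it
docker network create my-app-net
docker run -d --name web2 --network my-app-net nginx
docker network inspect my-app-net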
Advantages of Docker
Docker has become popular nowadays because of the benefits provided by
Docker containers. The main advantages of Docker are:
1. Speed – Docker containers are very fast compared to virtual machines. The time required to build a container is very short because containers are tiny and lightweight. Development, testing, and deployment can all be done faster because containers are small. Once built, containers can be pushed to a testing environment and from there on to the production environment.
2. Portability – The applications that are built inside docker containers are
extremely portable. These portable applications can easily be moved
anywhere as a single element and their performance also remains the
same.
3. Scalability – Docker can be deployed on several physical servers, data servers, and cloud platforms. It can also run on every Linux machine. Containers can easily be moved from a cloud environment to a local host, and from there back to the cloud again, at a fast pace.
4. Density – Docker uses the available resources more efficiently because it does not use a hypervisor. This is why more containers can be run on a single host than virtual machines. Docker containers deliver higher performance because of their high density and the absence of resource overhead.