
CCS 372

Virtualization
VI Sem
2024

Learning Material Compiled By


Dr. A.N. Gnana Jeevan

Department of Artificial Intelligence and Data Science


Saranathan College of Engineering
(Autonomous)
Tiruchirappalli.
CCS372 VIRTUALIZATION L T P C 2 0 2 3
COURSE OBJECTIVES:
• To learn the basics and types of virtualization
• To understand hypervisors and their types
• To explore virtualization solutions
• To experiment with virtualization platforms
UNIT I INTRODUCTION TO VIRTUALIZATION 7
Virtualization and cloud computing - Need of virtualization – cost, administration, fast deployment,
reduce infrastructure cost – limitations- Types of hardware virtualization: Full virtualization - partial
virtualization – Para virtualization-Types of Hypervisors
UNIT II SERVER AND DESKTOP VIRTUALIZATION 6
Virtual machine basics- Types of virtual machines- Understanding Server Virtualization- types of
server virtualization- Business Cases for Server Virtualization – Uses of Virtual Server Consolidation –
Selecting Server Virtualization Platform-Desktop Virtualization-Types of Desktop Virtualization
UNIT III NETWORK VIRTUALIZATION 6
Introduction to Network Virtualization-Advantages- Functions-Tools for Network Virtualization-
VLAN - WAN Architecture - WAN Virtualization
UNIT IV STORAGE VIRTUALIZATION 5
Memory Virtualization-Types of Storage Virtualization-Block, File-Address space Remapping-Risks of
Storage Virtualization-SAN-NAS-RAID
UNIT V VIRTUALIZATION TOOLS 6
VMware - Amazon AWS - Microsoft Hyper-V - Oracle VM VirtualBox - IBM PowerVM - Google
Virtualization - Case study. 30 PERIODS
PRACTICAL EXERCISES: 30 PERIODS
1. Create type 2 virtualization in VMware or any equivalent open source tool. Allocate memory
and storage space as per requirement. Install a guest OS on that VM.
2. a. Shrink and extend a virtual disk
b. Create, manage, configure and schedule snapshots
c. Create spanned, mirrored and striped volumes
d. Create a RAID 5 volume
3. a. Desktop virtualization using VNC
b. Desktop virtualization using Chrome Remote Desktop
4. Create type 2 virtualization on an ESXi 6.5 server
5. Create a VLAN in Cisco Packet Tracer
6. Install KVM in Linux
7. Create a nested virtual machine (VM under another VM)
COURSE OUTCOMES:
CO1: Analyse the virtualization concepts and Hypervisor
CO2: Apply the Virtualization for real-world applications
CO3: Install & Configure the different VM platforms
CO4: Analyse the storage virtualization concepts
CO5: Experiment with the VM with various software
CO6: Experiment with Real-time Virtualization online tool
TOTAL: 60 PERIODS
TEXT BOOKS
1. Anthony T. Velte, Toby J. Velte, Robert Elsenpeter, "Cloud Computing: A Practical Approach", Tata McGraw-Hill, New Delhi, 2010.
2. Rajkumar Buyya, James Broberg, Andrzej Goscinski (Eds.), "Cloud Computing: Principles and Paradigms", John Wiley & Sons, Inc., 2011.
3. David Marshall, Wade A. Reynolds, "Advanced Server Virtualization: VMware and Microsoft Platforms in the Virtual Data Center", Auerbach Publications, 2006.
4. Chris Wolf, Erick M. Halter, "Virtualization: From the Desktop to the Enterprise", Apress, 2005.
5. James E. Smith, Ravi Nair, "Virtual Machines: Versatile Platforms for Systems and Processes", Elsevier/Morgan Kaufmann, 2005.

Content Beyond Syllabus:


1. Nutanix Cloud Platform
2. Red Hat Virtualization

Self-Study Topics:
1. Tencent Cloud
2. Oracle Cloud Infrastructure

Faculty HOD
UNIT I INTRODUCTION TO VIRTUALIZATION 7
Virtualization and cloud computing - Need of virtualization – cost, administration, fast
deployment, reduce infrastructure cost – limitations- Types of hardware virtualization: Full
virtualization - partial virtualization – Para virtualization-Types of Hypervisors

Introduction to Virtualization

Virtualization is a technique that allows sharing a single physical instance of an application or resource among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.

Virtualization Concept

Creating a virtual machine over the existing operating system and hardware is referred to as hardware virtualization. Virtual machines provide an environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the virtual machine is referred to as the guest machine. The virtual machine is managed by software or firmware known as a hypervisor.

Hypervisor

The hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
Type 1 hypervisors execute directly on the bare hardware. LynxSecure, RTS Hypervisor, Oracle VM, Sun xVM Server and VirtualLogix VLX are examples of Type 1 hypervisors. The following diagram shows a Type 1 hypervisor.

A Type 1 hypervisor does not have any host operating system because it is installed directly on the bare system.
Type 2 hypervisors are software interfaces that emulate the devices with which a system normally interacts, and they run on top of a host operating system. VMware Fusion, Oracle VirtualBox, VMware Workstation, Microsoft Virtual Server 2005 R2 and Windows Virtual PC are examples of Type 2 hypervisors. The following diagram shows a Type 2 hypervisor.
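Regardless of type, a hypervisor exposes a management interface through which guests are created and controlled. As a minimal illustration, the following Python sketch uses the libvirt bindings to connect to a local KVM/QEMU hypervisor and list its guest machines; it assumes a Linux host with libvirtd running and the libvirt-python package installed.

import libvirt

# Open a read-only connection to the local system hypervisor (the VMM).
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("Failed to connect to the hypervisor")

info = conn.getInfo()  # [CPU model, memory in MB, number of CPUs, MHz, ...]
print("Hypervisor type:", conn.getType())
print("Host CPUs:", info[2], "Host memory (MB):", info[1])

# Each libvirt "domain" is one guest virtual machine.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print("Guest:", dom.name(), "-", state)

conn.close()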
Types of Hardware Virtualization

Here are the three types of hardware virtualization:

• Full Virtualization
• Emulation Virtualization
• Paravirtualization

Full Virtualization

In full virtualization, the underlying hardware is completely simulated. Guest software does not require any modification to run.
Emulation Virtualization
In Emulation, the virtual machine simulates the hardware and hence becomes independent of
it. In this, the guest operating system does not require modification.

Paravirtualization
In Paravirtualization, the hardware is not simulated. The guest software run their own
isolated domains.
VIRTUALIZATION FOR CLOUD

Virtualization is a technology that allows us to install different operating systems on the same hardware; they remain completely separated and independent of each other. Wikipedia defines it as follows: "In computing, virtualization is a broad term that refers to the abstraction of computer resources.
Virtualization hides the physical characteristics of computing resources from their users, their applications or end users. This includes making a single physical resource (such as a server, an operating system, an application or a storage device) appear to function as multiple virtual resources. It can also include making multiple physical resources (such as storage devices or servers) appear as a single virtual resource..."
Virtualization is often:

• The creation of many virtual resources from one physical resource.
• The creation of one virtual resource from one or more physical resources.

Types of Virtualization

Today the term virtualization is widely applied to a number of concepts, some of which are described below:

• Server Virtualization
• Client & Desktop Virtualization
• Services and Applications Virtualization
• Network Virtualization
• Storage Virtualization

Let us now discuss each of these in detail.

Server Virtualization
Server virtualization means virtualizing your server infrastructure so that you no longer need a separate physical server for every purpose.

Client & Desktop Virtualization

This is similar to server virtualization, but this time it is on the user's side: the users' desktops are virtualized. Their desktops are replaced with thin clients that utilize the datacenter's resources.

Services and Applications Virtualization

The virtualization technology isolates applications from the underlying operating system and
from other applications, in order to increase compatibility and manageability. For example –
Docker can be used for that purpose.
Network Virtualization

Network virtualization is a part of the virtualization infrastructure and is used especially if you are going to virtualize your servers. It helps you in creating multiple switches, VLANs, NAT, etc.
The following illustration shows the VMware schema:
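As a small illustration of one network virtualization building block, the sketch below creates an 802.1Q VLAN sub-interface on a Linux host using the iproute2 tools; it assumes root privileges, and the parent interface name eth0 and VLAN ID 100 are placeholders.

import subprocess

def create_vlan(parent="eth0", vlan_id=100):
    name = f"{parent}.{vlan_id}"
    # Create the VLAN sub-interface on top of the physical NIC.
    subprocess.run(["ip", "link", "add", "link", parent, "name", name,
                    "type", "vlan", "id", str(vlan_id)], check=True)
    # Bring it up; frames sent on this interface carry the 802.1Q tag.
    subprocess.run(["ip", "link", "set", name, "up"], check=True)
    return name

if __name__ == "__main__":
    print("Created", create_vlan())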

Storage Virtualization

This is widely used in datacenters where you have large storage; it helps you to create, delete and allocate storage to different hardware. This allocation is done over a network connection. The leading technology in storage is the SAN. A schematic illustration is given below:

Understanding Different Types of Hypervisors

A hypervisor is a thin software layer that intercepts operating system calls to the hardware. It is also called the Virtual Machine Monitor (VMM). It creates a virtual platform on the host computer, on top of which multiple guest operating systems are executed and monitored.
Hypervisors are of two types:

• Native or Bare-Metal Hypervisor
• Hosted Hypervisor

Let us now discuss both of these in detail.
Native or Bare Metal Hypervisor

Native hypervisors are software systems that run directly on the host's hardware to control
the hardware and to monitor the Guest Operating Systems. The guest
operating system runs on a separate level above the hypervisor. All of them have a Virtual
Machine Manager.
Examples of this virtual machine architecture are Oracle VM, Microsoft Hyper-V, VMware ESX and Xen.

Hosted Hypervisor

Hosted hypervisors are designed to run within a traditional operating system. In other words, a hosted hypervisor adds a distinct software layer on top of the host operating system, while the guest operating system becomes a third software level above the hardware.
A well-known example of a hosted hypervisor is Oracle VM VirtualBox. Others include VMware Server and Workstation, Microsoft Virtual PC, QEMU and Parallels.
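As a minimal, hedged example of working with a hosted hypervisor, the sketch below drives Oracle VM VirtualBox through its VBoxManage command-line tool from Python; it assumes VirtualBox is installed with VBoxManage on the PATH, and the guest name used here is purely illustrative.

import subprocess

def vboxmanage(*args):
    """Run a VBoxManage command and return its output."""
    result = subprocess.run(["VBoxManage", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# List the VMs registered with this hosted hypervisor.
print(vboxmanage("list", "vms"))

# Create and register a new guest, then give it memory and CPUs.
vboxmanage("createvm", "--name", "demo-guest", "--ostype", "Ubuntu_64", "--register")
vboxmanage("modifyvm", "demo-guest", "--memory", "2048", "--cpus", "2")

# Start it headless; no disk or installer is attached yet, so this only
# demonstrates the management interface of the hosted hypervisor.
vboxmanage("startvm", "demo-guest", "--type", "headless")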

Understanding Local Virtualization and Cloud

Virtualization is one of the fundamental technologies that makes cloud computing work. However, virtualization is not cloud computing. Cloud computing is a service that different providers offer to you for a cost.
In enterprise networks, virtualization and cloud computing are often used together to build a public or private cloud infrastructure. In small businesses, each technology may be deployed separately to gain measurable benefits. In different ways, virtualization and cloud computing can help you keep your equipment spending to a minimum and get the best possible use from the equipment you already have.
As mentioned before, virtualization software allows one physical server to run several
individual computing environments. In practice, it is like getting multiple servers for each
physical server you buy. This technology is fundamental to cloud computing. Cloud
providers have large data centers full of servers to power their cloud offerings, but they are
not able to devote a single server to each customer. Thus, they virtually partition the data on
the server, enabling each client to work with a separate "virtual" instance (which can be a private network, a server farm, etc.) of the same software.
Small businesses are most likely to adopt cloud computing by subscribing to a cloud-based service. The largest providers of cloud computing are Microsoft, with Azure, and Amazon.
The following illustration, provided by Microsoft, shows how you can utilize extra infrastructure for your business without the need to spend extra money. You can keep the on-premises base infrastructure, while in the cloud you can have all your services, which are based on virtualization technology.

Need of Virtualization and its Reference Model


There are five major needs of virtualization which are described below:

Figure: Major needs of Virtualization.


1. ENHANCED PERFORMANCE-

Currently, the typical end-user system (a PC) is sufficiently powerful to fulfill all the basic computation requirements of its user, with various additional capabilities that are rarely used. Most of these systems have enough spare resources to host a virtual machine manager and to run a virtual machine with acceptable performance.
2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES-

The limited use of resources leads to under-utilization of hardware and software. Because users' PCs are sufficiently capable of fulfilling their regular computational needs, many of these computers sit idle much of the time even though they could run 24/7 without interruption. The efficiency of the IT infrastructure could be increased by using these resources after hours for other purposes. This environment is possible to attain with the help of virtualization.

3. SHORTAGE OF SPACE-

The regular requirement for additional capacity, whether memory, storage or compute power, causes data centers to grow rapidly. Companies like Google, Microsoft and Amazon expand their infrastructure by building data centers as per their needs, but most enterprises cannot afford to build another data center to accommodate additional resource capacity. This has led to the diffusion of a technique known as server consolidation.

4. ECO-FRIENDLY INITIATIVES-

At this time, corporations are actively seeking various methods to minimize their expenditure on the power consumed by their systems. Data centers are major power consumers: maintaining data center operations needs a continuous power supply, and a good amount of energy is needed to keep the equipment cool. Server consolidation reduces the power consumed and the cooling impact by reducing the number of servers. Virtualization provides a sophisticated method of server consolidation.

5. ADMINISTRATIVE COSTS-

Furthermore, the rising demand for capacity, which translates into more servers in a data center, is responsible for a significant increase in administrative costs. Common system administration tasks include hardware monitoring, server setup and updates, defective hardware replacement, server resource monitoring, and backups. These are personnel-intensive operations, and administrative costs increase with the number of servers. Virtualization decreases the number of servers required for a given workload and hence reduces the cost of administrative staff.

VIRTUALIZATION REFERENCE MODEL-


Figure: Reference Model of Virtualization.
Three major components make up a virtualized environment:
1. GUEST:
The guest represents the system component that interacts with the virtualization layer rather
than with the host, as would normally happen. Guests usually consist of one or more virtual
disk files, and a VM definition file. Virtual Machines are centrally managed by a host
application that sees and manages each virtual machine as a different application.
2. HOST:
The host represents the original environment where the guest is supposed to be managed. Each guest runs on the host using shared resources donated to it by the host. The host operating system manages the physical resources and provides device support.
3. VIRTUALIZATION LAYER:
The virtualization layer is responsible for recreating the same or a different environment in which the guest will operate. It is an additional abstraction layer between the network, storage and compute hardware and the applications running on it. Without it, a physical machine usually runs a single operating system, which is very inflexible compared to what virtualization makes possible.
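To make the guest/host/virtualization-layer split concrete, the sketch below registers a minimal guest definition with libvirt: the XML document plays the role of the VM definition file, the referenced disk image is the guest's virtual disk, and the connection URI addresses the virtualization layer on the host. It assumes a Linux host with KVM/QEMU and libvirt-python installed; the guest name and disk path are placeholders.

import libvirt

GUEST_XML = """
<domain type='kvm'>
  <name>reference-model-guest</name>          <!-- the GUEST: a VM definition -->
  <memory unit='MiB'>512</memory>             <!-- resources donated by the HOST -->
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>          <!-- one virtual disk file -->
      <source file='/var/lib/libvirt/images/reference-model-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")          # talk to the virtualization layer
dom = conn.defineXML(GUEST_XML)                # register (but do not start) the guest
print("Defined guest:", dom.name())
conn.close()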
UNIT II SERVER AND DESKTOP VIRTUALIZATION
Virtual machine basics- Types of virtual machines- Understanding Server Virtualization-
types of server virtualization- Business Cases for Server Virtualization – Uses of Virtual
Server Consolidation – Selecting Server Virtualization Platform-Desktop Virtualization-
Types of Desktop Virtualization

Types of Virtual Machines


A virtual machine is like a simulated computer system operating on your hardware. It partially uses the hardware of your system (CPU, RAM, disk space, etc.), but its space is completely separated from your main system. Two virtual machines do not interfere with each other's working and functioning, nor can they access each other's space, which gives the illusion that we are using totally different hardware systems.
Question: Is there any limit to the number of virtual machines one can install?
Answer: In general there is no limit, because it depends on the hardware of your system. As a VM uses the hardware of your system, once that capacity is exhausted you will not be able to install further virtual machines.
Question: Can one access the files of one VM from another?
Answer: In general, no; but file sharing between virtual machines can be enabled as an additional feature.
Types of Virtual Machines: Virtual machines can be classified into two types:
1. System Virtual Machine: These virtual machines provide a complete system platform and support the execution of a complete operating system. Just like VirtualBox, a system virtual machine provides an environment in which an OS can be installed completely. We can see in the image below that the hardware of the real machine is distributed between two simulated operating systems by the virtual machine monitor, and programs and processes then run separately on the simulated hardware of each machine.

2. Process Virtual Machine: A process virtual machine, unlike a system virtual machine, does not provide the facility to install an operating system completely. Rather, it creates a virtual environment for a program while it runs, and this environment is destroyed as soon as we exit the program. In the image below, some apps run directly on the main OS while virtual machines are created to run other apps: because those programs require a different runtime environment, the process virtual machine provides it for as long as the programs are running. Example: the Wine software in Linux helps to run Windows applications.

Virtual Machine Language: This is a type of intermediate language that can be executed on different operating systems; it is platform-independent. Just as running a program written in a language such as C, Python or Java requires a compiler or interpreter that converts the code into a machine-understandable form, a virtual machine language is compiled into byte code that the virtual machine executes. If we want code that can be executed on different operating systems (Windows, Linux, etc.), a virtual machine language is helpful.
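CPython is itself an everyday example of a process virtual machine executing platform-independent byte code. The short sketch below compiles a small function and disassembles the byte code that the Python virtual machine runs; for a given CPython version, the same byte code is produced whether the host OS is Windows or Linux.

import dis

def add_tax(price, rate=0.18):
    """Tiny function whose compiled form is platform-independent byte code."""
    return price + price * rate

# The Python VM (a process virtual machine) executes these byte code
# instructions, not native CPU instructions. The JVM works on the same
# principle: .class files contain byte code any conforming JVM can run.
dis.dis(add_tax)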

What is Server Virtualization?


Server virtualization is used to mask server resources from server users. This can include
the number and identity of operating systems, processors, and individual physical servers.
Server Virtualization Definition
Server virtualization is the process of dividing a physical server into multiple unique and
isolated virtual servers by means of a software application. Each virtual server can run its
own operating systems independently.
Key Benefits of Server Virtualization:
• Higher server availability
• Cheaper operating costs
• Reduced server complexity
• Increased application performance
• Quicker workload deployment
Three Kinds of Server Virtualization:
• Full Virtualization: Full virtualization uses a hypervisor, a type of software that directly communicates with a physical server's disk space and CPU. The hypervisor monitors the physical server's resources and keeps each virtual server independent and unaware of the other virtual servers. It also relays resources from the physical server to the correct virtual server as it runs applications. The biggest limitation of full virtualization is that the hypervisor has its own processing needs, which can slow down applications and impact server performance.

• Para-Virtualization: Unlike full virtualization, para-virtualization involves the entire network working together as a cohesive unit. Since each operating system on the virtual servers is aware of the others in para-virtualization, the hypervisor does not need to use as much processing power to manage the operating systems.

• OS-Level Virtualization: Unlike full and para-virtualization, OS-level virtualization does not use a hypervisor. Instead, the virtualization capability, which is part of the physical server's operating system, performs all the tasks of a hypervisor. However, all the virtual servers must run the same operating system in this server virtualization method.


Why Server Virtualization?
Server virtualization is a cost-effective way to provide web hosting services and effectively
utilize existing resources in IT infrastructure. Without server virtualization, servers only use a
small part of their processing power. This results in servers sitting idle because the workload
is distributed to only a portion of the network’s servers. Data centers become overcrowded
with underutilized servers, causing a waste of resources and power.
By having each physical server divided into multiple virtual servers, server virtualization
allows each virtual server to act as a unique physical device. Each virtual server can run its
own applications and operating system. This process increases the utilization of resources by
making each virtual server act as a physical server and increases the capacity of each physical
machine.
Server virtualization has its fair share of benefits for a business – maximizing the IT
capabilities, saving physical spaces, and cutting costs on energy and new equipment. But for
a company that’s just starting to explore the realm of server virtualization, choosing one from
the three types of server virtualization can be daunting.
So what are the three types of server virtualization and how do companies utilize them? Most companies use full virtualization, para-virtualization, or OS-level virtualization. The difference lies in the OS modification and the hypervisor each type employs.
Understanding Server Virtualization: What it is and How it Works
Server computers are powerful: they manage computer networks, store files, and host applications. But most of the time these powerful processing units are not utilized to their full potential, because businesses tend to purchase more computers and other hardware instead, which is not always a wise decision since it occupies more physical space and consumes more energy.
Server virtualization offers one solution to these two problems by creating multiple virtual
servers in one physical server. This method ensures that each processing unit is maximized to
its full capacity, preventing the need for more computer units in a data center. The adoption of different virtualization technologies, including server virtualization, was expected to rise to 56% by 2021.
Currently, there are three types of virtualization used for sharing resources, memory, and
processing.
Full Virtualization
This type of virtualization is widely utilized in the IT community because it only involves
simple virtualization. It makes use of hypervisors to emulate an artificial hardware device
along with everything it needs to host operating systems.
In full virtualization, separate hardware emulations are created to cater to individual guest
operating systems. This makes each guest server fully functional and isolated from the other
servers in a single physical hardware unit.
What’s great about this type is that you can run different operating systems in one server,
since they are independent of each other. Modification of each OS also isn’t necessary for the
full virtualization to be effective.
Currently, enterprises make use of two types of full virtualization:
1. Software-Assisted Full Virtualization
Software-assisted full virtualization uses binary translation to trap and virtualize the execution of privileged instruction sets. The binary translation also emulates the hardware by using software instruction sets. Here is a list of software under this type:
• VMware Workstation (32-bit guests)
• Virtual PC
• VirtualBox (32-bit guests)
• VMware Server
2. Hardware-Assisted Full Virtualization
Hardware-assisted virtualization, on the other hand, eliminates the need for binary translation. Instead, it relies on the virtualization extensions built into x86 processors (Intel VT-x and AMD-V), which trap privileged operations directly in hardware. Depending on the guest OS's instructions, privileged instructions can be executed directly on the processor.
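A quick way to see whether a host can use hardware-assisted full virtualization is to look for the corresponding CPU flags. The sketch below, which assumes a Linux host, checks /proc/cpuinfo for the vmx (Intel VT-x) or svm (AMD-V) flag.

def hardware_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    # Collect the CPU feature flags reported by the kernel.
    with open(cpuinfo_path) as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x available"
    if "svm" in flags:
        return "AMD-V available"
    return "No hardware virtualization extensions detected (or disabled in BIOS/UEFI)"

if __name__ == "__main__":
    print(hardware_virtualization_support())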
This type of full virtualization can use either of the two hypervisor types:
1. Type 1 Hypervisor – also known as the bare-metal hypervisor, lies directly on top of the physical server and its hardware. Since there is no operating system software between the two, Type 1 can provide excellent stability and performance.
Since Type 1 hypervisors are relatively simple, there isn't much extra functionality to them. Moreover, once this hypervisor is installed on the hardware, the latter cannot be utilized for anything else except virtualization. Type 1 hypervisors include:
• VMware vSphere with ESX/ESXi
• Kernel-based Virtual Machine (KVM)
• Microsoft Hyper-V
• Oracle VM
• Citrix Hypervisor
2. Type 2 Hypervisor – also known as the hosted hypervisor, is installed inside the operating system of the host machine. Unlike the Type 1 hypervisor, this one has a software layer underneath it.
The Type 2 hypervisor is typically used in data centers that only have a small number of physical servers. What makes it convenient to use is that it isn't much different from the other applications in the host operating system. It is easy to set up and manage multiple virtual machines once the hypervisor has been installed. Here are some of the Type 2 hypervisors on the market:
• Oracle VM VirtualBox
• VMware Workstation Pro / VMware Fusion
• Windows Virtual PC
• Parallels Desktop
Para-Virtualization
Para-virtualization is similar to full virtualization because it also uses the host-guest paradigm. The main difference is that the guest systems are aware of each other's presence and they all work as one entire unit.
This type is also time-efficient and less intrusive, since the virtual machines do not trap on privileged instructions. The operating systems acknowledge the hypervisor used on the hardware and send commands, known as hypercalls, to it directly.
To exchange hypercalls between the hypervisor and the operating systems, both of them must be modified by implementing an application programming interface (API).
Since para-virtualization utilizes a slightly different hypervisor than full virtualization, here are some of the more common products that support it:
• Xen
• IBM LPAR
• Oracle VM for SPARC (LDOM)
• Oracle VM for x86 (OVM)
OS-Level Virtualization
Unlike the first two types of server virtualization, OS-level virtualization doesn't use a hypervisor and doesn't apply a host-guest paradigm. Instead, it utilizes a process called "containerization", which creates multiple user-space instances (containers or virtual environments) through the kernel of the host OS.
A specific container can only utilize the amount of resources allocated to it, not all the resources available to the primary OS. Programs can also run in the container, but their access is limited to the content associated with that container and the devices assigned to it.
In this virtualization, the containers can have different OS versions from the host only within the same OS family: for example, if the host server runs on Linux, the containers can use different versions of Linux but not Windows; otherwise, the OS-level approach won't work (a short Docker-based example follows the list below).
Here are some of the commonly used container technologies in the market:
• Oracle Solaris Zones
• Linux LXC
• AIX WPAR
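The Docker-based sketch below illustrates the idea: each container gets its own user space and resource limits, yet every container shares the host's kernel. It assumes Docker Engine is installed and that the small Alpine Linux image can be pulled from Docker Hub.

import subprocess

def run_in_container(command):
    """Run a command inside a resource-limited container that shares the host kernel."""
    return subprocess.run(
        ["docker", "run", "--rm",
         "--memory", "256m",      # the container may use at most 256 MiB of RAM
         "--cpus", "0.5",         # ...and half of one CPU
         "alpine:3.19", "sh", "-c", command],
        capture_output=True, text=True).stdout

# Both commands report the *host's* kernel, because containers share it;
# only the user space (here, Alpine Linux) differs from the host distribution.
print(run_in_container("uname -r"))
print(run_in_container("head -n 2 /etc/os-release"))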
What to Consider for Server Virtualization
Server virtualization is a promising method that can maximize the use of IT resources – that’s
why tech giants like Microsoft, Dell, and IBM are continuously developing this technology.
However, before picking the optimal virtualization for a business, it’s important to determine
their benefits and disadvantages first.
Full Virtualization
Pros: It can support different and unmodified operating systems on one physical server.
Cons: To prevent the slowing down of applications, you will need to allocate a big part of the physical server's processor for the hypervisor.

Para-virtualization
Pros: Para-virtualized servers don't need as much space for processing in the physical server.
Cons: The operating system of the guest servers needs modification to be able to communicate hypercalls with the host.

OS-Level Virtualization
Pros: It does not need a hypervisor, therefore no additional space requirement is needed for processing.
Cons: To build a homogeneous environment, you are required to install the same operating system on all guest servers.
Aside from the virtualization method, you should also consider the following factors before
settling on a specific type:
1. OS Rebooting – Operating system rebooting is typically overlooked because operating systems are expected to work all the time. However, there is still a small risk of OS crashes. If this happens, an independent OS reboot must be possible.
2. Deployment Work – While the type 2 hypervisor is easy to implement, it’s not the
same case for type 1 hypervisor. The bare-metal hypervisor is much more difficult to
handle than the former, so a thorough integration process is needed – especially for
large deployments.
3. Multiprocessing – Before selecting a virtualization solution, check first whether it includes symmetric multiprocessing (SMP) support for multiple processors of the same type or asymmetric multiprocessing (AMP) support for multiple processors of different types. Some virtualization infrastructures also come with both SMP and AMP combined.
Server Virtualization and Consolidation
Reduce IT costs with server virtualization and consolidation. Eliminate over-provisioning,
increase server utilization, centralize server management.
Reduce IT Costs and Increase Control with Server Virtualization
Eliminate over-provisioning, increase server utilization and limit the environmental impact of
IT by consolidating your server hardware with VMware vSphere with Operations
Management*, VMware's virtualization platform.
Server consolidation lets your organization:
• Reduce hardware and operating costs by as much as 50 percent and energy costs by as much as 80 percent, saving more than $3,000 per year for each virtualized server workload.
• Reduce the time it takes to provision new servers by as much as 70 percent.
• Decrease downtime and improve reliability with business continuity and built-in disaster recovery.
• Deliver IT services on demand, independent of hardware, operating systems, applications or infrastructure providers.
*End of Availability of vSphere with Operations Management Enterprise Plus is February 1,
2019
Reduce Server Costs with Desktop and Server Virtualization
By consolidating your server hardware with vSphere with Operations Management, your
organization can increase existing hardware utilization from as low as 5 percent to as much as
80 percent. You can also reduce energy consumption by decreasing the number of servers in
your data center. VMware server virtualization can reduce hardware requirements by a 15:1
ratio, enabling you to lessen the environmental impact of your organization's IT without
sacrificing reliability or service levels. Server and desktop hardware consolidation can also
help you achieve a 20 to 30 percent lower cost per application, as well as defer data center
construction costs by $1,000 per square foot. vSphere with Operations Management allows
for a 50 to 70 percent higher virtual machine density per host than commodity offerings.
Centralize Management of Your Virtual Data Center
Unlike vendors that only offer single-point solutions for server virtualization, VMware lets
you manage an entire virtual data center from a single point of control. With vSphere with
Operations Management, you can monitor health, manage resources, and plan for the data
center growth all from a unified dashboard.
Automate the Virtual Data Center
An automated virtual data center can simplify management while simultaneously delivering
performance, scalability and availability levels that are impossible with physical
infrastructure. The vSphere with Operations Management platform enables your organization
to minimize downtime, enable dynamic, policy-based allocation of IT resources and
eliminate repetitive configuration and maintenance tasks.
Server Consolidation in Cloud Computing
Server consolidation in cloud computing refers to the process of combining multiple servers
into a single, more powerful server or cluster of servers. This can be done in order to
improve the efficiency and cost-effectiveness of the cloud computing environment. Server
consolidation is typically achieved through the use of virtualization technology, which
allows multiple virtual servers to run on a single physical server. This allows for better
utilization of resources, as well as improved scalability and flexibility. It also allows
organizations to reduce the number of physical servers they need to maintain, which can
lead to cost savings on hardware, power, and cooling.
The Architecture of Server Consolidation
As shown in the basic server consolidation architecture diagram below, multiple physical servers are consolidated onto a smaller number of powerful servers using virtualization. This process results in the creation of logical servers which are isolated from one another and have their own operating systems and applications, but share the same physical resources such as CPU, RAM, and storage.

Physical Servers, Virtualization Software, and Virtual Servers make up the three primary
parts of the server consolidation architecture.
• Physical Servers: Physical servers form the hardware of the server consolidation environment. These servers are usually powerful machines with high processing speeds that are built to handle massive volumes of data. They are used to host the virtualization software and run the virtual servers.
• Virtualization Software: Virtualization software lets a single physical server run several virtual servers. It creates an abstraction layer between the real hardware and the virtual servers, so that multiple virtual servers can share the resources of a single physical server.
• Virtual Servers: Virtual servers are the virtualized counterparts of the physical servers. They run on top of the physical servers and are created and controlled by the virtualization software. Each virtual server is a separate instance of an operating system and can execute its own programs and services.
Server consolidation creates virtual servers that share the resources of the physical servers
by fusing a number of physical servers into a single virtualized environment utilizing
virtualization software. This makes it possible to use resources more effectively and save
money. Additionally, it makes it simple to manage existing servers, set up new ones, and
scale resources up or down as necessary.
Types of Server Consolidation
1. Logical Consolidation: In logical server consolidation, multiple virtual servers are
consolidated onto a single physical server. Each virtual server is isolated from the
others and has its own operating system and applications, but shares the same physical
resources such as CPU, RAM, and storage. This allows organizations to run multiple
virtual servers on a single physical server, which can lead to significant cost savings
and improved performance. Virtual servers can be easily added or removed as needed,
which allows organizations to more easily adjust to changing business needs.
2. Physical Consolidation: Physical Consolidation is a type of server consolidation in
which multiple physical servers are consolidated into a single, more powerful server or
cluster of servers. This can be done by replacing multiple older servers with newer,
more powerful servers, or by adding additional resources such as memory and storage
to existing servers. Physical consolidation can help organizations to improve the
performance and efficiency of their cloud computing environment.
3. Rationalized Consolidation: Rationalized consolidation is a type of server
consolidation in which multiple servers are consolidated based on their workloads. This
process involves identifying and grouping servers based on the applications and
services they are running and then consolidating them onto fewer, more powerful
servers or clusters. The goal of rationalized consolidation is to improve the efficiency
and cost-effectiveness of the cloud computing environment by consolidating servers
that are running similar workloads.
How to Perform Server Consolidation?
Server consolidation in cloud computing typically involves several steps, including:
1. Assessing the Current Environment: The first step in server consolidation is to assess
the current environment to determine which servers are running similar workloads and
which ones are underutilized or over-utilized. This can be done by analyzing the usage
patterns and resource utilization of each server.
2. Identifying and Grouping Servers: Once the current environment has been assessed,
the next step is to identify and group servers based on their workloads. This can help to
identify servers that are running similar workloads and can be consolidated onto fewer,
more powerful servers or clusters.
3. Planning the Consolidation: After identifying and grouping servers, the next step is to plan the consolidation. This involves determining the best way to consolidate the servers, such as using virtualization technology, cloud management platforms, or physical consolidation. It also involves determining the resources required to support the consolidated servers, such as CPU, RAM, and storage (a small illustrative sketch of such a capacity-packing calculation follows this list).
4. Testing and Validation: Before consolidating the servers, it is important to test and
validate the consolidation plan to ensure that it will meet the organization’s needs and
that the servers will continue to function as expected.
5. Consolidating the Servers: Once the plan has been tested and validated, the servers
can be consolidated. This typically involves shutting down the servers to be
consolidated, migrating their workloads to the consolidated servers, and then bringing
the servers back online.
6. Monitoring and Maintenance: After the servers have been consolidated, it is
important to monitor the consolidated servers to ensure that they are performing as
expected and to identify any potential issues. Regular maintenance should also be
performed to keep the servers running smoothly.
7. Optimizing the Consolidated Environment: To keep the consolidated environment
optimal, it’s important to regularly evaluate the usage patterns and resource utilization
of the consolidated servers, and make adjustments as needed.
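As a simplified illustration of the planning step, the sketch below packs measured per-server CPU and RAM demands onto as few consolidation hosts as possible using a first-fit-decreasing heuristic. All workload figures and host capacities are invented for the example; a real plan would use measured utilization data plus headroom and availability policies.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: float      # available vCPUs
    ram_free: float      # available RAM in GiB
    guests: list = field(default_factory=list)

    def fits(self, cpu, ram):
        return cpu <= self.cpu_free and ram <= self.ram_free

    def place(self, workload, cpu, ram):
        self.guests.append(workload)
        self.cpu_free -= cpu
        self.ram_free -= ram

# Measured average demand of the servers to be consolidated (name, vCPU, GiB RAM).
workloads = [("web-01", 2, 4), ("web-02", 2, 4), ("db-01", 8, 32),
             ("mail", 1, 2), ("file", 2, 8), ("erp", 4, 16)]

# Capacity of the consolidation targets, after subtracting a safety headroom.
hosts = [Host("consol-host-1", cpu_free=16, ram_free=48),
         Host("consol-host-2", cpu_free=16, ram_free=48)]

# First-fit decreasing: place the largest workloads first.
for name, cpu, ram in sorted(workloads, key=lambda w: (w[1], w[2]), reverse=True):
    target = next((h for h in hosts if h.fits(cpu, ram)), None)
    if target is None:
        print(f"{name}: no capacity left - another host is needed")
    else:
        target.place(name, cpu, ram)

for h in hosts:
    print(f"{h.name}: {h.guests} (free: {h.cpu_free} vCPU, {h.ram_free} GiB)")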
Benefits of Server Consolidation
Server consolidation in cloud computing can provide a number of benefits, including:
• Cost savings: By consolidating servers, organizations can reduce the number of physical servers they need to maintain, which can lead to cost savings on hardware, power, and cooling.
• Improved performance: Consolidating servers can also improve the performance of the cloud computing environment. By using virtualization technology, multiple virtual servers can run on a single physical server, which allows for better utilization of resources. This can lead to faster processing times and better overall performance.
• Scalability and flexibility: Server consolidation can also improve the scalability and flexibility of the cloud environment. By using virtualization technology, organizations can easily add or remove virtual servers as needed, which allows them to more easily adjust to changing business needs.
• Management simplicity: Managing multiple servers can be complex and time-consuming. Consolidating servers can help to reduce the complexity of managing multiple servers by providing a single point of management. This can help organizations to reduce the effort and costs associated with managing multiple servers.
• Better utilization of resources: By consolidating servers, organizations can improve the utilization of resources, which can lead to better performance and cost savings.
Server consolidation in cloud computing is a process of combining multiple servers into a
single, more powerful server or cluster of servers, in order to improve the efficiency and
cost-effectiveness of the cloud computing environment.
How to choose a virtualization platform
What is a virtualization platform?
A virtualization platform is a solution for managing virtual machines (VMs), enabling an IT
organization to support isolated computing environments that share a pool of hardware
resources.
Organizations use VMs for a variety of reasons, including to efficiently manage many
different kinds of computing environments, to support older operating systems and software,
and to run test environments. A virtualization platform brings together all the technologies
needed to support and manage large numbers of VMs.
VM platforms continue to evolve, prompting some enterprises to explore new virtualization
providers. A clear understanding of virtualization concepts can help inform these choices.
Important virtualization concepts and choices
Virtualization platforms take different approaches to the technologies that make VMs
possible. Here are some concepts to keep in mind when comparing platforms.
Type 1 or type 2 hypervisors
A hypervisor is software that pools computing resources—like processing, memory, and
storage—and reallocates them among VMs. This is the technology that enables users to
create and run multiple VMs on a single physical machine. Hypervisors fall into 2 categories.
Type 1 hypervisors run directly on the host’s hardware, and are sometimes called native or
bare metal hypervisors. A type 1 hypervisor assumes the role of a host operating system
(OS), scheduling and managing resources for each VM. This type of hypervisor is well suited
for enterprise data center or server-based environments. Popular type 1 hypervisors
include KVM (the open source foundation for Red Hat’s virtualization platforms), Microsoft
Hyper-V, and VMware vSphere.
Type 2 hypervisors run as a software layer on top of a conventional OS. The host OS
manages resources for the hypervisor like any other application running on the OS. Type 2
hypervisors are usually best for individuals who want to run multiple operating systems on a
personal workstation. Common examples of type 2 hypervisors include VMware Workstation
and Oracle VirtualBox.
Open source or proprietary technology
Open source software, such as the KVM virtualization technology built into Linux® and the Kubernetes-based KubeVirt project, relies on community contributions and open standards. One benefit of open source software, besides its transparency, is cross-platform compatibility. Open standards and open application programming interfaces (APIs) lead to flexible integration, making it possible to run virtual environments across different datacenter and cloud infrastructures.
Conversely, proprietary technology can make it challenging to integrate with other
technologies and harder to switch vendors.
Container and cloud compatibility
Modern IT organizations need to support both VMs and containers. Containers group
together just what’s needed to run a single application or service and tend to be smaller than
VMs, making them lightweight and portable. Containers and VMs may need to operate
seamlessly across hybrid and multicloud environments.
Faced with all this complexity, IT organizations seek to simplify their application
development and deployment pipelines. A platform should support both containers and VMs
and help teams use computing resources efficiently, and ensure applications and services roll
out in an orderly, consistent way.
Traditional virtualization platforms can be separate from container platforms. Sometimes
they are meant to work in a single environment, rather than across multiple cloud
environments.
More modern virtualization platforms act as components of unified platforms that work across different infrastructure, including on-premises and cloud environments. This approach
can streamline deployment, management and monitoring of both VMs and containers. A
unified platform can eliminate duplicate work and improve flexibility, making it easier to
adapt to changes.
What to look for in a virtualization platform
Equipped with an understanding of virtualization concepts, you’ll want to list your
requirements for a virtualization platform and evaluate the benefits and drawbacks of
different choices in the marketplace. Your research should include important qualities like
costs and support levels, as well as features specific to virtualization platforms. Here are a
few such features to look for.
Ease of migration
When moving from one virtualization platform to another, administrators will seek to avoid
disruptions, incompatibilities, and degraded performance. Virtualization platforms can have
different deployment and management processes, and different tooling, especially across
different cloud providers.
Preparation can help avoid many migration pitfalls. Using tested and effective toolkits to
preemptively validate VM compatibility and move multiple VMs at once can help migrations
go quickly and smoothly.
Automation
At enterprise scale, with hundreds or thousands of VMs, automation becomes a necessity.
Migrating and managing VMs can be repetitive, time-consuming work without an automation
system. Automation tools that follow infrastructure as code (IaC) and configuration as code
(CaC) methodologies can take over and replace manual processes. Automation helps out
beyond just migration and deployments. Automated workflows can inventory existing VMs,
apply patches, manage configurations, and more.
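As one hedged example of such automation, the sketch below builds a VM inventory across several KVM hosts using the libvirt Python bindings; it assumes libvirt-python on the control node and SSH key access to each host running libvirtd, and the host names are placeholders.

import libvirt

HOSTS = ["kvm-host-a.example.com", "kvm-host-b.example.com"]

def inventory(host):
    """Return (name, active, MiB of RAM) for every guest defined on a host."""
    conn = libvirt.openReadOnly(f"qemu+ssh://{host}/system")
    try:
        return [(dom.name(), dom.isActive() == 1, dom.maxMemory() // 1024)
                for dom in conn.listAllDomains()]
    finally:
        conn.close()

if __name__ == "__main__":
    for host in HOSTS:
        for name, active, mem in inventory(host):
            state = "running" if active else "defined"
            print(f"{host}: {name} ({state}, {mem} MiB)")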
Management capabilities
VM administrators and site reliability engineers might oversee deployments that span
multiple data centers, private clouds, and public clouds. They need tools and capabilities to
support, manage, and monitor VMs across these environments.
A virtualization platform should provide a single console with built-in security policies and
full visibility and control of VMs. This end-to-end visibility and control helps your teams
deliver new applications and services that comply with policies and regulations.
Security and stability
VM administrators have to protect systems from unauthorized access and service disruptions.
A virtualization platform should make it possible to apply security policies, isolation
technologies, and least privileges principles.
In platforms that combine VMs with container management, Kubernetes security standards
can help ensure virtual machines run without root privileges, complying with industry best
practices and mitigating risks.
Partner ecosystem
Migrating to a new virtualization platform shouldn’t require you to walk away from valued
vendor relationships or integrations. A platform should maintain relationships with partners
who have deep expertise in the virtualization technologies you choose. Specifically for
virtualization platforms, you should look for a strong network of partners who can provide
storage and network virtualization, and backup and disaster recovery. Partnerships with major
hardware providers and IT services providers may also be essential to the success of your
VM program.

How to Choose the Right Virtualization Platform


Choosing the right virtualization platform is a critical decision for any organization looking to modernize its IT infrastructure. This process is akin to selecting the right tool for a specific job: not every tool is suitable for every task, and the same applies to virtualization platforms. For instance, a business seeking streamlined storage management and high performance might choose a hyperconverged appliance such as a vSAN ReadyNode. Let's delve into the factors that should guide this decision, using real-world examples and comparisons, including how specific solutions like the vSAN ReadyNode can meet distinct organizational needs.
Understanding Virtualization Needs
Different organizations have different virtualization needs. A rapidly scaling internet startup,
for instance, might prioritize scalability and the ability to quickly deploy new services. In
contrast, an established B2B company might need support for legacy applications and
backward compatibility due to its stable user base.
Architectural Considerations
Virtualized systems consist of two main layers:
1. Hardware Environment: This includes servers, storage, and networking
components. The compatibility and capacity of your existing hardware are crucial in
determining the choice of virtualization software.
2. Software Platform: This layer abstracts the hardware, providing an idealized
environment for hosted services. The choice of software depends on the hardware
compatibility and the specific requirements of the organization.
In-Depth Analysis of Popular Virtualization Platforms
Choosing the right virtualization platform is crucial for businesses, as each platform offers
unique benefits and caters to different requirements. Here's a closer look at the popular
options: VMware's vSphere, Microsoft Hyper-V, and Citrix XenServer.
VMware's vSphere
Best Suited For: Large enterprises and businesses looking for comprehensive, robust, and scalable virtualization solutions. vSphere can also be paired with VMware vSAN to provide enhanced storage capabilities integrated within the vSphere environment, making it an appealing choice for organizations seeking to leverage the full potential of their virtualized infrastructure.
Key Advantages:
1. Robustness and Reliability: vSphere is known for its stability and high availability,
making it a reliable choice for mission-critical applications.
2. Advanced Features: It offers a wide array of advanced features like Distributed
Resource Scheduler (DRS), High Availability (HA), and Fault Tolerance (FT).
3. Large Ecosystem: VMware has a vast ecosystem of compatible third-party tools and
a strong community support system.
4. Scalability: vSphere scales effectively, supporting large workloads and numerous
virtual machines without performance degradation.
5. Mature Security: Offers sophisticated security features, crucial for sectors like
finance and healthcare.
Limitations:
• Can be more expensive compared to other solutions, especially for smaller businesses.
• It might require more specialized knowledge and training to utilize its full capabilities.
Microsoft Hyper-V
Best Suited For: Businesses heavily invested in the Microsoft ecosystem and looking for a
cost-effective virtualization solution.
Key Advantages:
1. Seamless Windows Integration: Hyper-V integrates well with Windows-based
systems, offering a unified experience.
2. Cost-Effective: It’s generally more affordable, especially for small to medium-sized
businesses.
3. Ease of Use: Familiarity with Windows makes it user-friendly and easier to adopt for
teams already using Microsoft products.
4. Flexibility: Hyper-V supports not only Windows but also Linux-based VMs,
providing versatility.
5. Good for Mixed Environments: Effective for businesses running both Windows and
Linux servers.
Limitations:
• While it is improving, Hyper-V has historically lagged behind VMware in terms of features and performance.
• Best suited for Windows-centric environments, which might be limiting for some.
Citrix XenServer
Best Suited For: Organizations looking for an open-source, flexible, and cost-effective
virtualization platform, particularly in niche sectors.
Key Advantages:
1. Open Source Nature: XenServer is open source, providing more flexibility and
customization options.
2. Cost-Effective: Ideal for businesses with limited budgets or those that prefer open-
source solutions.
3. Community Support: Benefits from community-driven innovations and support.
4. Compatibility: Good for environments that are not specifically tied to Windows or
VMware-based products.
5. Simplicity: Easier to manage and less resource-intensive, suitable for smaller IT
teams.
Limitations:
• Smaller market share can mean fewer available experts and less third-party integration.
• Might lack some of the advanced features provided by VMware.
Making the Choice
When choosing between these platforms, consider your organization's current IT
infrastructure, budget constraints, the level of expertise available among your IT staff, and
your future scalability needs. For instance, a large financial institution might lean towards
VMware's vSphere for its robust security and stability, while a small tech startup might find
the flexibility and cost-effectiveness of Citrix XenServer more appealing. A medium-sized
enterprise already using a range of Microsoft products might find Hyper-V to be the most
seamless and cost-effective solution. Remember, the choice of virtualization platform is not just about the features it offers; it's about how those features align with your organizational needs and goals.
What is desktop virtualization?
Desktop virtualization is an innovative technology that detaches the desktop environment,
including the operating system, applications, and data, from the physical machine. When the
tools are detached from the machine itself, it allows for a highly flexible and accessible
computing system where the user's desktop is hosted on a server and can be accessed from
anywhere.
In the realm of computing, the concept of desktop virtualization serves as a bridge between
the traditional, physical constraints of hardware and the limitless potential of digital
workspaces. It mirrors the shift in our perception and use of computers and empowers users
to access their personal desktop space remotely, providing flexibility and mobility unheard of
in the traditional computing model.
We are no longer tied to a single location or device. Instead, we embrace the freedom to
work, learn, and interact in a digital space that moves with us. Desktop virtualization
represents a significant leap towards more agile, resilient, and user-centric computing models,
breaking down the barriers imposed by traditional IT infrastructure.
How does it work?
At its core, desktop virtualization operates by hosting a desktop operating system on a
centralized server. This setup allows multiple users to access their own virtualized desktop
instances simultaneously. When a user logs in, they're connected to their desktop instance
running on the server. This connection can be made through various devices—be it a
traditional PC, a thin client, a tablet, or a smartphone—offering a seamless computing
experience regardless of the hardware used.
This versatile solution works in two primary ways: local and remote.
Local desktop virtualization
With local desktop virtualization, the computer's operating system is run directly on a client
device, leveraging the local system resources. This approach is particularly suited for those
who do not require constant network connection and whose computing needs fit within the
local system capacity. Because processing is done locally, however, local desktop virtualization
does not allow virtual machines (VMs) or other resources to be shared across a network with
external devices such as thin clients and mobile devices.
Remote desktop virtualization
On the other hand, remote desktop virtualization shines in server-based environments. It
enables users to operate systems and applications housed within the secure confines of a
datacenter while engaging with them on personal devices like laptops or smartphones. This
setup offers IT teams the advantage of centralized management and allows organizations to
stretch their hardware investments by providing remote access to pooled computing power.
Types of desktop virtualization
There are two types of desktop virtualization, hosted and client.
Hosted virtualization
Hosted desktop virtualization involves hosting desktop environments on a central server or in
the cloud. This category can be broken down into several types:
 Virtual desktop infrastructure (VDI) makes desktops and applications an on-
demand service, allowing access anytime and anywhere. With virtual desktop
infrastructure, each user receives a dedicated desktop instance on the server, which
frees them up to use any device to access their instance. This method offers a high
degree of personalization and performance but requires significant server resources.
 Remote desktop services (RDS) enables multiple users to access a shared desktop
and applications from a remote server. With RDS, multiple users share a single
operating system instance, optimizing resource use but offering less personalization.
 Desktop-as-a-Service (DaaS) delivers hosted desktop services from a third party.
With desktop-as-a-service, organizations can give employees anytime-anywhere
access to personalized desktops from virtually any device. This cloud-based
service shifts the burden of managing the backend responsibilities of data storage,
backup, security, and upgrades to the provider. DaaS offers scalability and flexibility
as hybrid environments are increasingly common, making it an attractive option for
small to medium-sized businesses.
Client virtualization
The other type of desktop virtualization, client virtualization, brings a different approach,
focusing on running the virtualization technology directly on the user's device. This category
can be separated into two types:
 Presentation virtualization separates the application layer from the graphical user
interface, displaying the application on the user's device while it runs on a server. It
can support resource efficiency and management and is useful in settings where many
users need to access a standardized set of applications and where the central control
and management of these applications are critical.
 Application virtualization separates an application from the underlying computer
hardware it is stored on. With application virtualization, applications can run
without being directly installed on the operating system. This method simplifies
application deployment and management and is useful in settings where apps need to
be accessed remotely on varied devices.
What are the benefits of desktop virtualization?
Desktop virtualization offers numerous benefits especially as the nature of work
environments and data management continues to evolve and change:
 Enhanced security - Storing business critical data within a datacenter enhances
security because it eliminates the risks associated with data that is stored on local
devices. With data and applications stored in secure datacenters, the risk of data theft
from lost or stolen devices is minimized. Furthermore, desktop virtualization allows
for better control over access to sensitive information, as data never leaves the
datacenter and can be quickly wiped from devices if an employee leaves the company.
 Simplified management and workflows - IT departments can manage and update
desktops and permissions centrally, reducing the complexity and cost of desktop
management. Desktop virtualization eliminates the need to manually set up new
desktops for each user, since IT can easily deploy a packaged virtual desktop to the
user’s device. The process for updating across devices is much less involved for IT
teams when application and operating systems data is stored in centralized locations,
instead of on individual users’ machines.
 Cost savings and resource management - Organizations can save on hardware costs
by extending the lifecycle of older devices and reducing the need for expensive client
hardware and upgrades. When users’ machines no longer need to do all the computing
internally, companies can save money on device capabilities with more affordable
machines. From a people ops perspective, centralizing desktop management can
significantly reduce IT overhead and bolster revenue margins.
 Flexibility and streamlined experience - Users can access their desktops and
applications from any device, anywhere, at any time. The ability to access your
personalized computer from anywhere and using any device, one of the most tangible
benefits for end-users, is a game-changer for remote work, education, and even
personal computing. This flexibility improves employee experience and affords new
possibilities for how and where people can work.
What challenges come with employing desktop virtualization?
Despite its many benefits, desktop virtualization also presents a few challenges.
 High touch engagement - The infrastructure required for desktop virtualization can
be complex to set up and manage. The initial setup and ongoing management of
desktop virtualization requires a deep understanding of both the technology and the
specific needs of the organization.
 Performance issues - Ensuring high performance and low latency can be difficult,
especially over wide-area networks. Graphics-intensive applications or usage in low-
bandwidth environments can frustrate users and hamper productivity.
 Upfront cost - The cost of implementing a desktop virtualization solution can also be
a barrier. While there are long-term savings to be had, the upfront investment in
server hardware, software licensing fees, and network infrastructure can be
significant.
 Ongoing complexity - Navigating the complex licensing agreements for virtualized
desktops can be a challenge for some IT departments.
Considering the various challenges of desktop virtualization allows teams to implement
strategies and plans for successfully mitigating and overcoming them.
Use cases
Desktop virtualization is highly versatile, catering to several use cases:
 Remote work - Facilitates secure and efficient remote access to work environments.
In today’s world of hybrid and remote roles, it allows employees to access their work
environment securely from anywhere.
 Education - Provides students access to learning resources from any device. Desktop
virtualization enables the virtualization of computer labs, providing students with
access to specialized software without the need for high-end personal computers.
 Healthcare – Ensures critical information is always available for key staff at
healthcare institutions. Doctors and staff can access patient records and applications
securely and efficiently, from any location.
Desktop virtualization shines across industries in any scenario where flexibility, security, and
management are paramount.
Desktop virtualization and the cloud
The convergence of desktop virtualization and cloud computing is perhaps one of the most
exciting developments in IT. Cloud-hosted virtual desktops, or Desktop-as-a-Service (DaaS),
reduce the need for on-premises infrastructure, making desktop virtualization more accessible
to smaller organizations without the resources to manage a complex IT environment.
The integration of desktop virtualization with cloud computing has expanded its capabilities
and accessibility. This integrated approach enhances scalability, as organizations can quickly
add or remove desktops based on current needs, paying only for what they use. Cloud-hosted
desktop virtualization supports business agility and the quick provisioning of resources.
Desktop virtualization software
Selecting the appropriate software is an important step in setting up a desktop virtualization
infrastructure, and the choice hinges on the specific virtualization path you want to pursue.
In the case of VDI, you'll find the desktop operating system, typically a version of Microsoft
Windows, running within the controlled environment of your datacenter. Here,
a hypervisor takes charge on the host server, facilitating each user's access to a virtual
machine via the network. Additionally, you'll employ connection broker software to manage
user authentication, establish connections to virtual machines, monitor engagement, and
reallocate resources once users log off. Depending on your setup, this connection broker
might come integrated with the hypervisor or need to be a standalone purchase.
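To make the connection broker's role concrete, here is a minimal, hypothetical Python sketch (not tied to any particular VDI product or API): it authenticates a user, hands out an available desktop VM from a pool, and reclaims the VM when the user logs off.

```python
# Minimal connection-broker sketch (hypothetical; names are illustrative only).
class ConnectionBroker:
    def __init__(self, vm_pool):
        self.free_vms = list(vm_pool)   # desktop VMs waiting for users
        self.sessions = {}              # user -> VM currently assigned

    def connect(self, user, credentials_ok):
        """Authenticate the user and hand out a virtual desktop."""
        if not credentials_ok:
            raise PermissionError(f"Authentication failed for {user}")
        if user in self.sessions:       # reconnect to the existing desktop
            return self.sessions[user]
        if not self.free_vms:
            raise RuntimeError("No virtual desktops available")
        vm = self.free_vms.pop(0)
        self.sessions[user] = vm
        return vm

    def logoff(self, user):
        """Reclaim the VM so it can be reallocated to another user."""
        vm = self.sessions.pop(user, None)
        if vm is not None:
            self.free_vms.append(vm)

broker = ConnectionBroker(["vdi-vm-01", "vdi-vm-02"])
print(broker.connect("alice", credentials_ok=True))   # vdi-vm-01
broker.logoff("alice")
```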
For RDS or RDSH (Remote Desktop Session Host) deployments, you can utilize the integrated
features provided with the Microsoft Windows Server operating system, which supports such
server-based virtualization natively.
Opting for a DaaS solution? Then you can leave the heavy lifting to the cloud-hosted service
provider. They'll handle the nuts and bolts—installing, configuring, and maintaining
everything from your applications and operating systems to your files and personal settings.
Whatever your virtualization path, there are many tools available to create, manage, and
deliver virtual desktops. They offer features to optimize performance, enhance security, and
simplify management, catering to the diverse needs of businesses and organizations as they
navigate the rapidly changing landscape of IT infrastructure and data management.
UNIT III NETWORK VIRTUALIZATION
Introduction to Network Virtualization - Advantages - Functions - Tools for Network
Virtualization - VLAN - WAN Architecture - WAN Virtualization
What is network virtualization?
Network Virtualization (NV) refers to abstracting network resources that were traditionally
delivered in hardware to software. NV can combine multiple physical networks to one
virtual, software-based network, or it can divide one physical network into separate,
independent virtual networks.
Network virtualization software allows network administrators to move virtual machines
across different domains without reconfiguring the network. The software creates a network
overlay that can run separate virtual network layers on top of the same physical network
fabric.
Why network virtualization?
Network virtualization is rewriting the rules for the way services are delivered, from the
software-defined data center (SDDC), to the cloud, to the edge. This approach moves
networks from static, inflexible, and inefficient to dynamic, agile, and optimized. Modern
networks must keep up with the demands for cloud-hosted, distributed apps, and the
increasing threats of cybercriminals while delivering the speed and agility you need for faster
time to market for your applications. With network virtualization, you can forget about
spending days or weeks provisioning the infrastructure to support a new application. Apps
can be deployed or updated in minutes for rapid time to value.
How does network virtualization work?
Network virtualization decouples network services from the underlying hardware and allows
virtual provisioning of an entire network. It makes it possible to programmatically create,
provision, and manage networks all in software, while continuing to leverage the underlying
physical network as the packet-forwarding backplane. Physical network resources, such as
switching, routing, firewalling, load balancing, virtual private networks (VPNs), and more,
are pooled, delivered in software, and require only Internet Protocol (IP) packet forwarding
from the underlying physical network.
Network and security services in software are distributed to a virtual layer (hypervisors, in
the data center) and "attached" to individual workloads, such as your virtual machines (VMs)
or containers, in accordance with networking and security policies defined for each connected
application. When a workload is moved to another host, network services and security
policies move with it. And when new workloads are created to scale an application, necessary
policies are dynamically applied to these new workloads, providing greater policy
consistency and network agility.
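The idea that networking and security policy is attached to the workload rather than to a physical port can be illustrated with a small Python sketch. This is purely conceptual; the data structures and names are invented and do not correspond to any vendor product.

```python
# Illustrative only: policy is stored with the workload object,
# so it travels with the VM when it is moved to another host.
workloads = {
    "web-vm": {"host": "host-A", "policy": {"allow": [80, 443], "deny": "all-else"}},
    "db-vm":  {"host": "host-A", "policy": {"allow": [5432], "deny": "all-else"}},
}

def migrate(vm_name, new_host):
    """Move a workload; its policy moves with it because it is keyed to the VM."""
    workloads[vm_name]["host"] = new_host
    return workloads[vm_name]

moved = migrate("web-vm", "host-B")
print(moved["host"], moved["policy"])   # host-B, same allow/deny policy as before
```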
Benefits of network virtualization
Network virtualization helps organizations achieve major advances in speed, agility, and
security by automating and simplifying many of the processes that go into running a data
center network and managing networking and security in the cloud. Here are some of the key
benefits of network virtualization:
 Reduce network provisioning time from weeks to minutes
 Achieve greater operational efficiency by automating manual processes
 Place and move workloads independently of physical topology
 Improve network security within the data center
Network Virtualization Example
One example of network virtualization is virtual LAN (VLAN). A VLAN is a subsection of a
local area network (LAN) created with software that combines network devices into one
group, regardless of physical location. VLANs can improve the speed and performance of
busy networks and simplify changes or additions to the network.
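To see what VLAN membership boils down to on the wire, the sketch below builds an IEEE 802.1Q tag in Python. The TPID value 0x8100 and the 12-bit VLAN ID field are standard; the VLAN number and priority are example values.

```python
import struct

def dot1q_header(vlan_id, priority=0):
    """Build the 4-byte IEEE 802.1Q tag: TPID 0x8100 + PCP/DEI/12-bit VLAN ID."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 0-4094")
    tci = (priority << 13) | vlan_id        # priority bits followed by the VLAN ID
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_header(vlan_id=10, priority=5)
print(tag.hex())   # '8100a00a' -> TPID 8100, priority 5, VLAN 10
```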
Another example is network overlays. There are various overlay technologies. One industry-
standard technology is called virtual extensible local area network (VXLAN). VXLAN
provides a framework for overlaying virtualized layer 2 networks over layer 3 networks,
defining both an encapsulation mechanism and a control plane. Another is generic network
virtualization encapsulation (GENEVE), which takes the same concepts but makes them
more extensible by being flexible to multiple control plane mechanisms.
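The VXLAN encapsulation mentioned above prepends an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI) to the original Layer 2 frame; the result is then carried inside UDP (the IANA-assigned VXLAN port is 4789). A minimal Python sketch of the header layout, with the outer IP/UDP headers omitted for brevity:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags byte with the 'I' bit set + 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24                        # 'I' flag set, reserved bits zero
    return struct.pack("!II", flags, vni << 8)

def encapsulate(inner_frame, vni):
    """A VXLAN packet is: outer IP/UDP (dst port 4789) + VXLAN header + original frame."""
    return vxlan_header(vni) + inner_frame    # outer IP/UDP left out in this sketch

print(vxlan_header(vni=5000).hex())           # '0800000000138800'
```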
VMware NSX Data Center – Network Virtualization Platform
VMware NSX Data Center is a network virtualization platform that delivers networking and
security components like firewalling, switching, and routing that are defined and consumed in
software. NSX takes an architectural approach built on scale-out network virtualization that
delivers consistent, pervasive connectivity and security for apps and data wherever they
reside, independent of underlying physical infrastructure.
Network Virtualization is a process of logically grouping physical networks and making
them operate as single or multiple independent networks called Virtual Networks.
General Architecture Of Network Virtualization
Tools for Network Virtualization:
1. Physical switch OS –
The operating system of the physical switch must itself provide the functionality of
network virtualization.
2. Hypervisor –
The hypervisor provides network virtualization either through its built-in networking
features or through third-party software.
The basic function of the OS is to provide applications and running processes with a simple
set of instructions. System calls generated by the OS and executed through the libc library
are comparable to the service primitives offered at the interface between the application and
the network through the SAP (Service Access Point).
The hypervisor is used to create a virtual switch and to configure virtual networks on it.
Alternatively, third-party software can be installed onto the hypervisor to replace its native
networking functionality. A hypervisor allows several VMs to run optimally on a single piece
of computer hardware.
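For example, on a Linux/KVM host the virtual switch is typically a software bridge. The sketch below drives the standard iproute2 commands from Python; it assumes a Linux machine with root privileges and example interface names (tap0 for a guest's NIC, eth0 for the physical uplink) that will differ on your system.

```python
import subprocess

def run(cmd):
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# Assumes a Linux host with iproute2, root privileges, and an existing tap device
# (e.g. created by KVM/QEMU for a guest). Interface names are examples only.
run(["ip", "link", "add", "name", "br0", "type", "bridge"])   # the virtual switch
run(["ip", "link", "set", "br0", "up"])
run(["ip", "link", "set", "tap0", "master", "br0"])           # plug the VM's NIC in
run(["ip", "link", "set", "eth0", "master", "br0"])           # uplink to the physical NIC
```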
Functions of Network Virtualization :
 It enables the functional grouping of nodes in a virtual network.
 It enables the virtual network to share network resources.
 It allows communication between nodes in a virtual network without routing of frames.
 It restricts management traffic.
 It enforces routing for communication between virtual networks.
Network Virtualization in Virtual Data Center :
1. Physical Network
 Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
 Grants connectivity among physical servers running a hypervisor, between physical
servers and storage systems and between physical servers and clients.
2. VM Network
 Consists of virtual switches.
 Provides connectivity to hypervisor kernel.
 Connects to the physical network.
 Resides inside the physical server.
Network Virtualization In VDC
Advantages of Network Virtualization :
Improves manageability –
 Grouping and regrouping of nodes are eased.
 Configuration of VM is allowed from a centralized management workstation using
management software.
Reduces CAPEX –
 The requirement to set up separate physical networks for different node groups is
reduced.
Improves utilization –
 Multiple VMs are enabled to share the same physical network which enhances the
utilization of network resource.
Enhances performance –
 Network broadcast is restricted and VM performance is improved.
Enhances security –
 Sensitive data is isolated from one VM to another VM.
 Access to nodes is restricted in a VM from another VM.
Disadvantages of Network Virtualization :
 It requires managing IT resources in the abstract rather than as physical devices.
 It must coexist with physical devices in a cloud-integrated hybrid environment.
 Increased complexity.
 Upfront cost.
 Possible learning curve.
Examples of Network Virtualization :
Virtual LAN (VLAN) –
 The performance and speed of busy networks can be improved by VLAN.
 VLAN can simplify additions or any changes to the network.
Network Overlays –
 A framework is provided by an encapsulation protocol called VXLAN for overlaying
virtualized layer 2 networks over layer 3 networks.
 The Generic Network Virtualization Encapsulation (GENEVE) protocol provides a new
approach to encapsulation designed to provide control-plane independence between the
endpoints of the tunnel.
Network Virtualization Platform: VMware NSX –
 VMware NSX Data Center transports the components of networking and security such
as switching, firewalling and routing that are defined and consumed in software.
 It transports the operational model of a virtual machine (VM) for the network.
Applications of Network Virtualization :
 Network virtualization may be used in the development of application testing to mimic
real-world hardware and system software.
 It helps us to integrate several physical networks into a single virtual network or separate
a single physical network into multiple logical networks.
 In the field of application performance engineering, network virtualization allows the
simulation of connections between applications, services, dependencies, and end-users
for software testing.
 It helps us to deploy applications in a quicker time frame, thereby supporting a faster
go-to-market.
 Network virtualization helps the software testing teams to derive actual results with
expected instances and congestion issues in a networked environment.
A VLAN WAN architecture, often associated with "WAN virtualization," is a network design
in which multiple logical networks (VLANs) are created and managed over a single physical
Wide Area Network (WAN) infrastructure. This allows efficient traffic segregation and
management across different locations while utilizing diverse underlying connections, such
as MPLS, broadband, or cellular networks, all presented as one unified network. In essence,
it abstracts the physical network complexity, enabling flexible and scalable network
operations across various sites.
Key points about VLAN WAN architecture and WAN virtualization:
 Logical separation:
VLANs within the WAN architecture create virtual network segments, effectively isolating
traffic between different departments, users, or applications, even though they share the
same physical infrastructure.
 Traffic management:
By defining VLANs, network administrators can prioritize specific traffic types, like voice
over IP (VoIP) or critical data, within the WAN, optimizing network performance.
 Multi-link aggregation:
WAN virtualization allows organizations to combine multiple WAN connections (from
different providers) into a single logical network, enhancing redundancy and reliability.
 Cost efficiency:
By leveraging a shared physical infrastructure, the need for dedicated physical connections
for each network segment is reduced, potentially lowering overall costs.
How it works:
 Switch configuration:
Network switches at each site are configured to recognize and manage VLAN tags, which
identify the specific virtual network a packet belongs to.
 Router configuration:
Routers at the network edge are configured to route traffic based on VLAN information,
ensuring data is directed to the correct destination across the WAN.
Benefits of VLAN WAN architecture:
 Improved security:
Isolating network segments through VLANs helps prevent unauthorized access to sensitive
data between different network sections.
 Scalability:
Easily add new VLANs as network needs evolve without major infrastructure changes.
 Simplified management:
Centralized management of VLANs across multiple locations simplifies network
administration.
Important considerations:
 Network design:
Careful planning is needed to define VLANs and their associated network segments based
on business requirements.
 Device compatibility:
Ensure all network devices (switches, routers) support VLAN functionality and tagging.
WAN Virtualization
In today's fast-paced digital world, seamless connectivity is the key to success for businesses
of all sizes. WAN (Wide Area Network) virtualization has emerged as a game-changing
technology, revolutionizing the way organizations connect their geographically dispersed
branches and remote employees. In this blog post, we will explore the concept of WAN
virtualization, its benefits, implementation considerations, and its potential impact on
businesses.
WAN virtualization is a technology that abstracts the physical network infrastructure,
allowing multiple logical networks to operate independently over a shared physical
infrastructure. It enables organizations to combine various types of connectivity, such as
MPLS, broadband, and cellular, into a single virtual network. By doing so, WAN
virtualization enhances network performance, scalability, and flexibility.
Increased Flexibility and Scalability: WAN virtualization allows businesses to scale their
network resources on-demand, facilitating seamless expansion or contraction based on their
requirements. It provides flexibility to dynamically allocate bandwidth, prioritize critical
applications, and adapt to changing network conditions.
Improved Performance and Reliability: By leveraging intelligent traffic management
techniques and load balancing algorithms, WAN virtualization optimizes network
performance. It intelligently routes traffic across multiple network paths, avoiding congestion
and reducing latency. Additionally, it enables automatic failover and redundancy, ensuring
high network availability.
Simplified Network Management: Traditional WAN architectures often involve complex
configurations and manual provisioning. WAN virtualization simplifies network management
by centralizing control and automating tasks. Administrators can easily set policies, monitor
network performance, and make changes from a single management interface, saving time
and reducing human errors.
Multi-Site Connectivity: For organizations with multiple remote sites, WAN virtualization
offers a cost-effective solution. It enables seamless connectivity between sites, allowing
efficient data transfer, collaboration, and resource sharing. With centralized management,
network administrators can ensure consistent policies and security across all sites.
Cloud Connectivity:
As more businesses adopt cloud-based applications and services, WAN virtualization
becomes an essential component. It provides reliable and secure connectivity between on-
premises infrastructure and public or private cloud environments. By prioritizing critical
cloud traffic and optimizing routing, WAN virtualization ensures optimal performance for
cloud-based applications.
**The Basics of WAN**
A WAN is a telecommunications network that extends over a large geographical area. It is
designed to connect devices and networks across long distances, using various
communication links such as leased lines, satellite links, or the internet. The primary purpose
of a WAN is to facilitate the sharing of resources and information across locations, making it
a vital component of modern business infrastructure. WANs can be either private, connecting
specific networks of an organization, or public, utilizing the internet for broader connectivity.
**The Role of Virtualization in WAN**
Virtualization has revolutionized the way WANs operate, offering enhanced flexibility,
efficiency, and scalability. By decoupling network functions from physical hardware,
virtualization allows for the creation of virtual networks that can be easily managed and
adjusted to meet organizational needs. This approach reduces the dependency on physical
infrastructure, leading to cost savings and improved resource utilization. Virtualized WANs
can dynamically allocate bandwidth, prioritize traffic, and ensure optimal performance,
making them an attractive solution for businesses seeking agility and resilience.
Separating Control and Data Plane:
1: – WAN virtualization can be defined as the abstraction of physical network resources into
virtual entities, allowing for more flexible and efficient network management. By separating
the control plane from the data plane, WAN virtualization enables the centralized
management and orchestration of network resources, regardless of their physical locations.
This simplifies network administration and paves the way for enhanced scalability and
agility.
2: – WAN virtualization optimizes network performance by intelligently routing traffic and
dynamically adjusting network resources based on real-time conditions. This ensures that
critical applications receive the necessary bandwidth and quality of service, resulting in
improved user experience and productivity.
3: – By leveraging WAN virtualization, organizations can reduce their reliance on expensive
dedicated circuits and hardware appliances. Instead, they can leverage existing network
infrastructure and utilize cost-effective internet connections without compromising security
or performance. This significantly lowers operational costs and capital expenditures.
4: – Traditional WAN architectures often struggle to meet modern businesses’ evolving
needs. WAN virtualization solves this challenge by providing a scalable and flexible network
infrastructure. With virtual overlays, organizations can rapidly deploy and scale their network
resources as needed, empowering them to adapt quickly to changing business requirements.
**Implementing WAN Virtualization**
Successful implementation of WAN virtualization requires careful planning and execution.
Start by assessing your current network infrastructure and identifying areas for improvement.
Choose a virtualization solution that aligns with your organization’s specific needs and
budget. Consider leveraging software-defined WAN (SD-WAN) technologies to simplify the
deployment process and enhance overall network performance.
There are several popular techniques for implementing WAN virtualization, each with its
unique characteristics and use cases. Let’s explore a few of them:
a. MPLS (Multi-Protocol Label Switching): MPLS is a widely used technique that
leverages labels to direct network traffic efficiently. It provides reliable and secure
connectivity, making it suitable for businesses requiring stringent service level agreements
(SLAs).
b. SD-WAN (Software-Defined Wide Area Network): SD-WAN is a revolutionary
technology that abstracts and centralizes the network control plane in software. It offers
dynamic path selection, traffic prioritization, and simplified network management, making it
ideal for organizations with multiple branch locations.
c. VPLS (Virtual Private LAN Service): VPLS extends the functionality of Ethernet-based
LANs over a wide area network. It creates a virtual bridge between geographically dispersed
sites, enabling seamless communication as if they were part of the same local network.
Example Technology: MPLS & LDP
**The Mechanics of MPLS: How It Works**
MPLS operates by assigning labels to data packets at the network’s entry point—an MPLS-
enabled router. These labels determine the path the packet will take through the network,
enabling quick and efficient routing. Each router along the path uses the label to make
forwarding decisions, eliminating the need for complex table lookups. This not only
accelerates data transmission but also allows network administrators to predefine optimal
paths for different types of traffic, enhancing network performance and reliability.
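The forwarding behaviour described above can be imitated with a toy label table in Python: the router looks up the incoming label and either swaps it for the next one or pops it at the end of the path. This is only a conceptual sketch of label switching, not router software.

```python
# Toy LFIB (label forwarding table) for a single MPLS router:
# incoming label -> (action, outgoing label, next hop). Values are invented.
lfib = {
    100: ("swap", 210, "R2"),     # transit traffic: rewrite the label and forward
    150: ("pop",  None, "CE-1"),  # end of the LSP: remove the label, deliver as IP
}

def forward(in_label, packet):
    """Forward a labeled packet using the label alone, no IP table lookup."""
    action, out_label, next_hop = lfib[in_label]
    if action == "swap":
        return {"label": out_label, "next_hop": next_hop, "packet": packet}
    return {"label": None, "next_hop": next_hop, "packet": packet}

print(forward(100, "ip-payload"))  # label 210, towards R2
print(forward(150, "ip-payload"))  # label removed, delivered towards CE-1
```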
**Exploring LDP: The Glue of MPLS Systems**
The Label Distribution Protocol (LDP) is crucial for the functioning of MPLS networks. LDP
is responsible for the distribution of labels between routers, ensuring that each understands
how to handle the labeled packets appropriately. When routers communicate using LDP, they
exchange label information, which helps in building a label-switched path (LSP). This
process involves the negotiation of label values and the establishment of the end-to-end path
that data packets will traverse, making LDP the unsung hero that ensures seamless and
effective MPLS operation.
**Benefits of MPLS and LDP in Modern Networks**
MPLS and LDP together offer a range of benefits that make them indispensable in
contemporary networking. They provide a scalable solution that supports a wide array of
services, including VPNs, traffic engineering, and quality of service (QoS). This versatility
makes it easier for network operators to manage and optimize traffic, leading to improved
bandwidth utilization and reduced latency. Additionally, MPLS networks are inherently more
secure, as the label-switching mechanism makes it difficult for unauthorized users to
intercept or tamper with data.
Overcoming Potential Challenges
While WAN virtualization offers numerous benefits, it also presents certain challenges.
Security is a top concern, as virtualized networks can introduce new vulnerabilities. It’s
essential to implement robust security measures, such as encryption and access controls, to
protect your virtualized WAN. Additionally, ensure your IT team is adequately trained to
manage and monitor the virtual network environment effectively.
**Section 1: The Complexity of Network Integration**
One of the primary challenges in WAN virtualization is integrating new virtualized solutions
with existing network infrastructures. This task often involves dealing with legacy systems
that may not easily adapt to virtualized environments. Organizations need to ensure
compatibility and seamless operation across all network components. To address this
complexity, businesses can employ network abstraction techniques and use software-defined
networking (SDN) tools that offer greater control and flexibility, allowing for a smoother
integration process.
**Section 2: Security Concerns in Virtualized Environments**
Security remains a critical concern in any network architecture, and virtualization adds
another layer of complexity. Virtual environments can introduce vulnerabilities if not
properly managed. The key to overcoming these security challenges lies in implementing
robust security protocols and practices. Utilizing encryption, firewalls, and regular security
audits can help safeguard the network. Additionally, leveraging network segmentation and
zero-trust models can significantly enhance the security of virtualized WANs.
**Section 3: Managing Performance and Reliability**
Ensuring consistent performance and reliability in a virtualized WAN is another significant
challenge. Virtualization can sometimes lead to latency and bandwidth issues, affecting the
overall user experience. To mitigate these issues, organizations should focus on traffic
optimization techniques and quality of service (QoS) management. Implementing dynamic
path selection and traffic prioritization can ensure that mission-critical applications receive
the necessary bandwidth and performance, maintaining high levels of reliability across the
network.
**Section 4: Cost Implications and ROI**
While WAN virtualization can lead to cost savings in the long run, the initial investment and
transition can be costly. Organizations must carefully consider the cost implications and
potential return on investment (ROI) when adopting virtualized solutions. Conducting
thorough cost-benefit analyses and pilot testing can provide valuable insights into the
financial viability of virtualization projects. By aligning virtualization strategies with
business goals, companies can maximize ROI and achieve sustainable growth.
WAN Virtualisation & SD-WAN Cloud Hub
SD-WAN Cloud Hub is a cutting-edge networking solution that combines the power of
software-defined wide area networking (SD-WAN) with the scalability and reliability of
cloud services. It acts as a centralized hub, enabling organizations to connect their branch
offices, data centers, and cloud resources in a secure and efficient manner. By leveraging SD-
WAN Cloud Hub, businesses can simplify their network architecture, improve application
performance, and reduce costs.
Google Cloud needs no introduction. With its robust infrastructure, comprehensive suite of
services, and global reach, it has become a preferred choice for businesses across industries.
From compute and storage to AI and analytics, Google Cloud offers a wide range of solutions
that empower organizations to innovate and scale. By integrating SD-WAN Cloud Hub with
Google Cloud, businesses can unlock unparalleled benefits and take their network
connectivity to new heights.
Understanding SD-WAN
SD-WAN is a cutting-edge networking technology that utilizes software-defined principles to
manage and optimize network connections intelligently. Unlike traditional WAN, which
relies on costly and inflexible hardware, SD-WAN leverages software-based solutions to
streamline network management, improve performance, and enhance security.
Key Benefits of SD-WAN
a) Enhanced Performance: SD-WAN intelligently routes traffic across multiple network
paths, ensuring optimal performance and reduced latency. This results in faster data transfers
and improved user experience.
b) Cost Efficiency: With SD-WAN, businesses can leverage affordable broadband
connections rather than relying solely on expensive MPLS (Multiprotocol Label Switching)
links. This not only reduces costs but also enhances network resilience.
c) Simplified Management: SD-WAN centralizes network management through a user-
friendly interface, allowing IT teams to easily configure, monitor, and troubleshoot network
connections. This simplification saves time and resources, enabling IT professionals to focus
on strategic initiatives.
SD-WAN incorporates robust security measures to protect network traffic and sensitive data.
It employs encryption protocols, firewall capabilities, and traffic segmentation techniques to
safeguard against unauthorized access and potential cyber threats. These advanced security
features give businesses peace of mind and ensure data integrity.
WAN Virtualization with Network Connectivity Center
**Understanding Google Network Connectivity Center**
Google Network Connectivity Center (NCC) is a cloud-based service designed to simplify
and centralize network management. By leveraging Google’s extensive global infrastructure,
NCC provides organizations with a unified platform to manage their network connectivity
across various environments, including on-premises data centers, multi-cloud setups, and
hybrid environments.
**Key Features and Benefits**
1. **Centralized Network Management**: NCC offers a single pane of glass for network
administrators to monitor and manage connectivity across different environments. This
centralized approach reduces the complexity associated with managing multiple network
endpoints and enhances operational efficiency.
2. **Enhanced Security**: With NCC, organizations can implement robust security measures
across their network. The service supports advanced encryption protocols and integrates
seamlessly with Google’s security tools, ensuring that data remains secure as it moves
between different environments.
3. **Scalability and Flexibility**: One of the standout features of NCC is its ability to scale
with your organization’s needs. Whether you’re expanding your data center operations or
integrating new cloud services, NCC provides the flexibility to adapt quickly and efficiently.
**Optimizing Data Center Operations**
Data centers are the backbone of modern digital infrastructure, and optimizing their
operations is crucial for any organization. NCC facilitates this by offering tools that enhance
data center connectivity and performance. For instance, with NCC, you can easily set up and
manage VPNs, interconnect data centers across different regions, and ensure high availability
and redundancy.
**Seamless Integration with Other Google Services**
NCC isn’t just a standalone service; it integrates seamlessly with other Google Cloud services
such as Cloud Interconnect, Cloud VPN, and Google Cloud Armor. This integration allows
organizations to build comprehensive network solutions that leverage the best of Google’s
cloud offerings. Whether it’s enhancing security, improving performance, or ensuring
compliance, NCC works in tandem with other services to deliver a cohesive and powerful
network management solution.
Understanding Network Tiers
Google Cloud offers two distinct Network Tiers: Premium Tier and Standard Tier. Each tier
is designed to cater to specific use cases and requirements. The Premium Tier provides users
with unparalleled performance, low latency, and high availability. On the other hand, the
Standard Tier offers a more cost-effective solution without compromising on reliability.
The Premium Tier, powered by Google’s global fiber network, ensures lightning-fast
connectivity and optimal performance for critical workloads. With its vast network of points
of presence (PoPs), it minimizes latency and enables seamless data transfers across regions.
By leveraging the Premium Tier, businesses can ensure superior user experiences and support
demanding applications that require real-time data processing.
While the Premium Tier delivers exceptional performance, the Standard Tier presents an
attractive option for cost-conscious organizations. By utilizing Google Cloud’s extensive
network peering relationships, the Standard Tier offers reliable connectivity at a reduced
cost. It is an ideal choice for workloads that are less latency-sensitive or require moderate
bandwidth.
What is VPC Networking?
VPC networking refers to the virtual network environment that allows you to securely
connect your resources running in the cloud. It provides isolation, control, and flexibility,
enabling you to define custom network configurations to suit your specific needs. In Google
Cloud, VPC networking is a fundamental building block for your cloud infrastructure.
Google Cloud VPC networking offers a range of powerful features that enhance your
network management capabilities. These include subnetting, firewall rules, route tables, VPN
connectivity, and load balancing. Let’s explore each of these features in more detail:
Subnetting: With VPC subnetting, you can divide your IP address range into smaller
subnets, allowing for better resource allocation and network segmentation.
Firewall Rules: Google Cloud VPC networking provides robust firewall rules that enable
you to control inbound and outbound traffic, ensuring enhanced security for your applications
and data.
Route Tables: Route tables in VPC networking allow you to define the routing logic for your
network traffic, ensuring efficient communication between different subnets and external
networks.
VPN Connectivity: Google Cloud supports VPN connectivity, allowing you to establish
secure connections between your on-premises network and your cloud resources, creating a
hybrid infrastructure.
Load Balancing: VPC networking offers load balancing capabilities, distributing incoming
traffic across multiple instances, increasing availability and scalability of your applications.
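To make the subnetting feature described above concrete, Python's standard ipaddress module can carve a VPC address range into smaller subnets. The CIDR blocks below are arbitrary example values, not ranges from any real deployment.

```python
import ipaddress

vpc_range = ipaddress.ip_network("10.10.0.0/16")      # example VPC CIDR block

# Carve the VPC range into /24 subnets, e.g. one per tier or per region.
subnets = list(vpc_range.subnets(new_prefix=24))
print(subnets[0], subnets[1], len(subnets))           # 10.10.0.0/24 10.10.1.0/24 256

# Simple membership check, the kind of test a firewall rule or route lookup performs.
print(ipaddress.ip_address("10.10.1.25") in subnets[1])   # True
```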
Example: DMVPN (Dynamic Multipoint VPN)
Separating control from the data plane
DMVPN is a Cisco-developed solution that combines the benefits of multipoint GRE tunnels,
IPsec encryption, and dynamic routing protocols to create a flexible and efficient virtual
private network. It simplifies network architecture, reduces operational costs, and enhances
scalability. With DMVPN, organizations can connect remote sites, branch offices, and mobile
users seamlessly, creating a cohesive network infrastructure.
The underlay infrastructure forms the foundation of DMVPN. It refers to the physical
network that connects the different sites or locations. This could be an existing Wide Area
Network (WAN) infrastructure, such as MPLS, or the public Internet. The underlay provides
the transport for the overlay network, enabling the secure transmission of data packets
between sites.
The overlay network is the virtual network created on top of the underlay infrastructure. It is
responsible for establishing the secure tunnels and routing between the connected sites.
DMVPN uses multipoint GRE tunnels to allow dynamic and direct communication between
sites, eliminating the need for a hub-and-spoke topology. IPsec encryption ensures the
confidentiality and integrity of data transmitted over the overlay network.
Example WAN Technology: Tunneling IPv6 over IPv4
IPv6 tunneling is a technique that allows the transmission of IPv6 packets over an IPv4
network infrastructure. It enables communication between IPv6 networks by encapsulating
IPv6 packets within IPv4 packets. By doing so, organizations can utilize existing IPv4
infrastructure while transitioning to IPv6. Before delving into its various implementations,
understanding the basics of IPv6 tunneling is crucial.
Types of IPv6 Tunneling
There are several types of IPv6 tunneling techniques, each with its advantages and
considerations. Let’s explore a few popular types:
Manual Tunneling: Manual tunneling is a simple method of configuring tunnel endpoints. It
requires manually configuring tunnel interfaces on each participating device. While it
provides flexibility and control, this approach can be time-consuming and prone to human
error.
Automatic Tunneling: Automatic tunneling, also known as 6to4 tunneling, allows for the
automatic creation of tunnels without manual configuration. It utilizes the 6to4 addressing
scheme, where IPv6 packets are encapsulated within IPv4 packets using protocol 41. While
convenient, automatic tunneling may encounter issues with address translation and
compatibility.
Teredo Tunneling: Teredo tunneling is another automatic technique that enables IPv6
connectivity for hosts behind IPv4 Network Address Translation (NAT) devices. It uses UDP
encapsulation to carry IPv6 packets over IPv4 networks. Though widely supported, Teredo
tunneling may suffer from performance limitations due to its reliance on UDP.
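The 6to4 scheme mentioned above derives a site's IPv6 prefix directly from its public IPv4 address: the prefix 2002::/16 is followed by the 32-bit IPv4 address, and the resulting IPv6 packets are carried inside IPv4 using protocol number 41. A short, self-contained Python illustration with an example documentation address:

```python
import ipaddress

def sixto4_prefix(public_ipv4):
    """Derive the 2002::/48 6to4 prefix that embeds the site's public IPv4 address."""
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    prefix = (0x2002 << 112) | (v4 << 80)      # 2002:AABB:CCDD::/48 layout
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("192.0.2.1"))   # 2002:c000:201::/48
```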
WAN Virtualization Technologies
Understanding VRFs
VRFs, in simple terms, allow the creation of multiple virtual routing tables within a single
physical router or switch. Each VRF operates as an independent routing instance with its
routing table, interfaces, and forwarding decisions. This powerful concept allows for logical
separation of network traffic, enabling enhanced security, scalability, and efficiency.
One of VRFs’ primary advantages is network segmentation. By creating separate VRF
instances, organizations can effectively isolate different parts of their network, ensuring
traffic from one VRF cannot directly communicate with another. This segmentation enhances
network security and provides granular control over network resources.
Furthermore, VRFs enable efficient use of network resources. By utilizing VRFs,
organizations can optimize their routing decisions, ensuring that traffic is forwarded through
the most appropriate path based on the specific requirements of each VRF. This dynamic
routing capability leads to improved network performance and better resource utilization.
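A VRF is essentially a separate routing table selected per customer or per interface. The toy Python sketch below keeps one longest-prefix-match table per VRF to show the isolation; the VRF names, prefixes, and next hops are invented for illustration.

```python
import ipaddress

# One routing table per VRF; overlapping prefixes are fine because
# a lookup never crosses from one VRF into another.
vrf_tables = {
    "CUSTOMER-A": {"10.0.0.0/8": "pe1-to-custA", "0.0.0.0/0": "custA-internet"},
    "CUSTOMER-B": {"10.0.0.0/8": "pe1-to-custB"},
}

def lookup(vrf, dst_ip):
    """Longest-prefix match within a single VRF's table only."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, next_hop in vrf_tables[vrf].items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None

print(lookup("CUSTOMER-A", "10.1.2.3"))   # pe1-to-custA
print(lookup("CUSTOMER-B", "10.1.2.3"))   # pe1-to-custB (same prefix, different VRF)
```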
Use Cases for VRFs
VRFs are widely used in various networking scenarios. One common use case is in service
provider networks, where VRFs separate customer traffic, allowing multiple customers to
share a single physical infrastructure while maintaining isolation. This approach brings cost
savings and scalability benefits.
Another use case for VRFs is in enterprise networks with strict security requirements. By
leveraging VRFs, organizations can segregate sensitive data traffic from the rest of the
network, reducing the risk of unauthorized access and potential data breaches.
Example WAN technology: Cisco PfR
Cisco PfR is an intelligent routing solution that utilizes real-time performance metrics to
make dynamic routing decisions. By continuously monitoring network conditions, such as
latency, jitter, and packet loss, PfR can intelligently reroute traffic to optimize performance.
Unlike traditional static routing protocols, PfR adapts to network changes on the fly, ensuring
optimal utilization of available resources.
Key Features of Cisco PfR
a. Performance Monitoring: PfR continuously collects performance data from various
sources, including routers, probes, and end-user devices. This data provides valuable insights
into network behavior and helps identify areas of improvement.
b. Intelligent Traffic Engineering: With its advanced algorithms, Cisco PfR can
dynamically select the best path for traffic based on predefined policies and performance
metrics. This enables efficient utilization of available network resources and minimizes
congestion.
c. Application Visibility and Control: PfR offers deep visibility into application-level
performance, allowing network administrators to prioritize critical applications and allocate
resources accordingly. This ensures optimal performance for business-critical applications
and improves overall user experience.
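The dynamic path selection idea can be sketched as a simple policy check over measured metrics. This is not Cisco PfR's actual algorithm, only an illustration of choosing the best WAN path from real-time measurements; the thresholds and figures below are invented.

```python
# Invented measurements for two WAN paths (latency in ms, jitter in ms, loss fraction).
paths = {
    "mpls":      {"latency": 38, "jitter": 3, "loss": 0.001},
    "broadband": {"latency": 55, "jitter": 9, "loss": 0.004},
}

VOICE_POLICY = {"latency": 150, "jitter": 30, "loss": 0.01}   # example thresholds

def best_path(policy):
    """Keep only paths that meet the policy, then prefer the lowest latency."""
    compliant = {name: m for name, m in paths.items()
                 if m["latency"] <= policy["latency"]
                 and m["jitter"] <= policy["jitter"]
                 and m["loss"] <= policy["loss"]}
    if not compliant:
        return None                    # nothing meets the SLA; fall back to default routing
    return min(compliant, key=lambda n: compliant[n]["latency"])

print(best_path(VOICE_POLICY))         # 'mpls'
```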
Example WAN Technology: Network Overlay
Virtual network overlays serve as a layer of abstraction, enabling the creation of multiple
virtual networks on top of a physical network infrastructure. By encapsulating network traffic
within virtual tunnels, overlays provide isolation, scalability, and flexibility, empowering
organizations to manage their networks efficiently.
Underneath the surface, virtual network overlays rely on encapsulation protocols such as
Virtual Extensible LAN (VXLAN) or Generic Routing Encapsulation (GRE). These
protocols enable the creation of virtual tunnels, allowing network packets to traverse the
physical infrastructure while remaining isolated within their respective virtual networks.
**What is GRE?**
At its core, Generic Routing Encapsulation is a tunneling protocol that allows the
encapsulation of different network layer protocols within IP packets. It acts as an envelope,
carrying packets from one network to another across an intermediate network. GRE provides
a flexible and scalable solution for connecting disparate networks, facilitating seamless
communication.
GRE encapsulates the original packet, often called the payload, within a new IP packet. This
encapsulated packet is then sent to the destination network, where it is decapsulated to
retrieve the original payload. By adding an IP header, GRE enables the transportation of
various protocols across different network infrastructures, including IPv4, IPv6, IPX, and
MPLS.
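To illustrate the envelope analogy, the basic 4-byte GRE header carries a flags/version field and the EtherType of the passenger protocol (0x0800 for IPv4, 0x86DD for IPv6). A minimal Python sketch of GRE encapsulation; the outer IPv4 header (which would carry IP protocol number 47) is omitted for brevity, and the payload shown is only a placeholder.

```python
import struct

def gre_encapsulate(payload, protocol_type=0x0800):
    """Prepend a basic GRE header (no checksum/key/sequence options) to the payload."""
    flags_and_version = 0x0000                 # all optional fields absent, version 0
    header = struct.pack("!HH", flags_and_version, protocol_type)
    return header + payload

inner_ipv4_packet = b"\x45\x00"                # placeholder bytes, not a full packet
gre_packet = gre_encapsulate(inner_ipv4_packet)
print(gre_packet[:4].hex())                    # '00000800' -> basic GRE, payload is IPv4
```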
**Introducing IPSec Services**
IPSec, short for Internet Protocol Security, is a suite of protocols that provides security
services at the IP network layer. It offers data integrity, confidentiality, and authentication
features, ensuring that data transmitted over IP networks remains protected from
unauthorized access and tampering. IPSec operates in two modes: Transport Mode and
Tunnel Mode.
**Combining GRE & IPSec**
By combining GRE and IPSec, organizations can create secure and private communication
channels over public networks. GRE provides the tunneling mechanism, while IPSec adds an
extra layer of security by encrypting and authenticating the encapsulated packets. This
combination allows for the secure transmission of sensitive data, remote access to private
networks, and the establishment of virtual private networks (VPNs).
The combination of GRE and IPSec offers several advantages. First, it enables the creation of
secure VPNs, allowing remote users to connect securely to private networks over public
infrastructure. Second, it protects against eavesdropping and data tampering, ensuring the
confidentiality and integrity of transmitted data. Lastly, GRE and IPSec are vendor-neutral
protocols widely supported by various network equipment, making them accessible and
compatible.
What is MPLS?
MPLS, short for Multi-Protocol Label Switching, is a versatile and scalable protocol used in
modern networks. At its core, MPLS assigns labels to network packets, allowing for efficient
and flexible routing. These labels help streamline traffic flow, leading to improved
performance and reliability. To understand how MPLS works, we need to explore its key
components.
The basic building block is the Label Switched Path (LSP), a predetermined path that packets
follow. Labels are attached to packets at the ingress router, guiding them along the LSP until
they reach their destination. This label-based forwarding mechanism enables MPLS to offer
traffic engineering capabilities and support various network services.
Understanding Label Distributed Protocols
Label distributed protocols, or LDP, are fundamental to modern networking technologies.
They are designed to establish and maintain label-switched paths (LSPs) in a network. LDP
operates by distributing labels, which are used to identify and forward network traffic
efficiently. By leveraging labels, LDP enhances network scalability and enables faster packet
forwarding.
One key advantage of label-distributed protocols is their ability to support multiprotocol label
switching (MPLS). MPLS allows for efficient routing of different types of network traffic,
including IP, Ethernet, and ATM. This versatility makes label-distributed protocols highly
adaptable and suitable for diverse network environments. Additionally, LDP minimizes
network congestion, improves Quality of Service (QoS), and promotes effective resource
utilization.
What is MPLS LDP?
MPLS LDP, or Label Distribution Protocol, is a key component of Multiprotocol Label
Switching (MPLS) technology. It facilitates the establishment of label-switched paths (LSPs)
through the network, enabling efficient forwarding of data packets. MPLS LDP uses labels to
direct network traffic along predetermined paths, eliminating the need for complex routing
table lookups.
One of MPLS LDP’s primary advantages is its ability to enhance network performance. By
utilizing labels, MPLS LDP reduces the time and resources required for packet forwarding,
resulting in faster data transmission and reduced network congestion. Additionally, MPLS
LDP allows for traffic engineering, enabling network administrators to prioritize certain types
of traffic and allocate bandwidth accordingly.
Understanding MPLS VPNs
MPLS VPNs, or Multiprotocol Label Switching Virtual Private Networks, are a network
infrastructure that allows multiple sites or branches of an organization to communicate
securely over a shared service provider network. Unlike traditional VPNs, MPLS VPNs
utilize labels to efficiently route and prioritize data packets, ensuring optimal performance
and security. By encapsulating data within labels, MPLS VPNs enable seamless
communication between different sites while maintaining privacy and segregation.
Understanding VPLS
VPLS, short for Virtual Private LAN Service, is a technology that enables the creation of a
virtual LAN (Local Area Network) over a shared or public network infrastructure. It allows
geographically dispersed sites to connect as if they are part of the same LAN, regardless of
their physical distance. This technology uses MPLS (Multiprotocol Label Switching) to
transport Ethernet frames across the network efficiently.
Key Features and Benefits
Scalability and Flexibility: VPLS offers scalability, allowing businesses to easily expand
their network as their requirements grow. It allows adding or removing sites without
disrupting the overall network, making it an ideal choice for organizations with dynamic
needs.
Seamless Connectivity: By extending the LAN across different locations, VPLS provides a
seamless and transparent network experience. Employees can access shared resources, such
as files and applications, as if they were all in the same office, promoting collaboration and
productivity across geographically dispersed teams.
Enhanced Security: VPLS ensures a high level of security by isolating each customer’s
traffic within their own virtual LAN. The data is encapsulated and encrypted, protecting it
from unauthorized access. This makes VPLS a reliable solution for organizations that handle
sensitive information and must comply with strict security regulations.
Advanced WAN Designs
DMVPN Phase 2 Spoke to Spoke Tunnels
Learning the mapping information required through NHRP resolution creates a dynamic
spoke-to-spoke tunnel. How does a spoke know how to perform such a task? As an
enhancement to DMVPN Phase 1, spoke-to-spoke tunnels were first introduced in Phase 2 of
the network. Phase 2 handed responsibility for NHRP resolution requests to each spoke
individually, which means that spokes initiated NHRP resolution requests when they
determined a packet needed a spoke-to-spoke tunnel. Cisco Express Forwarding (CEF) would
assist the spoke in making this decision based on information contained in its routing table.
Exploring Single Hub Dual Cloud Architecture
– Single Hub Dual Cloud is a specific deployment model within the DMVPN framework that
provides enhanced redundancy and improved performance. This architecture connects a
single hub device to two separate cloud service providers, creating two independent VPN
clouds. This setup offers numerous advantages, including increased availability, load
balancing, and optimized traffic routing.
– One key benefit of the Single Hub Dual Cloud approach is improved network resiliency.
With two independent clouds, businesses can ensure uninterrupted connectivity even if one
cloud or service provider experiences issues. This redundancy minimizes downtime and helps
maintain business continuity. This architecture’s load-balancing capabilities also enable
efficient traffic distribution, reducing congestion and enhancing overall network
performance.
– Implementing DMVPN Single Hub Dual Cloud requires careful planning and
configuration. Organizations must assess their needs, evaluate suitable cloud service
providers, and design a robust network architecture. Working with experienced network
engineers and leveraging automation tools can streamline deployment and ensure successful
implementation.
WAN Services
Network Address Translation:
In simple terms, NAT is a technique for modifying IP addresses while packets traverse from
one network to another. It bridges private local networks and the public Internet, allowing
multiple devices to share a single public IP address. By translating IP addresses, NAT
enables private networks to communicate with external networks without exposing their
internal structure.
Types of Network Address Translation
There are several types of NAT, each serving a specific purpose. Let’s explore a few
common ones:
Static NAT: Static NAT, also known as one-to-one NAT, maps a private IP address to a
public IP address. It is often used when specific devices on a network require direct access to
the internet. With static NAT, inbound and outbound traffic can be routed seamlessly.
Dynamic NAT: On the other hand, Dynamic NAT allows a pool of public IP addresses to be
shared among several devices within a private network. As devices connect to the internet,
they are assigned an available public IP address from the pool. Dynamic NAT facilitates
efficient utilization of public IP addresses while maintaining network security.
Port Address Translation (PAT): PAT, also called NAT Overload, is an extension of
dynamic NAT. Rather than assigning a unique public IP address to each device, PAT assigns
a unique port number to each connection. PAT allows multiple devices to share a single
public IP address by keeping track of port numbers. This technique is widely used in home
networks and small businesses.
NAT plays a crucial role in enhancing network security. By hiding devices’ internal IP
addresses, it acts as a barrier against potential attacks from the Internet. External threats find
it harder to identify and target individual devices within a private network. NAT acts as a
shield, providing additional security to the network infrastructure.
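To make PAT concrete, here is a small Python sketch of a NAT overload translation table; the addresses, port numbers, and function name are hypothetical, and the logic is deliberately simplified (no timeouts, no inbound translation).

# Simplified PAT (NAT overload) table: many private sockets share one public IP,
# distinguished by unique public-side ports. Values are hypothetical.
import itertools

PUBLIC_IP = "203.0.113.10"
next_port = itertools.count(20000)      # next free public-side port
translations = {}                       # (private_ip, private_port) -> public port

def translate_outbound(private_ip, private_port):
    key = (private_ip, private_port)
    if key not in translations:
        translations[key] = next(next_port)
    return PUBLIC_IP, translations[key]

print(translate_outbound("192.168.1.10", 51000))   # ('203.0.113.10', 20000)
print(translate_outbound("192.168.1.11", 51000))   # same public IP, different port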

PBR At the WAN Edge


Understanding Policy-Based Routing
Policy-based Routing (PBR) allows network administrators to control the path of network
traffic based on specific policies or criteria. Unlike traditional routing protocols, PBR offers a
more granular and flexible approach to directing network traffic, enabling fine-grained
control over routing decisions.
PBR offers many features and functionalities that empower network administrators to
optimize network traffic flow. Some key aspects include:
1. Traffic Classification: PBR allows the classification of network traffic based on various
attributes such as source IP, destination IP, protocol, port numbers, or even specific packet
attributes. This flexibility enables administrators to create customized policies tailored to
their network requirements.
2. Routing Decision Control: With PBR, administrators can define specific routing
decisions for classified traffic. Traffic matching certain criteria can be directed towards a
specific next-hop or exit interface, bypassing the regular routing table.
3. Load Balancing and Traffic Engineering: PBR can distribute traffic across multiple
paths, leveraging load balancing techniques. By intelligently distributing traffic,
administrators can optimize resource utilization and enhance network performance.
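The simplified Python sketch below illustrates the PBR idea: an ordered policy list is consulted before falling back to the normal destination-based routing decision. The networks, ports, and next hops are hypothetical; on a real router this is expressed with route maps and access lists rather than code.

# Toy policy-based routing decision: policies are checked in order; if none
# match, the packet follows the default (routing-table) next hop. Hypothetical values.
policies = [
    {"src_net": "10.1.10.", "dst_port": 443,  "next_hop": "192.0.2.1"},   # web via ISP-A
    {"src_net": "10.1.20.", "dst_port": None, "next_hop": "192.0.2.5"},   # guest via ISP-B
]

def pbr_next_hop(src_ip, dst_port, default_next_hop="192.0.2.9"):
    for rule in policies:
        if src_ip.startswith(rule["src_net"]) and rule["dst_port"] in (None, dst_port):
            return rule["next_hop"]     # policy wins over the routing table
    return default_next_hop             # regular destination-based routing

print(pbr_next_hop("10.1.10.25", 443))  # 192.0.2.1
print(pbr_next_hop("10.1.30.7", 80))    # 192.0.2.9 (no policy matched)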

Performance at the WAN Edge


Understanding TCP MSS
TCP MSS refers to the maximum amount of data encapsulated in a single TCP segment. It
determines the payload size within each TCP packet, excluding the TCP/IP headers. By
limiting the MSS, TCP ensures that data is transmitted in manageable chunks, preventing
fragmentation and improving overall network performance.
Several factors influence the determination of TCP MSS. One crucial aspect is the underlying
network’s Maximum Transmission Unit (MTU). The MTU represents the largest packet size
transmitted over a network without fragmentation. TCP MSS is typically set to match the
MTU to avoid packet fragmentation and subsequent retransmissions.
By appropriately configuring TCP MSS, network administrators can optimize network
performance. Matching the TCP MSS to the MTU size reduces the chances of packet
fragmentation, which can lead to delays and retransmissions. Moreover, a properly sized TCP
MSS can prevent unnecessary overhead and improve bandwidth utilization.
Adjusting the TCP MSS to suit specific network requirements is possible. Network
administrators can configure the TCP MSS value on routers, firewalls, and end devices. This
flexibility allows for fine-tuning network performance based on the specific characteristics
and constraints of the network infrastructure.
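A common rule of thumb is MSS = MTU minus the IPv4 and TCP headers (20 bytes each when no options are used). The short Python sketch below simply applies that arithmetic; the header sizes are assumptions, and real stacks may advertise different values when IP or TCP options are present.

# MSS derived from MTU: MSS = MTU - IP header - TCP header (20 + 20 assumed).
def tcp_mss(mtu, ip_header=20, tcp_header=20):
    return mtu - ip_header - tcp_header

print(tcp_mss(1500))   # 1460 bytes on standard Ethernet
print(tcp_mss(1400))   # e.g. a tunnel path with reduced MTU -> 1360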

WAN – The desired benefits


Businesses often want to replace or augment premium bandwidth services and switch from
active/standby to active/active WAN transport models. This will reduce their costs. The
challenge, however, is that augmentation can increase operational complexity. Creating a
consistent operational model and simplifying IT requires businesses to avoid complexity.
The importance of maintaining remote site uptime for business continuity goes beyond
simply preventing blackouts. Latency, jitter, and loss can degrade critical applications to the
point where they are effectively unusable even though the link is still up. The term
"brownout" refers to these situations. Businesses today are focused on providing a consistent,
high-quality application experience.
Ensuring connectivity
To ensure connectivity and retain the ability to make changes, there is a shift towards retaking
control of the WAN. That control extends beyond routing and quality of service to include
application experience and availability. Many businesses are still unfamiliar with operating an
Internet edge at remote sites. With this support, Software as a Service (SaaS) and productivity
applications can be rolled out more effectively.
Better access to Infrastructure as a Service (IaaS) is also necessary. Branches with direct
Internet connectivity can also offload guest traffic locally, and many businesses are interested
in doing so, because handling this traffic at the branch is more efficient than backhauling it
through a centralized data center and wasting WAN bandwidth.
The shift to application-centric architecture
Business requirements are changing rapidly, and today's networks cannot cope. Hardware-
centric networks are traditionally more expensive and have a fixed capacity. In
addition, the box-by-box configuration approach, siloed management tools, and lack of
automated provisioning make them more challenging to support.
They are inflexible, static, expensive, and difficult to maintain due to conflicting policies
between domains and different configurations between services. As a result, security
vulnerabilities and misconfigurations are more likely to occur. An application- or service-
centric architecture focusing on simplicity and user experience should replace a connectivity-
centric architecture.
Understanding Virtualization
Virtualization is a technology that allows the creation of virtual versions of various IT
resources, such as servers, networks, and storage devices. These virtual resources operate
independently from physical hardware, enabling multiple operating systems and applications
to run simultaneously on a single physical machine. Virtualization opens possibilities by
breaking the traditional one-to-one relationship between hardware and software. Now,
virtualization has moved to the WAN.
WAN Virtualization and SD-WAN
Organizations constantly seek innovative solutions in modern networking to enhance their
network infrastructure and optimize connectivity. One such solution that has gained
significant attention is WAN virtualization. In this blog post, we will delve into the concept
of WAN virtualization, its benefits, and how it revolutionizes how businesses connect and
communicate.
WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that
enables organizations to abstract their wide area network (WAN) connections from the
underlying physical infrastructure. It leverages software-defined networking (SDN)
principles to decouple network control and data forwarding, providing a more flexible,
scalable, and efficient network solution.
VPN and SDN Components
WAN virtualization is an essential technology in the modern business world. It creates
virtualized versions of wide area networks (WANs) – networks spanning a wide geographic
area. The virtualized WANs can then manage and secure a company’s data, applications, and
services.
Regarding implementation, WAN virtualization requires using a virtual private network
(VPN), a secure private network accessible only by authorized personnel. This ensures that
only those with proper credentials can access the data. WAN virtualization also requires
software-defined networking (SDN) to manage the network and its components.
WAN Virtualization
Knowledge Check: Application-Aware Routing (AAR)
Understanding Application-Aware Routing (AAR)
Application-aware routing is a sophisticated networking technique that goes beyond
traditional packet-based routing. It considers the unique requirements of different
applications, such as video streaming, cloud-based services, or real-time communication, and
optimizes the network path accordingly. It ensures smooth and efficient data transmission by
prioritizing and steering traffic based on application characteristics.
Benefits of Application-Aware Routing
1- Enhanced Performance: Application-aware routing significantly improves overall
performance by dynamically allocating network resources to applications with high
bandwidth or low latency requirements. This translates into faster downloads, seamless video
streaming, and reduced response times for critical applications.
2- Increased Reliability: Traditional routing methods treat all traffic equally, often resulting
in congestion and potential bottlenecks. Application Aware Routing intelligently distributes
network traffic, avoiding congested paths and ensuring a reliable and consistent user
experience. In the event of network failure or congestion, it can dynamically reroute traffic to alternative
paths, minimizing downtime and disruptions.
Implementation Strategies
1- Deep Packet Inspection: A key component of Application-Aware Routing is deep packet
inspection (DPI), which analyzes the content of network packets to identify specific
applications. DPI enables routers and switches to make informed decisions about handling
each packet based on its application, ensuring optimal routing and resource allocation.
2- Quality of Service (QoS) Configuration: Implementing QoS parameters alongside
Application Aware Routing allows network administrators to allocate bandwidth, prioritize
specific applications over others, and enforce policies to ensure the best possible user
experience. QoS configurations can be customized based on organizational needs and
application requirements.
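The simplified Python sketch below shows the core idea of application-aware path selection: compare measured path metrics against a per-application SLA and choose the first path that satisfies it. The paths, metrics, and SLA values are hypothetical, and production implementations are considerably more sophisticated.

# Pick the first WAN path whose measured latency and loss meet the app's SLA.
# All numbers are hypothetical.
paths = {
    "mpls":     {"latency_ms": 40,  "loss_pct": 0.1},
    "internet": {"latency_ms": 70,  "loss_pct": 0.8},
    "lte":      {"latency_ms": 120, "loss_pct": 1.5},
}
app_sla = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0},
    "bulk":  {"latency_ms": 400, "loss_pct": 5.0},
}

def pick_path(app):
    sla = app_sla[app]
    for name, metrics in paths.items():
        if metrics["latency_ms"] <= sla["latency_ms"] and metrics["loss_pct"] <= sla["loss_pct"]:
            return name
    return "mpls"   # fall back to the most reliable transport

print(pick_path("voice"))   # mpls (first path meeting the voice SLA)
print(pick_path("bulk"))    # mpls (bulk SLA is easily met)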
Future Possibilities
As the digital landscape continues to evolve, the potential for Application-Aware Routing is
boundless. With emerging technologies like the Internet of Things (IoT) and 5G networks,
the ability to intelligently route traffic based on specific application needs will become even
more critical. Application-aware routing has the potential to optimize resource utilization,
enhance security, and support the seamless integration of diverse applications and services.

WAN Challenges
Deploying and managing the Wide Area Network (WAN) has become more challenging.
Engineers face several design challenges, such as traffic flow decentralizing, inefficient
WAN link utilization, routing protocol convergence, and application performance issues with
active-active WAN edge designs. Active-active WAN designs that spray and pray over
multiple active links present technical and business challenges.
To do this efficiently, you have to understand application flows. There may also be
performance problems. When packets reach the other end, there may be out-of-order packets
as each link propagates at different speeds. The packets then have to be reassembled and put
back in order at the remote end, causing jitter and delay. Both high jitter and delay are bad for network
performance.

Diagram: What is WAN virtualization? Source is LinkedIn.
Knowledge Check: Control and Data Plane
Understanding the Control Plane
The control plane can be likened to a network’s brain. It is responsible for making high-level
decisions and managing network-wide operations. From routing protocols to network
management systems, the control plane ensures data is directed along the most optimal paths.
By analyzing network topology, the control plane determines the best routes to reach a
destination and establishes the necessary rules for data transmission.
Unveiling the Data Plane
In contrast to the control plane, the data plane focuses on the actual movement of data
packets within the network. It can be thought of as the hands and feet executing the control
plane’s instructions. The data plane handles packet forwarding, traffic classification, and
Quality of Service (QoS) enforcement tasks. It ensures that data packets are correctly
encapsulated, forwarded to their intended destinations, and delivered with the necessary
priority and reliability.
Use Cases and Deployment Scenarios
Distributed Enterprises:
For organizations with multiple branch locations, WAN virtualization offers a cost-effective
solution for connecting remote sites to the central network. It allows for secure and efficient
data transfer between branches, enabling seamless collaboration and resource sharing.
Cloud Connectivity:
WAN virtualization is ideal for enterprises adopting cloud-based services. It provides a
secure and optimized connection to public and private cloud environments, ensuring reliable
access to critical applications and data hosted in the cloud.
Disaster Recovery and Business Continuity:
WAN virtualization plays a vital role in disaster recovery strategies. Organizations can ensure
business continuity during a natural disaster or system failure by replicating data and
applications across geographically dispersed sites.
Challenges and Considerations:
Implementing WAN virtualization requires careful planning and consideration. Factors such
as network security, bandwidth requirements, and compatibility with existing infrastructure
need to be evaluated. It is essential to choose a solution that aligns with the specific needs and
goals of the organization.
SD-WAN vs. DMVPN
Two popular WAN solutions are DMVPN and SD-WAN.
DMVPN (Dynamic Multipoint Virtual Private Network) and SD-WAN (Software-Defined
Wide Area Network) are popular solutions to improve connectivity between distributed
branch offices. DMVPN is a Cisco-specific solution, and SD-WAN is a software-based
solution that can be used with any router. Both solutions provide several advantages, but
there are some differences between them.
DMVPN is a secure, cost-effective, and scalable network solution that combines underlying
technologies and DMVPN phases (for example, the traditional DMVPN Phase 1) to
connect multiple sites. It allows the customer to use existing infrastructure and provides easy
deployment and management. This solution is an excellent choice for businesses with many
branch offices because it allows for secure communication and the ability to deploy new sites
quickly.
SD-WAN is a software-based solution that is gaining popularity in the enterprise market. It
provides improved application performance, security, and network reliability. SD-WAN is an
excellent choice for businesses that require high-performance applications across multiple
sites. It provides an easy-to-use centralized management console that allows companies to
deploy new sites and manage the network quickly.

Diagram: Example with DMVPN. Source is Cisco


Guide: DMVPN operating over the WAN
The following shows DMVPN operating over the WAN. The SP node represents the WAN
network. Then we have R11 as the hub and R2 and R3 as the spokes. Several protocols make the
DMVPN network over the WAN possible. We have GRE; in this case, the tunnel destination
is specified explicitly, so the spokes use point-to-point GRE tunnels rather than an mGRE tunnel.
Then we have NHRP, which is used to create the required mappings because this is a nonbroadcast
network and we cannot use ARP. So, we need to configure the next-hop server manually on the
spokes with the command: ip nhrp nhs 192.168.100.11

Diagram: DMVPN Configuration.


Shift from network-centric to business intent.
The core of WAN virtualization involves shifting focus from a network-centric model to a
business intent-based WAN network. So, instead of designing the WAN for the network, we
can create the WAN for the application. This way, the WAN architecture can simplify
application deployment and management.
First, however, the mindset must shift from a network topology focus to an application
services topology. A new application style consumes vast bandwidth and is very susceptible
to variations in bandwidth quality. Things such as jitter, loss, and delay impact most
applications, which makes it essential to improve the WAN environment for these
applications.

Diagram: WAN virtualization.


The spray-and-pray method over two links increases raw bandwidth but decreases "goodput." It
also affects firewalls, as they will see asymmetric routes. When you want an active-active
model, you need application session awareness and a design that eliminates asymmetric
routing. You also need to be able to slice the WAN properly so application flows can work
efficiently over either link.
What is WAN Virtualization: Decentralizing Traffic
Decentralizing traffic from the data center to the branch requires more bandwidth to the
network’s edges. As a result, we see many high-bandwidth applications running on remote
sites. This is what businesses are now trying to accomplish. Traditional branch sites usually
rely on hub sites for most services and do not host bandwidth-intensive applications. Today,
remote locations require extra bandwidth, and that additional bandwidth does not come cheap year after year.
Inefficient WAN utilization
Redundant WAN links usually require a dynamic routing protocol for traffic engineering and
failover. Routing protocols require complex tuning to load balance traffic between border
devices. Border Gateway Protocol (BGP) is the primary protocol for connecting sites to
external networks.
It relies on path attributes to choose the best path based on availability and distance.
Although these attributes allow granular policy control, they do not cover aspects relating to
path performance, such as Round Trip Time (RTT), delay, and jitter.
Furthermore, BGP
does not always choose the “best” path, which may have different meanings for customers.
For example, customer A might consider the path via provider A as the best due to the price
of links. Default routing does not take this into account. Packet-level routing protocols are not
designed to handle the complexities of running over multiple transport-agnostic links.
Therefore, a solution that eliminates the need for packet-level routing protocols must arise.

Diagram: BGP Path Attributes Source is Cisco.


Routing protocol convergence
WAN designs can also be active standby, which requires routing protocol convergence in
the event of primary link failure. However, routing convergence is slow, and to speed up,
additional features, such as Bidirectional Forwarding Detection (BFD), are implemented that
may stress the network’s control plane. Although mechanisms exist to speed up convergence
and failure detection, there are still several convergence steps: detecting the failure, propagating
the routing update, recalculating the best path, and installing the new path in the forwarding table.
Branch office security
With traditional network solutions, branches connect back to the data center, which typically
provides Internet access. However, the application world has evolved, and branches directly
consume applications such as Office 365 in the cloud. This drives a need for branches to
access these services over the Internet without going to the data center for Internet access or
security scrubbing.
Extending the security diameter into the branches should be possible without requiring onsite
firewalls / IPS and other security paradigm changes. A solution must exist that allows you to
extend your security domain to the branch sites without costly security appliances at each
branch—essentially, building a dynamic security fabric.
WAN Virtualization
The solution to all these problems is SD-WAN ( software-defined WAN ). SD-WAN is
a transport-independent overlay software-based networking deployment. It uses software
and cloud-based technologies to simplify the delivery of WAN services to branch offices.
Similar to Software Defined Networking (SDN), SD-WAN works by abstraction. It abstracts
network hardware into a control plane with multiple data planes to make up one large WAN
fabric.
SD-WAN in a nutshell
When we consider the Wide Area Network (WAN) environment at a basic level, we connect
data centers to several branch offices to deliver packets between those sites, supporting the
transport of application transactions and services. The SD-WAN platform allows you to pull
Internet connectivity into those sites, becoming part of one large transport-independent
WAN fabric.
SD-WAN monitors the paths and the application performance on each link (Internet, MPLS,
LTE ) and chooses the best path based on performance.
There are many forms of Internet connectivity (cable, DSL, broadband, and Ethernet). They
are quick to deploy at a fraction of the cost of private MPLS circuits. SD-WAN provides the
benefit of using all these links and monitoring which applications are best for them.
Application performance is continuously monitored across all eligible paths: direct internet,
internet VPN, and private WAN. It creates an active-active network and eliminates the need
to use and maintain traditional routing protocols for active-standby setups—no reliance on
the active-standby model and associated problems.

Diagram: WAN virtualization. Source is Juniper


SD-WAN simplifies WAN management
SD-WAN simplifies managing a wide area network by providing a centralized platform for
managing and monitoring traffic across the network. This helps reduce the complexity of
managing multiple networks, eliminating the need for manual configuration of each site.
Instead, all of the sites are configured from a single management console.
SD-WAN also provides advanced security features such as encryption and firewalling, which
can be configured to ensure that only authorized traffic is allowed access to the network.
Additionally, SD-WAN can optimize network performance by automatically routing traffic
over the most efficient paths.

SD-WAN Packet Steering


SD-WAN packet steering is a technology that efficiently routes packets across a wide area
network (WAN). It is based on the concept of steering packets so that they can be delivered
more quickly and reliably than traditional routing protocols. Packet steering is crucial to SD-
WAN technology, allowing organizations to maximize their WAN connections.
SD-WAN packet steering works by analyzing packets sent across the WAN and looking for
patterns or trends. Based on these patterns, the SD-WAN can dynamically route the packets
to deliver them more quickly and reliably. This can be done in various ways, such as
considering latency and packet loss or ensuring the packets are routed over the most reliable
connections.
Spraying packets down both links can result in around 20% drops or packet reordering. SD-WAN
uses the links better: packets are not reordered, utilization improves, and so does "goodput."
SD-WAN also increases your buying power; you can buy lower-bandwidth links and run them more efficiently.
Over-provisioning is unnecessary because you are using the existing WAN bandwidth better.
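One common way to get an active-active design without reordering is to steer traffic per flow rather than per packet. The hedged Python sketch below hashes a flow's 5-tuple so every packet of that flow uses the same link, while different flows spread across the links; link names and addresses are hypothetical.

# Flow-based steering: hash the 5-tuple so a flow sticks to one link (no
# reordering), while many flows balance across links. Hypothetical values.
import hashlib

links = ["internet-a", "internet-b"]

def steer(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    flow = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    return links[int(hashlib.sha256(flow).hexdigest(), 16) % len(links)]

# Every packet of this flow maps to the same link.
print(steer("10.1.1.5", "172.16.9.9", 51515, 443))
print(steer("10.1.1.5", "172.16.9.9", 51515, 443))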
Example WAN Security Technology: Suricata

A Final Note: WAN virtualization


Server virtualization and automation are prevalent in the data center, but WANs are
stalling in this space. The WAN is the last bastion of the hardware-centric model and its
complexity. Just as hypervisors have transformed data centers, SD-WAN aims to change how
WAN networks are built and managed. When server virtualization and the hypervisor came
along, we no longer had to worry about the underlying hardware; a virtual machine (VM) could
simply be provisioned and run like an application. Today's WAN environment still requires you
to manage details of carrier infrastructure, routing protocols, and encryption.
 SD-WAN pulls all WAN resources together and slices up the WAN to match the
applications running on it.
The Role of WAN Virtualization in Digital Transformation:
In today’s digital era, where cloud-based applications and remote workforces are becoming
the norm, WAN virtualization is critical in enabling digital transformation. It empowers
organizations to embrace new technologies, such as cloud computing and unified
communications, by providing secure and reliable connectivity to distributed resources.
Summary: WAN Virtualization
In our ever-connected world, seamless network connectivity is necessary for businesses of all
sizes. However, traditional Wide Area Networks (WANs) often fall short of meeting the
demands of modern data transmission and application performance. This is where the concept
of WAN virtualization comes into play, promising to revolutionize network connectivity like
never before.
Understanding WAN Virtualization
WAN virtualization, also known as Software-Defined WAN (SD-WAN), is a technology that
abstracts the physical infrastructure of traditional WANs and allows for centralized control,
management, and optimization of network resources. By decoupling the control plane from
the underlying hardware, WAN virtualization enables organizations to dynamically allocate
bandwidth, prioritize critical applications, and ensure optimal performance across
geographically dispersed locations.
The Benefits of WAN Virtualization
Enhanced Flexibility and Scalability: With WAN virtualization, organizations can
effortlessly scale their network infrastructure to accommodate growing business needs. The
virtualized nature of the WAN allows for easy addition or removal of network resources,
enabling businesses to adapt to changing requirements without costly hardware upgrades.
Improved Application Performance: WAN virtualization empowers businesses to optimize
application performance by intelligently routing network traffic based on application type,
quality of service requirements, and network conditions. By dynamically selecting the most
efficient path for data transmission, WAN virtualization minimizes latency, improves
response times, and enhances overall user experience.
Cost Savings and Efficiency: By leveraging WAN virtualization, organizations can reduce
their reliance on expensive Multiprotocol Label Switching (MPLS) connections and embrace
more cost-effective broadband links. The ability to intelligently distribute traffic across
diverse network paths enhances network redundancy and maximizes bandwidth utilization,
providing significant cost savings and improved efficiency.
Implementation Considerations
Network Security: When adopting WAN virtualization, it is crucial to implement robust
security measures to protect sensitive data and ensure network integrity. Encryption
protocols, threat detection systems, and secure access controls should be implemented to
safeguard against potential security breaches.
Quality of Service (QoS): Organizations should prioritize critical applications and allocate
appropriate bandwidth resources through Quality of Service (QoS) policies to ensure optimal
application performance. By adequately configuring QoS settings, businesses can guarantee
mission-critical applications receive the necessary network resources, minimizing latency and
providing a seamless user experience.
Real-World Use Cases
Global Enterprise Networks
Large multinational corporations with a widespread presence can significantly benefit from
WAN virtualization. These organizations can achieve consistent performance across
geographically dispersed locations by centralizing network management and leveraging
intelligent traffic routing, improving collaboration and productivity.
Branch Office Connectivity
WAN virtualization simplifies connectivity and network management for businesses with
multiple branch offices. It enables organizations to establish secure and efficient connections
between headquarters and remote locations, ensuring seamless access to critical resources and
applications.
UNIT IV STORAGE VIRTUALIZATION
Memory Virtualization-Types of Storage Virtualization-Block, File-Address space Remapping-
Risks of Storage Virtualization-SAN-NAS-RAID

Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating
systems. In a traditional execution environment, the operating system maintains mappings of virtual
memory to machine memory using page tables, which is a one-stage mapping from virtual memory to
machine memory. All modern x86 CPUs include a memory management unit (MMU) and a translation
look aside buffer (TLB) to optimize virtual memory performance. However, in a virtual execution
environment, virtual memory virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs.

That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The guest
OS continues to control the mapping of virtual addresses to the physical memory addresses of VMs.
But the guest OS cannot directly access the actual machine memory. The VMM is responsible for
mapping the guest physical memory to the actual machine memory. Figure 3.12 shows the two-level
memory mapping procedure.

Since each page table of the guest OSes has a separate page table in the VMM corresponding to it,
the VMM page table is called the shadow page table. Nested page tables add another layer of
indirection to virtual memory. The MMU already handles virtual-to-physical translations as defined by
the OS. Then the physical memory addresses are translated to machine addresses using another set of
page tables defined by the hypervisor. Since modern operating systems maintain a set of page tables
for every process, the shadow page tables will get flooded. Consequently, the performance overhead
and cost of memory will be very high.

VMware uses shadow page tables to perform virtual-memory-to-machine-memory address translation.


Processors use TLB hardware to map the virtual memory directly to the machine memory to avoid the
two levels of translation on every access. When the guest OS changes the virtual memory to a physical
memory mapping, the VMM updates the shadow page tables to enable a direct lookup. The AMD
Barcelona processor has featured hardware-assisted memory virtualization since 2007. It provides
hardware assistance to the two-stage address translation in a virtual execution environment by using a
technology called nested paging.

When a virtual address needs to be translated, the CPU will first look for the L4 page table pointed
to by Guest CR3. Since the address in Guest CR3 is a physical address in the guest OS, the CPU needs
to convert the Guest CR3 GPA to the host physical address (HPA) using EPT. In this procedure, the
CPU will check the EPT TLB to see if the translation is there. If there is no required translation in the
EPT TLB, the CPU will look for it in the EPT. If the CPU cannot find the translation in the EPT, an
EPT violation exception will be raised.

When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3 page table
by using the GVA and the content of the L4 page table. If the entry corresponding to the GVA in the
L4

page table is a page fault, the CPU will generate a page fault interrupt and will let the guest OS kernel
handle the interrupt. When the GPA of the L3 page table is obtained, the CPU will look in the EPT to
get the HPA of the L3 page table, as described earlier. To get the HPA corresponding to a GVA, the
CPU needs to walk the EPT five times, and each walk requires four memory accesses.
Therefore, there are 20 memory accesses in the worst case, which is still very slow. To overcome this
shortcoming, Intel increased the size of the EPT TLB to decrease the number of memory accesses.
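As a simple illustration of the two-stage mapping described above, the toy Python model below translates a guest virtual address (GVA) to a guest physical address (GPA) through the guest page table, and then to a host physical address (HPA) through a hypervisor-owned mapping. The page tables are flattened into single dictionaries and all page numbers are hypothetical; real hardware walks multi-level tables as described in the text.

# Toy two-stage address translation: GVA -> GPA (guest page table), then
# GPA -> HPA (VMM/EPT mapping). Page size 4 KB; mappings are hypothetical.
PAGE = 4096
guest_page_table = {0x1: 0x7, 0x2: 0x9}    # guest virtual page -> guest physical page
ept              = {0x7: 0x42, 0x9: 0x81}  # guest physical page -> host physical page

def translate(gva):
    vpn, offset = divmod(gva, PAGE)
    gpp = guest_page_table[vpn]            # stage 1: guest OS mapping
    gpa = gpp * PAGE + offset
    hpa = ept[gpp] * PAGE + offset         # stage 2: hypervisor mapping
    return gpa, hpa

gpa, hpa = translate(0x1A3C)               # an address in guest virtual page 0x1
print(hex(gpa), hex(hpa))                  # 0x7a3c 0x42a3c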
What is Memory Virtualization?
Memory virtualization is like having a super smart organizer for your computer brain (Running
Memory -RAM). Imagine your computer brain is like a big bookshelf, and all the apps and
programs you installed or are running are like books.
Memory virtualization is the librarian who arranges these books so your computer can easily find
and use them quickly. It also ensures that each application gets a fair share of the memory to run
smoothly and prevents mess, which ultimately makes your computer brain (RAM) more organized
(tidy) and efficient.
In technical language, memory virtualization is a technique that abstracts, manages, and optimizes
physical memory (RAM) used in computer systems. It creates a layer of abstraction between the
RAM and the software running on your computer. This layer enables efficient memory allocation
to different processes, programs, and virtual machines.
Memory virtualization helps optimize resource utilization and secures the smooth operations of
multiple applications on shared physical memory (RAM) by ensuring each application gets the
required memory to work flawlessly.
Memory virtualization also decouples the volatile RAM (Temporary memory) from various
individual systems and aggregates that data into a virtualized memory pool available to any
system in the cluster. The distributed memory pool will be used as a high-speed cache, messaging
layer, and shared memory for the CPU to increase system performance and efficiency.
*Note – Don’t confuse it with virtual memory! Virtual memory is like having a bigger workspace
(hard drive) to handle large projects, and memory virtualization is like an office manager dividing
up the shared resources, especially computer RAM, to keep things organized and seamless.
How is Memory Virtualization Useful in Our Daily Lives?
Basically, memory virtualization helps our computer systems to work fast and smoothly. It also
provides sufficient memory for all apps and programs to run seamlessly.
Memory virtualization, a personal computer assistant, ensures everything stays organized and
works properly, which is very important for the efficient working of our computers and
smartphones. Whether browsing the web, working on Google documents, or using complex
software, memory virtualization is the hero that provides us with a smooth and responsive
computing experience in our daily lives.
Memory virtualization is essential for modern computing, especially in cloud computing, where
multiple users and applications share the same physical hardware (Like RAM and System).
It helps in efficient memory management and allocation, isolation between applications (by
providing the required share of memory), and dynamic adjustment based on the running
workloads of various applications. Without memory virtualization, it would be challenging to run
multiple applications at the same time.
This critical technology is behind applications you use every day: it enables more efficient and
flexible use of computing resources at both the data center and personal device level.
Memory virtualization is integral to personal computers, mobile devices, web hosting, app
hosting, cloud computing, and data center operations.
How Does Memory Virtualization Work in Cloud Computing?
You may be thinking all that is fine, but how does memory virtualization work in cloud
computing? It’s just part of the broader concept of resource virtualization, which includes
internet, storage, network, and many other virtualization techniques.
When memory virtualization takes place in cloud infrastructure, it goes through a process.

Key Elements Involved in Memory Virtualization:


1. Abstraction of Physical Memory
Just as virtual memory (on the hard drive) abstracts physical memory (RAM/cache memory) in
traditional computing, memory virtualization in cloud computing abstracts the physical
memory (RAM) behind various Virtual Machines (VMs) to create a pool of resources that can
be allocated to a group of VMs.
For this abstraction of physical memory, Cloud service providers use a hypervisor known as a
Virtual Machine Monitor (VMM) that abstracts and manages VM memory in cloud Computing.
This abstraction process allows cloud users (VMs) to request and consume memory without
worrying about the storage limit. It allows users to scale their memory resources as required
without concern for the underlying physical memory.
2. Resource Pooling
In cloud computing, there is a Cloud Data Center where multiple physical servers host various
Virtual Machines (VMs) and manage their dynamic workloads.
Memory virtualization allows cloud providers to use physical memory resources efficiently,
which is critical for industries such as banking. In cloud computing for banking, this technique
helps optimize memory allocation to ensure smooth, secure handling of sensitive financial data
and transaction workloads.
This pool can be allocated to different VMs and cloud users per their dynamic needs and
workload.

3. Dynamic Allocation
Cloud service providers use memory virtualization to allocate virtual memory to VMs and Cloud
users instantly on demand (According to Workload). It means cloud memory can be dynamically
assigned and reassigned based on the fluctuating workload.
This elasticity of cloud computing enables effective use of available resources, and cloud users
can scale up or down their cloud memory as needed. Additionally, cloud migration services help
in ensuring the seamless transfer of data and applications to the cloud, enhancing the benefits of
memory virtualization.
4. Isolation and Data Security
Memory virtualization ensures that the virtual memory allocated to one cloud user or VM is
isolated from others. This isolation is vital for data security and prevents one individual from
accessing another’s data or memory.
That’s why many sensitive IT companies prefer to purchase private cloud services to prevent
hacking and data breaches.
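The toy Python allocator below pulls the pooling, dynamic allocation, and isolation ideas above together in one place; the class name and sizes are hypothetical, and real hypervisors use far more elaborate techniques such as ballooning, overcommitment, and page sharing.

# Toy memory pool: hand out memory to VMs on demand, reclaim it on release,
# and keep per-VM allocations separate. Sizes in GB; all figures hypothetical.
class MemoryPool:
    def __init__(self, total_gb):
        self.free_gb = total_gb
        self.allocations = {}                 # VM name -> GB (isolation boundary)

    def allocate(self, vm, gb):
        if gb > self.free_gb:
            raise MemoryError(f"pool exhausted: {vm} asked for {gb} GB")
        self.free_gb -= gb
        self.allocations[vm] = self.allocations.get(vm, 0) + gb

    def release(self, vm, gb):
        self.allocations[vm] -= gb
        self.free_gb += gb

pool = MemoryPool(64)
pool.allocate("vm-web", 8)
pool.allocate("vm-db", 16)
pool.release("vm-web", 4)                     # scale down as the workload drops
print(pool.free_gb, pool.allocations)         # 44 {'vm-web': 4, 'vm-db': 16}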
Importance of Memory Virtualization in Cloud Computing
Memory Virtualization plays a critical role in cloud computing for several reasons. It contributes
to cloud services’ efficiency, scalability, effective resource utilization, and cost-effectiveness.
Here are some of the key points that show the importance of memory virtualization in cloud
computing:
1. Memory virtualization allows cloud providers to use physical memory resources in the most
efficient way. Overcommitting of memory allows the optimization of memory resources and
hardware.
2. This virtualization enables the dynamic allocation of cloud memory to cloud user instances.
This elasticity is crucial in cloud computing to manage varying workloads. It allows cloud users
to scale up and down memory resources as needed and promotes flexibility and cost savings.
3. Allocating separate cloud memory for every single user prevents unauthorized access and is a
must for data security.
4. Memory virtualization is vital for handling a large number of users and workloads. It ensures
that scaling up or down memory can be done without manual intervention whenever a VM is
required.
5. Migration and live migration are important for load balancing, hardware maintenance, and
disaster recovery in cloud computing. Transferring VM memory from one host to another is only
possible by live migration and feasible when memory is virtualized. Implementing
reliable software migration services is crucial for ensuring smooth transitions and maintaining
system stability during memory virtualization processes.
6. By optimizing virtual memory usage, memory virtualization maximizes physical memory
utilization and helps reduce the overall operational cost of the cloud.
How Does Memory Virtualization Differ From Other Virtualization Techniques?
Memory virtualization is one of the virtualization techniques used in modern computing. It’s
different from other virtualizations in terms of abstraction and management. Here are some key
differences between memory and other virtualization techniques.
Memory Virtualization vs. Server Virtualization
Memory Virtualization
 Abstracts and manages memory resources.
 Focus on optimizing memory uses, ensuring isolation.
 Enable dynamic allocation of Memory to VMs.
Server virtualization
 Abstracts and manages the entire server ( CPU, memory, and storage).
 Run multiple isolated VMs on a single physical server.
 Splits the physical server into multiple virtual servers.
Memory Virtualization vs. Storage Virtualization
Memory Virtualization
 Abstracts and manages memory resources (RAM and Cloud memory).
 Optimizes memory allocation and facilitates dynamic memory allocation.
Storage Virtualization
 Abstracts and centralizes storage resources.
 Allows users to manage storage capacity and data across multiple storage systems.
 Provides features like data redundancy and data migration.
 Help the system with consistent performance and maintaining smooth operations.
Memory Virtualization vs. Network Virtualization
Memory Virtualization
 Focus on managing the allocation and optimization of memory resources.
 Not directly dealing with network-related resources; it only deals with memory.
Network Virtualization
 Abstracts and separates network resources.
 Enable multiple virtual networks to coexist on the same physical network infrastructure.
 Provides isolation, segmentation, and management of network resources.
Memory Virtualization vs. Desktop Virtualization
Memory Virtualization
 Operates at the hardware level (RAM).
 Manage memory resources available to running processes, applications, or virtual machines.
 Used in almost every digital device, from laptops to smartphones.
Desktop Virtualization
 Abstracts the entire desktop system, including the operating system, applications, and user
data.
 Allow users to access their desktops virtually from any device while maintaining consistent
configurations and data.
 Commonly used in the IT industry and IoT companies.
Memory Virtualization vs. Application Virtualization
Memory Virtualization
 Primarily concerned with managing system memory.
 Ensures efficient allocation and usage of memory for running processes.
Application Virtualization
 Individual applications are abstracted from the underlying operating system, allowing them to
run independently and without conflict.
 Allows users to access and use the program from any connected system to the server.
 Frequently used for compatibility and security reasons.
Applications of Memory Virtualization in the Digital World
In the digital world, memory virtualization offers a diverse range of applications. This is a key
component of the internet technology to drive innovation and transform the landscape of modern
computing.
It enables more efficient resource utilization, improved system performance, and smooth user
experience across a range of technological domains.
Technical domains where memory virtualization plays a crucial role:
 Cloud Computing: In shared cloud environments, memory virtualization ensures that each
virtual machine (VM) has an isolated memory and gets the required memory whenever needed.
It plays a major role in efficient memory utilization and reducing running costs.
 High-Performance Computing (HPC): In HPC clusters, it ensures that memory is efficiently
allocated to multiple processes parallelly for seamless, complex scientific simulations and big
data analysis. Also helps in the allocation of memory resources based on the specific need of
each task in the HPC cluster.
 Data Centers: Large enterprises with heavy data load require memory virtualization to run
multiple applications on a shared server. It simplifies the resources where multiple teams and
departments have varying memory requirements and dynamic loads.
 Memory virtualization is crucial for database management to efficiently allocate memory to
various databases when multiple databases run on a single server.
 Resource-Constrained Environment: When computers have limited physical memory (RAM),
memory virtualization helps optimize memory usage and prevent resource contention. This
process helps in better memory balancing and system performance.
 Help in Disaster Recovery: Memory virtualization enables the transfer of memory between
two data centers and maintains services during failure.
 Testing and Development of Applications: Used to simulate real-world conditions and to
test application performance under various configurations.
 IoT and Edge Computing: New edge applications and devices use memory virtualization for
efficient RAM allocation and isolation of cache for different apps and websites. For example,
when you use two different apps on your mobile device, one app can’t access another app’s
data without your permission.
 For those interested in creating interconnected devices, exploring IoT application
development can provide insights into building efficient and innovative solutions.
Future of Memory Virtualization in Cloud Computing
The future of memory virtualization in cloud computing holds significant promise as cloud
technology continues to evolve and become more integral to our digital world. Several trends and
developments are likely to shape the future of memory virtualization:
 Increasing Demand for Memory Efficiency
In upcoming years, cloud workloads will become more diverse and intense, which will require
efficient memory management. Memory virtualization will play a crucial role in optimizing
memory allocation and achieving higher performance.
 Enhancing Data Security and Memory Isolation
Data breaches and hacking threats are rising daily, and growing data security and privacy
regulation around the globe raises the stakes further. So, it will be essential for cloud
providers to offer improved isolation and security features in cloud computing.
Memory virtualization will play a key role in data security, where multiple cloud users share the
same cloud storage.
 AI and Machine Learning Integration
Memory virtualization will be used to support AI and machine learning workloads that require
huge storage capacity, like ChatGPT, Bard, and AI-powered automation applications. It’s used in
memory allocation and storage utilization to enhance user experience.
 Quantum Computing Solutions
With the advancement of quantum computing, memory virtualization will adapt to unique
memory requirements in complex quantum algorithms and programs. That’s why several
companies are working on specialized memory management solutions based on memory
virtualization for quantum computing.
 For Blockchain Technologies Integration
We all know that blockchain is a futuristic technology adopted by every sector, from banking to
healthcare to IoT. Memory virtualization will be used to manage blockchain networks and
decentralized applications.
 Reduce Energy Consumption
Most nations are currently focusing on effective energy utilization and high efficiency. This led
to the development of memory virtualization solutions that minimize energy consumption. Big
data centers and cloud infrastructure companies currently use this technology to minimize energy
consumption.
 Other Future Applications
Memory virtualization will be used in various other sectors in upcoming years, such as edge
computing expansion, custom memory allocation, distributed cloud architecture, and serverless
computing.
What is Storage Virtualization?
Storage virtualization is a process of pooling physical storage devices so that IT may address a single
"virtual" storage unit. It offered considerable economic and operational savings over bare metal storage
but is now mostly overshadowed by the cloud paradigm.
In a storage server, virtualization is what makes functional RAID levels and controllers possible.
Applications and the operating system on the device can directly access the disks for writing. The
controllers configure local storage into RAID groups, and the operating system sees the storage
according to that configuration. However, because the storage is abstracted, the controller is in
charge of figuring out how to write or retrieve the data that the operating system requests.
Types of Storage Virtualization
Below are some types of Storage Virtualization.
 Kernel-level virtualization: A separate version of the Linux kernel runs alongside the main
one, allowing a single host to execute several virtual servers at the kernel level.
 Hypervisor Virtualization: A layer known as a hypervisor is installed between the operating
system and the hardware. It enables several operating systems to run effectively on one machine.
 Hardware-assisted Virtualization: Similar in effect to full virtualization and para-
virtualization, but it relies on hardware support (CPU virtualization extensions).
 Para-virtualization: Built on a hypervisor that handles emulation and trapping in software,
with the guest OS modified to cooperate with it.
Methods of Storage Virtualization
 Network-based storage virtualization: The most popular type of virtualization used by
businesses is network-based storage virtualization. All of the storage devices in an FC or iSCSI
SAN are connected to a network device, such as a smart switch or specially designed server, which
displays the network's storage as a single virtual pool.
 Host-based storage virtualization: Host-based storage virtualization is software-based and most
often seen in HCI systems and cloud storage. In this type of virtualization, the host, or a hyper-
converged system made up of multiple hosts, presents virtual drives of varying capacity to the
guest machines, whether they are VMs in an enterprise environment, physical servers or computers
accessing file shares or cloud storage.
 Array-based storage virtualization: Storage using arrays The most popular use of virtualization
is when a storage array serves as the main storage controller and is equipped with virtualization
software. This allows the array to share storage resources with other arrays and present various
physical storage types that can be used as storage tiers.
How Storage Virtualization Works?
 Physical storage hardware is replicated in a virtual volume during storage virtualization.
 A single server is utilized to aggregate several physical discs into a grouping that creates a basic
virtual storage system.
 Operating systems and programs can access and use the storage because a virtualization layer
separates the physical discs from the virtual volume.
 The physical discs are separated into objects called logical volumes (LV), logical unit numbers
(LUNs), or RAID groups, which are collections of tiny data blocks.
 In a more complex setting, RAID arrays can serve as virtual storage: many physical drives
simulate a single storage device that stripes data across several disks and copies or protects it in
the background (a small sketch of the parity idea follows this list).
 The virtualization program has to take an extra step in order to access data from the physical discs.
 Block-level and file-level storage environments can both be used to create virtual storage.
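As a small illustration of the RAID point above, the Python sketch below builds a RAID-5-style stripe with an XOR parity block and rebuilds a missing block from the survivors. The block values are hypothetical, and real controllers rotate parity across drives and operate on much larger blocks.

# RAID-5 style striping with parity: data blocks plus an XOR parity block per
# stripe; any single missing block can be rebuilt by XOR-ing the rest.
from functools import reduce

def make_stripe(data_blocks):
    parity = reduce(lambda a, b: a ^ b, data_blocks)
    return data_blocks + [parity]

def rebuild(stripe, missing_index):
    survivors = [b for i, b in enumerate(stripe) if i != missing_index]
    return reduce(lambda a, b: a ^ b, survivors)

stripe = make_stripe([0b1010, 0b1100, 0b0110])   # three data blocks + parity
print(stripe)                                    # [10, 12, 6, 0]
print(rebuild(stripe, 1))                        # recovers 12 (0b1100)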
4.1. Storage Virtualization:

 Storage virtualization is the process of presenting a logical view of the physical
storage resources to a host. This logical storage appears and behaves as physical
storage directly connected to the host.
 Some examples of storage virtualization are host-based volume management, LUN
creation, tape storage virtualization, and disk addressing (CHS to LBA).
 The key benefits of storage virtualization include increased storage utilization,
adding or deleting storage without affecting an application's availability, and
nondisruptive data migration (access to files and storage while migrations are in
progress).
 Figure. 4.2. illustrates a virtualized storage environment.
Figure 4.2. Storage Virtualization
 At the top are four servers, each of which has one virtual volume assigned, which is
currently in use by an application. These virtual volumes are mapped to the actual
storage in the arrays, as shown at the bottom of the figure.

 When I/O is sent to a virtual volume, it is redirected through the virtualization at the
storage network layer to the mapped physical array.

4.1.1. How Does Storage Virtualization Work?


Storage virtualization abstracts the underlying physical storage resources and presents
them as virtualized storage to the applications, operating systems, and other components
within a computing environment. As a result, this allows for centralized management and
logical grouping of storage resources, providing a unified view and control over the storage
infrastructure.
By leveraging storage virtualization, organizations can achieve improved storage
utilization, simplified management, enhanced scalability, better data mobility, and increased
flexibility in deploying and managing storage resources.

Additionally, virtualization allows for decoupling storage from physical hardware,


providing a more efficient and agile approach to storage management in modern data center
environments.

4.1.2. How Is Storage Virtualization Applied?


Following are the different ways in which storage virtualization can be applied:
● Host-Based
● Network-Based
● Array-Based
i. Host-Based Storage Virtualization
Here, all the virtualization and management is done at the host level with the help of
software; the physical storage can be any device or array.
The host, or a group of hosts, presents virtual drives of varying capacity to the guest
machines, whether they are VMs in an enterprise environment or PCs.
ii. Network-Based Storage Virtualization
Network-based storage virtualization is the most common form in use nowadays. A
device such as a smart switch or purpose-built server connects to all the storage
devices in a Fibre Channel storage network and presents the storage as a single
virtual pool.
iii. Array-Based Storage Virtualization
Here the storage array provides different types of physical storage that are used as
storage tiers. Software in the array manages the tiers, which may be made up of
solid-state drives and hard disk drives.

4.1.3. Advantages of Storage Virtualization


Below are some Advantages of Storage Virtualization.
 Advanced features like redundancy, replication, and disaster recovery are all
possible with the storage devices.
 It enables everyone to establish their own company prospects.
 Data is kept in more practical places that are farther from the particular host. Not
always is the data compromised in the event of a host failure.
 IT operations may now provision, divide, and secure storage in a more flexible
way by abstracting the storage layer.

4.1.4. Disadvantages of Storage Virtualization


Below are some Disadvantages of Storage Virtualization.
 Storage Virtualization still has limitations which must be considered.
 Data security is still a problem. Virtual environments can draw new types
of cyberattacks, despite the fact that some may contend that virtual computers
and servers are more secure than physical ones.
 The deployment of storage virtualization is not always easy; there are
technological obstacles to work through, including scalability.
 Your data’s end-to-end perspective is broken by virtualization. Integrating the
virtualized storage solution with current tools and systems is a requirement.

4.2. Types of Storage Virtualization


Virtual storage is about providing logical storage to hosts and applications
independent of physical resources. Virtualization can be implemented in both SAN and NAS
storage environments. In a SAN, virtualization is applied at the block level, whereas in NAS,
it is applied at the file level.
There are majorly two types of storage virtualization, which are:
1. Block level storage virtualization
2. File level storage virtualization
4.2.1. Block-Level Storage Virtualization
Block-level storage virtualization provides a translation layer in the SAN, between the
hosts and the storage arrays, as shown in Figure. 4.3. Instead of being directed to the LUNs
on the individual storage arrays, the hosts are directed to the virtualized LUNs on the
virtualization device.

Figure 4.3. Block-Level Storage Virtualization


The virtualization device translates between the virtual LUNs and the physical LUNs
on the individual arrays. This facilitates the use of arrays from different vendors
simultaneously, without any interoperability issues. For a host, all the arrays appear like a
single target device and LUNs can be distributed or even split across multiple arrays.
Block-level storage virtualization extends storage volumes online, resolves
application growth requirements, consolidates heterogeneous storage arrays, and enables
transparent volume access. It also provides the advantage of nondisruptive data migration.
In traditional SAN environments, LUN migration from one array to another was an
offline event because the hosts needed to be updated to reflect the new array configuration. In
other instances, host CPU cycles were required to migrate data from one array to the other,
especially in a multi-vendor environment.
With a block-level virtualization solution in place, the virtualization engine handles
the back-end migration of data, which enables LUNs to remain online and accessible while
data is being migrated. No physical changes are required because the host still points to the
same virtual targets on the virtualization device. However, the mappings on the virtualization
device should be changed. These changes can be executed dynamically and are transparent to
the end user.
Deploying heterogeneous arrays in a virtualized environment facilitates an
information lifecycle management (ILM) strategy, enabling significant cost and resource
optimization. Low-value data can be migrated from high- to low-performance arrays or disks.
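
As a rough illustration of this translation layer, the Python sketch below (the class, array,
and LUN names are hypothetical, not any vendor's API) keeps a virtual-LUN-to-physical-LUN
map; hosts always address the same virtual LUN, and a migration only swaps the back-end
mapping.

```python
class BlockVirtualizer:
    """Illustrative translation layer between virtual LUNs and physical LUNs."""

    def __init__(self):
        self.mapping = {}  # virtual LUN id -> (array name, physical LUN id)

    def map_virtual_lun(self, vlun, array, plun):
        self.mapping[vlun] = (array, plun)

    def resolve(self, vlun):
        # Hosts only ever see 'vlun'; the physical target is looked up here.
        return self.mapping[vlun]

    def migrate(self, vlun, new_array, new_plun):
        # The back-end data copy would happen here; the host-visible virtual
        # LUN stays the same, so the change is transparent to the host.
        self.mapping[vlun] = (new_array, new_plun)


virt = BlockVirtualizer()
virt.map_virtual_lun("vLUN-0", "vendorA-array", "LUN-12")
print(virt.resolve("vLUN-0"))                  # ('vendorA-array', 'LUN-12')
virt.migrate("vLUN-0", "vendorB-array", "LUN-3")
print(virt.resolve("vLUN-0"))                  # ('vendorB-array', 'LUN-3')
```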
4.2.2. File-Level Virtualization
File-level virtualization addresses the NAS challenges by eliminating the
dependencies between the data accessed at the file level and the location where the files are
physically stored. This provides opportunities to optimize storage utilization and server
consolidation and to perform nondisruptive file migrations.
Figure. 4.4. illustrates a NAS environment before and after the implementation of file-level
virtualization.
Figure. 4.4. NAS device before and after file-level virtualization
Before virtualization, each NAS device or file server is physically and logically
independent. Each host knows exactly where its file-level resources are located.
Underutilized storage resources and capacity problems result because files are bound to a
specific file server. It is necessary to move the files from one server to another because of
performance reasons or when the file server fills up. Moving files across the environment is
not easy and requires downtime for the file servers.
Moreover, hosts and applications need to be reconfigured with the new path, making
it difficult for storage administrators to improve storage efficiency while maintaining the
required service level.

File-level virtualization simplifies file mobility. It provides user or application


independence from the location where the files are stored. File-level virtualization creates a
logical pool of storage, enabling users to use a logical path, rather than a physical path, to
access files.
File-level virtualization facilitates the movement of file systems across the online file
servers. This means that while the files are being moved, clients can access their files
nondisruptively.
Clients can also read their files from the old location and write them back to the new
location without realizing that the physical location has changed. Multiple clients connected
to multiple servers can perform online movement of their files to optimize utilization of their
resources. A global namespace can be used to map the logical path of a file to the physical
path names.
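
The global namespace idea can be sketched in a few lines of Python (the server and path
names below are hypothetical): clients keep using the same logical path while the mapping
to the physical file server is changed behind the scenes.

```python
class GlobalNamespace:
    """Illustrative global namespace: logical paths mapped to physical locations."""

    def __init__(self):
        self.table = {}  # logical path -> (file server, physical path)

    def add(self, logical_path, server, physical_path):
        self.table[logical_path] = (server, physical_path)

    def locate(self, logical_path):
        # Clients always use the logical path; the physical location is resolved here.
        return self.table[logical_path]

    def move(self, logical_path, new_server, new_physical_path):
        # File data is copied in the background; only the mapping changes,
        # so clients keep using the same logical path.
        self.table[logical_path] = (new_server, new_physical_path)


ns = GlobalNamespace()
ns.add("/projects/report.docx", "filer1", "/vol1/users/report.docx")
print(ns.locate("/projects/report.docx"))   # ('filer1', '/vol1/users/report.docx')
ns.move("/projects/report.docx", "filer2", "/vol7/archive/report.docx")
print(ns.locate("/projects/report.docx"))   # same logical path, new location
```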
4.2.3. Comparison between File Level Storage Virtualization and Block Level Storage
Virtualization
● Level of abstraction: Block-level virtualization works on LUNs and blocks in a SAN,
whereas file-level virtualization works on files and file systems in a NAS environment.
● Where it sits: Block-level virtualization places a translation layer between the hosts and
the storage arrays; file-level virtualization places a global namespace between the clients
and the file servers.
● Primary benefits: Block-level virtualization enables nondisruptive LUN migration and
consolidation of heterogeneous arrays; file-level virtualization enables nondisruptive file
mobility and better utilization of file servers.

4.3. Address Space Remapping


4.3.1. Introduction

 Address space remapping is a technique used to map logical addresses to physical


storage locations in a way that provides flexibility and efficiency in managing
memory or storage resources.

 In storage virtualization, address space remapping may be employed to manage


virtual storage volumes and provide features such as thin provisioning and dynamic
resizing. This allows storage resources to be allocated and managed flexibly, without
being tied to specific physical storage devices or locations.

 Virtualization of storage helps achieve location independence by abstracting the


physical location of the data. The virtualization system presents to the user a logical
space for data storage and handles the process of mapping it to the actual physical
location.

 It is possible to have multiple layers of virtualization or mapping. It is then possible


that the output of one layer of virtualization can then be used as the input for a higher
layer of virtualization.
 Virtualization maps space from back-end resources to front-end resources. In this
instance, "back-end" refers to a logical unit number (LUN) that is not presented to a
computer or host system for direct use. A "front-end" LUN or volume is presented to
a host or computer system for use.
 The actual form of the mapping will depend on the chosen implementation.

 Some implementations may limit the granularity of the mapping which may limit the
capabilities of the device.

 Typical granularities range from a single physical disk down to some small subset
(multiples of megabytes or gigabytes) of the physical disk.

 In a block-based storage environment, a single block of information is addressed
using a LUN identifier and an offset within that LUN – known as the logical block
address (LBA).
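
To make the idea of mapping granularity concrete, the sketch below (hypothetical device
names and a made-up extent size) translates a front-end (LUN, LBA) address into a back-end
device and offset using fixed-size extents.

```python
EXTENT_BLOCKS = 2048          # assumed mapping granularity (blocks per extent)

# (front-end LUN, extent index) -> (back-end device, starting block on that device)
extent_map = {
    ("vLUN-0", 0): ("backend-disk-A", 0),
    ("vLUN-0", 1): ("backend-disk-B", 4096),
}

def translate(vlun, lba):
    """Translate a front-end logical block address to a back-end location."""
    extent, offset = divmod(lba, EXTENT_BLOCKS)
    device, base = extent_map[(vlun, extent)]
    return device, base + offset

print(translate("vLUN-0", 10))      # ('backend-disk-A', 10)
print(translate("vLUN-0", 2050))    # ('backend-disk-B', 4098)
```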

4.3.2. Benefits of Address Space Remapping

Address space remapping in storage virtualization provides several significant


benefits:
1. Optimized Storage Utilization: Address space remapping allows for dynamic
allocation of physical storage resources based on demand. This ensures that storage
capacity is efficiently utilized, with resources allocated as needed rather than being
statically provisioned. Remapping enables the system to allocate storage resources
more effectively, reducing wasted space and optimizing storage utilization.

2. Improved Performance: By remapping logical addresses to different physical


storage locations, storage virtualization can enhance performance. Frequently
accessed data can be placed on faster storage tiers, such as solid-state drives (SSDs),
while less frequently accessed data can be stored on lower-cost, higher-capacity
storage tiers, such as hard disk drives (HDDs). This optimization improves overall
system performance by ensuring that data is stored on the most appropriate storage
medium.
3. Flexibility and Scalability: Address space remapping enables seamless scalability of
storage resources. New storage devices or arrays can be added to the virtualized
storage environment, and logical addresses can be remapped to include these
additional resources without disrupting existing applications or users. This flexibility
allows organizations to easily expand their storage infrastructure to meet growing
storage demands.

4. Data Mobility and Migration: Address space remapping facilitates data mobility
and migration within the storage virtualization environment. Data can be moved or
migrated between different storage systems, arrays, or technologies, and logical
addresses are remapped to the new physical locations transparently to applications or
users. This capability simplifies data management tasks and allows for more efficient
storage resource utilization.
5. Abstraction and Simplification: Storage virtualization abstracts the underlying
physical storage infrastructure, providing a unified view of storage resources to
applications or users. Address space remapping ensures that applications or users can
access storage resources using logical addresses without needing to know the details
of the underlying physical storage configuration. This abstraction simplifies storage
management and reduces complexity for administrators.

6. Enhanced Data Protection and Redundancy: Address space remapping can


contribute to data protection and redundancy in storage virtualization environments.
The virtualization layer can remap logical addresses to redundant or mirrored copies
of data stored on alternative physical devices, providing resilience against hardware
failures or disruptions. This redundancy helps ensure data availability and reliability
in the event of storage device failures.

4.3.3. Working of Address Space Remapping


Address space remapping, specifically in the context of storage virtualization,
involves dynamically associating logical addresses used by applications or file systems with
physical storage locations managed by the storage virtualization layer. Here's how it typically
works:
1. Logical-to-Physical Mapping: When an application or file system requests access to
data, it uses logical addresses to specify the location of the data. These logical
addresses are abstract representations that do not directly correspond to physical
storage locations.
2. Virtualization Layer: The storage virtualization layer intercepts requests from
applications or file systems and translates logical addresses to physical addresses.
This translation process involves mapping logical addresses to specific physical
storage locations managed by the storage virtualization layer.
3. Mapping Table: The storage virtualization layer maintains a mapping table that
associates logical addresses with corresponding physical storage locations. This
mapping table is dynamic and can be updated as needed to reflect changes in the
storage environment, such as the addition or removal of storage devices.
4. Dynamic Allocation: Address space remapping enables dynamic allocation of
physical storage resources based on the logical addresses requested by applications or
file systems. The storage virtualization layer allocates physical storage from a pool of
available resources and maps logical addresses to these physical storage locations.
5. Optimization and Load Balancing: The storage virtualization layer may remap
logical addresses to different physical storage locations to optimize performance and
balance load. For example, frequently accessed data may be moved to faster storage
tiers, while less frequently accessed data may be moved to lower-cost, higher-capacity
storage tiers.
6. Data Migration and Mobility: Address space remapping facilitates data migration
and mobility within the storage virtualization environment. Data can be moved or
migrated between different storage systems, arrays, or technologies, and logical
addresses are remapped to the new physical locations transparently to applications or
file systems.
7. Abstraction and Transparency: Storage virtualization abstracts the underlying
physical storage infrastructure, providing a unified view of storage resources to
applications or file systems. Address space remapping ensures that applications or file
systems can access storage resources using logical addresses without needing to know
the details of the underlying physical storage configuration.
8. Fault Tolerance and Redundancy: Address space remapping may also contribute to
fault tolerance and redundancy in storage virtualization environments. The storage
virtualization layer can remap logical addresses to redundant or mirrored copies of
data stored on alternative physical devices, providing resilience against hardware
failures or disruptions.
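
A minimal Python sketch of the mapping-table and dynamic-allocation behaviour described in
the steps above, using assumed names: physical extents are taken from a shared pool only on
first write (thin provisioning), and the mapping can later be updated to point at a different
extent, for example a faster tier, without the logical address changing.

```python
class ThinPool:
    """Illustrative mapping table with on-demand (thin) allocation."""

    def __init__(self, physical_extents):
        self.free = list(physical_extents)   # pool of unused physical extents
        self.map = {}                        # logical extent id -> physical extent id

    def write(self, logical_extent):
        # Allocate a physical extent only on first write (thin provisioning).
        if logical_extent not in self.map:
            self.map[logical_extent] = self.free.pop(0)
        return self.map[logical_extent]

    def remap(self, logical_extent, new_physical_extent):
        # e.g. after migrating data to a faster tier; logical address unchanged.
        self.map[logical_extent] = new_physical_extent


pool = ThinPool(["hdd-ext-0", "hdd-ext-1", "ssd-ext-0"])
print(pool.write("L0"))        # hdd-ext-0  (allocated on first write)
print(pool.write("L0"))        # hdd-ext-0  (already mapped)
pool.remap("L0", "ssd-ext-0")  # hot data promoted to an SSD extent
print(pool.map["L0"])          # ssd-ext-0
```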
4.3.4. Major Challenges of Memory Address Remapping
The major challenges associated with memory address remapping:

1. Limited Applicability:

The primary challenge is that address space remapping primarily targets memory
management within a computer system. Its direct impact on storage virtualization,
which deals with physical storage allocation and presentation, is minimal.

2. Increased Complexity:

Introducing another layer of remapping within storage controllers can add complexity
to the storage virtualization environment. This complexity can make troubleshooting
and debugging issues more challenging for administrators.

3. Potential Performance Overhead:

Remapping introduces an additional translation step between logical addresses used
by virtual machines and the physical locations on storage devices. This can lead to a
slight performance overhead in I/O operations, especially for random access patterns.

4. Security Considerations:

While remapping with encryption can add some obfuscation, it's not a security
solution in itself. A sophisticated attacker could potentially exploit vulnerabilities in
the remapping process to gain access to encrypted data. Strong encryption algorithms
and proper key management practices remain essential for data security.

5. Limited Visibility and Control:

Since remapping typically happens within storage controller firmware, IT


administrators might have limited visibility and control over the specific remapping
mechanisms employed. This can make it difficult to fine-tune performance or
implement specific security policies related to remapping.

4.4. Risks of Storage Virtualization


Storage virtualization offers numerous benefits, but it also comes with several risks and
challenges. Here are some of the main risks associated with storage virtualization:
1. Data Security: Virtualizing storage means that multiple logical storage units can
reside on the same physical storage infrastructure. If proper security measures are not
in place, there is a risk of unauthorized access to sensitive data. Malicious actors
could potentially compromise the virtualization layer and gain access to data from
multiple sources.
2. Performance Degradation: While storage virtualization can improve resource
utilization and flexibility, it can also introduce performance overhead. The additional
layer of abstraction between the physical storage and the applications accessing it can
lead to latency issues and reduced I/O performance, especially if not properly
managed.
3. Single Point of Failure: The centralization of storage management in a virtualized
environment means that there is a single point of failure—the storage virtualization
layer itself. If this layer experiences a failure or becomes unavailable, it can result in
widespread data loss or downtime for multiple applications and services.
4. Vendor Lock-In: Adopting a specific storage virtualization solution may lead to
vendor lock-in, where organizations become dependent on a particular vendor's
technology and find it challenging to switch to alternative solutions in the future. This
can limit flexibility and hinder the organization's ability to adapt to changing business
requirements or take advantage of emerging technologies.
5. Complexity and Management Overhead: Storage virtualization introduces
additional complexity to the storage infrastructure, requiring specialized skills and
knowledge to manage effectively. Administrators need to understand the
virtualization technology, as well as the underlying physical storage systems, to
troubleshoot issues and optimize performance. This can result in increased
management overhead and training costs.
6. Data Migration Challenges: Moving data between different storage systems or
migrating from one virtualization platform to another can be complex and time-
consuming. Data migration processes may disrupt normal operations and require
careful planning to minimize downtime and ensure data integrity.
7. Compatibility and Interoperability Issues: Integrating storage virtualization with
existing IT infrastructure and applications can be challenging, particularly if there are
compatibility or interoperability issues between different systems and components.
Ensuring seamless communication and data exchange between virtualized storage and
other IT resources may require additional configuration and testing.
4.5. Storage Area Network (SAN)

A Storage Area Network (SAN) is a specialized, high-speed network that provides


network access to storage devices. SANs are typically composed of hosts, switches, storage
elements, and storage devices that are interconnected using a variety of technologies,
topologies, and protocols. SANs may span multiple sites.

A SAN presents storage devices to a host such that the storage appears to be locally
attached. This simplified presentation of storage to a host is accomplished through the use of
different types of virtualization.

Figure.4.5. Storage Area Network


4.5.1. SAN Protocols
The most Frequent SAN protocols are:

● FCP: Fibre Channel Protocol is the most widely used SAN protocol, deployed in 70% to
80% of the total SAN market. FCP transports embedded SCSI commands over the Fibre
Channel network.
● iSCSI: Internet Small Computer System Interface is the next biggest SAN or
block protocol, with roughly 10% to 15% of the marketplace. iSCSI
encapsulates SCSI commands within IP packets and uses an Ethernet network
for transport.
● FCoE: Fibre Channel over Ethernet accounts for less than 5 percent of the SAN
marketplace. It is similar to iSCSI in that it encapsulates an FC frame within an
Ethernet frame and carries it over an Ethernet network.
● NVMe: Non-Volatile Memory Express is a protocol for accessing flash storage over a
PCI Express (PCIe) bus, and NVMe over Fibre Channel extends it across the SAN.
Unlike conventional all-flash architectures, which can be restricted to a single serial
command queue, NVMe supports tens of thousands of parallel queues, each with the
capability to handle thousands of concurrent commands.
4.5.2. Components of SAN
Every Fibre Channel device that attaches to the SAN (such as a server, storage array, or
tape library) connects through a node port. The main SAN components are illustrated in the
picture given below:
Figure.4.6. SAN Components
● Node: Every node may act as either a source or a destination for another node on the
network.
● Cables: Cabling is done using fibre optic cable and copper cable. Copper cable is used
to cover short distances, for example, for back-end connectivity.
● Interconnect Devices: Hubs, switches, and directors are the interconnect devices used
in a SAN.
● Storage Arrays: Large storage arrays are used to provide hosts with access to the
storage resources.
● SAN Management Software: The SAN management software is used to manage the
interfaces between hosts, interconnect devices, and storage arrays.
4.5.3. How SAN works?

SAN storage solutions are block storage-based, meaning data is split into storage volumes
that are accessed using block protocols such as iSCSI or Fibre Channel Protocol (FCP). A
SAN can include hard disks, or virtual storage nodes and cloud resources, known as virtual
SANs or vSANs.

SAN configurations are made up of three distinct layers:


● Storage layer: The physical data storage resources, such as drives in a data center,
are organized into storage pools and tiers. Because the data is stored using block-level
storage with built-in redundancy and automatic traffic rerouting, data remains available
even if a server is down.
● Fabric layer: The fabric layer is how the storage connects to the user, such as via
network devices and cables. This connectivity could be via Fibre Channel or Fibre
Channel over Ethernet (FCoE). Both take pressure off the local area network (LAN)
by moving storage and associated data traffic to its own high-speed network.
● Host layer: The servers and applications that facilitate accessing the storage. Because
this layer recognizes the SAN storage as a local hard drive, it ensures quick
processing speeds and data transfers.
4.5.4. SAN use cases:
Low-latency and scalability make SANs the preferred choice in these cases:
● Video editing: Large files require high throughput and low-latency. SANs can
connect directly to the video editing desktop client, without the need for an extra
server layer, offering high-performance capabilities.
● Ecommerce: Today’s consumers expect shopping online to go smoothly and quickly.
Ecommerce companies need high-performance functionality, which makes SANs a
good choice.
● Backup/disaster recovery: Backups of networked devices can be executed quickly
and directly to SAN storage because traffic does not travel over the LAN.
Virtualization accelerates the processing and scalability of SANs with virtual
machines and cloud storage.
4.5.5. Overview of SAN Benefits
 High-speed data access
 Highly expandable
 OS-Level access to files
 A dedicated network for storage relieves pressure on the LAN
4.5.6. Limitations of SAN
 The major limitation of SAN lies in its cost and management overhead.
 Even though SANs provide high-speed data access, two separate networks have to be
maintained: the Fibre Channel network that carries the storage traffic and the
Ethernet network that handles ordinary requests and metadata.

4.6. Network Attached Storage (NAS)

A Network Attached Storage (NAS) is a computer attached to a network that offers
file-based data storage to other devices on the network. The NAS installation and
deployment process is comparatively straightforward. Network Attached Storage volumes
appear to end users as network-mounted volumes.

The data to be served is usually contained on one or more storage drives, frequently
arranged into logical, redundant arrays (RAID). The device itself is a network node, much
like a computer or any other TCP/IP device: it maintains its own IP address and
communicates efficiently with other networked devices.

NAS devices offer an easier way for many users in different locations to access data,
which can be valuable when working on the same project or sharing information.

Figure. 4.7. Network Attached Storage (NAS)


4.6.1. NAS Protocols
● SMB or CIFS: Server Message Block or Common Internet File System is the
protocol that Windows typically uses.
● NFS: Network File System was initially developed to be used with UNIX servers,
and it’s also a frequent Linux protocol.
4.6.2. Components of NAS
● NIC: Network Interface Card that permits connectivity to the system.

● Optimized Operating System: An optimized operating system that controls the


performance of NAS.
● Protocols: Protocols for sharing documents like NFS and CIFS.
● Storage Protocols: Storage protocols such as ATA, SCSI, or FC are used to connect
and handle physical disk tools.

Figure.4.7. NAS Components

4.6.3. How NAS works?

NAS storage systems are file storage-based, meaning the data is stored in files that are
organized in folders under a hierarchy of directories and subdirectories. Unlike direct
attached storage — which can be accessed by one device — the NAS file system provides
file storage and sharing capabilities between devices.

A NAS system is built using the following elements:

● Network: One or multiple networked NAS devices are connected to a local area
network (LAN) or an Ethernet network with an assigned IP address.
● NAS box: This hardware device with its own IP address includes a network interface
card (NIC), a power supply, processor, memory and drive bay for two to five disk
drives. A NAS box, or head, connects and processes requests between the user’s
computer and the NAS storage.
● Storage: The disk drives within the NAS box that store the data. Often storage uses a
RAID configuration, distributing and copying data across multiple drives. This
provides data redundancy as a fail-safe, and it improves performance and storage
capacity.
● Operating system: Unlike local storage, NAS storage is self-contained. It also
includes an operating system to run data management software and authorize file-
level access to authorized users.
● Software: Preconfigured software within the NAS box manages the NAS device and
handles data storage and file-sharing requests.
4.6.4. NAS use cases:

There are times when NAS is the better choice, depending on the company’s needs and
application:
● File collaboration and storage: This is the primary use case for NAS in mid- to
large-scale enterprises. With NAS storage in place, IT can consolidate multiple file
servers for ease of management and to save space.
● Archiving: NAS is a good choice for storing a large number of files, especially if you
want to create a searchable and accessible active archive.
● Big data: NAS is a common choice for storing and processing large unstructured
files, running analytics and using ETL (extract, transform, load) tools for integration.
4.6.5. Overview of NAS Benefits
● Relatively inexpensive
● 24/7 remote data accessibility
● Very Good expandability
● Redundant storage structure (RAID)
● Automatic backups to additional cloud and devices
● Flexibility
4.6.6. Limitations of NAS
 The areas where NAS limits itself are scalability and performance. Once the number of
users accessing files over NAS crosses a certain limit, the server's processing power has
to be scaled up.
 Another major limitation of NAS lies in the Ethernet. Data over Ethernet is sent as
packets, which means that one source or file is divided into a number of packets. If
even one packet arrives late or out of sequence, the user cannot access that file until
every packet has arrived and been reassembled in sequence.
Comparison Table: SAN vs NAS:
● Access level: SAN provides block-level access; NAS provides file-level access.
● Protocols: SAN uses FCP, iSCSI, FCoE, or NVMe over FC; NAS uses NFS or SMB/CIFS.
● Network: SAN uses a dedicated high-speed storage network; NAS uses the existing
Ethernet/TCP-IP LAN.
● Cost and complexity: SAN is costlier and more complex to manage; NAS is relatively
inexpensive and simple to deploy.
● Typical uses: SAN suits databases, virtualization, video editing, and e-commerce;
NAS suits file collaboration, archiving, and big data storage.
4.7. RAID (Redundant Arrays of Independent Disks)
4.7.1. Introduction
RAID is a technique that makes use of a combination of multiple disks instead of
using a single disk for increased performance, data redundancy, or both. The term was coined
by David Patterson, Garth A. Gibson, and Randy Katz at the University of California,
Berkeley in 1987.

Why Data Redundancy?


Data redundancy, although taking up extra space, adds to disk reliability. This means
that in case of disk failure, if the same data is also backed up onto another disk, we can
retrieve the data and go on with the operation. On the other hand, if the data is spread across
multiple disks without the RAID technique, the loss of a single disk can affect the entire
data.
Key Evaluation Points for a RAID System
● Reliability: How many disk faults can the system tolerate?
● Availability: What fraction of the total session time is a system in uptime mode,
i.e. how available is the system for actual use?
● Performance: How good is the response time? How high is the throughput (rate
of processing work)? Note that performance contains a lot of parameters and not
just the two.
● Capacity: Given a set of N disks each with B blocks, how much useful capacity is
available to the user?
RAID is transparent to the host system. This means that, to the host system, it
appears as a single big disk presenting itself as a linear array of blocks. This allows older
technologies to be replaced by RAID without making too many changes to the existing code.

4.7.2. Terms used in RAID:

Here are some common terms used in RAID (Redundant Array of Independent Disks):
1. Striping: A technique used in RAID 0 and some other RAID levels where data is
divided into blocks and distributed across multiple disks. It improves performance by
allowing multiple disks to work in parallel.
2. Mirroring: Also known as RAID 1, mirroring involves creating an exact duplicate of
data on multiple disks. This provides redundancy and fault tolerance, as data remains
accessible even if one disk fails.
3. Parity: In RAID 5 and RAID 6 configurations, parity is a method used to provide
fault tolerance by generating and storing parity information. Parity data allows the
RAID array to reconstruct data in the event of disk failure.
4. Hot Spare: A spare disk drive that is kept in reserve and can automatically replace a
failed disk in a RAID array. Hot spares help minimize downtime and maintain data
redundancy.
5. RAID Level: Refers to the specific configuration or layout of a RAID array,
determining how data is distributed, duplicated, or parity is calculated across the
disks. Common RAID levels include RAID 0, RAID 1, RAID 5, RAID 6, RAID 10,
etc.
6. RAID Controller: A hardware or software component responsible for managing the
operation of a RAID array. Hardware RAID controllers are dedicated devices, while
software RAID controllers are implemented in software.
7. RAID Array: The logical grouping of multiple physical disk drives configured in a
RAID configuration. The RAID array appears as a single storage device to the
operating system.
4.7.3. Levels of RAID:
There are several levels of RAID, each with its own characteristics and benefits. Here
are some most common RAID levels:
1. RAID-0 (Striping)
2. RAID-1 (Mirroring)
3. RAID-2 (Bit-Level Striping with Dedicated Parity)
4. RAID-3 (Byte-Level Striping with Dedicated Parity)
5. RAID-4 (Block-Level Striping with Dedicated Parity)
6. RAID-5 (Block-Level Striping with Distributed Parity)
7. RAID-6 (Block-Level Striping with Two Parity Blocks)

Figure. 4.8. RAID Controller

4.7.3.1. RAID-0 (Striping)


● Blocks are "striped" across disks.

● In the figure, blocks "0, 1, 2, 3" form a stripe.

● Instead of placing just one block into a disk at a time, we can work with two (or more)
blocks placed into a disk before moving on to the next one. (A short sketch of this
block-to-disk mapping appears at the end of this subsection.)
Evaluation:
● Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
● Capacity: N*B
The entire space is being used to store data. Since there is no duplication, N disks
each having B blocks are fully utilized.
Advantages:
1. It is easy to implement.
2. It utilizes the storage capacity in a better way.

Disadvantages:
1. A single drive loss can result in the complete failure of the system.
2. Not a good choice for a critical system.
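
The block-to-disk mapping referenced above can be expressed with simple modular arithmetic;
the sketch below assumes N disks and a one-block stripe unit.

```python
N_DISKS = 4

def raid0_location(block_number, n_disks=N_DISKS):
    """Return (disk index, offset within that disk) for a striped block."""
    disk = block_number % n_disks       # blocks rotate across the disks
    offset = block_number // n_disks    # stripe number = position on each disk
    return disk, offset

for b in range(8):
    print(b, raid0_location(b))
# blocks 0, 1, 2, 3 form stripe 0; blocks 4, 5, 6, 7 form stripe 1
```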

4.7.3.2. RAID-1 (Mirroring)


● More than one copy of each block is stored in a separate disk. Thus, every block
has two (or more) copies, lying on different disks.

● The above figure shows a RAID-1 system with mirroring level 2.


● RAID 0 was unable to tolerate any disk failure. But RAID 1 is capable of
reliability.
Evaluation:
Assume a RAID system with mirroring level 2.
● Reliability: 1 to N/2
1 disk failure can be handled for certain because blocks of that disk would have
duplicates on some other disk. If we are lucky enough and disks 0 and 2 fail, then
again this can be handled as the blocks of these disks have duplicates on disks 1
and 3. So, in the best case, N/2 disk failures can be handled.
● Capacity: N*B/2
Only half the space is being used to store data. The other half is just a mirror of
the already stored data.
Advantages:
1. It covers complete redundancy.
2. It can increase data security and speed.
Disadvantages:
1. It is highly expensive.
2. Storage capacity is less.

4.7.3.3. RAID-2 (Bit-Level Striping with Dedicated Parity)


● In RAID-2, errors in the data are checked at the bit level. Here, the Hamming code
parity method is used to find errors in the data.
● It uses one designated drive to store parity.
● The structure of RAID-2 is very complex because it uses two groups of disks: one
group stores the bits of each data word and the other stores the error-correcting code.
● It is not commonly used.
Advantages
1. For error correction, it uses the Hamming code.
2. It uses one designated drive to store parity.

Disadvantages
1. It has a complex structure and high cost due to extra drive.
2. It requires an extra drive for error detection.

4.7.3.4. RAID-3 (Byte-Level Striping with Dedicated Parity)


● It consists of byte-level striping with a dedicated parity drive.
● At this level, parity information is computed for each stripe and written to the
dedicated parity drive.
● Whenever a drive fails, the parity drive is accessed and used to reconstruct the data.


● Here Disk 3 contains the Parity bits for Disk 0, Disk 1, and Disk 2. If data loss
occurs, we can construct it with Disk 3.
Advantages:
1. Data can be transferred in bulk.
2. Data can be accessed in parallel.
Disadvantages:
1. It requires an additional drive for parity.
2. In the case of small-size files, it performs slowly.

4.7.3.5. RAID-4 (Block-Level Striping with Dedicated Parity)


● Instead of duplicating data, this adopts a parity-based approach.

● In the figure, we can observe one column (disk) dedicated to parity.


● Parity is calculated using a simple XOR function. If the data bits are
0,0,0,1 the parity bit is XOR(0,0,0,1) = 1. If the data bits are 0,1,1,0 the
parity bit is XOR(0,1,1,0) = 0. A simple approach is that an even number
of ones results in parity 0, and an odd number of ones results in parity 1.
(A short sketch of this appears at the end of this subsection.)

● Assume that in the above figure, C3 is lost due to some disk failure.
Then, we can recompute the data bit stored in C3 by looking at the
values of all the other columns and the parity bit. This allows us to
recover lost data.
Evaluation:
● Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the
way parity works). If more than one disk fails, there is no way to recover
the data.
● Capacity: (N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1)
disks are made available for data storage, each disk having B blocks.
Advantages:
1. It helps in reconstructing the data if at most one data is lost.
Disadvantages:
1. It can’t help in reconstructing when more than one data is lost.
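
A small sketch of the XOR parity idea discussed above (illustrative values only): the parity
block is the XOR of the data blocks, and any single missing block can be rebuilt from the
surviving blocks plus the parity.

```python
from functools import reduce

def xor_parity(blocks):
    """Parity block = XOR of all data blocks in the stripe."""
    return reduce(lambda a, b: a ^ b, blocks)

def rebuild(surviving_blocks, parity):
    """Recover the single missing block from the survivors and the parity."""
    return reduce(lambda a, b: a ^ b, surviving_blocks, parity)

stripe = [0b0001, 0b0110, 0b1010]      # data blocks on disks 0, 1, 2
parity = xor_parity(stripe)            # stored on the parity disk
lost = stripe[1]                       # suppose disk 1 fails
recovered = rebuild([stripe[0], stripe[2]], parity)
assert recovered == lost               # the lost block is reconstructed
print(bin(parity), bin(recovered))
```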

4.7.3.6. RAID-5 (Block-Level Striping with Distributed Parity)


● This is a slight modification of the RAID-4 system where the only
difference is that the parity rotates among the drives.
● In the figure, we can notice how the parity block "rotates" from disk to disk
(see the sketch at the end of this subsection).
● This was introduced to make the random write performance better.
Evaluation:
● Reliability: 1
RAID-5 allows recovery of at most 1 disk failure (because of the way
parity works). If more than one disk fails, there is no way to recover the
data. This is identical to RAID-4.
● Capacity: (N-1)*B
Overall, space equivalent to one disk is utilized in storing the parity.
Hence, (N-1) disks are made available for data storage, each disk having
B blocks.
Advantages:
1. Data can be reconstructed using parity bits.
2. It makes the performance better.

Disadvantages:
1. Its technology is complex and extra space is required.
2. If two disks fail at the same time, data will be lost forever.
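
As noted above, the sketch below illustrates one common rotating layout (actual RAID-5
layouts vary by implementation) for deciding which disk holds the parity block of each
stripe, so that no single disk becomes a parity bottleneck.

```python
N_DISKS = 4

def parity_disk(stripe_number, n_disks=N_DISKS):
    """Disk holding the parity block for a given stripe (rotating layout)."""
    return (n_disks - 1 - stripe_number) % n_disks

for stripe in range(6):
    print("stripe", stripe, "-> parity on disk", parity_disk(stripe))
# parity moves: disk 3, 2, 1, 0, 3, 2, ...
```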
4.7.3.7. RAID-6 (Block-Level Striping with Two Parity Blocks)
● RAID-6 helps when there is more than one disk failure. A pair of
independent parities is generated and stored on multiple disks at this
level. Ideally, at least four disk drives are needed for this level.
● There are also hybrid RAIDs, which make use of more than one RAID
level nested one after the other, to fulfill specific requirements.

Advantages:
1. Very high data Accessibility.
2. Fast read data transactions.
Disadvantages:
1. Due to double parity, it has slow write data transactions.
2. Extra space is required.

4.7.4. Advantages of RAID


● Data redundancy: By keeping numerous copies of the data on many
disks, RAID can shield data from disk failures.
● Performance enhancement: RAID can enhance performance by
distributing data over several drives, enabling the simultaneous
execution of several read/write operations.
● Scalability: RAID is scalable, therefore by adding more disks to the
array, the storage capacity may be expanded.
● Versatility: RAID is applicable to a wide range of devices, such as
workstations, servers, and personal PCs
4.7.5. Disadvantages of RAID
● Cost: RAID implementation can be costly, particularly for arrays with
large capacities.
● Complexity: The setup and management of RAID might be challenging.
● Decreased performance: The parity calculations necessary for some
RAID configurations, including RAID 5 and RAID 6, may result in a
decrease in speed.
● Single point of failure: Although RAID offers data redundancy, it is not a
comprehensive backup solution. The array's whole contents could be lost
if the RAID controller malfunctions.
UNIT V VIRTUALIZATION TOOLS

VMWare-Amazon AWS-Microsoft HyperV- Oracle VM Virtual Box - IBM PowerVM-


Google Virtualization- Case study
Virtualization Tools

Virtualization tools are software solutions that enable the creation, management, and
utilization of virtualized environments, allowing multiple operating systems or applications to
run on a single physical server or machine. These tools provide various functionalities,
including virtual machine (VM) creation, provisioning, monitoring, and performance
management. Some popular virtualization tools include:

 VMware vSphere: VMware vSphere is a leading virtualization platform that provides a suite
of tools for creating and managing virtualized environments. It includes features such as
vCenter Server for centralized management, VMware ESXi hypervisor for virtualization,
vMotion for live migration of VMs, and High Availability (HA) for ensuring VM uptime.

 Microsoft Hyper-V: Hyper-V is a virtualization platform developed by Microsoft for


Windows-based systems. It allows users to create and manage virtual machines on Windows
Server operating systems. Hyper-V provides features such as live migration, replication, and
integration with other Microsoft technologies.

 Oracle VM VirtualBox: VirtualBox is an open-source virtualization tool developed by


Oracle. It allows users to create and run VMs on various host operating systems, including
Windows, macOS, Linux, and Oracle Solaris. VirtualBox supports features such as
snapshotting, cloning, and remote display.

 KVM (Kernel-based Virtual Machine): KVM is a virtualization solution for Linux- based
systems that leverages the Linux kernel to provide virtualization capabilities. It is integrated
into the Linux kernel and allows users to create and manage VMs on Linux servers. KVM
supports features such as live migration, resource allocation, and security isolation.

 Xen Project: Xen is an open-source hypervisor that provides virtualization capabilities for
both desktop and server environments. It allows users to create and manage VMs on various
operating systems, including Linux and Windows. Xen supports features such as
paravirtualization, live migration, and memory overcommitment.

 Proxmox Virtual Environment: Proxmox VE is an open-source virtualization platform that


combines the KVM hypervisor and LXC containers to provide a comprehensive
virtualization solution. It includes features such as high availability, backup and restore, and
web-based management interface.
5.1. VMWare

 VMware is a company that provides cloud computing and virtualization software and
services. They are a pioneer in virtualization technology, which allows you to run multiple
virtual machines (VMs) on a single physical server.
 VMware offers a suite of virtualization products including VMware vSphere for
server virtualization, VMware Workstation for desktop virtualization, and VMware Fusion for
Mac desktops.
 It's known for its robust features, stability, and wide adoption in enterprise environments.

 The VMware cloud takes advantage of this transition from one virtualization era to the other
with its products and services.

 These VMware resources may be split over several virtual servers that act much like a single
physical machine in the appropriate configurations – for example, storing data, developing
and distributing programs, maintaining a workspace, and much more.

5.1.1. VMware Key Features

1. Easy Installation: Installs like an application, with simple, wizard-driven installation and
virtual machine creation process
2. Seamless migration to vSphere: Protect your investment and use the free web-based service
VMware Go to seamlessly migrate your virtual machines to VMware vSphere.
3. Hardware Support: Runs on any standard x86 hardware, including Intel and AMD
hardware virtualization assisted systems. Also supports two-processor Virtual SMP, enabling
a single virtual machine to span multiple physical processors
4. Operating system support: The broadest operating system support of any host-based
virtualization platform currently available, including support for Windows Server 2008,
Windows Vista Business Edition and Ultimate Edition (guest only), Red Hat Enterprise
Linux 5 and Ubuntu 8.04.
5. 64-bit operating system support: Use 64-bit guest operating systems on 64-bit hardware to
enable more scalable and higher performing computing solutions. In addition, Server 2 runs
natively on 64-bit Linux host operating systems.
6. VMware Infrastructure (VI) Web Access management interface: VI Web Access
management interface provides a simple, flexible, secure, intuitive and productive
management experience. Plus, access thousands of pre-built, pre-configured, ready-to-run
enterprise applications packaged with an operating system inside a virtual machine at the
Virtual Appliance Marketplace.
7. Independent virtual machine console: With the VMware Remote Console, you can access
your virtual machine consoles independent of the VI Web Access management interface.
8. More scalable virtual machines: Support for up to 8 GB of RAM and up to 10 virtual
network interface cards per virtual machine, transfer data at faster data rates from USB2.0
devices plus add new SCSI hard disks and controllers to a running virtual machine.
9. Volume Shadow Copy Service (VSS): Properly backup the state of the Windows virtual
machines when using the snapshot feature to maintain data integrity of the applications
running inside the virtual machine.
10. Support for Virtual Machine Interface (VMI): This feature enables transparent
paravirtualization, in which a single binary version of the operating system can run either on
native hardware or in paravirtualized mode to improve performance in specific Linux
environments.
11. Virtual Machine Communication Interface (VMCI): Support for fast and efficient
communication between a virtual machine and the host operating system and between two or
more virtual machines on the same host.

5.1.2. VMware Infrastructure Architecture


VMware Infrastructure is a full infrastructure virtualization suite that provides
comprehensive virtualization, management, resource optimization, application availability,
and operational automation capabilities in an integrated offering.
VMware Infrastructure virtualizes and aggregates the underlying physical hardware
resources across multiple systems and provides pools of virtual resources to the datacenter in

the virtual environment.
In addition, VMware Infrastructure brings about a set of distributed services that
enables fine‐grain, policy‐driven resource allocation, high availability, and consolidated
backup of the entire virtual datacenter. These distributed services enable an IT organization to
establish and meet their production Service Level Agreements with their customers in a cost-
effective manner.
The relationships among the various components of the VMware Infrastructure are
shown in Figure. 5.1.

Figure. 5.1. VMware Infrastructure


5.1.2.1. Components of VMWare

VMware Infrastructure includes the following components shown in Figure 5.1:


 VMware ESX Server – A robust, production‐proven virtualization layer run on
physical servers that abstracts processor, memory, storage, and networking resources into
multiple virtual machines. Two versions of ESX Server are available:
o ESX Server 3 contains a built‐in service console. It is available as an
installable CD‐ROM boot image.
o ESX Server 3i does not contain a service console. It is available in two
forms, ESX Server 3i Embedded and ESX Server 3i Installable.
o ESX Server 3i Embedded is firmware that is built into a server’s
physical hardware. ESX Server 3i Installable is software that is available as an installable
CD‐ROM boot image. You install the ESX Server 3i Installable software onto a server’s hard
drive.
 VirtualCenter Server – The central point for configuring, provisioning, and
managing virtualized IT environments.
 VMware Infrastructure Client (VI Client) – An interface that allows users to
connect remotely to the VirtualCenter Server or individual ESX Servers from any Windows
PC.
 VMware Infrastructure Web Access (VI Web Access) – A Web interface
that allows virtual machine management and access to remote consoles.
 VMware Virtual Machine File System (VMFS) – A high‐performance cluster
file system for ESX Server virtual machines.
 VMware Virtual Symmetric Multi‐Processing (SMP) – Feature that enables
a single virtual machine to use multiple physical processors simultaneously.
 VMware VMotion™ and VMware Storage VMotion – VMware VMotion
enables the live migration of running virtual machines from one physical server to another
with zero down time, continuous service availability, and complete transaction integrity.
VMware Storage VMotion enables the migration of virtual machine files from one datastore

to another without service interruption.
 VMware High Availability (HA) – Feature that provides easy‐to‐use, cost‐
effective high availability for applications running in virtual machines. In the event of
server failure, affected virtual machines are automatically restarted on other production
servers that have spare capacity.
 VMware Distributed Resource Scheduler (DRS) – Feature that allocates and
balances computing capacity dynamically across collections of hardware resources for virtual
machines. This feature includes distributed power management (DPM) capabilities that
enable a datacenter to significantly reduce its power consumption.
 VMware Consolidated Backup (Consolidated Backup) – Feature that
provides an easy‐to‐use, centralized facility for agent‐free backup of virtual machines. It
simplifies backup administration and reduces the load on ESX Servers.
 VMware Infrastructure SDK – Feature that provides a standard interface for
VMware and third‐party solutions to access the VMware Infrastructure.
5.1.3. Advantages of VMWare
1. Cost
2. Redundancy
3. Scalability
4. Flexibility
5. Multiple OS Support

5.1.4. Disadvantages of VMWare:


1. Performance
2. User Friendliness
3. Reliability
4. Hardware Compatibility
5. Troubleshooting

5.2. Amazon AWS

 AWS (Amazon Web Services) is a comprehensive, evolving cloud computing


platform provided by Amazon.

 It includes a mixture of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS)


and packaged software-as-a-service (SaaS) offerings.

 AWS offers tools such as compute power, database storage and content delivery services.

 With more than 200 services, AWS provides a range of offerings for individuals, as well as
public and private sector organizations to create applications and information services of all
kinds.

 The top 5 services provided by Amazon Web Services are:

 Amazon Elastic Cloud Compute (EC2)


 Amazon Simple Storage Service (S3)
 Amazon Virtual Private Cloud (VPC)
 Amazon CloudFront
 Amazon Relational Database Services (RDS)

5.2.1. Amazon AWS Key features:

Amazon Web Services (AWS) offers a wide range of features to help developers
build, deploy, and scale applications in the cloud. Some of the key features of
AWS include:

● Pay-as-you-go pricing: AWS charges customers based on their actual usage of
resources, rather than upfront costs or long-term contracts, making it a cost-effective
solution for businesses of all sizes.
● On-demand provisioning: AWS allows customers to quickly scale up or down as
needed, without having to make a long-term
commitment.
● Global infrastructure: AWS has a global network of data centers and edge locations
that provide low-latency access to resources around the
world.
● Wide range of services: AWS offers over 200 different services across various
categories, including computing, storage, database, networking, analytics, machine learning,
security, and more.
● Scalability: AWS allows developers to scale their applications up or down based on
demand, without the need to provision or manage physical resources.
● Security: AWS provides a number of security features and compliance programs to help
protect customer data and ensure compliance with various
regulations.
● Management tools: AWS provides a range of tools and features to help developers
build, deploy, and manage their applications in the cloud, including the AWS Management
Console, the AWS Command Line Interface (CLI), and the
AWS SDKs.
● Integration with other services: AWS integrates with a wide range of other services and
technologies, including popular third-party tools and on-premises
resources.
● Community and support: AWS has a large community of developers and users, and

offers a variety of support options, including documentation, forums, and customer


support.

5.2.2. AWS Architecture

Figure. 5.2. AWS Architecture

 The above diagram is a simple AWS architecture diagram that shows the basic structure
of Amazon Web Services architecture.

 It shows the basic AWS services, such as Route 53 and Elastic Load Balancing.

 By using S3 (Simple Storage Service), companies can easily store and retrieve data of
various types using Application Programming Interface (API) calls (see the short
example after this list).

 AWS comes with many handy options, such as configuration servers, individual server
mapping, and flexible pricing.
 As we can see in the AWS architecture diagram that a custom virtual private cloud is created
to secure the web application, and resources are spread across availability zones to provide
redundancy during maintenance.

 Web servers are deployed on AWS EC2 instances.

 External traffic to the servers is balanced by Elastic Load Balancer.

 We can add or remove instances and scale up or down on the basis of dynamic scaling
policies.

 Amazon CloudFront distribution helps us minimize latency. It also maintains the edge
locations across the globe—an edge location is a cache for web and streaming content.

 Route 53 domain name service, on the other hand, is used for the registration and
management of our Internet domain.
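
As referenced in the S3 point above, here is a minimal sketch using the boto3 SDK for
Python; the bucket and key names are placeholders, and valid AWS credentials plus an
existing bucket are assumed.

```python
import boto3

# Create an S3 client; credentials are taken from the environment or AWS config.
s3 = boto3.client("s3")

# Store an object (the bucket and key names here are placeholders).
s3.put_object(Bucket="example-bucket", Key="reports/demo.txt",
              Body=b"hello from AWS S3")

# Retrieve the same object and read its contents.
response = s3.get_object(Bucket="example-bucket", Key="reports/demo.txt")
print(response["Body"].read().decode())
```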

5.2.2.1. Components of AWS

1. Compute Services:

=>Amazon Elastic Compute Cloud (EC2): Virtual servers in the cloud, offering scalable
compute capacity for running applications, hosting websites, and processing data.

=> AWS Lambda: Serverless computing service that allows you to run code in response to
events without provisioning or managing servers.

2. Storage Services:

=>Amazon Simple Storage Service (S3): Scalable object storage for storing and retrieving
data, with high durability, availability, and security features.

=> Amazon Elastic Block Store (EBS): Block storage volumes for EC2 instances,
providing persistent storage that can be attached to instances.

3. Database Services:

=> Amazon Relational Database Service (RDS): Managed relational database service
supporting multiple database engines such as MySQL, PostgreSQL, Oracle, SQL Server, and
MariaDB.

=>Amazon DynamoDB: Fully managed NoSQL database service offering seamless


scalability, high performance, and low latency for applications requiring flexible data models.

4. Networking Services:

=> Amazon Virtual Private Cloud (VPC): Isolated virtual networks in the AWS cloud,
allowing you to define and control network settings, subnets, and access controls.
=> Amazon Route 53: Scalable domain name system (DNS) web service for routing traffic
to resources, such as EC2 instances, S3 buckets, and load balancers.

5. Security and Identity Services:

=>AWS Identity and Access Management (IAM): Identity management service for
securely controlling access to AWS resources, allowing you to create and manage users,
groups, and permissions.

=>Amazon GuardDuty: Managed threat detection service that continuously monitors for
malicious activity and unauthorized behavior in your AWS accounts.

6. Machine Learning and Artificial Intelligence Services:

=>Amazon SageMaker: Fully managed service for building, training, and deploying
machine learning models at scale.

=>Amazon Rekognition: Deep learning-based image and video analysis service for
identifying objects, scenes, and faces in images and videos.

7. Developer Tools:

=>AWS CodePipeline: Continuous integration and continuous delivery (CI/CD) service


for automating the build, test, and deployment of applications.

=>AWS CloudFormation: Infrastructure as code service for provisioning and managing


AWS resources using declarative templates.

8. Analytics Services:

=>Amazon Redshift: Fully managed data warehouse service for analyzing large datasets
using SQL queries.

=>Amazon Athena: Interactive query service that allows you to analyze data in Amazon
S3 using standard SQL syntax.

9. Internet of Things (IoT) Services:

=>AWS IoT Core: Managed cloud service for securely connecting and managing IoT
devices, collecting and processing data, and implementing IoT applications.

10. Containers and Kubernetes Services:

=> Amazon Elastic Container Service (ECS): Fully managed container orchestration
service for running and scaling containerized applications.

=> Amazon Elastic Kubernetes Service (EKS): Managed Kubernetes service for
deploying, managing, and scaling containerized applications using Kubernetes.
5.2.3. Advantages of AWS

 Scalability Simplified: Easily adjust resources, handling traffic


spikes effortlessly, and eliminating the need for heavy upfront investments.
 Budget-Friendly Flexibility: The pay-as-you-go model ensures cost-efficiency, allowing
you to pay only for what you use.
 Global Reach, Local Speed: AWS’s global network ensures your services reach customers
worldwide, providing a faster and more reliable experience.
 Top-Notch Security: Robust security measures, including regular audits and strong
encryption, safeguard your data like a stronghold.
 Continuous Innovation: Regular updates introduce new features, keeping you at the
forefront of technological advancements.

5.2.4. Disadvantages of AWS

 Cost Complexity: Consistent monitoring is essential, since cost tracking can become
complex across many services with different pricing structures.
 Learning Curve: The breadth of functionality involves a learning curve, so invest in
training and documentation for smoother adoption.
 Dependency Risks: Relying only on AWS infrastructure creates lock-in risk, so make a
plan to reduce the risk of dependence.
 Not Always Small-Business Friendly: For simpler needs, AWS might be more complex
and costly than necessary, so assess the alignment with your project scale.
 Rare Outages: While not common, AWS can experience occasional interruptions. Reduce
potential impacts on essential operations through the use of redundancy and backup
protocols.

5.3. Microsoft HyperV

 Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software
version of a computer, called a virtual machine. Each virtual machine acts like a complete
computer, running an operating system and programs.
 When you need computing resources, virtual machines give you more flexibility, help save
time and money, and are a more efficient way to use hardware than just running one
operating system on physical hardware.
 Hyper-V runs each virtual machine in its own isolated space, which means you can run more
than one virtual machine on the same hardware at the same time.
5.3.1. Features of Microsoft HyperV:

Hyper-V offers many features. This is an overview, grouped by what the features
provide.

1. Computing environment - A Hyper-V virtual machine includes the same basic parts as a
physical computer, such as memory, processor, storage, and networking. All these parts have
features and options that you can configure different ways to meet different needs. Storage
and networking can each be considered categories of their own, because of the many ways
you can configure them.
2. Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of
virtual machines, intended to be stored in another physical location, so you can restore the
virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states
and the other uses Volume Shadow Copy Service (VSS) so you can make application-
consistent backups for programs that support VSS.
3. Optimization - Each supported guest operating system has a customized set of services and
drivers, called integration services, that make it easier to use the operating system in a Hyper-
V virtual machine.
4. Portability - Features such as live migration, storage migration, and import/export make it
easier to move or distribute a virtual machine.
5. Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection
tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you
console access, so you can see what's happening in the guest even when the operating system
isn't booted yet.
6. Security - Secure boot and shielded virtual machines help protect against malware and other
unauthorized access to a virtual machine and its data.

5.3.2. Microsoft Hyper-V Architecture

 Hyper-V features a Type 1 hypervisor-based architecture. The hypervisor virtualizes


processors and memory.
 Hyper-V implements isolation of virtual machines through the concept of partitions, which
are logical units supported by the hypervisor. Each partition executes a guest operating
system, with at least one parent partition running a supported version of Windows. The parent
partition creates child partitions to host the guest OSs.

Figure. 5.3. Microsoft Hyper-V Architecture

 In this setup, the Virtualization Service Provider and Virtual Machine Management Service
operate in the parent partition to assist child partitions. Child partitions lack direct access to
the physical processor and handle no real interrupts. Instead, they operate in a virtualized
processor environment and utilize Guest Virtual Address.
 Hyper-V configures the processor exposure to each partition and manages interrupts via a
Synthetic Interrupt Controller (SynIC). Hardware acceleration, like EPT on Intel or RVI on
AMD, assists in address translation for virtual address-spaces.
 Child partitions access hardware resources virtually, with requests redirected through the
VMBus to the parent partition's devices. The VMBus enables inter-partition communication
transparently to the guest OS.
 Parent partitions host a Virtualization Service Provider (VSP) connected to the VMBus,
handling device access requests from child partitions. Internally, child partition virtual
devices employ a Virtualization Service Client (VSC) to interact with VSPs via the VMBus.
 For efficient communication, virtual devices can leverage Enlightened I/O, a Windows
Server Virtualization feature. Enlightened I/O allows direct utilization of VMBus for
communication, bypassing emulation layers, but requires guest OS support.
5.3.2.1. Components of Microsoft Hyper-V

Hyper-V, Microsoft's hypervisor-based virtualization platform, consists of several key components that enable the creation, management, and operation of virtual machines (VMs). Here's a summary of these components:

 Parent-Child Partition: Hyper-V must have at least one host or parent partition, which runs
the virtualization stack and has direct access to the hardware. Guest VMs, or child partitions,
are created within the parent partition. The hypervisor manages interrupts to the processor
and establishes trust relationships between guest VMs, the parent partition, and the
hypervisor.
 VM Bus: The VM Bus is a communication protocol that facilitates inter-partition
communication between the Hyper-V host and guest VMs. It assists in machine enumeration
and avoids additional layers of communication.
 VSP - VSC: Virtual Service Provider (VSP) and Virtual Service Client (VSC) are critical
components that enable communication between the Hyper-V server and guest VMs. VSPs
run in the parent partition, while corresponding VSCs run in the child partitions. They
communicate via the VM Bus, with VSPs handling various requests from multiple VSCs
simultaneously.
 VM Management Service: The Virtual Machine Management Service (VMMS), also
known as vmms.exe, is a core component of Hyper-V that manages every aspect of the
virtualization environment. It runs under the system account and must be operational for
controlling, creating, or deleting virtual machines.
 VM Worker Process: Each virtual machine running on Hyper-V has its own VM Worker
Process (vmwp.exe), created by the Virtual Machine Management Service. This process
manages the VM's operation, including resource allocation and execution.
These components work together to provide a robust virtualization environment, enabling
organizations to create and manage virtualized infrastructure efficiently.
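
To make the parent/child-partition model concrete, the following sketch creates and starts a child partition (VM) by driving the built-in Hyper-V PowerShell cmdlets from Python. This is a minimal illustration, assuming a Windows host with the Hyper-V role enabled and an elevated (administrator) session; the VM name, memory size, and VHD path are placeholders.

# Sketch: create and start a Hyper-V child partition (VM) by calling the
# Hyper-V PowerShell cmdlets from Python. Assumes a Windows host with the
# Hyper-V role enabled and an elevated session; names, sizes, and paths
# below are placeholders.
import subprocess

def powershell(command: str) -> str:
    """Run a PowerShell command and return its standard output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

vm_name = "demo-vm"

# Create a Generation 2 VM with 2 GB of startup memory and a new 20 GB VHDX.
powershell(
    f'New-VM -Name "{vm_name}" -Generation 2 -MemoryStartupBytes 2GB '
    f'-NewVHDPath "C:\\VMs\\{vm_name}.vhdx" -NewVHDSizeBytes 20GB'
)

# Give the VM two virtual processors, then start it (this creates the child partition).
powershell(f'Set-VMProcessor -VMName "{vm_name}" -Count 2')
powershell(f'Start-VM -Name "{vm_name}"')

print(powershell(f'Get-VM -Name "{vm_name}" | Format-List State, CPUUsage, MemoryAssigned'))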

5.3.3. Advantages of Microsoft Hyper-V


1. Integration with Windows Ecosystem: Hyper-V is tightly integrated with the
Windows Server operating system, providing seamless management and interoperability with
other Microsoft products and services.

2. Cost-Effective: Hyper-V is included as a feature in Windows Server editions,
making it a cost-effective virtualization solution for organizations already invested in the
Microsoft ecosystem.

3. Scalability: Hyper-V supports large-scale virtualization deployments and can scale to hundreds of VMs on a single host server, providing flexibility for organizations of all sizes.

4. Enterprise-Class Features: Hyper-V offers enterprise-class features such as live migration, high availability, and disaster recovery, making it suitable for mission-critical workloads and business continuity requirements.

5. Integration with System Center: Hyper-V integrates with the Microsoft System Center suite for comprehensive management of virtualized infrastructure, including monitoring, automation, and orchestration capabilities.

5.3.4. Disadvantages:

1. Limited Platform Support: Hyper-V primarily runs on Windows Server and has limited support for non-Windows operating systems compared to other hypervisors like VMware vSphere.

2. Complexity: While Hyper-V has improved over the years, some users may find it more complex to configure and manage compared to other hypervisor solutions.

3. Hardware Compatibility: Hyper-V may have stricter hardware compatibility requirements compared to other hypervisors, which could limit deployment options for certain hardware configurations.

4. Third-Party Ecosystem: The third-party ecosystem around Hyper-V, including management tools and integrations, may not be as extensive or mature as that of competitors like VMware.

5. Performance Overhead: While Hyper-V has improved performance in recent versions, some users may still experience higher overhead compared to bare-metal performance, especially in I/O-intensive workloads.

5.4. Oracle VM Virtual Box

 Oracle VM VirtualBox, the world's most popular open source, cross-platform virtualization software, enables developers to deliver code faster by running multiple operating systems on a single device.
 Oracle VM VirtualBox is a hosted hypervisor for x86 virtualization developed by Oracle
Corporation.
 IT teams and solution providers use VirtualBox to reduce operational costs and shorten the
time needed to securely deploy applications on-premises and to the cloud.
 With lightweight and easy-to-use software, Oracle VM VirtualBox makes it easier for
organizations to develop, test, demo, and deploy new solutions across multiple platforms
from a single device.

5.4.1. Key Features of Oracle VM Virtual Box

Oracle VM VirtualBox offers a wide range of features, including:

 Portability: Compatible with numerous 64-bit host OSes, allowing for easy VM migration
across different platforms.
 Hosted Hypervisor: Functions as a type 2 hypervisor, running alongside existing
applications on the host system.
 Compatibility: Supports identical functionality across host platforms, facilitating seamless
VM transfer between different host OSes.
 No Hardware Virtualization Required: Can run on older hardware without requiring
specific processor features like Intel VT-x or AMD-V.
 Guest Additions: Enhances guest performance and integration with features like shared
folders, seamless windows, and 3D virtualization.
 Hardware Support: Offers extensive support for guest multiprocessing, USB devices,
virtual devices (IDE, SCSI, SATA, network cards, sound cards, etc.), ACPI, multiscreen
resolutions, iSCSI, and PXE network boot.
 Multigeneration Snapshots: Allows saving and managing snapshots of VM states,
facilitating easy rollback and configuration management.
 VM Groups: Provides features for organizing and controlling VMs collectively or
individually, including nested group hierarchies.
 Modular Architecture: Features a clean design with well-defined interfaces, allowing
control from multiple interfaces simultaneously.
 Software Development Kit (SDK): Offers a comprehensive SDK for exposing and
integrating VirtualBox functionality with other software systems.
 Remote Machine Display: Enables high-performance remote access to running VMs
through the VirtualBox Remote Desktop Extension (VRDE).
 Extensible RDP Authentication: Supports various authentication methods for RDP (Remote
Desktop Protocol) connections, with an SDK for creating custom authentication interfaces.
 USB over RDP: Allows connecting USB devices locally to a VM running remotely on a
VirtualBox RDP server.
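
Most of the features listed above (headless operation, snapshots, VM configuration) can be scripted through VirtualBox's VBoxManage command-line front end. The sketch below is a minimal example, assuming VirtualBox is installed and VBoxManage is on the PATH; the VM name, OS type, and resource sizes are placeholders.

# Sketch: create, snapshot, and start a VirtualBox VM through the VBoxManage CLI.
# Assumes VirtualBox is installed and `VBoxManage` is on the PATH; the VM name,
# OS type, and resource sizes are illustrative placeholders.
import subprocess

def vboxmanage(*args: str) -> None:
    subprocess.run(["VBoxManage", *args], check=True)

vm = "demo-vbox"

# Register a new 64-bit Linux VM and give it 2 GB of RAM and 2 vCPUs.
vboxmanage("createvm", "--name", vm, "--ostype", "Ubuntu_64", "--register")
vboxmanage("modifyvm", vm, "--memory", "2048", "--cpus", "2")

# Take a named snapshot of the current configuration,
# then boot the VM without a GUI window (headless).
vboxmanage("snapshot", vm, "take", "baseline")
vboxmanage("startvm", vm, "--type", "headless")

The same pattern extends to the other features discussed here, for example restoring the snapshot later with "VBoxManage snapshot <vm> restore baseline".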
5.4.2. Architecture of Oracle VM VirtualBox
 Oracle VM is a platform that provides a fully equipped environment with all the latest
benefits of virtualization technology.
 Oracle VM enables you to deploy operating systems and application software within a
supported virtualization environment.
 Oracle VM insulates users and administrators from the underlying virtualization technology
and allows daily operations to be conducted using goal-oriented GUI interfaces.

Figure.5.5. Oracle VM VirtualBox Architecture

5.4.2.1. Components of Oracle VM VirtualBox

The components of Oracle VM are shown in Figure 5.5, "Oracle VM Architecture".

1. Client Applications:

 Various user interfaces to Oracle VM Manager are provided, either via the graphical user
interface (GUI) accessible using a web-browser; the command line interface (CLI) accessible
using an SSH client; custom built applications or scripts that use the Web Services API (WS-
API); or external applications, such as Oracle Enterprise Manager, or legacy utility scripts
that may still make use of the legacy API over TCPS on port 54322.
 The legacy API is due to be deprecated in the near future and applications that are using it
must be updated to use the new Web Services API instead. All communications with Oracle
VM Manager are secured using either a key or certificate-based technology.

2. Oracle VM Manager:
Oracle VM Manager serves as a comprehensive platform for managing Oracle VM
Servers, virtual machines, and associated resources. Key points include:

 Management Interfaces: It offers both a web browser-based user interface and a command
line interface (CLI) for managing infrastructure directly. These interfaces run as separate
applications to the Oracle VM Manager core and interact via the Web Services API.
 Core Architecture: The Oracle VM Manager core is an Oracle WebLogic Server application
running on Oracle Linux. The user interface is built on the Application Development
Framework (ADF), ensuring a consistent experience with other Oracle web-based
applications.
 GUI and CLI Functionality: While both interfaces utilize the Web Services API to interact
with the Oracle VM Manager core, the GUI can directly access the Oracle VM Manager
Database for read-only operations, enhancing performance and providing advanced filtering
options.
 Communication with VM Servers: Oracle VM Manager communicates with Oracle VM
Servers via the Oracle VM Agent, using XML-RPC over HTTPS on port 8899. This enables
seamless interaction, including triggering actions and receiving notifications, while ensuring
security through HTTPS.
 High Availability: Despite its critical role in configuring the Oracle VM infrastructure, the
virtualized environment can continue to operate effectively even during Oracle VM Manager
downtime. This ensures the maintenance of high availability and the ability to perform live
migration of virtual machines.

3. Oracle VM Manager Database:

 Used by Oracle VM Manager core to store and track configuration, status changes and
events. Oracle VM Manager uses a MySQL Enterprise database that is bundled in the
installer and which runs on the same host where Oracle VM Manager is installed.
 The database is configured for the exclusive use of Oracle VM Manager and must not be used
by any other applications.
 The database is automatically backed up on a regular schedule, and facilities are provided to perform manual backups as well.

4. Oracle VM Server:
Oracle VM Server provides a lightweight, secure server platform that runs virtual machines, also known as domains. Key points include:

 Installation and Components: Installed on bare metal computers, it includes the Oracle VM
Agent for communication with Oracle VM Manager. It operates with dom0 (domain zero) as
the management domain and domU as the unprivileged domain for VMs.
 Architecture: On x86-based systems, it utilizes Xen hypervisor technology and a Linux
kernel running as dom0. VMs can run various operating systems, including Linux, Oracle
Solaris, or Microsoft Windows™. For SPARC systems, it leverages the built-in hypervisor
and Oracle Solaris as the primary domain.
 Clustering and Server Pools: Multiple Oracle VM Servers are clustered to form server
pools, facilitating load balancing and failover. VMs within a pool can be migrated between
servers, and server pools provide logical separation of resources.
 Database and High Availability: Each Oracle VM Server maintains its Berkeley Database
for local configuration and runtime information. Even if Oracle VM Manager is unavailable,
servers can function normally. Clustered servers share a cluster database, ensuring continued
functionality like High Availability, even without Oracle VM Manager.

5. External Shared Storage: Provides storage for a variety of purposes and is required to
enable high-availability options afforded through clustering. Storage discovery and
management is achieved using Oracle VM Manager, which interacts with Oracle VM Servers via the Storage Connect framework to access the storage components.
Oracle VM provides support for a variety of external storage types including NFS, iSCSI and
Fibre Channel.

5.4.3. Advantages of Oracle VM VirtualBox

1. Free and Open Source: VirtualBox is available for free under the GNU General
Public License (GPL), making it accessible to users and organizations without licensing
costs.

2. Cross-Platform Compatibility: Its support for multiple host and guest operating
systems makes it suitable for a wide range of use cases and environments.

3. Ease of Use: VirtualBox features an intuitive graphical user interface (GUI) and
comprehensive documentation, making it easy for users to create, configure, and manage
virtual machines.

4. Community Support: Being an open-source project, VirtualBox benefits from a large and active community of users and developers who contribute to its development, provide support, and share knowledge.

5. Performance: VirtualBox offers good performance and resource utilization, especially for desktop virtualization and development environments.

5.4.4. Disadvantages of Oracle VM VirtualBox:

1. Performance Overhead: While VirtualBox provides decent performance, it may
have higher overhead compared to bare-metal performance, especially for resource-intensive
workloads.

2. Limited Enterprise Features: VirtualBox may lack some advanced enterprise features found in commercial virtualization solutions, such as live migration, advanced networking, and centralized management.

3. Less Integration with Cloud Services: Unlike some other virtualization platforms,
VirtualBox may offer limited integration with cloud services and infrastructure, making it
less suitable for cloud-based deployments.
4. Occasional Stability Issues: Some users may encounter stability issues or
compatibility issues, especially when running on certain host hardware configurations or with
specific guest OS versions.

5. Updates and Maintenance: While VirtualBox receives regular updates and maintenance releases, the development pace may be slower compared to commercial virtualization solutions, potentially leading to delays in addressing issues or adding new features.

5.5. IBM PowerVM

 PowerVM is an enterprise-class virtualization solution that provides secure, flexible, and scalable virtualization for Power servers.
 PowerVM enables logical partitions (LPARs) and server consolidation.
 Clients can run AIX, IBM i, and Linux operating systems on Power servers with world-class reliability, high availability (HA), and serviceability capabilities, together with the leading performance of the Power platform.
 This solution provides workload consolidation that helps clients control costs and improves overall performance, availability, flexibility, and energy efficiency.
 Power servers, combined with PowerVM technology, help consolidate and simplify your IT environment.

5.5.1. IBM PowerVM Features

PowerVM, IBM's virtualization solution for Power Systems servers, offers several key
features:
 Hardware Virtualization: PowerVM provides hardware-level virtualization, allowing
multiple logical partitions (LPARs) to run on a single physical server.
 Dynamic Resource Allocation: It enables dynamic allocation of CPU, memory, and I/O resources to virtual machines, allowing for efficient resource utilization and performance optimization.
 Live Partition Mobility (LPM): PowerVM supports LPM, allowing users to move running
virtual machines between physical servers without disrupting service, enhancing workload
flexibility and resiliency.
 Shared Processor Pools: PowerVM allows users to create shared processor pools, enabling dynamic resource allocation and workload balancing across multiple LPARs.
 Micro-Partitioning: This feature enables fine-grained CPU allocation, allowing users to
allocate fractions of a CPU to virtual machines, optimizing resource utilization and reducing
costs.
 Virtual I/O Server (VIOS): PowerVM includes VIOS, which acts as a virtualization layer
for I/O devices, providing efficient and scalable I/O virtualization for virtual machines.
 Virtual Networking: PowerVM offers virtual networking capabilities, allowing users to
create virtual networks and connect virtual machines to them, providing flexibility and
isolation for network traffic.
 Security and Isolation: PowerVM provides robust security and isolation mechanisms,
ensuring that virtual machines remain isolated from each other and from the underlying
hardware.
 Advanced Management Tools: PowerVM includes management tools such as IBM Systems
Director and HMC (Hardware Management Console), which provide comprehensive
management capabilities for virtualized environments.
 Integration with IBM Ecosystem: PowerVM integrates with other IBM solutions and
ecosystem products, such as IBM Cloud PowerVC Manager, to provide enhanced
management and automation capabilities for virtualized environments on Power Systems
servers.
5.5.2. Architecture of IBM PowerVM

 Virtual SCSI (VSCSI), part of VIOS, enables the sharing of physical storage adapters (SCSI
and Fibre Channel) and storage devices (disk and optical) between logical partitions.
 Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the
physical resources and acts as server or, in SCSI terms, target device. The logical partitions
access the virtual SCSI resources provided by the Virtual I/O Server as clients.
 VIOS virtual SCSI features include:

 Support for adapter and device sharing.


 Client boot from VSCSI devices.
 AIX multipath I/O support for VSCSI devices.
 Support for these SCSI peripheral device types:
o Disk backed by logical volume.
o Disk backed by physical volume.
o Optical (DVD-RAM and DVD-ROM).

Figure. 5.5. Virtual SCSI architecture in IBM PowerVM Technologies
5.5.2.1. Components of IBM PowerVM
Below are some of the components of PowerVM.
 PowerVM Hypervisor (PHYP): This functionality is made available by the hardware
platform in combination with system firmware for the POWER server. The hypervisor is
ultimately the basis for any virtualization on a POWER system.

 Logical Partition (LPAR): LPARs are provided through the hypervisor. Originally, only
dedicated hardware components and complete processors could be allocated to an LPAR;
only the memory was shared. In the course of the Power Systems generations, the
possibilities have been expanded further and further (micro-partition, dynamic logical
partition), although the term LPAR has been retained.

 Micro Partition: The micro partition allows a processor to be shared between different
partitions. The micro partitions are assigned parts of a processor, which is also referred to as
shared processor partitions.

 Dynamic Logical Partition (DLPAR): Virtual resources (CPU, memory, physical adapters
and virtual adapters) can be added to or removed from the partition at runtime (provided that
the operating system supports it). This means that resources can be dynamically adapted to
the needs of a partition.

 Shared Processor Pools (SPP): Partitions can be assigned to shared processor pools, so that
the consumption of processor resources by partitions can be limited to the resources available
in the pool.

 Virtual I/O Server (VIOS): This is a special service partition with an AIX-based, specially
extended operating system for supporting a range of virtualization functions.
Network adapters (Virtual Ethernet) and I/O adapters (Virtual SCSI and Virtual FC) can be
virtualized via virtual I/O servers.

 Virtual Ethernet (VETH): Client partitions can communicate in the network with the help
of virtual Ethernet adapters without having their own physical Ethernet adapters.

 Virtual SCSI (VSCSI): With the help of the virtual I/O server, client partitions can access
disks via a virtual SCSI adapter without having their own physical I/O adapter. The necessary
physical adapters belong to the virtual I/O servers and can therefore be shared by many
partitions. The disks must be assigned to the virtual SCSI adapters.

 Virtual FC (VFC): In contrast to Virtual SCSI, Virtual FC allows a virtual FC adapter to be assigned directly to a physical FC adapter. Unlike with VSCSI, the individual disks no longer have to be assigned to the virtual adapters, which makes administration much easier.

 Live Partition Mobility (LPM): This feature allows an active partition to be moved online
from one power system to another power system. All applications and the operating system
simply continue to run during the online move. From the point of view of the applications,
the move is transparent.

 Active Memory Expansion (AME): By compressing main memory, additional available main memory can be obtained. The desired compression can be specified. With this, for example, from 32 GB of physical main memory and a compression factor (AME factor) of 1.5, 48 GB of main memory can be obtained for one partition. The operating system and all applications see 48 GB of available main memory.

 Single Root I/O Virtualization (SR-IOV): With this type of virtualization, a virtual I/O
server is no longer required. The virtualization takes place in hardware directly on the
physical adapter. With PowerVM this is currently limited to SR-IOV capable network
adapters. The bandwidth of the SR-IOV Ethernet ports can be divided between the individual
partitions.

 Virtual Network Interface Controller (vNIC): Allows automatic failover to another SR-
IOV Ethernet port if one SR-IOV Ethernet port fails. For this, however, the support of virtual
I/O servers is required again.
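
In practice, many of these objects (LPARs, processor pools, VIOS mappings) are inspected from the Hardware Management Console (HMC) command line. The sketch below lists the LPARs of one managed system over SSH; it is only an illustration, and the HMC host, user, managed-system name, and exact field names are assumptions that depend on the environment and HMC release.

# Sketch: list the logical partitions (LPARs) on a Power server by running the
# HMC CLI command `lssyscfg` over SSH. The HMC address, user name, and managed
# system name are placeholders; exact attribute names can vary by HMC release.
import subprocess

HMC_HOST = "hscroot@hmc.example.com"    # placeholder HMC user/host
MANAGED_SYSTEM = "Power-Server-1"       # placeholder managed-system name

result = subprocess.run(
    [
        "ssh", HMC_HOST,
        f"lssyscfg -r lpar -m {MANAGED_SYSTEM} -F name,lpar_id,state",
    ],
    capture_output=True, text=True, check=True,
)

# Each output line is "name,lpar_id,state" for one partition.
for line in result.stdout.splitlines():
    name, lpar_id, state = line.split(",")[:3]
    print(f"LPAR {lpar_id:>3}  {name:<20} {state}")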

5.5.3. Advantages of IBM PowerVM


1. Performance and Scalability: PowerVM leverages the performance and
scalability of IBM Power Systems servers, providing high levels of performance, throughput,
and scalability for mission-critical workloads.
2. Resource Efficiency: PowerVM offers advanced resource management
capabilities, such as micro-partitioning and shared processor pools, enabling efficient
utilization and optimization of CPU and memory resources.
3. Integration with IBM Ecosystem: PowerVM is tightly integrated with IBM
Power Systems hardware and software ecosystem, including AIX, IBM i, and PowerVM
Editions, providing a comprehensive virtualization solution for IBM environments.
4. Security and Compliance: PowerVM includes features for security and
compliance, such as Trusted Execution Technology (TXT), Secure Boot, and compliance
with industry standards and regulations, making it suitable for secure and regulated
environments.
5. Enterprise-Class Support: PowerVM is backed by IBM's enterprise-class
support and services, providing organizations with access to technical expertise, updates, and
maintenance for their virtualized environments.
5.5.4. Disadvantages IBM PowerVM
1. Cost: PowerVM may involve higher initial acquisition and deployment costs
compared to x86-based virtualization solutions, especially for smaller deployments or
organizations without existing investments in IBM Power Systems infrastructure.
2. Complexity: PowerVM configuration and management may require specialized
skills and expertise, particularly for optimizing performance, configuring advanced features,
and troubleshooting issues in complex environments.
3. Limited Platform Support: PowerVM is specific to IBM Power Systems servers
and may not support as wide a range of operating systems and applications as x86-based
virtualization platforms, limiting its flexibility for heterogeneous environments.
4. Vendor Lock-In: Adopting PowerVM may result in vendor lock-in to IBM's
hardware and software ecosystem, potentially limiting options for migration, interoperability,
and flexibility in the long term.

5.6. Google Virtualization

 Google offers various virtualization solutions as part of its cloud platform to enable users to
create, deploy, and manage virtualized environments and workloads.

 These virtualization solutions include Google Compute Engine (GCE) for virtual machines,
Google Kubernetes Engine (GKE) for container orchestration, Anthos for hybrid and multi-
cloud management, Google Cloud VMware Engine (GCVE) for running VMware workloads,
and more.

5.6.1. Google Virtualization Features


Google offers various virtualization features and services through Google
Cloud Platform (GCP) to facilitate efficient cloud computing. Here are some key features of
Google virtualization:

1. Compute Engine: Google Compute Engine is the Infrastructure as a Service (IaaS) offering from Google Cloud Platform, providing virtual machines (VMs) that run on Google's infrastructure. Users can create and manage VM instances in the cloud with options for customization, scalability, and flexibility.
2. Google Kubernetes Engine (GKE): GKE is a managed Kubernetes service that
simplifies the deployment, management, and scaling of containerized applications using
Kubernetes. It enables users to orchestrate containerized workloads across a cluster of VMs
on Google Cloud Platform.
3. App Engine: Google App Engine is a platform as a service (PaaS) offering that
allows developers to build and deploy scalable web applications and APIs without managing
underlying infrastructure. App Engine abstracts away the complexities of infrastructure
management, allowing developers to focus on application development.
4. Cloud Functions: Google Cloud Functions is a serverless compute service that
allows developers to run event-driven functions in response to cloud events without
provisioning or managing servers. It enables users to build and deploy lightweight, scalable
applications and microservices in a serverless environment.
5. Anthos: Anthos is Google's hybrid and multi-cloud platform that enables
organizations to build, deploy, and manage applications across multiple environments,
including on-premises data centers and other cloud providers, using Kubernetes and related
technologies.
6. Virtual Private Cloud (VPC): Google VPC allows users to create and manage
isolated virtual networks on Google Cloud Platform. It provides control over network
settings, IP addressing, routing, and firewall rules, enabling secure communication between
VMs and services within the cloud environment.
7. Google Cloud VMware Engine: This service provides a fully managed VMware
environment on Google Cloud Platform, allowing customers to migrate and run VMware-
based workloads in the cloud without refactoring or rearchitecting applications.
8. Nested Virtualization: Google Cloud Platform supports nested virtualization,
allowing users to run virtual machines (VMs) within VMs. This feature is useful for various
use cases such as testing, development, and running specific workloads that require
virtualization within the cloud environment.
5.6.2. Google Virtualization Architecture
Google Virtualization Architecture encompasses the underlying framework and
components that enable virtualization services within Google Cloud Platform (GCP).
The following Figure 5.6 illustrates the Google Cloud architecture.

Figure 5.6. Google Cloud Architecture

5.6.2.1 Components of Google Virtualization

Google Virtualization encompasses various components and services within Google Cloud
Platform (GCP) that enable users to create, manage, and deploy virtualized environments.
Some key components of Google Virtualization include:

1. Google Compute Engine (Virtual Machines)


 Google Cloud provides managed virtual machines (VMs). Even though there are many other
options for running compute workloads, including containers, serverless, and App Engine,
VMs are still a popular option.
 Google Cloud provides four machine families:
 General-purpose
 Compute-optimized
 Memory-optimized
 Accelerator optimized
 In addition, Google Cloud is the only public cloud provider that allows users to create their
own custom VMs with the hardware of their choice.

 Google Compute Engine (GCE) supports both Linux and Windows virtual machines. You
can run VMs based on Google-provided machine images or pull images from your existing
infrastructure.
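
As a small illustration of working with Compute Engine programmatically, the sketch below lists the VM instances in one zone using the Google API Python client. It assumes the google-api-python-client library is installed and Application Default Credentials are configured; the project ID and zone are placeholders.

# Sketch: list Compute Engine VM instances in one zone using the Google API
# Python client. Assumes `google-api-python-client` is installed and
# Application Default Credentials are configured; project and zone are
# placeholders.
from googleapiclient import discovery

PROJECT = "my-gcp-project"   # placeholder project ID
ZONE = "us-central1-a"       # placeholder zone

compute = discovery.build("compute", "v1")

request = compute.instances().list(project=PROJECT, zone=ZONE)
while request is not None:
    response = request.execute()
    for instance in response.get("items", []):
        # machineType is returned as a full URL; keep only the final component.
        print(instance["name"], instance["status"], instance["machineType"].split("/")[-1])
    request = compute.instances().list_next(previous_request=request, previous_response=response)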
2. Storage

Google Cloud provides three main services offering different types of storage:
 Persistent disks- provide high-performance block storage and can be attached to VMs as collocated persistent storage.
 File storage- officially known as Google Filestore, providing fully managed file storage with a 99.99% regional availability SLA, backups, snapshots, and the ability to scale to high throughput and IOPS.
 Object storage- officially known as Google Cloud Storage, providing highly durable storage buckets, similar to Amazon S3 storage.
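
Object storage is normally accessed through client libraries rather than mounted like a disk. The sketch below uploads and reads back one object with the google-cloud-storage library; the bucket name, object path, and local file name are placeholders, and credentials are assumed to be configured.

# Sketch: upload and download one object with the Google Cloud Storage client
# library (`google-cloud-storage`). The bucket name and object path are
# placeholders; Application Default Credentials are assumed to be configured.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-demo-bucket")        # placeholder bucket name

# Upload a local file as an object, then read it back as text.
blob = bucket.blob("reports/usage.csv")
blob.upload_from_filename("usage.csv")

print(blob.download_as_text()[:200])            # print the first 200 characters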
3. Database

Google Cloud offers several managed database services both relational and non-relational,
as a platform as a service (PaaS) offering built on its storage services:
 Google Cloud SQL- relational database service compatible with SQL Server, MySQL, and PostgreSQL. Provides automatic backup, replication, and disaster recovery.
 Cloud Spanner- relational database that supports SQL on the one hand, but enables the
same level of scalability as non-relational databases.
 Google Cloud BigQuery- serverless data warehouse, which supports large-scale data
analysis and streaming data querying via SQL. BigQuery provides a built-in data transfer
service for migrating large data volumes.
 Cloud Bigtable- NoSQL database service designed for large-scale operational data and
analytics workloads. Provides high availability, zero downtime for configuration changes,
and request latency under 10 milliseconds.

 Cloud Firestore- NoSQL database service designed for serverless applications. Can be
integrated seamlessly with web, mobile, and IoT applications, with real- time
synchronization and built-in security.
 Memorystore- managed in-memory datastore designed for security, high availability, and
scalability.

4. Load Balancing and Scaling

 Google Cloud provides server-side load balancing, allowing incoming traffic to be distributed
across multiple virtual machine (VM) instances.
 It uses forwarding rule resources to match and forward certain types of traffic to the
load balancer - for example, it can forward traffic according to protocol, port, IP address
or range.
 Google Cloud Load Balancing is a managed service, in which components are redundant and
highly available. If a load balancing component fails, it is automatically restarted or replaced.
 Google Compute Engine also provides autoscaling, which automatically adds or removes
VM instances from a managed instance group (MIG) as its load increases or decreases.
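
As a concrete illustration of this autoscaling, the sketch below creates a managed instance group and attaches a CPU-based autoscaling policy by calling the gcloud CLI from Python. It assumes the Google Cloud SDK is installed and authenticated and that an instance template already exists; the template, group, and zone names are placeholders.

# Sketch: create a managed instance group (MIG) and enable CPU-based
# autoscaling by calling the gcloud CLI from Python. Assumes the Google Cloud
# SDK is installed and authenticated; the template, group, and zone names are
# placeholders.
import subprocess

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args], check=True)

ZONE = "us-central1-a"           # placeholder zone
TEMPLATE = "web-template"        # placeholder instance template (created beforehand)
GROUP = "web-mig"                # placeholder managed instance group name

# Create a MIG of two VMs from an existing instance template.
gcloud("compute", "instance-groups", "managed", "create", GROUP,
       "--zone", ZONE, "--template", TEMPLATE, "--size", "2")

# Scale between 2 and 10 instances, targeting ~60% average CPU utilization.
gcloud("compute", "instance-groups", "managed", "set-autoscaling", GROUP,
       "--zone", ZONE, "--min-num-replicas", "2", "--max-num-replicas", "10",
       "--target-cpu-utilization", "0.6")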
5. Serverless

Serverless computing dynamically runs workloads when they are required, with no need to
manage the underlying server resources. Google Cloud provides three key serverless
options that allow you to run serverless workloads:
 Google Cloud Functions- lets you provide code in multiple programming languages and
allow Google to run it when triggered by an event.
 Google App Engine- a serverless platform that can run web applications and mobile
backends in any programming language.

 Google Cloud Run- deploys containerized applications on a fully managed serverless platform (similar to Amazon Fargate).

6. Containers

Google offers several technologies that you can use to run containers in the Google Cloud
environment:
 Google Kubernetes Engine (GKE) - the world’s first managed Kubernetes service,
which lets you run Kubernetes clusters on Google Cloud infrastructure, with control over
individual Kubernetes nodes.
 GKE Autopilot - a new operating mode for GKE that lets you optimize clusters for production environments, improve availability, and dynamically adjust the computing power available to Kubernetes clusters.
 Google Anthos - a cloud-agnostic hybrid container management platform. This service
allows you to replace virtual machines (VMs) with container clusters, creating a unified
environment between the public cloud and an on-premises data center.
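
To show how these container services are typically used, the sketch below provisions a small GKE cluster and deploys one containerized workload by calling gcloud and kubectl from Python. It assumes both tools are installed and authenticated; the cluster name, zone, and container image are placeholders.

# Sketch: create a small GKE cluster and deploy one containerized workload by
# calling gcloud and kubectl from Python. Assumes the Google Cloud SDK and
# kubectl are installed and authenticated; cluster name, zone, and image are
# placeholders.
import subprocess

def run(*args: str) -> None:
    subprocess.run(list(args), check=True)

CLUSTER = "demo-cluster"     # placeholder cluster name
ZONE = "us-central1-a"       # placeholder zone

# Provision a two-node GKE cluster and fetch kubectl credentials for it.
run("gcloud", "container", "clusters", "create", CLUSTER,
    "--zone", ZONE, "--num-nodes", "2")
run("gcloud", "container", "clusters", "get-credentials", CLUSTER, "--zone", ZONE)

# Deploy a container and expose it behind a cloud load balancer.
run("kubectl", "create", "deployment", "web", "--image=nginx")
run("kubectl", "expose", "deployment", "web", "--type=LoadBalancer", "--port=80")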

5.6.3. Advantages of Google Virtualization:


 Scalability: Easily scale resources up or down based on demand.
 Cost-effectiveness: Pay only for the resources you use, reducing operational costs.
 Flexibility: Support for various operating systems and applications.
 Reliability: High availability and redundancy ensure uptime for critical workloads.
 Security: Google's robust security measures protect data and infrastructure.
 Integration: Seamlessly integrates with other Google Cloud services for enhanced
functionality.

5.6.4. Disadvantages of Google Virtualization:


 Learning Curve: Requires training and expertise to effectively utilize the platform.
 Vendor Lock-in: Dependency on Google's ecosystem may limit flexibility in the long
term.
 Network Latency: Performance may be impacted by network latency, especially for
geographically dispersed deployments.
 Compliance Challenges: Meeting regulatory and compliance requirements may be complex.
 Internet Dependency: Relies on internet connectivity, which can be a concern in certain
environments.
 Service Outages: Downtime or disruptions in Google services can impact operations.
5.7. Case Study
5.7.1. Case Study: Optimizing IT Infrastructure with Virtualization Tools
Overview:
A global software development company, XYZ Solutions, faced challenges with its
traditional IT infrastructure, including high costs, resource underutilization, and management
complexity. To address these issues, XYZ Solutions implemented virtualization tools to
optimize its IT environment and enhance operational efficiency.
Challenges:
 High Hardware Costs: XYZ Solutions struggled with escalating hardware costs due to the
need for additional servers to support growing workloads.
 Resource Underutilization: The existing infrastructure suffered from resource
underutilization, with servers running at low capacity, resulting in wasted resources.
 Management Complexity: Managing a large number of physical servers was complex and
time-consuming, requiring significant administrative effort.
 Scalability Concerns: The lack of scalability in the traditional infrastructure limited XYZ
Solutions' ability to quickly provision resources to meet changing business demands.
Solution:
 Virtualization Deployment: XYZ Solutions deployed VMware vSphere, a leading
virtualization platform, to consolidate its physical servers into virtual machines (VMs). This
allowed for better resource utilization and reduced the number of physical servers required.
 Dynamic Resource Allocation: With vSphere's dynamic resource allocation features, XYZ
Solutions optimized resource utilization by automatically allocating computing resources
based on workload demands.
 High Availability and Fault Tolerance: XYZ Solutions implemented vSphere's high
availability (HA) and fault tolerance (FT) features to enhance system reliability and minimize
downtime in the event of hardware failures.
 Scalability and Flexibility: Virtualization provided scalability and flexibility, allowing XYZ Solutions to quickly scale resources up or down to meet changing business requirements.
 Centralized Management: The centralized management interface provided by vSphere
simplified IT management tasks, allowing XYZ Solutions' administrators to monitor,
provision, and manage VMs more efficiently.
 Backup and Disaster Recovery: XYZ Solutions leveraged vSphere's backup and disaster
recovery capabilities to protect critical data and ensure business continuity in case of system
failures or disasters.
Benefits:
 Cost Savings: Virtualization resulted in significant cost savings for XYZ Solutions by
reducing hardware expenses, lowering operational costs, and minimizing the need for
physical infrastructure.
 Improved Resource Utilization: By consolidating servers into VMs and dynamically
allocating resources, XYZ Solutions optimized resource utilization and reduced wastage.
 Enhanced Reliability: vSphere's HA and FT features improved system reliability,
minimizing downtime and ensuring uninterrupted service availability.
 Streamlined Management: The centralized management interface simplified IT
management tasks, reducing administrative overhead and improving operational efficiency.
 Scalability and Agility: Virtualization provided scalability and agility, allowing XYZ
Solutions to quickly adapt to changing business needs and scale resources as required.
Conclusion:
By leveraging virtualization tools like VMware vSphere, XYZ Solutions successfully
optimized its IT infrastructure, improved operational efficiency, and achieved cost savings.
The adoption of virtualization enabled XYZ Solutions to build a more reliable, scalable, and
flexible IT environment, empowering the company to innovate faster and stay competitive in
the rapidly evolving technology landscape.
5.7.2. Case Study: Fidelity National Information Services (FIS)
Overview:
Fidelity National Information Services (FIS) is a global leader in financial technology
solutions, providing a wide range of services to banks, financial institutions, and businesses
worldwide. This case study examines how FIS leveraged virtualization technology to
optimize its IT infrastructure and improve operational efficiency.
Challenges:
 Legacy Infrastructure: FIS operated on a legacy IT infrastructure consisting of multiple
physical servers, which were costly to maintain and lacked scalability.
 Resource Underutilization: The traditional infrastructure suffered from resource
underutilization, with servers running at low capacity, leading to inefficient use of hardware
resources.
 High Operational Costs: Maintaining a large number of physical servers resulted in high
operational costs associated with power consumption, cooling, and hardware maintenance.
 Complexity and Management Overhead: Managing a diverse array of physical servers
added complexity to the IT environment, requiring significant administrative effort and
resources.
Solution:
 Virtualization Deployment: FIS implemented a virtualization solution, such as VMware
vSphere or Microsoft Hyper-V, to consolidate its physical servers into virtual machines
(VMs). This allowed for better resource utilization and reduced the number of physical
servers required.
 Centralized Management: The virtualization platform provided a centralized management
interface, enabling FIS's IT administrators to monitor, provision, and manage VMs more
efficiently.
 Dynamic Resource Allocation: Through features like dynamic resource allocation and load balancing, FIS optimized resource utilization, ensuring that computing resources were allocated dynamically based on workload demands.
 High Availability and Disaster Recovery: FIS implemented high availability (HA) and
disaster recovery (DR) solutions within the virtualization platform to enhance data protection
and ensure business continuity in the event of hardware failures or disasters.
 Automation and Orchestration: FIS leveraged automation and orchestration tools to
streamline IT operations, automate routine tasks, and improve overall operational efficiency.
Benefits:
 Cost Savings: Virtualization resulted in significant cost savings for FIS by reducing
hardware expenses, optimizing resource utilization, and lowering operational costs associated
with power consumption and maintenance.
 Improved Scalability: The virtualized infrastructure provided scalability and flexibility to
accommodate FIS's growing IT demands and adapt to changing business requirements.
 Enhanced Reliability: Features like HA and DR enhanced the reliability and availability of
FIS's IT services, minimizing downtime and ensuring continuous operations.
 Streamlined Management: Centralized management and automation capabilities
streamlined IT operations, reducing management overhead and improving productivity.
 Agility and Innovation: Virtualization enabled FIS to respond more quickly to market
changes, innovate faster, and deliver new services and solutions to its customers more
efficiently.
Conclusion:
By embracing virtualization technology, Fidelity National Information Services (FIS)
successfully addressed its infrastructure challenges, improved operational efficiency, and
achieved cost savings. The adoption of virtualization tools allowed FIS to build a more agile,
reliable, and scalable IT infrastructure, enabling the company to stay competitive in the
rapidly evolving financial technology industry.
