Virtualization
VI Sem
2024
Self-Study Topics:
1. Tencent Cloud
2. Oracle Cloud Infrastructure
Faculty HOD
UNIT I INTRODUCTION TO VIRTUALIZATION 7
Virtualization and cloud computing - Need of virtualization – cost, administration, fast
deployment, reduce infrastructure cost – limitations- Types of hardware virtualization: Full
virtualization - partial virtualization – Para virtualization-Types of Hypervisors
Introduction to Virtualization
Virtualization Concept
Creating a virtual machine over existing operating system and hardware is referred as
Hardware Virtualization. Virtual Machines provide an environment that is logically separated
from the underlying hardware.
The machine on which the virtual machine is created is known as host machine and virtual
machine is referred as a guest machine. This virtual machine is managed by a software or
firmware, which is known as hypervisor.
Hypervisor
The hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
A Type 1 hypervisor executes on a bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun
xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors.
A Type 1 hypervisor does not need any host operating system, because it is installed directly
on the bare system.
A Type 2 hypervisor is a software interface that emulates the devices with which a system
normally interacts. KVM, VMware Fusion, Microsoft Virtual Server 2005 R2, Windows
Virtual PC and VMware Workstation 6.0 are examples of Type 2 hypervisors.
Types of Hardware Virtualization
Full Virtualization
Emulation Virtualization
Paravirtualization
Full Virtualization
In full virtualization, the hypervisor completely emulates the underlying hardware, so an
unmodified guest operating system runs in isolation just as it would on a physical machine.
Paravirtualization
In paravirtualization, the hardware is not simulated. The guest software runs in its own
isolated domain, and the guest operating system is aware that it is running on a hypervisor.
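To make the distinction concrete, hardware-assisted full virtualization depends on CPU extensions (Intel VT-x or AMD-V). The following minimal sketch, assuming a Linux host, checks /proc/cpuinfo for the corresponding vmx/svm flags; the file path and flag names are the standard Linux ones, and the script itself is only illustrative.

```python
# Minimal sketch (Linux only): check whether the CPU advertises the hardware
# virtualization extensions (Intel VT-x -> "vmx", AMD-V -> "svm") that
# hardware-assisted full virtualization relies on.

def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

if __name__ == "__main__":
    found = virtualization_flags()
    if found:
        print("Hardware virtualization extensions present:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flags found; only software-based virtualization is possible.")
```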
VIRTUALIZATION FOR CLOUD
Types of Virtualization
Today the term virtualization is widely applied to a number of concepts, some of which are
described below:
Server Virtualization
Client & Desktop Virtualization
Services and Applications Virtualization
Network Virtualization
Storage Virtualization
Let us now discuss each of these in detail.
Server Virtualization
This is the virtualization of your server infrastructure, so that you no longer need a dedicated
physical server for every purpose.
Client & Desktop Virtualization
This is similar to server virtualization, but this time it is on the user's side: the users' desktops
are virtualized and replaced with thin clients that draw on datacenter resources.
Services and Applications Virtualization
The virtualization technology isolates applications from the underlying operating system and
from other applications, in order to increase compatibility and manageability. Docker, for
example, can be used for that purpose.
Network Virtualization
Network virtualization decouples network services from the underlying hardware so that
complete virtual networks can be provisioned in software; it is discussed in detail later in this
unit.
Storage Virtualization
This is widely used in datacenters that have large storage pools. It helps you create, delete,
and allocate storage to different hardware, with the allocation done over a network
connection. The most common implementation is the storage area network (SAN).
A hypervisor is a thin software layer that intercepts operating system calls to the hardware. It
is also called the Virtual Machine Monitor (VMM). It creates a virtual platform on the
host computer, on top of which multiple guest operating systems are executed and monitored.
Hypervisors are of two types:
Native (Bare-Metal) Hypervisor
Native hypervisors are software systems that run directly on the host's hardware to control
the hardware and to monitor the Guest Operating Systems. The guest
operating system runs on a separate level above the hypervisor. All of them have a Virtual
Machine Manager.
Examples of this virtual machine architecture are Oracle VM, Microsoft Hyper-V,
VMWare ESX and Xen.
Hosted Hypervisor
Hosted hypervisors are designed to run within a traditional operating system. In other words,
a hosted hypervisor adds a distinct software layer on top of the host operating system. While,
the guest operating system becomes a third software level above the hardware.
A well-known example of a hosted hypervisor is Oracle VM VirtualBox. Others include
VMware Server and Workstation, Microsoft Virtual PC, KVM, QEMU and
Parallels.
1. INCREASED PERFORMANCE AND COMPUTING CAPACITY-
Currently, the end-user system, i.e. the PC, is sufficiently powerful to fulfill all the basic
computation requirements of the user, with various additional capabilities that are rarely
used. Most of these systems have sufficient resources to host a virtual machine manager and
to run a virtual machine with acceptable performance.
2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES-
The limited use of resources leads to under-utilization of hardware and software. Because
users' PCs are sufficiently capable of fulfilling their regular computational needs, many of
these computers sit idle for much of the time, even though they could be used 24/7 without
interruption. The efficiency of the IT infrastructure could be increased by using these
resources after hours for other purposes. This environment is possible to attain with the help
of virtualization.
3. SHORTAGE OF SPACE-
The constant requirement for additional capacity, whether storage or compute power, makes
data centers grow rapidly. Companies like Google, Microsoft and Amazon expand their
infrastructure by building data centers as their needs dictate, but most enterprises cannot
afford to build another data center to accommodate additional resource capacity. This has led
to the spread of a technique known as server consolidation.
4. ECO-FRIENDLY INITIATIVES-
Corporations today are actively seeking ways to reduce the power consumed by their
systems. Data centers are major power consumers: maintaining data center operations
requires a continuous power supply, and a good amount of additional energy is needed to
keep the equipment cool so that it functions well. Server consolidation reduces the power
consumed and the cooling load by lowering the number of servers, and virtualization
provides a sophisticated means of achieving server consolidation.
5. ADMINISTRATIVE COSTS-
Furthermore, the rising demand for extra capacity, which translates into more servers in a
data center, is responsible for a significant increase in administrative costs. Common system
administration tasks include hardware monitoring, server setup and updates, replacement of
defective hardware, monitoring of server resources, and backups. These are personnel-
intensive operations, so administrative costs grow with the number of servers. Virtualization
decreases the number of servers required for a given workload and hence reduces the cost of
administrative staff.
2. Process Virtual Machine : Process virtual machines, unlike system virtual machines, do
not provide the facility to install a complete virtual operating system. Instead, they create a
virtual environment of that OS while a particular app or program is in use, and this
environment is destroyed as soon as we exit the app. For example, some apps run directly on
the main OS, while virtual environments are created to run other apps that require a different
OS; the process virtual machine provides that environment only for as long as those programs
are running. Example – the Wine software on Linux helps to run Windows applications.
Virtual Machine Language : This is a type of language that can be understood by different
operating systems; it is platform-independent. Just as running a program in any programming
language (C, Python, or Java) needs a specific compiler that converts the code into system-
understandable code (also known as byte code), a virtual machine language works the same
way. If we want code that can be executed on different operating systems (Windows, Linux,
etc.), a virtual machine language is what makes that possible.
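As a concrete illustration, the CPython interpreter is itself a process virtual machine: the source string below is compiled to platform-independent byte code, which the same interpreter can execute on Windows, Linux, or macOS. The snippet uses only the standard compile(), exec(), and dis facilities; the example expression is arbitrary.

```python
# The CPython interpreter is itself a process virtual machine: source code is
# compiled to platform-independent bytecode, and the same bytecode can be
# executed by the VM on Windows, Linux, or macOS.
import dis

source = "total = sum(n * n for n in range(5))"
code_object = compile(source, filename="<example>", mode="exec")

dis.dis(code_object)           # show the VM-level bytecode instructions
namespace = {}
exec(code_object, namespace)   # the process VM executes the bytecode
print(namespace["total"])      # -> 30
```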
OS-Level Virtualization: Unlike full and para-virtualization, OS-level virtualization does not
use a hypervisor. Instead, the virtualization capability, which is part of the physical server's
operating system, performs all the tasks of a hypervisor. However, all the virtual servers must
run that same operating system in this server virtualization method.
Why Server Virtualization?
Server virtualization is a cost-effective way to provide web hosting services and effectively
utilize existing resources in IT infrastructure. Without server virtualization, servers only use a
small part of their processing power. This results in servers sitting idle because the workload
is distributed to only a portion of the network’s servers. Data centers become overcrowded
with underutilized servers, causing a waste of resources and power.
By having each physical server divided into multiple virtual servers, server virtualization
allows each virtual server to act as a unique physical device. Each virtual server can run its
own applications and operating system. This process increases the utilization of resources by
making each virtual server act as a physical server and increases the capacity of each physical
machine.
Server virtualization has its fair share of benefits for a business – maximizing the IT
capabilities, saving physical spaces, and cutting costs on energy and new equipment. But for
a company that’s just starting to explore the realm of server virtualization, choosing one from
the three types of server virtualization can be daunting.
So what are the three types of server virtualization and how do companies utilize them? Most
companies use full virtualization, para-virtualization, or OS-level virtualization. The
difference lies in the OS modification and the hypervisor each type employs.
Understanding Server Virtualization: What it is and How it Works
Server computers are powerful – they manage computer networks, store files, and host
applications. But most of the time, these powerful processing units are not utilized to their
full potential because businesses tend to purchase more computers and other hardware
instead, which is not always the wise decision because it occupies more physical space and
consumes more energy.
Server virtualization offers one solution to these two problems by creating multiple virtual
servers in one physical server. This method ensures that each processing unit is maximized to
its full capacity, preventing the need for more computer units in a data center. The adoption
of different virtualization technologies, including server virtualization, was projected to rise
to 56% by 2021.
Currently, there are three types of virtualization used for sharing resources, memory, and
processing.
Full Virtualization
This type of virtualization is widely utilized in the IT community because it is comparatively
simple. It makes use of a hypervisor to emulate an artificial hardware device, along with
everything needed to host guest operating systems.
In full virtualization, separate hardware emulations are created to cater to individual guest
operating systems. This makes each guest server fully functional and isolated from the other
servers in a single physical hardware unit.
What’s great about this type is that you can run different operating systems in one server,
since they are independent of each other. Modification of each OS also isn’t necessary for the
full virtualization to be effective.
Currently, enterprises make use of two types of full virtualization: software-assisted full
virtualization (which relies on binary translation) and hardware-assisted full virtualization
(which relies on CPU extensions such as Intel VT-x and AMD-V).
Server Consolidation Architecture
Physical Servers, Virtualization Software, and Virtual Servers make up the three primary
parts of the server consolidation architecture.
Physical Servers: The server consolidation environment’s hardware consists of
physical servers. These servers are usually powerful machines with high processing
speeds that are built to manage massive volumes of data. They are utilized to run virtual
servers and host virtualization software.
Virtualization Software: Virtualization software allows a single physical server to run
several virtual servers. It creates an abstraction layer between the real hardware and the
virtual servers, so that multiple virtual servers can share the resources of a single physical
server.
Virtual Servers: Physical servers are virtualized into virtual servers. They run on top
of the physical servers and are produced and controlled by the virtualization software.
Each virtual server can execute its own programs and services and is a separate instance
of an operating system.
Server consolidation creates virtual servers that share the resources of the physical servers
by fusing a number of physical servers into a single virtualized environment utilizing
virtualization software. This makes it possible to use resources more effectively and save
money. Additionally, it makes it simple to manage existing servers, set up new ones, and
scale resources up or down as necessary.
Types of Server Consolidation
1. Logical Consolidation: In logical server consolidation, multiple virtual servers are
consolidated onto a single physical server. Each virtual server is isolated from the
others and has its own operating system and applications, but shares the same physical
resources such as CPU, RAM, and storage. This allows organizations to run multiple
virtual servers on a single physical server, which can lead to significant cost savings
and improved performance. Virtual servers can be easily added or removed as needed,
which allows organizations to more easily adjust to changing business needs.
2. Physical Consolidation: Physical Consolidation is a type of server consolidation in
which multiple physical servers are consolidated into a single, more powerful server or
cluster of servers. This can be done by replacing multiple older servers with newer,
more powerful servers, or by adding additional resources such as memory and storage
to existing servers. Physical consolidation can help organizations to improve the
performance and efficiency of their cloud computing environment.
3. Rationalized Consolidation: Rationalized consolidation is a type of server
consolidation in which multiple servers are consolidated based on their workloads. This
process involves identifying and grouping servers based on the applications and
services they are running and then consolidating them onto fewer, more powerful
servers or clusters. The goal of rationalized consolidation is to improve the efficiency
and cost-effectiveness of the cloud computing environment by consolidating servers
that are running similar workloads.
How to Perform Server Consolidation?
Server consolidation in cloud computing typically involves several steps, including:
1. Assessing the Current Environment: The first step in server consolidation is to assess
the current environment to determine which servers are running similar workloads and
which ones are underutilized or over-utilized. This can be done by analyzing the usage
patterns and resource utilization of each server.
2. Identifying and Grouping Servers: Once the current environment has been assessed,
the next step is to identify and group servers based on their workloads. This can help to
identify servers that are running similar workloads and can be consolidated onto fewer,
more powerful servers or clusters.
3. Planning the Consolidation: After identifying and grouping servers, the next step is to
plan the consolidation. This involves determining the best way to consolidate the
servers, such as using virtualization technology, cloud management platforms, or
physical consolidation. It also involves determining the resources required to support
the consolidated servers, such as CPU, RAM, and storage.
4. Testing and Validation: Before consolidating the servers, it is important to test and
validate the consolidation plan to ensure that it will meet the organization’s needs and
that the servers will continue to function as expected.
5. Consolidating the Servers: Once the plan has been tested and validated, the servers
can be consolidated. This typically involves shutting down the servers to be
consolidated, migrating their workloads to the consolidated servers, and then bringing
the servers back online.
6. Monitoring and Maintenance: After the servers have been consolidated, it is
important to monitor the consolidated servers to ensure that they are performing as
expected and to identify any potential issues. Regular maintenance should also be
performed to keep the servers running smoothly.
7. Optimizing the Consolidated Environment: To keep the consolidated environment
optimal, it’s important to regularly evaluate the usage patterns and resource utilization
of the consolidated servers, and make adjustments as needed.
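The short sketch below illustrates the assessment and grouping steps (steps 1 and 2 above) in a highly simplified form: server names, workload labels, and utilization figures are hypothetical, and the 25% CPU threshold is only an illustration of how underutilized servers might be flagged as consolidation candidates.

```python
# Hypothetical sketch of steps 1-2: assess utilization and group servers
# into consolidation candidates. The server names and percentages are
# illustrative, not real measurements.

servers = {
    "web-01":   {"workload": "web",      "cpu_pct": 12, "ram_pct": 30},
    "web-02":   {"workload": "web",      "cpu_pct": 9,  "ram_pct": 25},
    "db-01":    {"workload": "database", "cpu_pct": 78, "ram_pct": 85},
    "batch-01": {"workload": "batch",    "cpu_pct": 15, "ram_pct": 20},
}

UNDERUTILIZED_CPU = 25  # threshold (%) below which a server is a candidate

def consolidation_candidates(inventory):
    """Group underutilized servers by workload type."""
    groups = {}
    for name, stats in inventory.items():
        if stats["cpu_pct"] < UNDERUTILIZED_CPU:
            groups.setdefault(stats["workload"], []).append(name)
    return groups

print(consolidation_candidates(servers))
# {'web': ['web-01', 'web-02'], 'batch': ['batch-01']}
```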
Benefits of Server Consolidation
Server consolidation in cloud computing can provide a number of benefits, including:
Cost savings: By consolidating servers, organizations can reduce the number of
physical servers they need to maintain, which can lead to cost savings on hardware,
power, and cooling.
Improved performance: Consolidating servers can also improve the performance of
the cloud computing environment. By using virtualization technology, multiple virtual
servers can run on a single physical server, which allows for better utilization of
resources. This can lead to faster processing times and better overall performance.
Scalability and flexibility: Server consolidation can also improve the scalability and
flexibility of the cloud environment. By using virtualization technology, organizations
can easily add or remove virtual servers as needed, which allows them to more easily
adjust to changing business needs.
Management simplicity: Managing multiple servers can be complex and time-
consuming. Consolidating servers can help to reduce the complexity of managing
multiple servers, by providing a single point of management. This can help
organizations to reduce the effort and costs associated with managing multiple servers.
Better utilization of resources: By consolidating servers, organizations can improve
the utilization of resources, which can lead to better performance and cost savings.
Server consolidation in cloud computing is a process of combining multiple servers into a
single, more powerful server or cluster of servers, in order to improve the efficiency and
cost-effectiveness of the cloud computing environment.
How to choose a virtualization platform
What is a virtualization platform?
A virtualization platform is a solution for managing virtual machines (VMs), enabling an IT
organization to support isolated computing environments that share a pool of hardware
resources.
Organizations use VMs for a variety of reasons, including to efficiently manage many
different kinds of computing environments, to support older operating systems and software,
and to run test environments. A virtualization platform brings together all the technologies
needed to support and manage large numbers of VMs.
VM platforms continue to evolve, prompting some enterprises to explore new virtualization
providers. A clear understanding of virtualization concepts can help inform these choices.
Learn about virtualization solutions at Red Hat
Important virtualization concepts and choices
Virtualization platforms take different approaches to the technologies that make VMs
possible. Here are some concepts to keep in mind when comparing platforms.
Type 1 or type 2 hypervisors
A hypervisor is software that pools computing resources—like processing, memory, and
storage—and reallocates them among VMs. This is the technology that enables users to
create and run multiple VMs on a single physical machine. Hypervisors fall into 2 categories.
Type 1 hypervisors run directly on the host’s hardware, and are sometimes called native or
bare metal hypervisors. A type 1 hypervisor assumes the role of a host operating system
(OS), scheduling and managing resources for each VM. This type of hypervisor is well suited
for enterprise data center or server-based environments. Popular type 1 hypervisors
include KVM (the open source foundation for Red Hat’s virtualization platforms), Microsoft
Hyper-V, and VMware vSphere.
Type 2 hypervisors run as a software layer on top of a conventional OS. The host OS
manages resources for the hypervisor like any other application running on the OS. Type 2
hypervisors are usually best for individuals who want to run multiple operating systems on a
personal workstation. Common examples of type 2 hypervisors include VMware Workstation
and Oracle VirtualBox.
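As a small illustration of working with a type 1 KVM host, the sketch below assumes a Linux machine with the libvirt daemon and the libvirt-python bindings installed; it simply lists the VMs (domains) known to the hypervisor and whether each is running. The connection URI qemu:///system is the usual local KVM endpoint.

```python
# Sketch assuming a Linux host with the KVM/libvirt stack and the
# libvirt-python bindings installed (pip install libvirt-python).
# Lists the VMs known to the hypervisor and whether each is running.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # connect to the local KVM host
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```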
Open source or proprietary technology
Open source software, such as the KVM virtualization technology built into Linux® and
the Kubernetes-based KubeVirt project, relies on community contributions and open standards.
One benefit to open source software, besides its transparency, is cross-platform
compatibility. Open standards and open application programming interfaces (APIs) lead to
flexible integration, making it possible to run virtual environments across different datacenter
and cloud infrastructures.
Conversely, proprietary technology can make it challenging to integrate with other
technologies and harder to switch vendors.
Container and cloud compatibility
Modern IT organizations need to support both VMs and containers. Containers group
together just what’s needed to run a single application or service and tend to be smaller than
VMs, making them lightweight and portable. Containers and VMs may need to operate
seamlessly across hybrid and multicloud environments.
Faced with all this complexity, IT organizations seek to simplify their application
development and deployment pipelines. A platform should support both containers and VMs
and help teams use computing resources efficiently, and ensure applications and services roll
out in an orderly, consistent way.
Traditional virtualization platforms can be separate from container platforms. Sometimes
they are meant to work in a single environment, rather than across multiple cloud
environments.
More modern virtualization platforms act as components of unified platforms that work
across different infrastructure, including on-premises and cloud environments. This approach
can streamline deployment, management and monitoring of both VMs and containers. A
unified platform can eliminate duplicate work and improve flexibility, making it easier to
adapt to changes.
What to look for in a virtualization platform
Equipped with an understanding of virtualization concepts, you’ll want to list your
requirements for a virtualization platform and evaluate the benefits and drawbacks of
different choices in the marketplace. Your research should include important qualities like
costs and support levels, as well as features specific to virtualization platforms. Here are a
few such features to look for.
Ease of migration
When moving from one virtualization platform to another, administrators will seek to avoid
disruptions, incompatibilities, and degraded performance. Virtualization platforms can have
different deployment and management processes, and different tooling, especially across
different cloud providers.
Preparation can help avoid many migration pitfalls. Using tested and effective toolkits to
preemptively validate VM compatibility and move multiple VMs at once can help migrations
go quickly and smoothly.
Learn about Red Hat’s migration toolkit for virtualization
Automation
At enterprise scale, with hundreds or thousands of VMs, automation becomes a necessity.
Migrating and managing VMs can be repetitive, time-consuming work without an automation
system. Automation tools that follow infrastructure as code (IaC) and configuration as code
(CaC) methodologies can take over and replace manual processes. Automation helps out
beyond just migration and deployments. Automated workflows can inventory existing VMs,
apply patches, manage configurations, and more.
Explore how to automate VM migration and ops
Management capabilities
VM administrators and site reliability engineers might oversee deployments that span
multiple data centers, private clouds, and public clouds. They need tools and capabilities to
support, manage, and monitor VMs across these environments.
A virtualization platform should provide a single console with built-in security policies and
full visibility and control of VMs. This end-to-end visibility and control helps your teams
deliver new applications and services that comply with policies and regulations.
Security and stability
VM administrators have to protect systems from unauthorized access and service disruptions.
A virtualization platform should make it possible to apply security policies, isolation
technologies, and least privileges principles.
In platforms that combine VMs with container management, Kubernetes security standards
can help ensure virtual machines run without root privileges, complying with industry best
practices and mitigating risks.
Partner ecosystem
Migrating to a new virtualization platform shouldn’t require you to walk away from valued
vendor relationships or integrations. A platform should maintain relationships with partners
who have deep expertise in the virtualization technologies you choose. Specifically for
virtualization platforms, you should look for a strong network of partners who can provide
storage and network virtualization, and backup and disaster recovery. Partnerships with major
hardware providers and IT services providers may also be essential to the success of your
VM program.
Introduction to VMware NSX
Why network virtualization?
Network virtualization is rewriting the rules for the way services are delivered, from the
software-defined data center (SDDC), to the cloud, to the edge. This approach moves
networks from static, inflexible, and inefficient to dynamic, agile, and optimized. Modern
networks must keep up with the demands for cloud-hosted, distributed apps, and the
increasing threats of cybercriminals while delivering the speed and agility you need for faster
time to market for your applications. With network virtualization, you can forget about
spending days or weeks provisioning the infrastructure to support a new application. Apps
can be deployed or updated in minutes for rapid time to value.
How does network virtualization work?
Network virtualization decouples network services from the underlying hardware and allows
virtual provisioning of an entire network. It makes it possible to programmatically create,
provision, and manage networks all in software, while continuing to leverage the underlying
physical network as the packet-forwarding backplane. Physical network resources, such as
switching, routing, firewalling, load balancing, virtual private networks (VPNs), and more,
are pooled, delivered in software, and require only Internet Protocol (IP) packet forwarding
from the underlying physical network.
Network and security services in software are distributed to a virtual layer (hypervisors, in
the data center) and "attached" to individual workloads, such as your virtual machines (VMs)
or containers, in accordance with networking and security policies defined for each connected
application. When a workload is moved to another host, network services and security
policies move with it. And when new workloads are created to scale an application, necessary
policies are dynamically applied to these new workloads, providing greater policy
consistency and network agility.
Benefits of network virtualization
Network virtualization helps organizations achieve major advances in speed, agility, and
security by automating and simplifying many of the processes that go into running a data
center network and managing networking and security in the cloud. Here are some of the key
benefits of network virtualization:
Reduce network provisioning time from weeks to minutes
Achieve greater operational efficiency by automating manual processes
Place and move workloads independently of physical topology
Improve network security within the data center
Network Virtualization Example
One example of network virtualization is virtual LAN (VLAN). A VLAN is a subsection of a
local area network (LAN) created with software that combines network devices into one
group, regardless of physical location. VLANs can improve the speed and performance of
busy networks and simplify changes or additions to the network.
Another example is network overlays. There are various overlay technologies. One industry-
standard technology is called virtual extensible local area network (VXLAN). VXLAN
provides a framework for overlaying virtualized layer 2 networks over layer 3 networks,
defining both an encapsulation mechanism and a control plane. Another is generic network
virtualization encapsulation (GENEVE), which takes the same concepts but makes them
more extensible by being flexible to multiple control plane mechanisms.
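The sketch below shows the VXLAN encapsulation idea in miniature: an 8-byte VXLAN header carrying a 24-bit VNI is prepended to the original layer 2 frame; in a real deployment the result is then carried over UDP (port 4789) across the layer 3 underlay. The header layout follows RFC 7348, while the frame contents and VNI value are placeholders.

```python
# Minimal sketch of VXLAN encapsulation (RFC 7348): an 8-byte header carrying
# a 24-bit VXLAN Network Identifier (VNI) is prepended to the original layer 2
# frame; in practice the result is carried over UDP port 4789 across the
# layer 3 underlay. The frame contents here are placeholders.
import struct

VXLAN_FLAG_VNI_VALID = 0x08   # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # 1 flags byte, 3 reserved bytes, 3 VNI bytes, 1 reserved byte
    header = struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame

packet = vxlan_encapsulate(b"\x00" * 60, vni=5001)   # dummy Ethernet frame
print(len(packet), packet[:8].hex())                  # 68 0800000000138900
```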
VMware NSX Data Center – Network Virtualization Platform
VMware NSX Data Center is a network virtualization platform that delivers networking and
security components like firewalling, switching, and routing that are defined and consumed in
software. NSX takes an architectural approach built on scale-out network virtualization that
delivers consistent, pervasive connectivity and security for apps and data wherever they
reside, independent of underlying physical infrastructure.
Network Virtualization is a process of logically grouping physical networks and making
them operate as single or multiple independent networks called Virtual Networks.
General Architecture Of Network Virtualization
Tools for Network Virtualization :
1. Physical switch OS –
The operating system of the physical switch must itself provide network virtualization
functionality.
2. Hypervisor –
The hypervisor provides network virtualization either through its built-in networking
functionality or through third-party software.
The basic function of the OS is to provide the application or executing process with a simple
set of instructions. System calls executed through the libc library are comparable to the
service primitives provided at the interface between the application and the network through
the SAP (Service Access Point).
The hypervisor is used to create a virtual switch and to configure virtual networks on it.
Third-party software may be installed onto the hypervisor, replacing the hypervisor's native
networking functionality. A hypervisor allows us to have various VMs all working optimally
on a single piece of computer hardware.
Functions of Network Virtualization :
It enables the functional grouping of nodes in a virtual network.
It enables the virtual network to share network resources.
It allows communication between nodes in a virtual network without routing of frames.
It restricts management traffic.
It enforces routing for communication between virtual networks.
Network Virtualization in Virtual Data Center :
1. Physical Network
Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
Grants connectivity among physical servers running a hypervisor, between physical
servers and storage systems and between physical servers and clients.
2. VM Network
Consists of virtual switches.
Provides connectivity to hypervisor kernel.
Connects to the physical network.
Resides inside the physical server.
Network Virtualization In VDC
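The toy model below mimics the virtual switch that connects VM ports inside a physical server: it learns which MAC address lives behind which port and then forwards or floods frames accordingly. Port names and MAC addresses are invented for illustration.

```python
# Toy model of the virtual switch that connects VM ports inside a physical
# server: it learns which MAC address lives on which port and forwards
# (or floods) frames accordingly. MACs and port names are illustrative.

class VirtualSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}          # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port           # learn the source
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:                        # unknown destination: flood
            return sorted(self.ports - {in_port})
        return [out_port]                           # known destination: forward

vswitch = VirtualSwitch(["vm1-port", "vm2-port", "uplink"])
print(vswitch.receive("vm1-port", "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # flood
print(vswitch.receive("vm2-port", "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # ['vm1-port']
```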
Advantages of Network Virtualization :
Improves manageability –
Grouping and regrouping of nodes are eased.
Configuration of VM is allowed from a centralized management workstation using
management software.
Reduces CAPEX –
The requirement to set up separate physical networks for different node groups is
reduced.
Improves utilization –
Multiple VMs are enabled to share the same physical network, which enhances the
utilization of network resources.
Enhances performance –
Network broadcast is restricted and VM performance is improved.
Enhances security –
Sensitive data is isolated from one VM to another VM.
Access to nodes is restricted in a VM from another VM.
Disadvantages of Network Virtualization :
The network must be managed in the abstract, rather than hands-on through physical devices.
It needs to coexist with physical devices in a cloud-integrated hybrid environment.
Increased complexity.
Upfront cost.
Possible learning curve.
Examples of Network Virtualization :
Virtual LAN (VLAN) –
The performance and speed of busy networks can be improved by VLAN.
VLAN can simplify additions or any changes to the network.
Network Overlays –
A framework is provided by an encapsulation protocol called VXLAN for overlaying
virtualized layer 2 networks over layer 3 networks.
The Generic Network Virtualization Encapsulation protocol (GENEVE) provides a new
approach to encapsulation, designed to provide control-plane independence between the
endpoints of the tunnel.
Network Virtualization Platform: VMware NSX –
VMware NSX Data Center transports the components of networking and security such
as switching, firewalling and routing that are defined and consumed in software.
It transports the operational model of a virtual machine (VM) for the network.
Applications of Network Virtualization :
Network virtualization may be used in the development of application testing to mimic
real-world hardware and system software.
It helps us to integrate several physical networks into a single network, or to separate a
single physical network into multiple logical networks.
In the field of application performance engineering, network virtualization allows the
simulation of connections between applications, services, dependencies, and end-users
for software testing.
It helps us to deploy applications in a quicker time frame, thereby supporting a faster
go-to-market.
Network virtualization helps the software testing teams to derive actual results with
expected instances and congestion issues in a networked environment.
A VLAN WAN architecture, often associated with "WAN virtualization," is a network design
in which multiple logical networks (VLANs) are created and managed over a single physical
Wide Area Network (WAN) infrastructure. This allows for efficient traffic segregation and
management across different locations while utilizing diverse underlying connections such as
MPLS, broadband, or cellular networks, all presented as one unified network. In essence, it
abstracts the physical network complexity, enabling flexible and scalable network operations
across various sites.
Key points about VLAN WAN architecture and WAN virtualization:
Logical separation:
VLANs within the WAN architecture create virtual network segments, effectively isolating
traffic between different departments, users, or applications, even though they share the
same physical infrastructure.
Traffic management:
By defining VLANs, network administrators can prioritize specific traffic types, like voice
over IP (VoIP) or critical data, within the WAN, optimizing network performance.
Multi-link aggregation:
WAN virtualization allows organizations to combine multiple WAN connections (from
different providers) into a single logical network, enhancing redundancy and reliability.
Cost efficiency:
By leveraging a shared physical infrastructure, the need for dedicated physical connections
for each network segment is reduced, potentially lowering overall costs.
How it works:
Switch configuration:
Network switches at each site are configured to recognize and manage VLAN tags, which
identify the specific virtual network a packet belongs to.
Router configuration:
Routers at the network edge are configured to route traffic based on VLAN information,
ensuring data is directed to the correct destination across the WAN.
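For reference, the 802.1Q tag that switches and routers use to identify a frame's VLAN is only four bytes: a TPID of 0x8100 followed by a TCI holding the 3-bit priority, 1-bit DEI, and 12-bit VLAN ID. The sketch below builds such a tag; the VLAN number and priority value are illustrative.

```python
# Sketch of the 802.1Q tag that switches use to mark which VLAN a frame
# belongs to: a 2-byte TPID (0x8100) followed by a 2-byte TCI holding the
# 3-bit priority, 1-bit DEI, and 12-bit VLAN ID.
import struct

TPID_8021Q = 0x8100

def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID_8021Q, tci)

# Tag for VLAN 120 carrying VoIP traffic at priority 5 (used for QoS).
print(vlan_tag(120, priority=5).hex())   # -> 8100a078
```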
Benefits of VLAN WAN architecture:
Improved security:
Isolating network segments through VLANs helps prevent unauthorized access to sensitive
data between different network sections.
Scalability:
Easily add new VLANs as network needs evolve without major infrastructure changes.
Simplified management:
Centralized management of VLANs across multiple locations simplifies network
administration.
Important considerations:
Network design:
Careful planning is needed to define VLANs and their associated network segments based
on business requirements.
Device compatibility:
Ensure all network devices (switches, routers) support VLAN functionality and tagging.
WAN Virtualization
In today's fast-paced digital world, seamless connectivity is the key to success for businesses
of all sizes. WAN (Wide Area Network) virtualization has emerged as a game-changing
technology, revolutionizing the way organizations connect their geographically dispersed
branches and remote employees. In this blog post, we will explore the concept of WAN
virtualization, its benefits, implementation considerations, and its potential impact on
businesses.
Increased Flexibility and Scalability: WAN virtualization allows businesses to scale their
network resources on-demand, facilitating seamless expansion or contraction based on their
requirements. It provides flexibility to dynamically allocate bandwidth, prioritize critical
applications, and adapt to changing network conditions.
Multi-Site Connectivity: For organizations with multiple remote sites, WAN virtualization
offers a cost-effective solution. It enables seamless connectivity between sites, allowing
efficient data transfer, collaboration, and resource sharing. With centralized management,
network administrators can ensure consistent policies and security across all sites.
Cloud Connectivity:
As more businesses adopt cloud-based applications and services, WAN virtualization
becomes an essential component. It provides reliable and secure connectivity between on-
premises infrastructure and public or private cloud environments. By prioritizing critical
cloud traffic and optimizing routing, WAN virtualization ensures optimal performance for
cloud-based applications.
The Basics of WAN
A WAN is a telecommunications network that extends over a large geographical area. It is
designed to connect devices and networks across long distances, using various
communication links such as leased lines, satellite links, or the internet. The primary purpose
of a WAN is to facilitate the sharing of resources and information across locations, making it
a vital component of modern business infrastructure. WANs can be either private, connecting
specific networks of an organization, or public, utilizing the internet for broader connectivity.
The Role of Virtualization in WAN
Virtualization has revolutionized the way WANs operate, offering enhanced flexibility,
efficiency, and scalability. By decoupling network functions from physical hardware,
virtualization allows for the creation of virtual networks that can be easily managed and
adjusted to meet organizational needs. This approach reduces the dependency on physical
infrastructure, leading to cost savings and improved resource utilization. Virtualized WANs
can dynamically allocate bandwidth, prioritize traffic, and ensure optimal performance,
making them an attractive solution for businesses seeking agility and resilience.
Separating the Control and Data Planes:
1. WAN virtualization can be defined as the abstraction of physical network resources into
virtual entities, allowing for more flexible and efficient network management. By separating
the control plane from the data plane, WAN virtualization enables the centralized
management and orchestration of network resources, regardless of their physical locations.
This simplifies network administration and paves the way for enhanced scalability and
agility.
2. WAN virtualization optimizes network performance by intelligently routing traffic and
dynamically adjusting network resources based on real-time conditions. This ensures that
critical applications receive the necessary bandwidth and quality of service, resulting in
improved user experience and productivity.
3. By leveraging WAN virtualization, organizations can reduce their reliance on expensive
dedicated circuits and hardware appliances. Instead, they can leverage existing network
infrastructure and utilize cost-effective internet connections without compromising security
or performance. This significantly lowers operational costs and capital expenditures.
4. Traditional WAN architectures often struggle to meet modern businesses’ evolving
needs. WAN virtualization solves this challenge by providing a scalable and flexible network
infrastructure. With virtual overlays, organizations can rapidly deploy and scale their network
resources as needed, empowering them to adapt quickly to changing business requirements.
Implementing WAN Virtualization
Successful implementation of WAN virtualization requires careful planning and execution.
Start by assessing your current network infrastructure and identifying areas for improvement.
Choose a virtualization solution that aligns with your organization’s specific needs and
budget. Consider leveraging software-defined WAN (SD-WAN) technologies to simplify the
deployment process and enhance overall network performance.
There are several popular techniques for implementing WAN virtualization, each with its
unique characteristics and use cases. Let’s explore a few of them:
a. MPLS (Multi-Protocol Label Switching): MPLS is a widely used technique that
leverages labels to direct network traffic efficiently. It provides reliable and secure
connectivity, making it suitable for businesses requiring stringent service level agreements
(SLAs).
b. SD-WAN (Software-Defined Wide Area Network): SD-WAN is a revolutionary
technology that abstracts and centralizes the network control plane in software. It offers
dynamic path selection, traffic prioritization, and simplified network management, making it
ideal for organizations with multiple branch locations.
c. VPLS (Virtual Private LAN Service): VPLS extends the functionality of Ethernet-based
LANs over a wide area network. It creates a virtual bridge between geographically dispersed
sites, enabling seamless communication as if they were part of the same local network.
Example Technology: MPLS & LDP
The Mechanics of MPLS: How It Works
MPLS operates by assigning labels to data packets at the network’s entry point—an MPLS-
enabled router. These labels determine the path the packet will take through the network,
enabling quick and efficient routing. Each router along the path uses the label to make
forwarding decisions, eliminating the need for complex table lookups. This not only
accelerates data transmission but also allows network administrators to predefine optimal
paths for different types of traffic, enhancing network performance and reliability.
Exploring LDP: The Glue of MPLS Systems
The Label Distribution Protocol (LDP) is crucial for the functioning of MPLS networks. LDP
is responsible for the distribution of labels between routers, ensuring that each understands
how to handle the labeled packets appropriately. When routers communicate using LDP, they
exchange label information, which helps in building a label-switched path (LSP). This
process involves the negotiation of label values and the establishment of the end-to-end path
that data packets will traverse, making LDP the unsung hero that ensures seamless and
effective MPLS operation.
Benefits of MPLS and LDP in Modern Networks
MPLS and LDP together offer a range of benefits that make them indispensable in
contemporary networking. They provide a scalable solution that supports a wide array of
services, including VPNs, traffic engineering, and quality of service (QoS). This versatility
makes it easier for network operators to manage and optimize traffic, leading to improved
bandwidth utilization and reduced latency. Additionally, MPLS networks are inherently more
secure, as the label-switching mechanism makes it difficult for unauthorized users to
intercept or tamper with data.
Understanding SD-WAN
SD-WAN is a cutting-edge networking technology that utilizes software-defined principles to
manage and optimize network connections intelligently. Unlike traditional WAN, which
relies on costly and inflexible hardware, SD-WAN leverages software-based solutions to
streamline network management, improve performance, and enhance security.
Key Benefits of SD-WAN
a) Enhanced Performance: SD-WAN intelligently routes traffic across multiple network
paths, ensuring optimal performance and reduced latency. This results in faster data transfers
and improved user experience.
b) Cost Efficiency: With SD-WAN, businesses can leverage affordable broadband
connections rather than relying solely on expensive MPLS (Multiprotocol Label Switching)
links. This not only reduces costs but also enhances network resilience.
c) Simplified Management: SD-WAN centralizes network management through a user-
friendly interface, allowing IT teams to easily configure, monitor, and troubleshoot network
connections. This simplification saves time and resources, enabling IT professionals to focus
on strategic initiatives.
SD-WAN incorporates robust security measures to protect network traffic and sensitive data.
It employs encryption protocols, firewall capabilities, and traffic segmentation techniques to
safeguard against unauthorized access and potential cyber threats. These advanced security
features give businesses peace of mind and ensure data integrity.
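The following toy model hints at how SD-WAN dynamic path selection might work: each WAN link is scored from measured latency, loss, and jitter, and traffic is steered to the best link that meets an application's requirements. The link names, metrics, thresholds, and weights are all hypothetical.

```python
# Toy model of SD-WAN dynamic path selection: score each WAN link from
# (hypothetical) measured latency, loss, and jitter, and steer traffic to
# the best-scoring link that meets the application's requirements.

links = {
    "mpls":      {"latency_ms": 35, "loss_pct": 0.1, "jitter_ms": 2},
    "broadband": {"latency_ms": 28, "loss_pct": 0.8, "jitter_ms": 9},
    "lte":       {"latency_ms": 60, "loss_pct": 1.5, "jitter_ms": 15},
}

def pick_path(app_profile):
    """Return the best link meeting the app's latency/loss ceilings."""
    eligible = {
        name: m for name, m in links.items()
        if m["latency_ms"] <= app_profile["max_latency_ms"]
        and m["loss_pct"] <= app_profile["max_loss_pct"]
    }
    if not eligible:
        return None
    # Lower composite score is better; the weights are illustrative.
    return min(eligible, key=lambda n: eligible[n]["latency_ms"]
                                       + 50 * eligible[n]["loss_pct"]
                                       + 2 * eligible[n]["jitter_ms"])

voip = {"max_latency_ms": 50, "max_loss_pct": 0.5}
bulk = {"max_latency_ms": 200, "max_loss_pct": 2.0}
print(pick_path(voip))   # -> 'mpls' (broadband fails the loss ceiling)
print(pick_path(bulk))   # -> 'mpls' (lowest composite score)
```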
WAN Virtualization with Network Connectivity Center
Understanding Google Network Connectivity Center
Google Network Connectivity Center (NCC) is a cloud-based service designed to simplify
and centralize network management. By leveraging Google’s extensive global infrastructure,
NCC provides organizations with a unified platform to manage their network connectivity
across various environments, including on-premises data centers, multi-cloud setups, and
hybrid environments.
Google Cloud VPC Networking Features
Subnetting: With VPC subnetting, you can divide your IP address range into smaller
subnets, allowing for better resource allocation and network segmentation.
Firewall Rules: Google Cloud VPC networking provides robust firewall rules that enable
you to control inbound and outbound traffic, ensuring enhanced security for your applications
and data.
Route Tables: Route tables in VPC networking allow you to define the routing logic for your
network traffic, ensuring efficient communication between different subnets and external
networks.
VPN Connectivity: Google Cloud supports VPN connectivity, allowing you to establish
secure connections between your on-premises network and your cloud resources, creating a
hybrid infrastructure.
Load Balancing: VPC networking offers load balancing capabilities, distributing incoming
traffic across multiple instances, increasing availability and scalability of your applications.
What is MPLS?
MPLS, short for Multi-Protocol Label Switching, is a versatile and scalable protocol used in
modern networks. At its core, MPLS assigns labels to network packets, allowing for efficient
and flexible routing. These labels help streamline traffic flow, leading to improved
performance and reliability. To understand how MPLS works, we need to explore its key
components.
The basic building block is the Label Switched Path (LSP), a predetermined path that packets
follow. Labels are attached to packets at the ingress router, guiding them along the LSP until
they reach their destination. This label-based forwarding mechanism enables MPLS to offer
traffic engineering capabilities and support various network services.
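A toy example of this label-based forwarding is sketched below: each label-switching router keeps a table keyed by the incoming label, swaps the label, and forwards the packet out the listed interface, so no IP routing lookup is needed along the LSP. The router names, label values, and interface names are made up for illustration.

```python
# Toy label-switching example: each LSR keeps a table keyed by incoming
# label; it swaps the label and forwards out the listed interface, so no
# IP routing lookup is needed along the LSP. Labels and interfaces are
# made up for illustration.

lfib = {   # per-router label forwarding tables: in_label -> (out_label, out_if)
    "R1": {16: (17, "ge-0/0/1")},
    "R2": {17: (18, "ge-0/0/2")},
    "R3": {18: (None, "ge-0/0/3")},   # None = pop the label (egress router)
}

def forward_along_lsp(ingress_label, path):
    label = ingress_label
    for router in path:
        out_label, out_if = lfib[router][label]
        action = "pop" if out_label is None else f"swap -> {out_label}"
        print(f"{router}: label {label} {action}, forward via {out_if}")
        label = out_label

forward_along_lsp(16, ["R1", "R2", "R3"])
```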
Understanding Label Distributed Protocols
Label distributed protocols, or LDP, are fundamental to modern networking technologies.
They are designed to establish and maintain label-switched paths (LSPs) in a network. LDP
operates by distributing labels, which are used to identify and forward network traffic
efficiently. By leveraging labels, LDP enhances network scalability and enables faster packet
forwarding.
One key advantage of label-distributed protocols is their ability to support multiprotocol label
switching (MPLS). MPLS allows for efficient routing of different types of network traffic,
including IP, Ethernet, and ATM. This versatility makes label-distributed protocols highly
adaptable and suitable for diverse network environments. Additionally, LDP minimizes
network congestion, improves Quality of Service (QoS), and promotes effective resource
utilization.
What is MPLS LDP?
MPLS LDP, or Label Distribution Protocol, is a key component of Multiprotocol Label
Switching (MPLS) technology. It facilitates the establishment of label-switched paths (LSPs)
through the network, enabling efficient forwarding of data packets. MPLS LDP uses labels to
direct network traffic along predetermined paths, eliminating the need for complex routing
table lookups.
One of MPLS LDP’s primary advantages is its ability to enhance network performance. By
utilizing labels, MPLS LDP reduces the time and resources required for packet forwarding,
resulting in faster data transmission and reduced network congestion. Additionally, MPLS
LDP allows for traffic engineering, enabling network administrators to prioritize certain types
of traffic and allocate bandwidth accordingly.
Understanding MPLS VPNs
MPLS VPNs, or Multiprotocol Label Switching Virtual Private Networks, are a network
infrastructure that allows multiple sites or branches of an organization to communicate
securely over a shared service provider network. Unlike traditional VPNs, MPLS VPNs
utilize labels to efficiently route and prioritize data packets, ensuring optimal performance
and security. By encapsulating data within labels, MPLS VPNs enable seamless
communication between different sites while maintaining privacy and segregation.
Understanding VPLS
VPLS, short for Virtual Private LAN Service, is a technology that enables the creation of a
virtual LAN (Local Area Network) over a shared or public network infrastructure. It allows
geographically dispersed sites to connect as if they are part of the same LAN, regardless of
their physical distance. This technology uses MPLS (Multiprotocol Label Switching) to
transport Ethernet frames across the network efficiently.
Key Features and Benefits
Scalability and Flexibility: VPLS offers scalability, allowing businesses to easily expand
their network as their requirements grow. It allows adding or removing sites without
disrupting the overall network, making it an ideal choice for organizations with dynamic
needs.
Seamless Connectivity: By extending the LAN across different locations, VPLS provides a
seamless and transparent network experience. Employees can access shared resources, such
as files and applications, as if they were all in the same office, promoting collaboration and
productivity across geographically dispersed teams.
Enhanced Security: VPLS ensures a high level of security by isolating each customer’s
traffic within its own virtual LAN. The data is encapsulated and encrypted, protecting it from
unauthorized access. This makes VPLS a reliable solution for organizations that handle
sensitive information and must comply with strict security regulations.
Advanced WAN Designs
DMVPN Phase 2 Spoke-to-Spoke Tunnels
Learning the mapping information required through NHRP resolution creates a dynamic
spoke-to-spoke tunnel. How does a spoke know how to perform such a task? As an
enhancement to DMVPN Phase 1, spoke-to-spoke tunnels were first introduced in Phase 2 of
the network. Phase 2 handed responsibility for NHRP resolution requests to each spoke
individually, which means that spokes initiated NHRP resolution requests when they
determined a packet needed a spoke-to-spoke tunnel. Cisco Express Forwarding (CEF) would
assist the spoke in making this decision based on information contained in its routing table.
WAN Services
Network Address Translation:
In simple terms, NAT is a technique for modifying IP addresses while packets traverse from
one network to another. It bridges private local networks and the public Internet, allowing
multiple devices to share a single public IP address. By translating IP addresses, NAT
enables private networks to communicate with external networks without exposing their
internal structure.
Types of Network Address Translation
There are several types of NAT, each serving a specific purpose. Let’s explore a few
common ones:
Static NAT: Static NAT, also known as one-to-one NAT, maps a private IP address to a
public IP address. It is often used when specific devices on a network require direct access to
the internet. With static NAT, inbound and outbound traffic can be routed seamlessly.
Dynamic NAT: On the other hand, Dynamic NAT allows a pool of public IP addresses to be
shared among several devices within a private network. As devices connect to the internet,
they are assigned an available public IP address from the pool. Dynamic NAT facilitates
efficient utilization of public IP addresses while maintaining network security.
Port Address Translation (PAT): PAT, also called NAT Overload, is an extension of
dynamic NAT. Rather than assigning a unique public IP address to each device, PAT assigns
a unique port number to each connection. PAT allows multiple devices to share a single
public IP address by keeping track of port numbers. This technique is widely used in home
networks and small businesses.
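The following toy model shows the bookkeeping behind PAT: each internal (IP, port) flow is mapped to the router's single public IP with a unique translated port, and replies are mapped back through the same table. All addresses and port numbers are illustrative.

```python
# Toy model of Port Address Translation (NAT overload): every internal
# (IP, port) flow is mapped to the router's single public IP with a unique
# translated port, and replies are translated back using the same table.
# Addresses and ports are illustrative.

PUBLIC_IP = "203.0.113.10"

class PatTable:
    def __init__(self, first_port=20000):
        self.next_port = first_port
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.outbound[key]

    def translate_in(self, public_port):
        return self.inbound.get(public_port)   # None if no matching flow

pat = PatTable()
print(pat.translate_out("192.168.1.10", 51515))   # ('203.0.113.10', 20000)
print(pat.translate_out("192.168.1.11", 51515))   # ('203.0.113.10', 20001)
print(pat.translate_in(20001))                    # ('192.168.1.11', 51515)
```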
NAT plays a crucial role in enhancing network security. By hiding devices’ internal IP
addresses, it acts as a barrier against potential attacks from the Internet. External threats find
it harder to identify and target individual devices within a private network. NAT acts as a
shield, providing additional security to the network infrastructure.
WAN Challenges
Deploying and managing the Wide Area Network (WAN) has become more challenging.
Engineers face several design challenges, such as decentralized traffic flows, inefficient
WAN link utilization, routing protocol convergence, and application performance issues with
active-active WAN edge designs. Active-active WAN designs that spray and pray over
multiple active links present technical and business challenges.
To do this efficiently, you have to understand application flows. There may also be
performance problems: when packets reach the other end, they may arrive out of order
because each link propagates at a different speed. The traffic has to be reassembled and put
back in order at the remote end, causing jitter and delay, and both high jitter and high delay
are bad for network performance.
Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating
systems. In a traditional execution environment, the operating system maintains mappings of virtual
memory to machine memory using page tables, which is a one-stage mapping from virtual memory to
machine memory. All modern x86 CPUs include a memory management unit (MMU) and a translation
lookaside buffer (TLB) to optimize virtual memory performance. However, in a virtual execution
environment, virtual memory virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The guest
OS continues to control the mapping of virtual addresses to the physical memory addresses of VMs.
But the guest OS cannot directly access the actual machine memory. The VMM is responsible for
mapping the guest physical memory to the actual machine memory. Figure 3.12 shows the two-level
memory mapping procedure.
Since each page table of the guest OSes has a separate page table in the VMM corresponding to it,
the VMM page table is called the shadow page table. Nested page tables add another layer of
indirection to virtual memory. The MMU already handles virtual-to-physical translations as defined by
the OS. Then the physical memory addresses are translated to machine addresses using another set of
page tables defined by the hypervisor. Since modern operating systems maintain a set of page tables
for every process, the shadow page tables will get flooded. Consequently, the performance overhead
and cost of memory will be very high.
When a virtual address needs to be translated, the CPU will first look for the L4 page table pointed
to by Guest CR3. Since the address in Guest CR3 is a physical address in the guest OS, the CPU needs
to convert the Guest CR3 GPA to the host physical address (HPA) using EPT. In this procedure, the
CPU will check the EPT TLB to see if the translation is there. If there is no required translation in the
EPT TLB, the CPU will look for it in the EPT. If the CPU cannot find the translation in the EPT, an
EPT violation exception will be raised.
When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3 page table
by using the GVA and the content of the L4 page table. If the entry corresponding to the GVA in the
L4
page table is a page fault, the CPU will generate a page fault interrupt and will let the guest OS kernel
handle the interrupt. When the GPA of the L3 page table is obtained, the CPU will look for the EPT to
get the HPA of the L3 page table, as described earlier. To get the HPA corresponding to a GVA, the
CPU needs to look for the EPT five times, and each time, the memory needs to be accessed four times.
Therefore, there are 20 memory accesses in the worst case, which is still very slow. To overcome this
shortcoming, Intel increased the size of the EPT TLB to decrease the number of memory accesses.
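The two-stage walk can be pictured with a toy model. The following sketch is an assumption-laden simplification, not how real hardware tables look: it models the guest page table and the EPT as flat dictionaries and translates a guest virtual address (GVA) to a host physical address (HPA), raising the equivalent of a guest page fault or an EPT violation when a mapping is missing.

# Minimal sketch of two-stage address translation under nested/extended paging.
# Page tables are modeled as flat dicts (page number -> page number) rather than
# the real 4-level radix trees, so the number of lookups differs from hardware.

PAGE = 4096

guest_page_table = {0x10: 0x2A}   # GVA page 0x10 -> guest-physical page 0x2A
ept              = {0x2A: 0x7F}   # guest-physical page 0x2A -> host-physical page 0x7F

def translate(gva):
    """GVA -> GPA via the guest page table, then GPA -> HPA via the EPT."""
    vpn, offset = divmod(gva, PAGE)
    gpn = guest_page_table.get(vpn)
    if gpn is None:
        raise RuntimeError("guest page fault: handled by the guest OS kernel")
    hpn = ept.get(gpn)
    if hpn is None:
        raise RuntimeError("EPT violation: handled by the hypervisor")
    return hpn * PAGE + offset

gva = 0x10 * PAGE + 0x123
print(hex(translate(gva)))  # 0x7f123, the host-physical address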
What is Memory Virtualization?
Memory virtualization is like having a super smart organizer for your computer's brain (its running
memory, or RAM). Imagine the RAM is like a big bookshelf, and all the apps and
programs you installed or are running are like books.
Memory virtualization is the librarian who arranges these books so your computer can easily find
and use them quickly. It also ensures that each application gets a fair share of the memory to run
smoothly and prevents mess, which ultimately makes your computer's RAM more organized
(tidy) and efficient.
In technical language, memory virtualization is a technique that abstracts, manages, and optimizes
physical memory (RAM) used in computer systems. It creates a layer of abstraction between the
RAM and the software running on your computer. This layer enables efficient memory allocation
to different processes, programs, and virtual machines.
Memory virtualization helps optimize resource utilization and secures the smooth operations of
multiple applications on shared physical memory (RAM) by ensuring each application gets the
required memory to work flawlessly.
Memory virtualization also decouples the volatile RAM (Temporary memory) from various
individual systems and aggregates that data into a virtualized memory pool available to any
system in the cluster. The distributed memory pool will be used as a high-speed cache, messaging
layer, and shared memory for the CPU to increase system performance and efficiency.
*Note – Don’t confuse it with virtual memory! Virtual memory is like having a bigger workspace
(hard drive) to handle large projects, and memory virtualization is like an office manager dividing
up the shared resources, especially computer RAM, to keep things organized and seamless.
How is Memory Virtualization Useful in Our Daily Lives?
Basically, memory virtualization helps our computer systems to work fast and smoothly. It also
provides sufficient memory for all apps and programs to run seamlessly.
Memory virtualization, acting like a personal assistant for the computer, ensures everything stays organized and
works properly, which is very important for the efficient working of our computers and
smartphones. Whether browsing the web, working on Google documents, or using complex
software, memory virtualization is the hero that provides us with a smooth and responsive
computing experience in our daily lives.
Memory virtualization is essential for modern computing, especially in cloud computing, where
multiple users and applications share the same physical hardware (Like RAM and System).
It helps in efficient memory management and allocation, isolation between applications (by
providing the required share of memory), and dynamic adjustment based on the running
workloads of various applications. Without memory virtualization, it would be challenging to run
multiple applications at the same time.
You have likely encountered its applications already: this critical technology enables more efficient
and flexible use of computing resources, from complex data centers down to personal devices.
Memory virtualization is integral to personal computers, mobile devices, web hosting, app
hosting, cloud computing, and data center operations.
How Does Memory Virtualization Work in Cloud Computing?
You may be thinking all that is fine, but how does memory virtualization work in cloud
computing? It is just one part of the broader concept of resource virtualization, which also includes
compute, storage, network, and many other virtualization techniques.
When memory virtualization takes place in cloud infrastructure, it goes through a process; two of
the key steps are described below.
3. Dynamic Allocation
Cloud service providers use memory virtualization to allocate virtual memory to VMs and Cloud
users instantly on demand (According to Workload). It means cloud memory can be dynamically
assigned and reassigned based on the fluctuating workload.
This elasticity of cloud computing enables effective use of available resources, and cloud users
can scale up or down their cloud memory as needed. Additionally, cloud migration services help
in ensuring the seamless transfer of data and applications to the cloud, enhancing the benefits of
memory virtualization.
4. Isolation and Data Security
Memory virtualization ensures that the virtual memory allocated to one cloud user or VM is
isolated from others. This isolation is vital for data security and prevents one individual from
accessing another’s data or memory.
That is why many companies handling sensitive data prefer to purchase private cloud services to
reduce the risk of hacking and data breaches.
Importance of Memory Virtualization in Cloud Computing
Memory Virtualization plays a critical role in cloud computing for several reasons. It contributes
to cloud services’ efficiency, scalability, effective resource utilization, and cost-effectiveness.
Here are some of the key points that show the importance of memory virtualization in cloud
computing:
1. Memory virtualization allows cloud providers to use physical memory resources in the most
efficient way. Overcommitting memory lets a provider promise VMs more memory than is physically
installed, on the assumption that not all VMs use their full share at once (a toy sketch of this idea
follows this list), which optimizes memory resources and hardware.
2. This virtualization enables the dynamic allocation of cloud memory to cloud user instances.
This elasticity is crucial in cloud computing to manage varying workloads. It allows cloud users
to scale up and down memory resources as needed and promotes flexibility and cost savings.
3. Allocating separate cloud memory for every single user prevents unauthorized access and is a
must for data security.
4. Memory virtualization is vital for handling a large number of users and workloads. It ensures
that memory can be scaled up or down without manual intervention whenever a VM requires it.
5. Migration and live migration are important for load balancing, hardware maintenance, and
disaster recovery in cloud computing. Transferring a VM's memory from one host to another through
live migration is only feasible when memory is virtualized. Reliable migration tooling is therefore
crucial for ensuring smooth transitions and maintaining system stability during these operations.
6. By optimizing virtual memory usage, memory virtualization maximizes physical memory
utilization and helps reduce the overall operational cost of the cloud.
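As promised above, here is a toy, assumption-heavy sketch of points 1 and 2: a host that overcommits its physical RAM across VMs and grows or shrinks each VM's allocation on demand. The class, VM names, and the 1.5 overcommit ratio are all made up for illustration; real hypervisors rely on techniques such as ballooning and page sharing to make overcommitment safe.

# Toy memory manager that overcommits host RAM across VMs (illustrative only).

class HostMemory:
    def __init__(self, physical_mb, overcommit_ratio=1.5):
        self.capacity = int(physical_mb * overcommit_ratio)  # promisable memory
        self.allocations = {}                                # vm_name -> MB

    def allocated(self):
        return sum(self.allocations.values())

    def resize(self, vm, mb):
        """Dynamically grow or shrink a VM's memory share on demand."""
        new_total = self.allocated() - self.allocations.get(vm, 0) + mb
        if new_total > self.capacity:
            raise MemoryError(f"cannot give {vm} {mb} MB: overcommit limit reached")
        self.allocations[vm] = mb

host = HostMemory(physical_mb=16384)   # 16 GB physical, 24 GB promisable
host.resize("web-vm", 8192)
host.resize("db-vm", 8192)
host.resize("web-vm", 4096)            # scale down when the workload drops
host.resize("batch-vm", 8192)          # fits only because web-vm shrank
print(host.allocations, host.allocated(), "/", host.capacity, "MB")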
How Does Memory Virtualization Differ From Other Virtualization Techniques?
Memory virtualization is one of the virtualization techniques used in modern computing. It’s
different from other virtualizations in terms of abstraction and management. Here are some key
differences between memory and other virtualization techniques.
Memory Virtualization vs. Server Virtualization
Memory Virtualization
Abstracts and manages memory resources.
Focuses on optimizing memory use and ensuring isolation.
Enables dynamic allocation of memory to VMs.
Server Virtualization
Abstracts and manages the entire server (CPU, memory, and storage).
Runs multiple isolated VMs on a single physical server.
Splits the physical server into multiple virtual servers.
Memory Virtualization vs. Storage Virtualization
Memory Virtualization
Abstracts and manages memory resources (RAM and Cloud memory).
Optimizes memory allocation and facilitates dynamic memory allocation.
Storage Virtualization
Abstracts and centralizes storage resources.
Allows users to manage storage capacity and data across multiple storage systems.
Provides features like data redundancy and data migration.
Helps the system deliver consistent performance and maintain smooth operations.
Memory Virtualization vs. Network Virtualization
Memory Virtualization
Focuses on managing the allocation and optimization of memory resources.
Does not deal directly with network-related resources; it only deals with memory.
Network Virtualization
Abstracts and separates network resources.
Enables multiple virtual networks to coexist on the same physical network infrastructure.
Provides isolation, segmentation, and management of network resources.
Memory Virtualization vs. Desktop Virtualization
Memory Virtualization
Operates at the hardware level (RAM).
Manages the memory resources available to running processes, applications, or virtual machines.
Used in almost every digital device, from laptops to smartphones.
Desktop Virtualization
Abstracts the entire desktop system, including the operating system, applications, and user
data.
Allows users to access their desktops virtually from any device while maintaining consistent
configurations and data.
Commonly used in the IT industry and IoT companies.
Memory Virtualization vs. Application Virtualization
Memory Virtualization
Primarily concerned with managing system memory.
Ensures efficient allocation and usage of memory for running processes.
Application Virtualization
Abstracts individual applications from the underlying operating system, allowing them to
run independently and without conflict.
Allows users to access and use an application from any system connected to the server.
Frequently used for compatibility and security reasons.
Applications of Memory Virtualization in the Digital World
In the digital world, memory virtualization offers a diverse range of applications. This is a key
component of the internet technology to drive innovation and transform the landscape of modern
computing.
It enables more efficient resource utilization, improved system performance, and smooth user
experience across a range of technological domains.
Technical domains where memory virtualization plays a crucial role:
Cloud Computing: In shared cloud environments, memory virtualization ensures that each
virtual machine (VM) has an isolated memory and gets the required memory whenever needed.
It plays a major role in efficient memory utilization and reducing running costs.
High-Performance Computing (HPC): In HPC clusters, it ensures that memory is efficiently
allocated to many processes in parallel for seamless, complex scientific simulations and big
data analysis. It also helps allocate memory resources based on the specific needs of
each task in the HPC cluster.
Data Centers: Large enterprises with heavy data loads require memory virtualization to run
multiple applications on a shared server. It simplifies resource sharing where multiple teams and
departments have varying memory requirements and dynamic loads.
Memory virtualization is also crucial for database management, efficiently allocating memory
when multiple databases run on a single server.
Resource-Constrained Environments: When computers have limited physical memory (RAM),
memory virtualization helps optimize memory usage and prevent resource contention. This
improves memory balancing and overall system performance.
Helps in Disaster Recovery: Memory virtualization enables the transfer of memory between
two data centers and maintains services during failures.
Testing and Development of Applications: Used to simulate real-world conditions and test
application performance under various configurations.
IoT and Edge Computing: New edge applications and devices use memory virtualization for
efficient RAM allocation and isolation of cache for different apps and websites. For example,
when you use two different apps on your mobile device, one app can’t access another app’s
data without your permission.
For those interested in creating interconnected devices, exploring IoT application
development can provide insights into building efficient and innovative solutions.
Future of Memory Virtualization in Cloud Computing
The future of memory virtualization in cloud computing holds significant promise as cloud
technology continues to evolve and become more integral to our digital world. Several trends and
developments are likely to shape the future of memory virtualization:
Increasing Demand for Memory Efficiency
In upcoming years, cloud workloads will become more diverse and intense, which will require
efficient memory management. Memory virtualization will play a crucial role in optimizing
memory allocation and achieving higher performance.
Enhancing Data Security and Memory Isolation
Data breaches and hacking threats are rising daily, and countries around the globe are increasingly
concerned about data security and privacy. So, it will be essential for cloud
providers to offer improved isolation and security features in cloud computing.
Memory virtualization will play a key role in data security, where multiple cloud users share the
same cloud storage.
AI and Machine Learning Integration
Memory virtualization will be used to support AI and machine learning workloads that require
huge storage capacity, like ChatGPT, Bard, and AI-powered automation applications. It’s used in
memory allocation and storage utilization to enhance user experience.
Quantum Computing Solutions
With the advancement of quantum computing, memory virtualization will adapt to unique
memory requirements in complex quantum algorithms and programs. That’s why several
companies are working on specialized memory management solutions based on memory
virtualization for quantum computing.
For Blockchain Technologies Integration
We all know that blockchain is a futuristic technology adopted by every sector, from banking to
healthcare to IoT. Memory virtualization will be used to manage blockchain networks and
decentralized applications.
Reduce Energy Consumption
Most nations are currently focusing on effective energy utilization and high efficiency. This led
to the development of memory virtualization solutions that minimize energy consumption. Big
data centers and cloud infrastructure companies currently use this technology to minimize energy
consumption.
Other Future Applications
Memory virtualization will be used in various other sectors in upcoming years, such as edge
computing expansion, custom memory allocation, distributed cloud architecture, and serverless
computing.
What is Storage Virtualization?
Storage virtualization is a process of pooling physical storage devices so that IT may address a single
"virtual" storage unit. It offered considerable economic and operational savings over bare metal storage
but is now mostly overshadowed by the cloud paradigm.
In a storage server, RAID controllers already provide a basic form of storage virtualization.
Applications and operating systems on the device can directly access the discs for writing, while the
controllers organize the local physical discs into RAID groups and the operating system sees the
storage according to that configuration. Because the storage is abstracted, the controller is in charge
of figuring out how to write or retrieve the data that the operating system requests.
Types of Storage Virtualization
Below are some types of Storage Virtualization.
Kernel-level virtualization: A separate version of the Linux kernel runs the virtual
machines, so one host can execute several virtual servers at the kernel level.
Hypervisor Virtualization: A layer known as a hypervisor is installed between the operating system
and the hardware. It enables several operating systems to run efficiently on the same machine.
Hardware-assisted Virtualization: Similar to full virtualization and para-virtualization, except that it
requires hardware support (such as Intel VT-x or AMD-V).
Para-virtualization: Built on a hypervisor, which handles software emulation and trapping.
Methods of Storage Virtualization
Network-based storage virtualization: The most popular type of virtualization used by
businesses is network-based storage virtualization. All of the storage devices in an FC or iSCSI
SAN are connected to a network device, such as a smart switch or specially designed server, which
displays the network's storage as a single virtual pool.
Host-based storage virtualization: Host-based storage virtualization is software-based and most
often seen in HCI systems and cloud storage. In this type of virtualization, the host, or a hyper-
converged system made up of multiple hosts, presents virtual drives of varying capacity to the
guest machines, whether they are VMs in an enterprise environment, physical servers or computers
accessing file shares or cloud storage.
Array-based storage virtualization: The most common form is where a storage array serves as the
primary storage controller and runs virtualization software. This allows the array to share storage
resources with other arrays and to present various physical storage types that can be used as
storage tiers.
How Storage Virtualization Works?
Physical storage hardware is replicated in a virtual volume during storage virtualization.
A single server is utilized to aggregate several physical discs into a grouping that creates a basic
virtual storage system.
Operating systems and programs can access and use the storage because a virtualization layer
separates the physical discs from the virtual volume.
The physical discs are separated into objects called logical volumes (LV), logical unit numbers
(LUNs), or RAID groups, which are collections of tiny data blocks.
RAID arrays can serve as virtual storage in a more complex setting: many physical drives simulate
a single storage device that copies data to several discs in the background while striping it.
The virtualization program has to take an extra step in order to access data from the physical discs.
Block-level and file-level storage environments can both be used to create virtual storage.
Advantages of Storage Virtualization
Below are some Advantages of Storage Virtualization.
Advanced features like redundancy, replication, and disaster recovery are all possible with the
storage devices.
It enables everyone to establish their own company prospects.
Data can be kept in more convenient locations away from the particular host, so the data is not
necessarily compromised in the event of a host failure.
IT operations may now provision, divide, and secure storage in a more flexible way by abstracting
the storage layer.
Disadvantages of Storage Virtualization
Below are some Disadvantages of Storage Virtualization.
Storage Virtualization still has limitations which must be considered.
Data security is still a problem. Virtual environments can draw new types of cyberattacks, despite
the fact that some may contend that virtual computers and servers are more secure than physical
ones.
The deployment of storage virtualization is not always easy, and there are still technological
obstacles, such as scalability.
Virtualization breaks the end-to-end view of your data, so the virtualized storage
solution must be integrated with existing tools and systems.
4.1. Storage Virtualization:
When I/O is sent to a virtual volume, it is redirected through the virtualization at the
storage network layer to the mapped physical array.
Some implementations may limit the granularity of the mapping which may limit the
capabilities of the device.
Typical granularities range from a single physical disk down to some small subset
(multiples of megabytes or gigabytes) of the physical disk.
4. Data Mobility and Migration: Address space remapping facilitates data mobility
and migration within the storage virtualization environment. Data can be moved or
migrated between different storage systems, arrays, or technologies, and logical
addresses are remapped to the new physical locations transparently to applications or
users. This capability simplifies data management tasks and allows for more efficient
storage resource utilization.
5. Abstraction and Simplification: Storage virtualization abstracts the underlying
physical storage infrastructure, providing a unified view of storage resources to
applications or users. Address space remapping ensures that applications or users can
access storage resources using logical addresses without needing to know the details
of the underlying physical storage configuration. This abstraction simplifies storage
management and reduces complexity for administrators.
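To make the address-remapping idea above concrete, here is a toy sketch (hypothetical class and array names, nothing from a real product) of a virtualization layer that maps a virtual volume's logical blocks to physical locations, redirects I/O through that map, and remaps a block after a migration without the host noticing.

# Toy address-remapping layer for a virtual volume (illustrative only; real
# storage virtualization works on LUN extents inside arrays, switches, or hosts).

class VirtualVolume:
    def __init__(self):
        # logical block -> (physical array name, physical block)
        self.mapping = {}

    def provision(self, logical_block, array, physical_block):
        self.mapping[logical_block] = (array, physical_block)

    def read(self, logical_block):
        array, pblock = self.mapping[logical_block]     # redirect the I/O
        return f"I/O redirected to {array}, block {pblock}"

    def migrate(self, logical_block, new_array, new_physical_block):
        """Move the data, then remap; hosts keep using the same logical block."""
        self.mapping[logical_block] = (new_array, new_physical_block)

vol = VirtualVolume()
vol.provision(0, array="array-A", physical_block=7042)
print(vol.read(0))                       # served from array-A
vol.migrate(0, "array-B", 15)            # data mobility: transparent to the host
print(vol.read(0))                       # now served from array-B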
1. Limited Applicability:
The primary challenge is that address space remapping primarily targets memory
management within a computer system. Its direct impact on storage virtualization,
which deals with physical storage allocation and presentation, is minimal.
2. Increased Complexity:
Introducing another layer of remapping within storage controllers can add complexity
to the storage virtualization environment. This complexity can make troubleshooting
and debugging issues more challenging for administrators.
3. Limited Security Benefit:
While remapping with encryption can add some obfuscation, it's not a security
solution in itself. A sophisticated attacker could potentially exploit vulnerabilities in
the remapping process to gain access to encrypted data. Strong encryption algorithms
and proper key management practices remain essential for data security.
4.5. Storage Area Network (SAN)
A SAN presents storage devices to a host such that the storage appears to be locally
attached. This simplified presentation of storage to a host is accomplished through the use of
different types of virtualization.
● FCP: Fibre Channel Protocol is the most widely used SAN deployed in 70% to 80%
of the total SAN market. FCP uses Fibre Channel transfer protocols using embedded
SCSI commands.
● iSCSI: Internet Small Computer System Interface is the next biggest SAN or
block protocol, with roughly 10% to 15% of the marketplace. iSCSI
encapsulates SCSI commands within an Ethernet frame and uses an IP Ethernet
network for transport (a small command-line sketch of iSCSI discovery and login follows this list).
● FCoE: Fibre Channel over Ethernet is less than 5 percent of the SAN marketplace.
It is similar to iSCSI in that it encapsulates an FC frame within an Ethernet
frame, but unlike iSCSI it runs directly over Ethernet rather than over a routed IP network.
● NVMe: Non-Volatile Memory Express is a protocol originally designed for
accessing flash storage across a PCI Express (PCIe) bus; NVMe over Fibre Channel extends it
across the SAN. Unlike conventional all-flash architectures, which can be restricted to a single
serial command queue, NVMe supports tens of thousands of parallel queues, each capable of
handling thousands of concurrent commands.
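As referenced in the iSCSI item above, here is a minimal sketch of attaching iSCSI block storage from a Linux host by driving the standard open-iscsi iscsiadm utility from Python. The portal address and target IQN are placeholders, and the host is assumed to have the open-iscsi package installed and root privileges.

# Sketch of attaching iSCSI block storage with the open-iscsi command-line tool.

import subprocess

PORTAL = "192.0.2.50"                                    # hypothetical storage portal
TARGET = "iqn.2024-01.com.example:storage.lun1"          # hypothetical target IQN

def run(cmd):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Discover targets offered by the portal, then log in to one of them.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))
print(run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"]))
# After login, the LUN appears as a local block device (for example /dev/sdX).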
4.5.2. Components of SAN
Each Fibre Channel device, such as a server, storage array, or tape library, connects to the SAN
through a node port. You can see how both SAN and NAS fit together in the picture
given below:
Figure.4.6. SAN Components
● Node: Every node may act as either a source or a destination of data for other nodes.
● Cables: Cabling is performed using fibre optic cable and copper cable. Copper cable is
used to cover short distances, for example, for backend connectivity.
● Interconnect Devices: Hubs, switches, and directors are the interconnect
devices used in a SAN.
● Storage Arrays: Large storage arrays are used to provide hosts with access to
storage resources.
● SAN Management Software: The SAN management software is used to manage
the interfaces between hosts, interconnect devices, and storage arrays.
4.5.3. How SAN works?
SAN storage solutions are block storage-based, meaning data is split into storage volumes
that can be formatted with different protocols, such as iSCSI or Fibre Channel Protocol
(FCP). A SAN can include hard disks or virtual storage nodes and cloud resources, known as
virtual SANs or vSANs.
4.6. Network Attached Storage (NAS)
NAS devices offer an easier way for many users in different locations to access data,
which can be valuable when working on the same project or sharing information.
NAS storage systems are file storage-based, meaning the data is stored in files that are
organized in folders under a hierarchy of directories and subdirectories. Unlike direct
attached storage — which can be accessed by one device — the NAS file system provides
file storage and sharing capabilities between devices.
● Network: One or multiple networked NAS devices are connected to a local area
network (LAN) or an Ethernet network with an assigned IP address.
● NAS box: This hardware device with its own IP address includes a network interface
card (NIC), a power supply, processor, memory and drive bay for two to five disk
drives. A NAS box, or head, connects and processes requests between the user’s
computer and the NAS storage.
● Storage: The disk drives within the NAS box that store the data. Often storage uses a
RAID configuration, distributing and copying data across multiple drives. This
provides data redundancy as a fail-safe, and it improves performance and storage
capacity.
● Operating system: Unlike local storage, NAS storage is self-contained. It also
includes an operating system to run data management software and authorize file-
level access to authorized users.
● Software: Preconfigured software within the NAS box manages the NAS device and
handles data storage and file-sharing requests.
4.6.4. NAS use cases:
There are times when NAS is the better choice, depending on the company’s needs and
application:
● File collaboration and storage: This is the primary use case for NAS in mid- to
large-scale enterprises. With NAS storage in place, IT can consolidate multiple file
servers for ease of management and to save space.
● Archiving: NAS is a good choice for storing a large number of files, especially if you
want to create a searchable and accessible active archive.
● Big data: NAS is a common choice for storing and processing large unstructured
files, running analytics and using ETL (extract, transform, load) tools for integration.
4.6.5. Overview of NAS Benefits
● Relatively inexpensive
● 24/7 remote data accessibility
● Very Good expandability
● Redundant storage structure (RAID)
● Automatic backups to additional cloud and devices
● Flexibility
4.6.6. Limitations of NAS
The areas where NAS limits itself are scalability and performance. Once the number of
users accessing files over NAS crosses a certain limit, the server needs more
horsepower.
Another major limitation of NAS lies in Ethernet. Data over Ethernet is transferred
in the form of packets, which means that one source or file is divided
into a number of packets. If even one packet arrives late or out of sequence, the user
won't be able to access that file until every packet has arrived and been reassembled
into the correct sequence.
Comparison Table: SAN vs NAS:
4.7. RAID (Redundant Arrays of Independent Disks)
4.7.1. Introduction
RAID is a technique that makes use of a combination of multiple disks instead of
using a single disk for increased performance, data redundancy, or both. The term was coined
by David Patterson, Garth A. Gibson, and Randy Katz at the University of California,
Berkeley in 1987.
4.7.2. RAID Terminology
Here are some common terms used in RAID (Redundant Array of Independent Disks):
1. Striping: A technique used in RAID 0 and some other RAID levels where data is
divided into blocks and distributed across multiple disks. It improves performance by
allowing multiple disks to work in parallel.
2. Mirroring: Also known as RAID 1, mirroring involves creating an exact duplicate of
data on multiple disks. This provides redundancy and fault tolerance, as data remains
accessible even if one disk fails.
3. Parity: In RAID 5 and RAID 6 configurations, parity is a method used to provide
fault tolerance by generating and storing parity information. Parity data allows the
RAID array to reconstruct data in the event of disk failure.
4. Hot Spare: A spare disk drive that is kept in reserve and can automatically replace a
failed disk in a RAID array. Hot spares help minimize downtime and maintain data
redundancy.
5. RAID Level: Refers to the specific configuration or layout of a RAID array,
determining how data is distributed, duplicated, or parity is calculated across the
disks. Common RAID levels include RAID 0, RAID 1, RAID 5, RAID 6, RAID 10,
etc.
6. RAID Controller: A hardware or software component responsible for managing the
operation of a RAID array. Hardware RAID controllers are dedicated devices, while
software RAID controllers are implemented in software.
7. RAID Array: The logical grouping of multiple physical disk drives configured in a
RAID configuration. The RAID array appears as a single storage device to the
operating system.
4.7.3. Levels of RAID:
There are several levels of RAID, each with its own characteristics and benefits. Here
are some most common RAID levels:
1. RAID-0 (Striping)
2. RAID-1 (Mirroring)
3. RAID-2 (Bit-Level Striping with Dedicated Parity)
4. RAID-3 (Byte-Level Striping with Dedicated Parity)
5. RAID-4 (Block-Level Striping with Dedicated Parity)
6. RAID-5 (Block-Level Striping with Distributed Parity)
7. RAID-6 (Block-Level Striping with Two Parity Bits)
● Instead of placing just one block on a disk at a time, we can place two (or more)
blocks on a disk before moving on to the next one (a short sketch after the RAID-0
advantages and disadvantages below illustrates the block-to-disk mapping).
Evaluation:
● Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
● Capacity: N*B
The entire space is being used to store data. Since there is no duplication, N disks
each having B blocks are fully utilized.
Advantages:
1. It is easy to implement.
2. It utilizes the storage capacity in a better way.
Disadvantages:
1. A single drive loss can result in the complete failure of the system.
2. Not a good choice for a critical system.
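As mentioned in the RAID-0 description above, the striping layout can be captured in a couple of lines. This is a toy round-robin mapping (the disk count is chosen arbitrarily): every one of the N*B blocks holds data, which is why capacity is N*B and reliability is 0.

# Toy RAID-0 layout: block i of the virtual disk lands on disk (i mod N),
# stripe (i div N). All N*B blocks hold data, and losing any disk loses data.

def raid0_location(block, n_disks):
    return block % n_disks, block // n_disks   # (disk index, stripe index)

N = 4
for block in range(8):
    disk, stripe = raid0_location(block, N)
    print(f"virtual block {block} -> disk {disk}, stripe {stripe}")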
Disadvantages
1. It has a complex structure and high cost due to extra drive.
2. It requires an extra drive for error detection.
● Assume that in the above figure, C3 is lost due to some disk failure.
Then, we can recompute the data bit stored in C3 by looking at the
values of all the other columns and the parity bit. This allows us to
recover lost data.
Evaluation:
● Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the
way parity works). If more than one disk fails, there is no way to recover
the data.
● Capacity: (N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1)
disks are made available for data storage, each disk having B blocks.
Advantages:
1. It helps in reconstructing the data if at most one disk's data is lost (an XOR parity
sketch follows this RAID-4 discussion).
Disadvantages:
1. It can't help in reconstructing the data when more than one disk's data is lost.
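The sketch referenced above is a toy demonstration of dedicated parity in the RAID-4/RAID-5 style: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. Block contents and disk counts are made up for illustration.

# XOR parity in the style of RAID-4/5: the parity block is the XOR of the data
# blocks, so any single lost block can be rebuilt from the remaining blocks.

from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]       # data blocks on three disks
parity = xor_blocks(data)                # stored on the parity disk

# Disk 1 fails: rebuild its block from the surviving data blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print(rebuilt)                           # b'BBBB'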
Disadvantages:
1. Its technology is complex and extra space is required.
2. If both discs get damaged, data will be lost forever.
4.7.3.7. RAID-6 (Block-Level Striping with Two Parity Bits)
● RAID-6 helps when there is more than one disk failure. A pair of
independent parities is generated and stored on multiple disks at this
level. Ideally, you need at least four disk drives for this level.
● There are also hybrid RAIDs, which make use of more than one RAID
level nested one after the other, to fulfill specific requirements.
Advantages:
1. Very high data Accessibility.
2. Fast read data transactions.
Disadvantages:
1. Due to double parity, it has slow write data transactions.
2. Extra space is required.
Virtualization tools are software solutions that enable the creation, management, and
utilization of virtualized environments, allowing multiple operating systems or applications to
run on a single physical server or machine. These tools provide various functionalities,
including virtual machine (VM) creation, provisioning, monitoring, and performance
management. Some popular virtualization tools include:
VMware vSphere: VMware vSphere is a leading virtualization platform that provides a suite
of tools for creating and managing virtualized environments. It includes features such as
vCenter Server for centralized management, VMware ESXi hypervisor for virtualization,
vMotion for live migration of VMs, and High Availability (HA) for ensuring VM uptime.
KVM (Kernel-based Virtual Machine): KVM is a virtualization solution for Linux-based
systems that leverages the Linux kernel to provide virtualization capabilities. It is integrated
into the Linux kernel and allows users to create and manage VMs on Linux servers. KVM
supports features such as live migration, resource allocation, and security isolation.
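KVM hosts are commonly scripted through libvirt. The following sketch assumes the libvirt-python package is installed, libvirtd is running, and the current user may open the qemu:///system connection; it simply connects to the local host and lists its domains with their run states.

# Sketch of talking to a local KVM/QEMU host through the libvirt Python bindings.

import libvirt

conn = libvirt.open("qemu:///system")
try:
    print("Hypervisor:", conn.getType(), "on", conn.getHostname())
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"  {dom.name():20s} {running}")
finally:
    conn.close()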
Xen Project: Xen is an open-source hypervisor that provides virtualization capabilities for
both desktop and server environments. It allows users to create and manage VMs on various
operating systems, including Linux and Windows. Xen supports features such as
paravirtualization, live migration, and memory overcommitment.
5.1. VMware
VMware is a company that provides cloud computing and virtualization software and
services. They are a pioneer in virtualization technology, which allows you to run multiple
virtual machines (VMs) on a single physical server.
VMware offers a suite of virtualization products including VMware vSphere for
server virtualization, VMware Workstation for desktop virtualization, and VMware Fusion for
Mac desktops.
It's known for its robust features, stability, and wide adoption in enterprise environments.
The VMware cloud takes advantage of this transition from one virtualization era to the other
with its products and services.
These VMware resources may be split over several virtual servers that act much like a single
physical machine in the appropriate configurations – for example, storing data, developing
and distributing programs, maintaining a workspace, and much more.
Key features of VMware Server 2 include:
1. Easy Installation: Installs like an application, with simple, wizard-driven installation and
virtual machine creation process.
2. Seamless migration to vSphere: Protect your investment and use the free web-based service
VMware Go to seamlessly migrate your virtual machines to VMware vSphere.
3. Hardware Support: Runs on any standard x86 hardware, including Intel and AMD
hardware virtualization assisted systems. Also supports two-processor Virtual SMP, enabling
a single virtual machine to span multiple physical processors
4. Operating system support: The broadest operating system support of any host-based
virtualization platform currently available, including support for Windows Server 2008,
Windows Vista Business Edition and Ultimate Edition (guest only), Red Hat Enterprise
Linux 5 and Ubuntu 8.04.
5. 64-bit operating system support: Use 64-bit guest operating systems on 64-bit hardware to
enable more scalable and higher performing computing solutions. In addition, Server 2 runs
natively on 64-bit Linux host operating systems.
6. VMware Infrastructure (VI) Web Access management interface: VI Web Access
management interface provides a simple, flexible, secure, intuitive and productive
management experience. Plus, access thousands of pre-built, pre-configured, ready-to-run
enterprise applications packaged with an operating system inside a virtual machine at the
Virtual Appliance Marketplace.
7. Independent virtual machine console: With the VMware Remote Console, you can access
your virtual machine consoles independent of the VI Web Access management interface.
8. More scalable virtual machines: Support for up to 8 GB of RAM and up to 10 virtual
network interface cards per virtual machine, transfer data at faster data rates from USB2.0
devices plus add new SCSI hard disks and controllers to a running virtual machine.
9. Volume Shadow Copy Service (VSS): Properly backup the state of the Windows virtual
machines when using the snapshot feature to maintain data integrity of the applications
running inside the virtual machine.
10. Support for Virtual Machine Interface (VMI): This feature enables transparent
paravirtualization, in which a single binary version of the operating system can run either on
native hardware or in paravirtualized mode to improve performance in specific Linux
environments.
11. Virtual Machine Communication Interface (VMCI): Support for fast and efficient
communication between a virtual machine and the host operating system and between two or
more virtual machines on the same host.
VMware Infrastructure virtualizes and aggregates the underlying physical hardware resources
and provides pools of virtual resources to the datacenter in the virtual environment.
In addition, VMware Infrastructure brings about a set of distributed services that
enables fine‐grain, policy‐driven resource allocation, high availability, and consolidated
backup of the entire virtual datacenter. These distributed services enable an IT organization to
establish and meet their production Service Level Agreements with their customers in a cost-
effective manner.
The relationships among the various components of the VMware Infrastructure are
shown in Figure. 5.1.
VMware VMotion – Feature that enables the live migration of running virtual
machines from one physical server to another without service interruption.
VMware High Availability (HA) – Feature that provides easy‐to‐use, cost‐
effective high availability for applications running in virtual machines. In the event of server
failure, affected virtual machines are automatically restarted on other production servers that
have spare capacity.
VMware Distributed Resource Scheduler (DRS) – Feature that allocates and
balances computing capacity dynamically across collections of hardware resources for virtual
machines. This feature includes distributed power management (DPM) capabilities that
enable a datacenter to significantly reduce its power consumption.
VMware Consolidated Backup (Consolidated Backup) – Feature that
provides an easy‐to‐use, centralized facility for agent‐free backup of virtual machines. It
simplifies backup administration and reduces the load on ESX Servers.
VMware Infrastructure SDK – Feature that provides a standard interface for
VMware and third‐party solutions to access the VMware Infrastructure.
5.1.3. Advantages of VMWare
1. Cost
2. Redundancy
3. Scalability
4. Flexibility
5. Multiple OS Support
5.2. Amazon Web Services (AWS)
AWS offers tools such as compute power, database storage and content delivery services.
With more than 200 services, AWS provides a range of offerings for individuals, as well as
public and private sector organizations to create applications and information services of all
kinds.
5.2.1. Amazon AWS Key features:
Amazon Web Services (AWS) offers a wide range of features to help developers
build, deploy, and scale applications in the cloud. The key services, grouped by
category, are listed later in this section.
5.2.2. AWS Architecture
The above diagram is a simple AWS architecture diagram that shows the basic structure
of Amazon Web Services architecture.
It shows the basic AWS services, such as Route 53 and Elastic Load Balancing.
By using S3 (Simple Storage Service), companies can easily store and retrieve data of
various types using Application Programming Interface calls.
AWS comes with so many handy options such as configuration server, individual server
mapping, and pricing.
As we can see in the AWS architecture diagram that a custom virtual private cloud is created
to secure the web application, and resources are spread across availability zones to provide
redundancy during maintenance.
We can add or remove instances and scale up or down on the basis of dynamic scaling
policies.
Amazon CloudFront distribution helps us minimize latency. It also maintains the edge
locations across the globe—an edge location is a cache for web and streaming content.
Route 53 domain name service, on the other hand, is used for the registration and
management of our Internet domain.
1. Compute Services:
=>Amazon Elastic Compute Cloud (EC2): Virtual servers in the cloud, offering scalable
compute capacity for running applications, hosting websites, and processing data.
=> AWS Lambda: Serverless computing service that allows you to run code in response to
events without provisioning or managing servers.
2. Storage Services:
=>Amazon Simple Storage Service (S3): Scalable object storage for storing and retrieving
data, with high durability, availability, and security features (see the boto3 sketch after this
service list).
=> Amazon Elastic Block Store (EBS): Block storage volumes for EC2 instances,
providing persistent storage that can be attached to instances.
3. Database Services:
=> Amazon Relational Database Service (RDS): Managed relational database service
supporting multiple database engines such as MySQL, PostgreSQL, Oracle, SQL Server, and
MariaDB.
4. Networking Services:
=> Amazon Virtual Private Cloud (VPC): Isolated virtual networks in the AWS cloud,
allowing you to define and control network settings, subnets, and access controls.
=> Amazon Route 53: Scalable domain name system (DNS) web service for routing traffic
to resources, such as EC2 instances, S3 buckets, and load balancers.
5. Security Services:
=>AWS Identity and Access Management (IAM): Identity management service for
securely controlling access to AWS resources, allowing you to create and manage users,
groups, and permissions.
=>Amazon GuardDuty: Managed threat detection service that continuously monitors for
malicious activity and unauthorized behavior in your AWS accounts.
6. Machine Learning Services:
=>Amazon SageMaker: Fully managed service for building, training, and deploying
machine learning models at scale.
=>Amazon Rekognition: Deep learning-based image and video analysis service for
identifying objects, scenes, and faces in images and videos.
7. Developer Tools:
8. Analytics Services:
=>Amazon Redshift: Fully managed data warehouse service for analyzing large datasets
using SQL queries.
=>Amazon Athena: Interactive query service that allows you to analyze data in Amazon
S3 using standard SQL syntax.
9. IoT Services:
=>AWS IoT Core: Managed cloud service for securely connecting and managing IoT
devices, collecting and processing data, and implementing IoT applications.
10. Container Services:
=> Amazon Elastic Container Service (ECS): Fully managed container orchestration
service for running and scaling containerized applications.
=> Amazon Elastic Kubernetes Service (EKS): Managed Kubernetes service for
deploying, managing, and scaling containerized applications using Kubernetes.
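As flagged in the S3 item above, here is a minimal sketch of using two of the services just listed (S3 and EC2) through boto3, the AWS SDK for Python. It assumes AWS credentials are already configured; the bucket name and region are placeholders that would need to exist in your own account.

# Sketch of calling S3 and EC2 through boto3, the AWS SDK for Python.

import boto3

# Store and retrieve an object in S3.
s3 = boto3.client("s3", region_name="us-east-1")
s3.put_object(Bucket="example-demo-bucket", Key="hello.txt", Body=b"hello from S3")
obj = s3.get_object(Bucket="example-demo-bucket", Key="hello.txt")
print(obj["Body"].read())

# List EC2 instances and their states.
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])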
5.2.3. Limitations of AWS
Cost Complexity: Consistent monitoring is essential since cost tracking can become
complex due to multiple providers with different pricing structures.
Learning Curve: There may be a learning curve for the broad functionality and need efforts
on training and documentation for smoother adaption.
Dependency Risks: It could be difficult to rely only on AWS infrastructure. So make a plan
to reduce the risk of dependence.
Not Always Small-Business Friendly: For simpler needs, AWS might be more complex and
costly than necessary to assess the alignment with your project scale.
Rare Outages: While not common, AWS can experience occasional interruptions. Reduce
potential impacts on essential operations through the use of redundancy and backup
protocols.
5.3. Microsoft Hyper-V
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software
version of a computer, called a virtual machine. Each virtual machine acts like a complete
computer, running an operating system and programs.
When you need computing resources, virtual machines give you more flexibility, help save
time and money, and are a more efficient way to use hardware than just running one
operating system on physical hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can run more
than one virtual machine on the same hardware at the same time.
5.3.1. Features of Microsoft Hyper-V:
Hyper-V offers many features. This is an overview, grouped by what the features
provide.
1. Computing environment - A Hyper-V virtual machine includes the same basic parts as a
physical computer, such as memory, processor, storage, and networking. All these parts have
features and options that you can configure different ways to meet different needs. Storage
and networking can each be considered categories of their own, because of the many ways
you can configure them.
2. Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of
virtual machines, intended to be stored in another physical location, so you can restore the
virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states
and the other uses Volume Shadow Copy Service (VSS) so you can make application-
consistent backups for programs that support VSS.
3. Optimization - Each supported guest operating system has a customized set of services and
drivers, called integration services, that make it easier to use the operating system in a Hyper-
V virtual machine.
4. Portability - Features such as live migration, storage migration, and import/export make it
easier to move or distribute a virtual machine.
5. Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection
tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you
console access, so you can see what's happening in the guest even when the operating system
isn't booted yet.
6. Security - Secure boot and shielded virtual machines help protect against malware and other
unauthorized access to a virtual machine and its data.
5.3.2. Architecture of Microsoft Hyper-V
In the Hyper-V architecture, the Virtualization Service Provider and Virtual Machine Management Service
operate in the parent partition to assist child partitions. Child partitions lack direct access to
the physical processor and handle no real interrupts. Instead, they operate in a virtualized
processor environment and utilize Guest Virtual Address.
Hyper-V configures the processor exposure to each partition and manages interrupts via a
Synthetic Interrupt Controller (SynIC). Hardware acceleration, like EPT on Intel or RVI on
AMD, assists in address translation for virtual address-spaces.
Child partitions access hardware resources virtually, with requests redirected through the
VMBus to the parent partition's devices. The VMBus enables inter-partition communication
transparently to the guest OS.
Parent partitions host a Virtualization Service Provider (VSP) connected to the VMBus,
handling device access requests from child partitions. Internally, child partition virtual
devices employ a Virtualization Service Client (VSC) to interact with VSPs via the VMBus.
For efficient communication, virtual devices can leverage Enlightened I/O, a Windows
Server Virtualization feature. Enlightened I/O allows direct utilization of VMBus for
communication, bypassing emulation layers, but requires guest OS support.
5.3.2.1. Components of Microsoft Hyper-V
Parent-Child Partition: Hyper-V must have at least one host or parent partition, which runs
the virtualization stack and has direct access to the hardware. Guest VMs, or child partitions,
are created within the parent partition. The hypervisor manages interrupts to the processor
and establishes trust relationships between guest VMs, the parent partition, and the
hypervisor.
VM Bus: The VM Bus is a communication protocol that facilitates inter-partition
communication between the Hyper-V host and guest VMs. It assists in machine enumeration
and avoids additional layers of communication.
VSP - VSC: Virtual Service Provider (VSP) and Virtual Service Client (VSC) are critical
components that enable communication between the Hyper-V server and guest VMs. VSPs
run in the parent partition, while corresponding VSCs run in the child partitions. They
communicate via the VM Bus, with VSPs handling various requests from multiple VSCs
simultaneously.
VM Management Service: The Virtual Machine Management Service (VMMS), also
known as vmms.exe, is a core component of Hyper-V that manages every aspect of the
virtualization environment. It runs under the system account and must be operational for
controlling, creating, or deleting virtual machines.
VM Worker Process: Each virtual machine running on Hyper-V has its own VM Worker
Process (vmwp.exe), created by the Virtual Machine Management Service. This process
manages the VM's operation, including resource allocation and execution.
These components work together to provide a robust virtualization environment, enabling
organizations to create and manage virtualized infrastructure efficiently.
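In practice, this stack is usually driven through the Hyper-V PowerShell module. Below is a hedged sketch that shells out to the standard cmdlets Get-VM, New-VM, and Start-VM from Python on a Windows host with the Hyper-V role enabled; the VM name and memory size are placeholders.

# Sketch of driving Hyper-V via its PowerShell module from Python
# (Windows host with the Hyper-V role enabled, run as administrator).

import subprocess

def ps(command):
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True)
    return result.stdout

# List existing VMs, create one, and start it.
print(ps("Get-VM | Select-Object Name, State"))
ps("New-VM -Name DemoVM -MemoryStartupBytes 2GB -Generation 2")
ps("Start-VM -Name DemoVM")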
5.3.3. Advantages:
2. Cost-Effective: Hyper-V is included as a feature in Windows Server editions,
making it a cost-effective virtualization solution for organizations already invested in the
Microsoft ecosystem.
5.3.4. Disadvantages:
2. Complexity: While Hyper-V has improved over the years, some users may find it
more complex to configure and manage compared to other hypervisor solutions.
5.4. Oracle VM VirtualBox
Oracle VM VirtualBox, the world's most popular open source, cross-platform virtualization
software, enables developers to deliver code faster by running multiple operating systems on
a single device.
Oracle VM VirtualBox is a hosted hypervisor for x86 virtualization developed by Oracle
Corporation.
IT teams and solution providers use VirtualBox to reduce operational costs and shorten the
time needed to securely deploy applications on-premises and to the cloud.
With lightweight and easy-to-use software, Oracle VM VirtualBox makes it easier for
organizations to develop, test, demo, and deploy new solutions across multiple platforms
from a single device.
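For a feel of how VirtualBox is scripted, here is a minimal sketch that drives its VBoxManage command-line tool from Python to register, configure, and start a VM headlessly. The VM name, OS type, and sizes are placeholders; VirtualBox must be installed and VBoxManage must be on the PATH.

# Sketch of scripting Oracle VM VirtualBox through its VBoxManage CLI.

import subprocess

def vbox(*args):
    print("$ VBoxManage", " ".join(args))
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")
vbox("startvm", "demo-vm", "--type", "headless")   # boot without a GUI window
vbox("list", "vms")                                # confirm the VM is registered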
5.4.1. Key Features of Oracle VM Virtual Box
Portability: Compatible with numerous 64-bit host OSes, allowing for easy VM migration
across different platforms.
Hosted Hypervisor: Functions as a type 2 hypervisor, running alongside existing
applications on the host system.
Compatibility: Supports identical functionality across host platforms, facilitating seamless
VM transfer between different host OSes.
No Hardware Virtualization Required: Can run on older hardware without requiring
specific processor features like Intel VT-x or AMD-V.
Guest Additions: Enhances guest performance and integration with features like shared
folders, seamless windows, and 3D virtualization.
Hardware Support: Offers extensive support for guest multiprocessing, USB devices,
virtual devices (IDE, SCSI, SATA, network cards, sound cards, etc.), ACPI, multiscreen
resolutions, iSCSI, and PXE network boot.
Multigeneration Snapshots: Allows saving and managing snapshots of VM states,
facilitating easy rollback and configuration management.
VM Groups: Provides features for organizing and controlling VMs collectively or
individually, including nested group hierarchies.
Modular Architecture: Features a clean design with well-defined interfaces, allowing
control from multiple interfaces simultaneously.
Software Development Kit (SDK): Offers a comprehensive SDK for exposing and
integrating VirtualBox functionality with other software systems.
Remote Machine Display: Enables high-performance remote access to running VMs
through the VirtualBox Remote Desktop Extension (VRDE).
Extensible RDP Authentication: Supports various authentication methods for RDP (Remote
Desktop Protocol) connections, with an SDK for creating custom authentication interfaces.
USB over RDP: Allows connecting USB devices locally to a VM running remotely on a
VirtualBox RDP server.
5.4.2. Architecture of Oracle VM
Oracle VM (a server virtualization product distinct from the desktop-oriented VirtualBox) is a
platform that provides a fully equipped environment with all the latest benefits of virtualization
technology.
Oracle VM enables you to deploy operating systems and application software within a
supported virtualization environment.
Oracle VM insulates users and administrators from the underlying virtualization technology
and allows daily operations to be conducted using goal-oriented GUI interfaces.
Figure.5.5. Oracle VM Architecture
1. Client Applications:
Various user interfaces to Oracle VM Manager are provided, either via the graphical user
interface (GUI) accessible using a web-browser; the command line interface (CLI) accessible
using an SSH client; custom built applications or scripts that use the Web Services API (WS-
API); or external applications, such as Oracle Enterprise Manager, or legacy utility scripts
that may still make use of the legacy API over TCPS on port 54322.
The legacy API is due to be deprecated in the near future and applications that are using it
must be updated to use the new Web Services API instead. All communications with Oracle
VM Manager are secured using either a key or certificate-based technology.
2. Oracle VM Manager:
Oracle VM Manager serves as a comprehensive platform for managing Oracle VM
Servers, virtual machines, and associated resources. Key points include:
Management Interfaces: It offers both a web browser-based user interface and a command
line interface (CLI) for managing infrastructure directly. These interfaces run as separate
applications to the Oracle VM Manager core and interact via the Web Services API.
Core Architecture: The Oracle VM Manager core is an Oracle WebLogic Server application
running on Oracle Linux. The user interface is built on the Application Development
Framework (ADF), ensuring a consistent experience with other Oracle web-based
applications.
GUI and CLI Functionality: While both interfaces utilize the Web Services API to interact
with the Oracle VM Manager core, the GUI can directly access the Oracle VM Manager
Database for read-only operations, enhancing performance and providing advanced filtering
options.
Communication with VM Servers: Oracle VM Manager communicates with Oracle VM
Servers via the Oracle VM Agent, using XML-RPC over HTTPS on port 8899. This enables
seamless interaction, including triggering actions and receiving notifications, while ensuring
security through HTTPS.
High Availability: Despite its critical role in configuring the Oracle VM infrastructure, the
virtualized environment can continue to operate effectively even during Oracle VM Manager
downtime. This ensures the maintenance of high availability and the ability to perform live
migration of virtual machines.
3. Oracle VM Manager Database:
Used by the Oracle VM Manager core to store and track configuration, status changes and
events. Oracle VM Manager uses a MySQL Enterprise database that is bundled in the
installer and which runs on the same host where Oracle VM Manager is installed.
The database is configured for the exclusive use of Oracle VM Manager and must not be used
by any other applications.
The database is automatically backed up on a regular schedule, and facilities are provided to
perform manual backups as well.
4. Oracle VM Server:
Oracle VM Server provides a lightweight, secure server platform which runs
virtual machines, also known as domains. Key points include:
Installation and Components: Installed on bare metal computers, it includes the Oracle VM
Agent for communication with Oracle VM Manager. It operates with dom0 (domain zero) as
the management domain and domU as the unprivileged domain for VMs.
Architecture: On x86-based systems, it utilizes Xen hypervisor technology and a Linux
kernel running as dom0. VMs can run various operating systems, including Linux, Oracle
Solaris, or Microsoft Windows™. For SPARC systems, it leverages the built-in hypervisor
and Oracle Solaris as the primary domain.
Clustering and Server Pools: Multiple Oracle VM Servers are clustered to form server
pools, facilitating load balancing and failover. VMs within a pool can be migrated between
servers, and server pools provide logical separation of resources.
Database and High Availability: Each Oracle VM Server maintains its Berkeley Database
for local configuration and runtime information. Even if Oracle VM Manager is unavailable,
servers can function normally. Clustered servers share a cluster database, ensuring continued
functionality like High Availability, even without Oracle VM Manager.
5. External Shared Storage: Provides storage for a variety of purposes and is required to
enable high-availability options afforded through clustering. Storage discovery and
management is achieved using the Oracle VM Manager, which then interacts with Oracle
VM Servers via the storage connect framework to then interact with storage components.
Oracle VM provides support for a variety of external storage types including NFS, iSCSI and
Fibre Channel.
5.4.3. Advantages of Oracle VM VirtualBox
1. Free and Open Source: VirtualBox is available for free under the GNU General
Public License (GPL), making it accessible to users and organizations without licensing
costs.
2. Cross-Platform Compatibility: Its support for multiple host and guest operating
systems makes it suitable for a wide range of use cases and environments.
3. Ease of Use: VirtualBox features an intuitive graphical user interface (GUI) and
comprehensive documentation, making it easy for users to create, configure, and manage
virtual machines.
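In addition to the GUI, VirtualBox ships with the VBoxManage command-line tool, which exposes the same create, configure, and start operations for scripting. The sketch below drives VBoxManage from Python; the VM name, OS type, memory, and CPU values are arbitrary example settings.

import subprocess

def vbox(*args):
    """Run a VBoxManage command and raise an error if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Create and register a new, empty VM definition (example values).
vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")

# Assign memory (MB) and virtual CPUs to the VM.
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")

# Start the VM without opening a GUI window.
vbox("startvm", "demo-vm", "--type", "headless")

# List all VMs registered on this host.
vbox("list", "vms")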
Limitations of Oracle VM VirtualBox:
1. Performance Overhead: While VirtualBox provides decent performance, it may incur higher overhead than bare-metal execution, especially for resource-intensive workloads.
2. Less Integration with Cloud Services: Unlike some other virtualization platforms, VirtualBox may offer limited integration with cloud services and infrastructure, making it less suitable for cloud-based deployments.
3. Occasional Stability Issues: Some users may encounter stability or compatibility issues, especially on certain host hardware configurations or with specific guest OS versions.
5.5. IBM PowerVM
Clients can run AIX, IBM i, and Linux operating systems on Power servers with world-class reliability, high availability (HA), and serviceability capabilities, together with the leading performance of the Power platform.
This solution provides workload consolidation that helps clients control costs. Power servers, combined with PowerVM technology, help consolidate and simplify your IT environment.
PowerVM, IBM's virtualization solution for Power Systems servers, offers several key
features:
Hardware Virtualization: PowerVM provides hardware-level virtualization, allowing
multiple logical partitions (LPARs) to run on a single physical server.
Dynamic Resource Allocation: It enables dynamic allocation of CPU, memory, and I/O
resources to virtual machines, allowing for efficient resource utilization and performance
optimization.
Live Partition Mobility (LPM): PowerVM supports LPM, allowing users to move running
virtual machines between physical servers without disrupting service, enhancing workload
flexibility and resiliency.
Shared Processor Pools: PowerVM allows users to create shared processor pools, enabling
dynamic resource allocation and workload balancing across multiple LPARs.
Micro-Partitioning: This feature enables fine-grained CPU allocation, allowing users to
allocate fractions of a CPU to virtual machines, optimizing resource utilization and reducing
costs.
Virtual I/O Server (VIOS): PowerVM includes VIOS, which acts as a virtualization layer
for I/O devices, providing efficient and scalable I/O virtualization for virtual machines.
Virtual Networking: PowerVM offers virtual networking capabilities, allowing users to
create virtual networks and connect virtual machines to them, providing flexibility and
isolation for network traffic.
Security and Isolation: PowerVM provides robust security and isolation mechanisms,
ensuring that virtual machines remain isolated from each other and from the underlying
hardware.
Advanced Management Tools: PowerVM includes management tools such as IBM Systems
Director and HMC (Hardware Management Console), which provide comprehensive
management capabilities for virtualized environments.
Integration with IBM Ecosystem: PowerVM integrates with other IBM solutions and
ecosystem products, such as IBM Cloud PowerVC Manager, to provide enhanced
management and automation capabilities for virtualized environments on Power Systems
servers.
5.5.2. Architecture of IBM PowerVM
Virtual SCSI (VSCSI), part of VIOS, enables the sharing of physical storage adapters (SCSI
and Fibre Channel) and storage devices (disk and optical) between logical partitions.
Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the
physical resources and acts as server or, in SCSI terms, target device. The logical partitions
access the virtual SCSI resources provided by the Virtual I/O Server as clients.
The overall virtual SCSI architecture is shown in Figure 5.5 below.
Figure 5.5. Virtual SCSI architecture in IBM PowerVM Technologies
5.5.2.1. Components of IBM PowerVM
Below are some of the components of PowerVM.
PowerVM Hypervisor (PHYP): This functionality is made available by the hardware
platform in combination with system firmware for the POWER server. The hypervisor is
ultimately the basis for any virtualization on a POWER system.
Logical Partition (LPAR): LPARs are provided through the hypervisor. Originally, only dedicated hardware components and complete processors could be allocated to an LPAR; only the memory was shared. Over the course of the Power Systems generations these possibilities have been steadily expanded (micro-partitions, dynamic logical partitions), although the term LPAR has been retained.
Micro Partition: A micro partition allows a processor to be shared between different partitions. Micro partitions are assigned fractions of a processor and are therefore also referred to as shared processor partitions.
Dynamic Logical Partition (DLPAR): Virtual resources (CPU, memory, physical adapters
and virtual adapters) can be added to or removed from the partition at runtime (provided that
the operating system supports it). This means that resources can be dynamically adapted to
the needs of a partition.
Shared Processor Pools (SPP): Partitions can be assigned to shared processor pools, so that
the consumption of processor resources by partitions can be limited to the resources available
in the pool.
Virtual I/O Server (VIOS): This is a special service partition with an AIX-based, specially
extended operating system for supporting a range of virtualization functions.
Network adapters (Virtual Ethernet) and I/O adapters (Virtual SCSI and Virtual FC) can be
virtualized via virtual I/O servers.
Virtual Ethernet (VETH): Client partitions can communicate in the network with the help
of virtual Ethernet adapters without having their own physical Ethernet adapters.
Virtual SCSI (VSCSI): With the help of the virtual I/O server, client partitions can access
disks via a virtual SCSI adapter without having their own physical I/O adapter. The necessary
physical adapters belong to the virtual I/O servers and can therefore be shared by many
partitions. The disks must be assigned to the virtual SCSI adapters.
Live Partition Mobility (LPM): This feature allows an active partition to be moved online
from one power system to another power system. All applications and the operating system
simply continue to run during the online move. From the point of view of the applications,
the move is transparent.
Active Memory Expansion (AME): By compressing parts of the main memory, additional effective main memory can be obtained. The desired compression can be specified. With this, for example, from 32 GB of physical main memory and a compression factor (AME factor) of 1.5, 48 GB of main memory can be obtained for one partition. The operating system and all applications see 48 GB of available main memory. (A small numeric sketch of this calculation follows this list.)
Single Root I/O Virtualization (SR-IOV): With this type of virtualization, a virtual I/O
server is no longer required. The virtualization takes place in hardware directly on the
physical adapter. With PowerVM this is currently limited to SR-IOV capable network
adapters. The bandwidth of the SR-IOV Ethernet ports can be divided between the individual
partitions.
Virtual Network Interface Controller (vNIC): Allows automatic failover to another SR-IOV Ethernet port if one SR-IOV Ethernet port fails. This, however, again requires the support of a virtual I/O server.
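To make the micro-partitioning and Active Memory Expansion figures above concrete, the sketch below checks that a set of micro-partition entitlements fits within a shared processor pool and computes the effective memory a partition sees for a given AME factor. All values are invented examples; only the arithmetic reflects the concepts described in this list.

# Example micro-partition entitlements, in fractions of a physical processor.
entitlements = {"lpar1": 0.5, "lpar2": 0.75, "lpar3": 1.25}

# Capacity of the shared processor pool, in whole physical processors.
pool_capacity = 3.0

total = sum(entitlements.values())
print(f"Total entitlement: {total} of {pool_capacity} processors")
assert total <= pool_capacity, "Entitlements exceed the shared processor pool"

# Active Memory Expansion: effective memory = physical memory * AME factor.
physical_memory_gb = 32
ame_factor = 1.5
effective_memory_gb = physical_memory_gb * ame_factor
print(f"Partition sees {effective_memory_gb:.0f} GB "
      f"({physical_memory_gb} GB physical, AME factor {ame_factor})")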
5.6. Google Virtualization
Google offers various virtualization solutions as part of its cloud platform to enable users to create, deploy, and manage virtualized environments and workloads.
These virtualization solutions include Google Compute Engine (GCE) for virtual machines,
Google Kubernetes Engine (GKE) for container orchestration, Anthos for hybrid and multi-
cloud management, Google Cloud VMware Engine (GCVE) for running VMware workloads,
and more.
5.6.2.1 Components of Google Virtualization
Google Virtualization encompasses various components and services within Google Cloud
Platform (GCP) that enable users to create, manage, and deploy virtualized environments.
Some key components of Google Virtualization include:
1. Compute
Google Compute Engine (GCE) supports both Linux and Windows virtual machines. You can run VMs based on Google-provided machine images or pull images from your existing infrastructure.
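As a small illustration, once a project and credentials are configured, the VMs in a zone can be listed with the google-cloud-compute Python client library. The project ID and zone below are placeholder values.

from google.cloud import compute_v1

# Placeholder project and zone; substitute your own values.
PROJECT = "my-example-project"
ZONE = "us-central1-a"

# The InstancesClient wraps the Compute Engine API for VM instances.
client = compute_v1.InstancesClient()

# Print the name and status of every VM instance in the zone.
for instance in client.list(project=PROJECT, zone=ZONE):
    print(instance.name, instance.status)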
2. Storage
Google Cloud provides three main services offering different types of storage:
Persistent disks - provide high-performance block storage that can be attached to VMs as collocated persistent storage.
File storage - officially known as Google Filestore, providing fully managed file storage with a 99.99% regional availability SLA, backups, snapshots, and the ability to scale to high throughput and IOPS.
Object storage - officially known as Google Cloud Storage, providing highly durable storage buckets, similar to Amazon S3 storage.
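For example, uploading a local file into a Cloud Storage bucket takes only a few lines with the google-cloud-storage Python client; the bucket and file names below are placeholders and the bucket is assumed to already exist.

from google.cloud import storage

BUCKET_NAME = "my-example-bucket"   # placeholder bucket name
LOCAL_FILE = "report.csv"           # placeholder local file

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# A blob is an object stored inside a bucket; upload the local file to it.
blob = bucket.blob("backups/report.csv")
blob.upload_from_filename(LOCAL_FILE)
print(f"Uploaded {LOCAL_FILE} to gs://{BUCKET_NAME}/{blob.name}")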
3. Database
Google Cloud offers several managed database services both relational and non-relational,
as a platform as a service (PaaS) offering built on its storage services:
Google Cloud SQL - relational database service compatible with SQL Server, MySQL, and PostgreSQL. Provides automatic backup, replication, and disaster recovery.
Cloud Spanner - relational database that supports SQL while providing the same level of scalability as non-relational databases.
Google Cloud BigQuery - serverless data warehouse, which supports large-scale data analysis and streaming data querying via SQL (see the sketch after this list). BigQuery provides a built-in data transfer service for migrating large data volumes.
Cloud Bigtable - NoSQL database service designed for large-scale operational data and analytics workloads. Provides high availability, zero downtime for configuration changes, and request latency under 10 milliseconds.
Cloud Firestore - NoSQL database service designed for serverless applications. Can be integrated seamlessly with web, mobile, and IoT applications, with real-time synchronization and built-in security.
Memorystore - managed in-memory datastore designed for security, high availability, and scalability.
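The sketch below runs a simple standard-SQL query through the google-cloud-bigquery Python client, as referenced in the BigQuery entry above. The query itself is a trivial placeholder; in practice it would target your own datasets or one of the BigQuery public datasets.

from google.cloud import bigquery

client = bigquery.Client()

# A trivial standard-SQL query used purely for illustration.
query = "SELECT 'virtualization' AS topic, 42 AS answer"

# Submit the query job and iterate over the result rows.
for row in client.query(query).result():
    print(row.topic, row.answer)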
Google Cloud provides server-side load balancing, allowing incoming traffic to be distributed
across multiple virtual machine (VM) instances.
It uses forwarding rule resources to match and forward certain types of traffic to the
load balancer - for example, it can forward traffic according to protocol, port, IP address
or range.
Google Cloud Load Balancing is a managed service, in which components are redundant and
highly available. If a load balancing component fails, it is automatically restarted or replaced.
Google Compute Engine also provides autoscaling, which automatically adds or removes
VM instances from a managed instance group (MIG) as its load increases or decreases.
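As an example of how autoscaling might be configured for an existing managed instance group, the sketch below calls the gcloud CLI from Python. The group name, zone, replica limits, and CPU target are placeholder values, and the command assumes the Google Cloud SDK is installed and authenticated.

import subprocess

MIG_NAME = "web-mig"        # placeholder managed instance group
ZONE = "us-central1-a"      # placeholder zone

# Autoscale the MIG between 2 and 10 instances, targeting about 60% CPU usage.
subprocess.run(
    [
        "gcloud", "compute", "instance-groups", "managed", "set-autoscaling",
        MIG_NAME,
        "--zone", ZONE,
        "--min-num-replicas", "2",
        "--max-num-replicas", "10",
        "--target-cpu-utilization", "0.6",
    ],
    check=True,
)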
Serverless
Serverless computing dynamically runs workloads when they are required, with no need to manage the underlying server resources. Google Cloud provides several serverless options that allow you to run serverless workloads, including:
Google Cloud Functions - lets you provide code in multiple programming languages and have Google run it when triggered by an event (see the sketch after this list).
Google App Engine- a serverless platform that can run web applications and mobile
backends in any programming language.
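A minimal HTTP-triggered function written with the Python Functions Framework might look like the sketch below; the function name and response text are arbitrary, and deployment details such as runtime selection and trigger configuration are omitted. Locally, it can be served for testing with the functions-framework command, targeting the function by name.

import functions_framework

@functions_framework.http
def hello_virtualization(request):
    """Respond to an HTTP request with a short greeting.

    The request argument is a Flask Request object supplied by the
    Functions Framework when the function is invoked.
    """
    name = request.args.get("name", "world")
    return f"Hello, {name}!"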
5. Containers
Google offers several technologies that you can use to run containers in the Google Cloud
environment:
Google Kubernetes Engine (GKE) - the world's first managed Kubernetes service, which lets you run Kubernetes clusters on Google Cloud infrastructure, with control over individual Kubernetes nodes (see the sketch after this list).
GKE Autopilot - a new operating mode for GKE that lets you optimize clusters for
production environments, improve availability, and dynamically adjust computing power
available to Kubernetes clusters.
Google Anthos - a cloud-agnostic hybrid container management platform. This service
allows you to replace virtual machines (VMs) with container clusters, creating a unified
environment between the public cloud and an on-premises data center.
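Once a GKE cluster exists and its credentials have been written to the local kubeconfig (for example with gcloud container clusters get-credentials), the cluster can be inspected with the official Kubernetes Python client, as sketched below.

from kubernetes import client, config

# Load cluster credentials from the local kubeconfig file.
config.load_kube_config()

v1 = client.CoreV1Api()

# List all pods across every namespace in the cluster.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)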
Scalability: vSphere allowed XYZ Solutions to quickly scale resources up or down to meet changing business requirements.
Centralized Management: The centralized management interface provided by vSphere
simplified IT management tasks, allowing XYZ Solutions' administrators to monitor,
provision, and manage VMs more efficiently.
Backup and Disaster Recovery: XYZ Solutions leveraged vSphere's backup and disaster
recovery capabilities to protect critical data and ensure business continuity in case of system
failures or disasters.
Benefits:
Cost Savings: Virtualization resulted in significant cost savings for XYZ Solutions by
reducing hardware expenses, lowering operational costs, and minimizing the need for
physical infrastructure.
Improved Resource Utilization: By consolidating servers into VMs and dynamically
allocating resources, XYZ Solutions optimized resource utilization and reduced wastage.
Enhanced Reliability: vSphere's HA and FT features improved system reliability,
minimizing downtime and ensuring uninterrupted service availability.
Streamlined Management: The centralized management interface simplified IT
management tasks, reducing administrative overhead and improving operational efficiency.
Scalability and Agility: Virtualization provided scalability and agility, allowing XYZ
Solutions to quickly adapt to changing business needs and scale resources as required.
Conclusion:
By leveraging virtualization tools like VMware vSphere, XYZ Solutions successfully
optimized its IT infrastructure, improved operational efficiency, and achieved cost savings.
The adoption of virtualization enabled XYZ Solutions to build a more reliable, scalable, and
flexible IT environment, empowering the company to innovate faster and stay competitive in
the rapidly evolving technology landscape.
5.7.2. Case Study: Fidelity National Information Services (FIS)
Overview:
Fidelity National Information Services (FIS) is a global leader in financial technology
solutions, providing a wide range of services to banks, financial institutions, and businesses
worldwide. This case study examines how FIS leveraged virtualization technology to
optimize its IT infrastructure and improve operational efficiency.
Challenges:
Legacy Infrastructure: FIS operated on a legacy IT infrastructure consisting of multiple
physical servers, which were costly to maintain and lacked scalability.
Resource Underutilization: The traditional infrastructure suffered from resource
underutilization, with servers running at low capacity, leading to inefficient use of hardware
resources.
High Operational Costs: Maintaining a large number of physical servers resulted in high
operational costs associated with power consumption, cooling, and hardware maintenance.
Complexity and Management Overhead: Managing a diverse array of physical servers
added complexity to the IT environment, requiring significant administrative effort and
resources.
Solution:
Virtualization Deployment: FIS implemented a virtualization solution, such as VMware
vSphere or Microsoft Hyper-V, to consolidate its physical servers into virtual machines
(VMs). This allowed for better resource utilization and reduced the number of physical
servers required.
Centralized Management: The virtualization platform provided a centralized management
interface, enabling FIS's IT administrators to monitor, provision, and manage VMs more
efficiently.
Dynamic Resource Allocation: Through features like dynamic resource allocation and load
balancing, FIS optimized resource utilization, ensuring that computing resources were
allocated dynamically based on workload demands.
High Availability and Disaster Recovery: FIS implemented high availability (HA) and
disaster recovery (DR) solutions within the virtualization platform to enhance data protection
and ensure business continuity in the event of hardware failures or disasters.
Automation and Orchestration: FIS leveraged automation and orchestration tools to
streamline IT operations, automate routine tasks, and improve overall operational efficiency.
Benefits:
Cost Savings: Virtualization resulted in significant cost savings for FIS by reducing
hardware expenses, optimizing resource utilization, and lowering operational costs associated
with power consumption and maintenance.
Improved Scalability: The virtualized infrastructure provided scalability and flexibility to
accommodate FIS's growing IT demands and adapt to changing business requirements.
Enhanced Reliability: Features like HA and DR enhanced the reliability and availability of
FIS's IT services, minimizing downtime and ensuring continuous operations.
Streamlined Management: Centralized management and automation capabilities
streamlined IT operations, reducing management overhead and improving productivity.
Agility and Innovation: Virtualization enabled FIS to respond more quickly to market
changes, innovate faster, and deliver new services and solutions to its customers more
efficiently.
Conclusion:
By embracing virtualization technology, Fidelity National Information Services (FIS)
successfully addressed its infrastructure challenges, improved operational efficiency, and
achieved cost savings. The adoption of virtualization tools allowed FIS to build a more agile,
reliable, and scalable IT infrastructure, enabling the company to stay competitive in the
rapidly evolving financial technology industry.