
UNIT II CLOUD ENABLING TECHNOLOGIES

Service Oriented Architecture – REST and Systems of Systems – Web Services –
Publish-Subscribe Model – Basics of Virtualization – Types of Virtualization –
Implementation Levels of Virtualization – Virtualization Structures – Tools and
Mechanisms – Virtualization of CPU – Memory – I/O Devices – Virtualization Support
and Disaster Recovery.

Web Service

A web service is a set of open protocols and standards that allow data to be
exchanged between different applications or systems.

Generic definition: Any application accessible to other applications over the Web.

Definition of the UDDI consortium: Web services are self-contained, modular
business applications that have open, Internet-oriented, standards-based interfaces.

Definition of the World Wide Web Consortium (W3C): A Web service is a software
system designed to support interoperable machine-to-machine interaction over a network.
• It has an interface described in a machine-processable format (specifically
WSDL).
• Other systems interact with the Web service using SOAP messages.

SOA – Service Oriented Architecture


Service-oriented architecture (SOA) is a method of software development that uses
software components called services to create business applications. Each service
provides a business capability, and services can also communicate with each other across
platforms and languages.

An example of a SOA-based system is a set of customer services, such as CRM, ERP, a
Product Information Management System (PIM), etc. These services can be implemented
using different technologies and support diverse protocols of communication, data models,
etc.

What is Service?
A service is a well-defined, self-contained function that represents a unit of functionality.
A service can exchange information with another service. It is not dependent on the state
of another service. It uses a loosely coupled, message-based communication model to
communicate with applications and other services.

Why service-oriented architecture?

SOA produces more reliable applications because it is easier to debug small
services than a large monolithic codebase.
Scalability - SOA lets services run on different servers, increasing scalability. In
addition, using a standard communication protocol allows organizations to reduce the
level of interaction between clients and services.
Service-Oriented Architecture (SOA) is a stage in the evolution of application
development and/or integration. It defines a way to make software components reusable
through their interfaces.

Service Connections
Service consumer sends a service request to the service provider, and the service provider
sends the service response to the service consumer. The service connection is
understandable to both the service consumer and service provider.
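The request/response exchange above can be sketched in Python. All names here (the JSON message format, the "greet" operation) are illustrative assumptions, not part of any real SOA toolkit; the point is that consumer and provider share only a message contract, not implementation details:

```python
import json

class ServiceProvider:
    """Implements a service behind a message-based contract."""
    def handle(self, request: str) -> str:
        req = json.loads(request)              # parse the service request
        if req.get("operation") == "greet":
            return json.dumps({"status": "ok", "result": f"Hello, {req['name']}!"})
        return json.dumps({"status": "error", "result": "unknown operation"})

class ServiceConsumer:
    """Knows only the message format, not the provider's implementation."""
    def __init__(self, provider: ServiceProvider):
        self.provider = provider
    def call(self, operation: str, **params) -> dict:
        request = json.dumps({"operation": operation, **params})  # service request
        response = self.provider.handle(request)                  # service response
        return json.loads(response)

consumer = ServiceConsumer(ServiceProvider())
print(consumer.call("greet", name="SOA"))  # {'status': 'ok', 'result': 'Hello, SOA!'}
```

Because both sides understand only the message format, the provider's internals could be rewritten in another language without changing the consumer.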

Service-Oriented Terminologies

o Services - The services are the logical entities defined by one or more published
interfaces.
o Service provider - It is a software entity that implements a service specification.
o Service consumer - It can be called a requestor or client that calls a service
provider. A service consumer can be another service or an end-user application.
o Service locator - It is a service provider that acts as a registry. It is responsible
for examining service provider interfaces and service locations.
o Service broker - It is a service provider that passes service requests to one or more
additional service providers.

The different characteristics of SOA are as follows:

o Provides interoperability between the services.
o Provides methods for service encapsulation, service discovery, service composition,
service reusability and service integration.
o Facilitates QoS (Quality of Service) through service contracts based on Service Level
Agreements (SLAs).
o Provides loosely coupled services.
o Eases maintenance with reduced cost of application development and deployment.

Properties of SOA
• Logical view
• Message orientation
• Description orientation

Logical view

✓ The SOA is an abstracted, logical view of actual programs, databases, and business
processes.
✓ Defined in terms of what it does, typically carrying out a business-level operation.
✓ The service is formally defined in terms of the messages exchanged between
provider agents and requester agents.
Message Orientation

✓ The internal structure of providers and requesters includes the implementation
language, process structure, and even database structure.
✓ These features are deliberately abstracted away in the SOA.
✓ Using the SOA discipline one does not and should not need to know how an agent
implementing a service is constructed.
✓ By avoiding any knowledge of the internal structure of an agent, one can
incorporate any software component or application that adheres to the formal service
definition.

Description orientation

✓ A service is described by machine-processable metadata.
✓ The description supports the public nature of the SOA.
✓ Only those details that are exposed to the public and are important for the use of
the service should be included in the description.

Two major roles within Service-oriented Architecture:


1. Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To
advertise services, the provider can publish them in a registry, together with a
service contract that specifies the nature of the service, how to use it, the
requirements for the service, and the fees charged.

2. Service consumer: The service consumer can locate the service metadata in the
registry and develop the required client components to bind and use the service.

Components of SOA:
The service-oriented architecture stack can be categorized into two parts - functional
aspects and quality of service aspects.
Functional aspects
The functional aspect contains:
o Transport - It transports the service requests from the service consumer to the
service provider and service responses from the service provider to the service
consumer.
o Service Communication Protocol - It allows the service provider and the service
consumer to communicate with each other.
o Service Description - It describes the service and data required to invoke it.
o Service - It is an actual service.
o Business Process - It represents the group of services called in a particular
sequence associated with the particular rules to meet the business requirements.
o Service Registry - It contains the description of data which is used by service
providers to publish their services.
Quality of Service aspects
o Policy - It represents the set of protocols according to which a service provider
makes and provides the services to consumers.
o Security - It represents the set of protocols required for identification and
authorization.
o Transaction - It provides surety of a consistent result. This means that if we use a
group of services to complete a business function, either all must complete or none
of them complete.
o Management - It defines the set of attributes used to manage the services.

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service
description documents.

2. Loose coupling: Services are designed as self-contained components and maintain
relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their
implementation.

4. Reusability: Designed as components, services can be reused more effectively,


thus reducing development time and the associated costs.

5. Autonomy: Services have control over the logic they encapsulate and, from a
service consumer point of view, there is no need to know about their
implementation.

6. Discoverability: Services are defined by description documents that constitute


supplemental metadata through which they can be effectively discovered. Service
discovery provides an effective means for utilizing third-party resources.

7. Composability: Using services as building blocks, sophisticated and complex
operations can be implemented. Service orchestration and choreography provide
solid support for composing services and achieving business goals.

Advantages of SOA:
• Service reusability: Applications are made from existing services. Thus,
services can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be
updated and modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by combining
services picked from different sources, independent of the platform.
• Availability: SOA facilities are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easier to debug small
services than a huge codebase.
• Scalability: Services can run on different servers within an environment; this
increases scalability.

Disadvantages of SOA:

• High overhead: A validation of input parameters is performed whenever
services interact; this decreases performance as it increases load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange
messages, and the number of messages may run into millions. It becomes a
cumbersome task to handle such a large number of messages.

Practical applications of SOA:

1. SOA infrastructure is used by many armies and air forces to deploy situational
awareness systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games and they use inbuilt functions to run. For
example, an app might need GPS so it uses the inbuilt GPS functions of the device.
This is SOA in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and
content.
REST and Systems of Systems
REST is a software architecture style for distributed systems, particularly distributed
hypermedia systems, such as the World Wide Web. It has recently gained popularity
among enterprises such as Google, Amazon, Yahoo!, and especially social networks such
as Facebook and Twitter because of its simplicity, and its ease of being published and
consumed by clients.

REpresentational State Transfer (REST) is a software architectural style that defines
constraints for creating web services. Web services that follow the REST architectural
style are called RESTful Web Services. The REST architectural style describes six
constraints.

1. Uniform Interface : RESTful services use standard HTTP methods like GET, POST,
PUT, DELETE, and PATCH. This provides a uniform interface that simplifies and
decouples the architecture, which enables each part to evolve independently.

The Uniform Interface defines the interface between client and server. The Uniform
Interface has four guiding principles:

o Resource-based: Individual resources are identified using a URI as a resource
identifier. The resources themselves are distinct from the representations
returned to the client. For example, the server does not send its database but
rather a representation of some database records, expressed as HTML, XML or JSON
depending on the request and the implementation details.
o Manipulation of resources through representations: When a client holds a
representation of a resource, including any associated metadata, it has enough
information to modify or delete that resource on the server.
o Self-descriptive messages: Each message contains enough information to
describe how it should be processed. For example, the parser to invoke can be
specified by the Internet media type (known as the MIME type).
o Hypermedia as the engine of application state (HATEOAS): Clients deliver
state via query-string parameters, body content, request headers, and the
requested URI. Services deliver state to clients via response codes,
response headers and body content. This mechanism is called hypermedia
(hyperlinks within hypertext).
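The uniform interface's standard methods can be illustrated with a toy in-memory resource server. The handler, URIs, and status codes below are a hedged sketch of the GET/POST/PUT/DELETE semantics, not a real web framework:

```python
# In-memory store: URI -> representation (illustrative, no real HTTP involved)
resources = {}
next_id = [1]

def handle(method, uri, body=None):
    """Dispatch on the uniform interface methods: GET, POST, PUT, DELETE."""
    if method == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if method == "POST":                        # create under a collection URI
        uri = f"{uri}/{next_id[0]}"
        next_id[0] += 1
        resources[uri] = body
        return (201, uri)                       # 201 Created, with the new URI
    if method == "PUT":                         # replace the representation
        resources[uri] = body
        return (200, uri)
    if method == "DELETE":
        resources.pop(uri, None)
        return (204, None)                      # 204 No Content
    return (405, None)                          # method not allowed

status, uri = handle("POST", "/users", {"name": "Ada"})
print(status, uri)                              # 201 /users/1
print(handle("GET", uri))                       # (200, {'name': 'Ada'})
```

Because every resource is addressed the same way through the same small set of methods, a client written against this interface needs no knowledge of the server's internals.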

2. Client-server
The client-server constraint separates the client from the server. Servers and clients
can be replaced and developed independently, as long as the interface between them
is unchanged.

3. Stateless
Stateless means the state of the service does not persist between subsequent
requests and responses: the request itself contains all the state required to
handle it. That state can be carried in a query-string parameter, entity body, or
header, or as part of the URI. The URI identifies the resource, and the body holds the
state (or state change) of that resource. After the server performs the required
processing, the relevant pieces of state are sent back to the client through response
headers, status codes, and the response body.

o In REST, the client includes in each request all the information the server needs to
fulfil it, resending state as necessary across multiple requests. Statelessness
enables greater scalability because the server does not maintain, update, or
communicate any session state. The resource state is the data that defines a
resource representation,

for example, the data stored in a database. By contrast, application state is data
that may vary by client and request. Resource state is constant for every
client who requests it.
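A minimal sketch of the statelessness constraint, assuming a hypothetical token-based API: every request carries all the state the server needs (credentials and paging position), so no session is kept between calls and any server instance could handle any request:

```python
# Illustrative fixtures; a real service would check tokens against an auth system.
VALID_TOKENS = {"abc123"}
ITEMS = [f"item-{i}" for i in range(10)]

def handle_request(request: dict) -> dict:
    """No session is read or written; the request is entirely self-contained."""
    if request.get("token") not in VALID_TOKENS:
        return {"status": 401, "body": "unauthorized"}
    page, size = request["page"], request["page_size"]
    return {"status": 200, "body": ITEMS[page * size:(page + 1) * size]}

print(handle_request({"token": "abc123", "page": 1, "page_size": 3}))
# {'status': 200, 'body': ['item-3', 'item-4', 'item-5']}
```

Note the client resends the token and page number on every call; the server never remembers where the client "left off".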

4. Layered system

A client cannot ordinarily tell whether it is connected directly to the end server or to
an intermediary along the way. Intermediate servers improve system scalability by
enabling load-balancing and providing shared caches. Layers can also enforce
security policies.

5. Cacheable

On the World Wide Web, clients can cache responses. Responses must therefore,
implicitly or explicitly, define themselves as cacheable or non-cacheable to prevent
clients from reusing stale or inappropriate data in further requests. Well-managed
caching eliminates some client-server interactions, improving scalability and
performance.
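Client-side caching of self-labelling responses might look like the following sketch; the `cacheable` flag and URIs are illustrative stand-ins for real HTTP `Cache-Control` semantics:

```python
calls = {"count": 0}   # how many requests actually reach the "origin server"
cache = {}             # URI -> stored response

def fetch(uri):
    """Pretend origin server; marks /prices as non-cacheable."""
    calls["count"] += 1
    return {"body": f"data for {uri}", "cacheable": uri != "/prices"}

def cached_get(uri):
    if uri in cache:                       # reuse a stored response
        return cache[uri]
    response = fetch(uri)
    if response["cacheable"]:              # honour the response's own label
        cache[uri] = response
    return response

cached_get("/catalog"); cached_get("/catalog")   # second call served from cache
cached_get("/prices");  cached_get("/prices")    # never cached, fetched twice
print(calls["count"])                            # 3
```

Two of the four client calls never reach the server, which is exactly the interaction-elimination the constraint describes.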

6. Code on Demand (optional)

Servers can temporarily extend or customize a client's functionality by transferring
executable logic; examples are Java applets and client-side scripts.
Compliance with these constraints enables any distributed hypermedia system to
exhibit desirable properties such as performance, scalability, modifiability,
visibility, portability, and reliability.

RESTful Services in Cloud Computing:

• AWS (Amazon Web Services): AWS provides a variety of RESTful APIs for its
services, such as S3 (Simple Storage Service), EC2 (Elastic Compute Cloud), and
Lambda.
• Microsoft Azure: Azure offers REST APIs to interact with its services, including
Azure Storage, Azure Compute, and Azure SQL Database.
• Google Cloud Platform (GCP): GCP provides RESTful APIs for services like
Google Cloud Storage, Google Compute Engine, and Google BigQuery.

Benefits of REST in Cloud Computing:

• Scalability: Statelessness and layered system properties enable better


scalability.
• Flexibility: A uniform interface and client-server separation allow different
components to be developed and updated independently.
• Performance: Caching improves performance by reducing the need to
repeatedly fetch the same data.

Web Services
The Internet is the worldwide connectivity of hundreds of thousands of computers
belonging to many different networks.

A web service is a standardized method for propagating messages between client and
server applications on the World Wide Web. A web service is a software module that aims
to accomplish a specific set of tasks. Web services can be found and implemented over a
network in cloud computing.
A web service is a set of open protocols and standards that allow data exchange between
different applications or systems. Web services can be used by software programs written
in different programming languages and on different platforms to exchange data over
computer networks such as the Internet, much as inter-process communication works
on a single computer.

Any software, application, or cloud technology that uses a standardized Web protocol
(HTTP or HTTPS) to connect, interoperate, and exchange data messages over the
Internet (usually in XML, the Extensible Markup Language) is considered a Web service.

Web services allow programs developed in different languages to be connected between a
client and a server by exchanging data over a web service. A client invokes a web service
by submitting an XML request, to which the service responds with an XML response.

Web service functions

o It is possible to access it via the Internet or an intranet network.
o It uses a standardized XML messaging protocol.
o It is operating system and programming language independent.
o It is self-describing through the XML standard.

Web Services- Technologies

i. Simple Object Access Protocol (SOAP)


ii. Web Services Description Language (WSDL)
iii. Universal Description, Discovery and Integration (UDDI)

i. Simple Object Access Protocol (SOAP)

➢ SOAP stands for "Simple Object Access Protocol". It is a transport-independent
messaging protocol. SOAP is built on sending XML data in the form of SOAP
messages. A document known as an XML document is attached to each message.
➢ Only the structure of the XML document, not its content, follows a pattern. The great
thing about web services and SOAP is that everything is sent through HTTP, the
standard web protocol.
➢ Every SOAP document requires a root element known as the Envelope element. In an
XML document, the root element is the first element.
➢ The envelope is divided into two parts. The header comes first, followed by the
body. Routing data, i.e. information that directs the XML document to the client it
should be sent to, is contained in the header. The actual message is in the body.
➢ SOAP-based web services are also referred to as "big web services".
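The Envelope/Header/Body structure described above can be sketched with Python's standard XML library. The payload elements (`To`, `GetPrice`) are hypothetical, though the envelope namespace is the standard SOAP 1.1 one:

```python
import xml.etree.ElementTree as ET

NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 envelope namespace
ET.register_namespace("soap", NS)

envelope = ET.Element(f"{{{NS}}}Envelope")           # required root element
header = ET.SubElement(envelope, f"{{{NS}}}Header")  # routing information
ET.SubElement(header, "To").text = "PriceService"    # hypothetical routing field
body = ET.SubElement(envelope, f"{{{NS}}}Body")      # the actual message
ET.SubElement(body, "GetPrice").text = "item-42"     # hypothetical payload

xml_text = ET.tostring(envelope, encoding="unicode")
print(xml_text)

# The receiver parses the same Envelope/Header/Body structure back out.
parsed = ET.fromstring(xml_text)
print(parsed.find(f"{{{NS}}}Body/GetPrice").text)    # item-42
```

In a real exchange this XML text would travel inside an HTTP POST; the structure of the document, not its transport, is what SOAP standardizes.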

ii. UDDI (Universal Description, Discovery, and Integration)

➢ UDDI is a standard for specifying, publishing and searching online service providers.
It provides a specification that helps in hosting the data through web services.
➢ UDDI provides a repository where WSDL files can be hosted so that a client
application can search the WSDL file to learn about the various actions provided by
the web service.
➢ As a result, the client application will have full access to UDDI, which acts as the
database for all WSDL files.
➢ The UDDI Registry will keep the information needed for online services, such as a
telephone directory containing the name, address, and phone number of a certain
person so that client applications can find where it is.

iii. WSDL (Web Services Description Language)

➢ The client implementing the web service must be aware of the location of the web
service. If a web service cannot be found, it cannot be used. Second, the client
application must understand what the web service does to implement the correct web
service.
➢ WSDL is used to accomplish this. A WSDL file is another XML-based file that
describes the web service to a client application. From the WSDL document, the
client application learns where the web service is located and how to access it.

How does web service work?

A simplified view of how a web service functions: the client uses requests to send a
sequence of web service calls to the server hosting the actual web service.

Remote procedure calls are used to perform these requests. The calls to the methods
hosted by the respective web service are known as Remote Procedure Calls (RPC).
Example: Flipkart provides a web service that displays the prices of items offered on
Flipkart.com. The front end or presentation layer can be written in .NET or Java, but
the web service can be communicated with from either programming language.

The data exchanged between the client and the server, XML, is the most important
part of web service design. XML (Extensible Markup Language) is a simple, intermediate
language understood by various programming languages; it is a counterpart of HTML.

As a result, when programs communicate with each other, they use XML. It forms
a common platform for applications written in different programming languages to
communicate with each other.

Web services employ SOAP (Simple Object Access Protocol) to transmit XML data
between applications. The data is sent using standard HTTP. A SOAP message is data
sent from a web service to an application. An XML document is all that is contained in a
SOAP message. The client application that calls the web service can be built in any
programming language as the content is written in XML.

Features of Web Service

(a) XML-based: A web service's information representation and data transport layers
employ XML. There is no need for networking, operating system, or platform bindings
when using XML.

(b) Loosely Coupled: The interface of a web service provider may change over
time without affecting the consumer's ability to interact with the service provider. A
loosely coupled architecture makes software systems more manageable and easier to
integrate between different structures.

(c) Ability to be synchronous or asynchronous: Synchronicity refers to how the client
is bound to the execution of the function. In synchronous invocation, the client is
blocked and must wait for the service to complete its operation before continuing.
Asynchronous operations allow the client to invoke a task and then continue with
other tasks; asynchronous clients retrieve their result later, whereas synchronous
clients receive their result immediately when the service completes.

(d) Coarse-grained: Building a Java application from the ground up requires the
development of several fine-grained methods, which are then combined into a coarse-
grained service that is consumed by the buyer or another service.

(e) Supports remote procedure calls: Consumers can use XML-based protocols to
call procedures, functions, and methods on remote objects that use web services. A web
service must support the input and output framework of the remote system.

(f) Supports document exchange: XML's ability to represent data and complex
entities makes document exchange one of the most attractive features of web services.

WS-I Protocol Stack

• Business Process Execution Language for Web Services (BPEL4WS): a
standard executable language for specifying interactions between web services.
• WS-Notification enables web services to use the publish and subscribe
messaging pattern.
• Web Services Security (WS-Security) is a set of protocols that ensure security
for SOAP-based messages by implementing the principles of confidentiality,
integrity and authentication.
• Web Services Reliable Messaging (WS-Reliable Messaging) describes a
protocol that allows messages to be delivered reliably between distributed
applications in the presence of software component, system, or network failures.
• WS-Resource Lifetime specification standardizes the means by which a WS-
Resource can be destroyed.
• WS-Policy is a specification that allows web services to use XML to advertise their
policies (on security, quality of service, etc.) and for web service consumers to
specify their policy requirements.
• WS-Resource Properties defines a standard set of message exchanges that allow
a requestor to query or update the property values of the WS-Resource.
• WS-Addressing is a specification of transport-neutral mechanism that allows web
services to communicate addressing information.
• WS-Transaction is a specification that indicates how transactions are handled
and controlled in Web Services.
• The transaction specification is divided into two parts: short atomic transactions
(AT) and long business activities (BA).
• Web Services Coordination (WS-Coordination) describes an extensible
framework for providing protocols that coordinate the actions of distributed
applications.
• The Java Message Service (JMS) API is a messaging standard that allows
application components based on the Java Platform Enterprise Edition (Java EE)
to create, send, receive, and read messages.
• Internet Inter-ORB Protocol (IIOP) is an object-oriented protocol, used to
facilitate network interaction between distributed programs written in different
programming languages.
• IIOP is used to enhance Internet and intranet communication for applications and
services.

Publish Subscribe Model


Consider a scenario of synchronous message passing. You have two components in your
system that communicate with each other; let's call them the sender and the receiver.
The receiver asks for a service from the sender, and the sender serves the request and
waits for an acknowledgment from the receiver.
• There is another receiver that requests a service from the sender. The sender is
blocked since it hasn’t yet received any acknowledgment from the first receiver.
• The sender isn’t able to serve the second receiver which can create problems. To
solve this drawback, the Pub-Sub model was introduced.
What is Pub/Sub Architecture?
The Pub/Sub (Publisher/Subscriber) model is a messaging pattern used in software
architecture to facilitate asynchronous communication between different components or
systems. In this model, publishers produce messages that are then consumed by
subscribers.

Key points of the Pub/Sub model include:


• Publishers: Entities that generate and send messages.
• Subscribers: Entities that receive and consume messages.
• Topics: Channels or categories to which messages are published.
• Message Brokers: Intermediaries that manage the routing of messages between
publishers and subscribers.

Components of Pub/Sub Architecture


1. Publisher
The Publisher is responsible for creating and sending messages to the Pub/Sub
system. Publishers categorize messages into topics or channels based on their content.
They do not need to know the identity of the subscribers.
2. Subscriber
The Subscriber is a recipient of messages in the Pub/Sub system. Subscribers
express interest in receiving messages from specific topics. They do not need to know the
identity of the publishers. Subscribers receive messages from topics to which they are
subscribed.
3. Topic
A Topic is a named channel or category to which messages are published.
Publishers send messages to specific topics, and subscribers can subscribe to one or more
topics to receive messages of interest. Topics help categorize messages and enable
targeted message delivery to interested subscribers.
4. Message Broker
The Message Broker is an intermediary component that manages the routing of
messages between publishers and subscribers. It receives messages from publishers and
forwards them to subscribers based on their subscriptions. The Message Broker ensures
that messages are delivered to the correct subscribers and can provide additional features
such as message persistence, scalability, and reliability.
5. Message
A Message is the unit of data exchanged between publishers and subscribers in
the Pub/Sub system. Messages can contain any type of data, such as text, JSON, or binary
data. Publishers create messages and send them to the Pub/Sub system, and subscribers
receive and process these messages.
6. Subscription
A Subscription represents a connection between a subscriber and a topic.
Subscriptions define which messages a subscriber will receive based on the topics to which
it is subscribed.

How does Pub/Sub Architecture work?


• Step: 1 – Publishers create and send messages to the Pub/Sub system. They
categorize messages into topics or channels based on their content.
• Step: 2 – Subscribers express interest in receiving messages from specific topics.
They receive messages from topics to which they are subscribed.
• Step: 3 – Topics are named channels to which messages are published. Publishers
send messages to specific topics, and subscribers can subscribe to one or more
topics to receive messages of interest.
• Step: 4 – Message brokers are intermediaries that manage the routing of messages
between publishers and subscribers. They receive messages from publishers and
forward them to subscribers based on their subscriptions.
• Step: 5 – When a publisher sends a message to a topic, the message broker receives
the message and forwards it to all subscribers that are subscribed to that topic.
• Step: 6 – Pub/Sub allows for asynchronous communication between publishers
and subscribers. Publishers can send messages without waiting for subscribers to
receive them, and subscribers can receive messages without the need for
publishers to be active.
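The steps above can be sketched as a minimal in-memory broker. The class and topic names are illustrative, not a real messaging API:

```python
from collections import defaultdict

class MessageBroker:
    """Routes each published message to every subscriber of its topic."""
    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, message):
        # The publisher neither knows nor waits for the subscribers.
        for callback in self.subscriptions[topic]:
            callback(message)

broker = MessageBroker()
received = []
broker.subscribe("orders", lambda msg: received.append(("billing", msg)))
broker.subscribe("orders", lambda msg: received.append(("shipping", msg)))
broker.publish("orders", {"id": 1})              # delivered to both subscribers
broker.publish("stock", {"sku": "x"})            # no subscribers: silently dropped
print(received)   # [('billing', {'id': 1}), ('shipping', {'id': 1})]
```

A production broker would add queues, persistence, and acknowledgments, but the routing of messages from topics to subscriptions is the same idea.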

Real-World Example of Pub/Sub Architecture


A real-life example of Pub/Sub architecture can be seen in the operation of a social media
platform like Twitter.
• Publishers: Users post tweets, which are published to the Twitter platform.
• Subscribers: Followers of a user subscribe to their tweets to receive updates.
• Topics: Each user’s tweets can be considered a topic, and subscribers receive
updates for the topics they are interested in.
• Message Broker: Twitter’s backend infrastructure acts as the message broker,
routing tweets from publishers to subscribers.
• Message: Each tweet is a message that is published by the user and received by
their followers.
In this example, the Pub/Sub architecture allows for scalable and efficient distribution of
tweets to followers. Publishers (users) can publish tweets without knowing who will
receive them, and subscribers (followers) receive updates in real-time without the need
for direct communication with the publishers.
When to Use the Pub/Sub Architecture
• Decoupling: Use Pub/Sub when you want to decouple the components of your
system. Publishers and subscribers do not need to know about each other, which
allows for more flexible and scalable systems.
• Scalability: Pub/Sub can be used to build highly scalable systems. You can easily
add more publishers or subscribers without affecting the existing components.
• Asynchronous Communication: If you need asynchronous communication
between components, Pub/Sub is a good choice. Publishers can send messages
without waiting for subscribers to receive them.
• Event-Driven Architecture: Pub/Sub is well-suited for event-driven
architectures. Publishers can emit events and subscribers can react to these events
without tight coupling between them.
• Dynamic Subscriptions: Pub/Sub allows for dynamic subscriptions. Subscribers
can subscribe to different topics or classes of messages at runtime, which adds
flexibility to the system.
When Not to Use the Pub/Sub Architecture
• Low Latency: If you require low-latency communication between components,
Pub/Sub might not be the best choice. The overhead of message routing and
subscription management can introduce latency.
• Complexity: Pub/Sub adds complexity to the system, especially in terms of
message routing and managing subscriptions. If the system is simple and does not
require this level of complexity, simpler communication patterns may be more
appropriate.
• Ordered Delivery: Pub/Sub does not guarantee message delivery in a specific
order. If your application requires strict ordering of messages, Pub/Sub may not be
suitable.
• Small Scale: For small-scale applications with a limited number of components
that communicate directly with each other, Pub/Sub may introduce unnecessary
complexity.
Benefits of Pub/Sub Architecture
• Scalability: Pub/Sub systems can easily scale to accommodate a large number of
publishers, subscribers, and messages.
• Decoupling: Pub/Sub decouples the publishers of messages from the subscribers,
allowing them to operate independently. This decoupling simplifies the design and
maintenance of the system and makes it easier to add or remove components.
• Asynchronous Communication: Pub/Sub enables asynchronous
communication between components, which improves system responsiveness and
efficiency. Publishers can send messages without waiting for subscribers to receive
them, and subscribers can process messages at their own pace.
• Reliability: Pub/Sub systems are designed to be reliable, with mechanisms in
place to ensure that messages are delivered successfully. This reliability is
achieved through features such as message acknowledgments, retries, and fault
tolerance.
• Real-time Data Streaming: Pub/Sub is well-suited for real-time data streaming
applications, where data is generated and processed in real-time. Pub/Sub systems
can handle high volumes of data and deliver it to subscribers in real-time, making
them ideal for use cases such as IoT, gaming, and financial services.

Challenges of Pub/Sub Architecture


• Message Ordering: Pub/Sub systems typically do not guarantee the order in
which messages are delivered to subscribers. This can be a challenge for
applications that require strict message ordering, as subscribers may receive
messages out of order.
• Exactly-once Message Delivery: Ensuring exactly-once message delivery can
be challenging in Pub/Sub systems, especially in the presence of failures or
network issues. Implementing mechanisms to guarantee exactly-once delivery
without introducing duplicates can be complex.
• Latency: Pub/Sub systems introduce latency due to the message routing and
delivery process. Minimizing latency while maintaining scalability and reliability
can be challenging, especially in real-time applications where low latency is
critical.
• Complexity: Implementing a Pub/Sub architecture can introduce complexity,
especially in large-scale deployments.
• Security: Securing Pub/Sub systems against unauthorized access, data breaches,
and message tampering requires implementing robust authentication,
authorization, and encryption mechanisms.

Basics of Virtualization
Virtualization is the "creation of a virtual (rather than actual) version of something,
such as a server, a desktop, a storage device, an operating system or network resources".
It is one of the main cost-effective, hardware-reducing, and energy-saving
techniques used by cloud providers.
Virtualization allows sharing of a single physical instance of a resource or an
application among multiple customers and organizations at one time. It does this by
assigning a logical name to physical storage and providing a pointer to that physical
resource on demand.
The term virtualization is often synonymous with hardware virtualization, which
plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS)
solutions for cloud computing. Moreover, virtualization technologies provide a virtual
environment for not only executing applications but also for storage, memory, and
networking.

What is the concept behind Virtualization?


The creation of a virtual machine over an existing operating system and hardware is known as hardware virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware.
• The machine on which the virtual machine is created is known as the Host Machine, and the virtual machine itself is referred to as the Guest Machine.

How does virtualization work in cloud computing?


Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the cloud, such as applications, but with the help of virtualization users actually share the infrastructure.
• The main use of virtualization technology is to provide applications in their standard versions to cloud users. Suppose the next version of an application is released; the cloud provider then has to provide the latest version to its cloud users, which is practically difficult because it is expensive.
• To overcome this problem, virtualization technology is used. With virtualization, all the servers and software applications required by the cloud providers are maintained by a third party, and the cloud providers pay for them on a monthly or annual basis.

Benefits of Virtualization
• More flexible and efficient allocation of resources.
• Enhance development productivity.
• It lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure on demand.
• Enables running multiple operating systems.
Drawback of Virtualization
• High Initial Investment: Clouds require a very high initial investment, but over time they help reduce companies' costs.
• Learning New Infrastructure: As companies shift from servers to the cloud, they require highly skilled staff who can work with the cloud easily; for this, you have to hire new staff or train current staff.
• Risk of Data: Hosting data on third-party resources can put the data at risk, as it has a higher chance of being attacked by a hacker or cracker.

Types of Virtualization
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
5. Application Virtualization
6. Network Virtualization
7. Desktop Virtualization
8. Data Virtualization

1) Hardware Virtualization:
o When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization.
o The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
o After virtualizing the hardware system, we can install different operating systems on it and run different applications on those OSes.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling
virtual machines is much easier than controlling a physical server.

2) Operating System Virtualization:


When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware system, it is known as operating system virtualization.
Usage:
Operating System Virtualization is mainly used for testing the applications on different
platforms of OS.

3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple
servers on the demand basis and for balancing the load.
4) Storage Virtualization:
Storage virtualization is an array of servers managed by a virtual storage system. The servers are not aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.

5) Application Virtualization:
Application virtualization helps a user gain remote access to an application from a server. The server stores all personal information and other characteristics of the application, but the application can still run on a local workstation through the internet.
Usage:
When a user needs to run two different versions of the same software, application virtualization is used.
6) Network Virtualization:
Network virtualization is the ability to run multiple virtual networks, each with a separate control and data plane, coexisting on top of one physical network. The virtual networks can be managed by individual parties that remain isolated from each other.
Usage:
Network virtualization provides a facility to create and provision virtual networks, logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPN), and
workload security within days or even weeks.
7) Desktop Virtualization:
Desktop virtualization allows the user's OS to be remotely stored on a server in the data centre. It allows users to access their desktops virtually, from any location, using a different machine. Users who want specific operating systems other than Windows Server will need a virtual desktop.
Usage:
The main benefits of desktop virtualization are user mobility, portability, and easy
management of software installation, updates, and patches.
8) Data Virtualization:
This is the kind of virtualization in which data is collected from various sources and managed in a single place, without users needing to know technical details such as how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many large companies provide such services, including Oracle, IBM, AtScale, and CData.

Implementation Levels of Virtualization


Implementation Levels of Virtualization in Cloud Computing
➢ In the world of computing, using just one software instance is not enough anymore.
Professionals are looking to test their programs or software on multiple platforms.
➢ But constraints create challenges in doing so. The solution? Virtualization. Here,
users can create various platform instances such as operating systems,
applications, etc.
➢ It is not simple to set up virtualization. Your computer runs on an operating
system that gets configured on some particular hardware. It is not feasible or easy
to run a different operating system using the same hardware.
➢ To do this, you will need a hypervisor. Now, what is the role of the hypervisor? It
is a bridge between the hardware and the virtual operating system, which allows
smooth functioning.
➢ Virtualization has the capability to run multiple instances of computer systems on
the same hardware. The way hardware is being used can vary based on the
configuration of the virtual machine.
➢ The best example of this is your own desktop PC or laptop. You might be running
Windows on your system, but with virtualization, now you can also run Macintosh
or Linux Ubuntu on it.
➢ There are five levels of virtualizations available that are most commonly used in
the industry.

1) Instruction Set Architecture Level (ISA)

ISA virtualization can work through ISA emulation. This is used to run many legacy codes
written for a different hardware configuration. These codes run on any virtual machine
using the ISA. With this, a binary code that originally needed some additional layers to
run is now capable of running on the x86 machines. It can also be tweaked to run on the
x64 machine. With ISA, it is possible to make the virtual machine hardware agnostic.

For the basic emulation, an interpreter is needed. This interpreter interprets the source
code and converts it to a hardware readable format for processing.

2) Hardware Abstraction Level (HAL)

As the name suggests, this level helps perform virtualization at the hardware level. It
uses a bare hypervisor for its functioning.The virtual machine is formed at this level,
which manages the hardware using the virtualization process. It allows the virtualization
of each of the hardware components, which could be the input-output device, the memory,
the processor, etc.

Multiple users are able to use the same hardware and run multiple virtualization instances at the very same time. This is mostly used in cloud-based infrastructure.

✓ IBM first implemented this approach on the IBM VM/370, whose lineage dates back to the 1960s. It is well suited to cloud-based infrastructure.
✓ Thus, it is no surprise that currently, Xen hypervisors use HAL to run Linux and other OSes on x86-based machines.
3) Operating System Level

At the level of the operating system, the virtualization model is capable of creating a layer
that is abstract between the operating system and the application. It is like an isolated
container on the operating system and the physical server, which uses the software and
hardware. Each of these then functions in the form of a server.

When the number of users is high, and no one is willing to share hardware, then this is
where the virtualization level is used. Every user will get his virtual environment using
a dedicated virtual hardware resource.

4) Library Level

OS system calls are lengthy and cumbersome, and this is when the applications use the
API from the libraries at a user level. These APIs are documented well, and this is why
the library virtualization level is preferred in these scenarios.

API hooks make this possible, as they control the communication link from the application to the system.
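The API-hook idea can be illustrated by interposing on a library call so the application's requests pass through a control layer first. This toy Python monkey-patch stands in for real library-level mechanisms such as WINE or `LD_PRELOAD`:

```python
import time

calls = []

# Keep a reference to the "real" library function the application uses.
real_time = time.time

def hooked_time():
    # The hook intercepts the call, records it, and could translate or
    # redirect it before delegating to the real implementation.
    calls.append("time.time")
    return real_time()

time.time = hooked_time  # install the hook

t = time.time()          # the application calls the API as usual
time.time = real_time    # restore the original binding
```

The application's code is unchanged; only the communication link between the application and the library has been virtualized.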

5) Application Level

Application-level virtualization comes in handy when you wish to virtualize only an application. It does not virtualize an entire platform or environment.

On an operating system, applications work as one process. Hence it is also known as process-level virtualization.

• It is generally useful when running virtual machines with high-level languages.


Here, the application sits on top of the virtualization layer, which is above the
application program. The application program is, in turn, residing in the operating
system.
• Programs written in high-level languages and compiled for an application-level
virtual machine can run fluently here.

Virtualization Structures / Tools and Mechanisms

Before virtualization, the operating system manages the hardware. After virtualization,
a virtualization layer is inserted between the hardware and the OS. In such a case, the
virtualization layer is responsible for converting portions of the real hardware into virtual
hardware. Depending on the position of the virtualization layer, there are several classes
of VM architectures, namely

1. Hypervisor architecture
2. Para virtualization
3. Host-based virtualization.

The hypervisor is also known as the VMM (Virtual Machine Monitor). They both
perform the same virtualization operations.
1. Hypervisor and Xen Architecture

• The hypervisor supports hardware-level virtualization on bare metal devices like CPU, memory, disk and network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
• The hypervisor provides hyper calls for the guest OSes and applications. Depending
on the functionality, a hypervisor can assume a micro-kernel architecture like the
Microsoft Hyper-V. Or it can assume a monolithic hypervisor architecture like the
VMware ESX for server virtualization.

A micro-kernel hypervisor includes only the basic and unchanging functions (such as
physical memory management and processor scheduling). The device drivers and other
changeable components are outside the hypervisor. A monolithic hypervisor implements
all the aforementioned functions, including those of the device drivers. Therefore, the size
of the hypervisor code of a micro-kernel hyper-visor is smaller than that of a monolithic
hypervisor. Essentially, a hypervisor must be able to convert physical devices into virtual
resources dedicated for the deployed VM to use.
❖ Xen Architecture

Xen is an open source hypervisor program developed by Cambridge University. Xen is a microkernel hypervisor, which separates the policy from the mechanism. It just provides a mechanism by which a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is kept rather small. Xen provides a virtual environment located between the hardware and the OS.

The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems,
many guest OSes can run on top of the hypervisor.

However, not all guest OSes are created equal, and one in particular controls the others.
The guest OS, which has control ability, is called Domain 0, and the others are called
Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots
without any file system drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to
allocate and map hardware resources for the guest domains (the Domain U domains).
2. Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into
two categories: full virtualization and host-based virtualization.

❖ Full virtualization
o Full virtualization does not need to modify the host OS. It relies on binary
translation to trap and to virtualize the execution of certain sensitive,
nonvirtualizable instructions. The guest OSes and their applications consist of
noncritical and critical instructions.
o In a host-based system, both a host OS and a guest OS are used. A virtualization
software layer is built between the host OS and guest OS.

❖ Binary Translation of Guest OS Requests Using a VMM

This approach was implemented by VMware and many other software companies. VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The
VMM scans the instruction stream and identifies the privileged, control- and behavior-
sensitive instructions. When these instructions are identified, they are trapped into the
VMM, which emulates the behaviour of these instructions. The method used in this
emulation is called binary translation. Therefore, full virtualization combines binary
translation and direct execution. The guest OS is unaware that it is being virtualized.

The performance of full virtualization may not be ideal, because it involves binary
translation which is rather time-consuming.

Binary translation employs a code cache to store translated hot instructions to improve
performance, but it increases the cost of memory usage. At the time of this writing, the
performance of full virtualization on the x86 architecture is typically 80 percent to 97
percent that of the host machine.
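The trap-and-emulate behavior behind binary translation can be pictured with a toy instruction stream: the VMM scans for sensitive instructions and emulates them, while noncritical instructions execute directly. The opcode set and `ToyVMM` class here are hypothetical, chosen only to illustrate the mechanism:

```python
SENSITIVE = {"HLT", "OUT", "CLI"}  # toy set of privileged/sensitive opcodes

class ToyVMM:
    def __init__(self):
        self.trapped = []   # instructions the VMM had to emulate
        self.executed = []  # instructions run directly on the "CPU"

    def run(self, instruction_stream):
        for instr in instruction_stream:
            if instr in SENSITIVE:
                # Trap into the VMM, which emulates the instruction's
                # behavior against the VM's virtual hardware state.
                self.trapped.append(instr)
            else:
                # Noncritical instructions execute directly for speed.
                self.executed.append(instr)

vmm = ToyVMM()
vmm.run(["MOV", "ADD", "CLI", "MOV", "HLT"])
# vmm.trapped  -> ["CLI", "HLT"]
# vmm.executed -> ["MOV", "ADD", "MOV"]
```

The guest OS never sees the interception, which is why full virtualization leaves the guest unmodified, at the cost of the translation overhead discussed above.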

❖ Host-Based Virtualization

An alternative VM architecture is to install a virtualization layer on top of the host OS. This host OS is still responsible for managing the hardware. The guest OSes are installed and run on top of the virtualization layer. Dedicated applications may run on the VMs. Certainly, some other applications can also run with the host OS directly.
Host-based virtualization is a powerful technology that enables the virtualization of
servers within a data center. By utilizing a hypervisor, which is a software layer that
sits between the host server’s hardware and the virtual machines, host-based
virtualization enables multiple virtual machines to run simultaneously on a single
physical server.

One of the key advantages of host-based virtualization is the ability to consolidate multiple servers onto a single physical server, leading to improved resource utilization and cost savings. This consolidation also simplifies backup and disaster recovery processes, as virtual machines can easily be replicated and moved between physical hosts.

3. Para Virtualization with compiler support


Para virtualization is a type of virtualization where software instructions from the guest
operating system running inside a virtual machine can use “hypercalls” that communicate
directly with the hypervisor.

➢ Para-Virtualization Architecture

When the x86 processor is virtualized, a virtualization layer is inserted between the
hardware and the OS. According to the x86 ring definition, the virtualization layer should
also be installed at Ring 0. Different instructions at Ring 0 may cause some problems. In
Figure 3.8, we show that para-virtualization replaces nonvirtualizable instructions with
hypercalls that communicate directly with the hypervisor or VMM. However, when the
guest OS kernel is modified for virtualization, it can no longer run on the hardware
directly.

Compared with full virtualization, para-virtualization is relatively easy and more practical. The main problem in full virtualization is its low performance in binary translation, and speeding up binary translation is difficult. Therefore, many virtualization products employ the para-virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.
➢ KVM (Kernel-Based VM)
This is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel.
Kernel-based Virtual Machine (KVM) is an open source virtualization technology built
into Linux®. Specifically, KVM lets you turn Linux into a hypervisor that allows a host
machine to run multiple, isolated virtual environments called guests or virtual machines
(VMs). KVM is part of Linux.
➢ Para-Virtualization with Compiler Support

ESX is a VMM or a hypervisor for bare-metal x86 symmetric multiprocessing (SMP) servers.

Unlike the full virtualization architecture which intercepts and emulates privileged and
sensitive instructions at runtime, para-virtualization handles these instructions at
compile time. The guest OS kernel is modified to replace the privileged and sensitive
instructions with hypercalls to the hypervisor or VMM. Xen assumes such a para-
virtualization architecture.
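The compile-time rewriting can be pictured as follows: instead of issuing a sensitive instruction and relying on a runtime trap, the modified guest kernel calls the hypervisor explicitly. The `hypercall` interface below is a hypothetical sketch, not Xen's actual ABI:

```python
def hypercall(hypervisor_state, op, *args):
    # The guest asks the hypervisor to perform a privileged operation
    # on its behalf; the hypervisor validates and executes it.
    if op == "update_page_table":
        vaddr, paddr = args
        hypervisor_state["page_table"][vaddr] = paddr
        return 0
    return -1  # unknown operation

# In full virtualization, the guest would execute a privileged MMU
# instruction directly and be trapped at runtime. In para-virtualization,
# that instruction was replaced at compile time with this explicit call.
state = {"page_table": {}}
rc = hypercall(state, "update_page_table", 0x1000, 0x9F000)
# rc == 0 and the mapping is installed by the hypervisor
```

Because the call site is known at compile time, no runtime scanning or binary translation is needed, which is the source of para-virtualization's performance advantage.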

Virtualization Of CPU, Memory, And I/O Devices


To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest
OS run in different modes and all sensitive instructions of the guest OS and its
applications are trapped in the VMM. To save processor states, mode switching is
completed by hardware. For the x86 architecture, Intel and AMD have proprietary
technologies for hardware-assisted virtualization.

➢ Hardware Support for Virtualization


• Modern operating systems and processors permit multiple processes to run
simultaneously. If there is no protection mechanism in a processor, all instructions
from different processes will access the hardware directly and cause a system crash.
• Therefore, all processors have at least two modes, user mode and supervisor mode,
to ensure controlled access of critical hardware.
• Instructions running in supervisor mode are called privileged instructions. Other
instructions are unprivileged instructions. In a virtualized environment, it is more
difficult to make OSes and applications run correctly because there are more layers in
the machine stack.
Example: Intel’s hardware support approach.
Hardware Support for Virtualization in the Intel x86 Processor

Since software-based virtualization techniques are complicated and incur performance overhead, Intel provides a hardware-assist technique to make virtualization easy and improve performance. Figure 3.10 provides an overview of Intel's full virtualization techniques. For processor virtualization, Intel offers the VT-x or VT-i technique. VT-x adds a privileged mode (VMX Root Mode) and some instructions to processors. This enhancement traps all sensitive instructions in the VMM automatically. For memory virtualization, Intel offers the EPT, which translates virtual addresses to the machine's physical addresses to improve performance. For I/O virtualization, Intel implements VT-d and VT-c.

CPU Virtualization
A single CPU can run numerous operating systems (OS) via CPU virtualization in cloud
computing. This is possible by creating virtual machines (VMs) that share the physical
resources of the CPU. Each virtual machine cannot see or interact with another VM's data or processes.

CPU virtualization is very important in cloud computing. It enables cloud providers to offer services like –
• Virtual private servers (VPSs)
• Cloud storage (EBS)
• Cloud computing platforms (AWS, Azure and Google Cloud)
Consider an example to understand CPU virtualization. Imagine we have a physical server with a single CPU. We want to run two different operating systems on this server, Windows and Linux. This can easily be done by creating two Virtual Machines (VMs), one for Windows and one for Linux.

The virtualization software will create a virtual CPU for each VM. The virtual CPUs will execute on the physical CPU but separately. This means the Windows Virtual Machine cannot view or communicate with the Linux VM, and vice versa.

The virtualization software will also allocate memory and other resources to each VM. This guarantees each VM has enough resources to execute. CPU virtualization is complex, but it is essential for cloud computing.
How CPU Virtualization Works? In Step By Step Process

Step 1: Creating Virtual Machines (VMs)


• Let's say you have a powerful computer with a CPU, memory, and other resources.
• To start CPU virtualization, you use special software called a hypervisor. This is
like the conductor of a virtual orchestra.
• The hypervisor creates virtual machines (VMs) – these are like separate, isolated
worlds within your computer.
• The “virtual” resources of each VM include CPU, memory, and storage. It’s like
having multiple mini-computers inside your main computer.

Step 2: Allocating Resources


• The hypervisor carefully divides the real CPU’s processing power among the VMs.
It’s like giving each VM its own slice of the CPU pie.
• It also makes sure that each virtual machine (VM) gets its share of memory, storage, and other resources.

Step 3: Isolation and Independence


Each VM operates in its own isolated environment. It can’t see or interfere with
what’s happening in other VMs.

Step 4: Running Operating Systems and Apps


• Within each Virtual Machine, you can install & run different operating systems
(like Windows, Linux) and applications.
• The VM thinks it’s a real computer, even though it’s sharing the actual computer’s
resources with other VMs.

Step 5: Managing Workloads


• The hypervisor acts as a smart manager, deciding when each VM gets to use the
real CPU.
• It ensures that no VM takes up all the CPU time, making sure everyone gets their
turn to work.

Step 6: Efficient Use of Resources


• Even though there’s only one physical CPU, each VM believes it has its own
dedicated CPU.
• The hypervisor cleverly switches between VMs so that all the tasks appear to be
happening simultaneously.
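Steps 5 and 6 amount to time-slicing: the hypervisor multiplexes one physical CPU among the VMs so each appears to have its own. A round-robin sketch (VM names and the `schedule` function are hypothetical):

```python
from collections import deque

def schedule(vms, total_slices):
    """Round-robin one physical CPU among VMs, one time slice each."""
    queue = deque(vms)
    timeline = []
    for _ in range(total_slices):
        vm = queue.popleft()
        timeline.append(vm)   # this VM runs for one slice on the real CPU
        queue.append(vm)      # then goes to the back of the queue
    return timeline

timeline = schedule(["vm-windows", "vm-linux"], 4)
# -> ["vm-windows", "vm-linux", "vm-windows", "vm-linux"]
```

Real hypervisor schedulers are far more sophisticated (priorities, credits, pinning), but the core guarantee is the same: no VM monopolizes the CPU, and each gets its turn.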

Advantages Of CPU Virtualization In Cloud Computing

1) Efficient Resource Utilization: CPU virtualization lets one powerful machine handle multiple tasks simultaneously. This maximizes the use of hardware resources and reduces wastage.

2) Cost Savings: By running multiple virtual machines on a single physical server, cloud
providers save on hardware costs, energy consumption, and maintenance.

3) Scalability: CPU virtualization allows easy scaling, adding or removing virtual machines according to demand. This flexibility helps businesses adapt to changing needs.
4) Isolation and Security: Each Virtual Machine (VM) is isolated from others, providing
a layer of security. If one VM has a problem, it’s less likely to affect others.

5) Compatibility and Testing: Different operating systems and applications can run on the same physical hardware, making it easier to test new software without affecting existing setups.

Disadvantages Of CPU Virtualization In Cloud Computing:

1) Overhead: The virtualization layer adds some overhead, which means a small portion
of CPU power is used to manage virtualization itself.

2) Performance Variability: Depending on the number of virtual machines and their demands, performance can vary. If one VM needs a lot of resources, others might experience slower performance.

3) Complexity: Handling multiple virtual machines and how they work together needs
expertise. Creating and looking after virtualization systems can be complicated.

4) Compatibility Challenges: Some older software or hardware might not work well
within virtualized environments. Compatibility issues can arise.

5) Resource Sharing: While CPU virtualization optimizes resource usage, if one VM suddenly requires a lot of resources, it might impact the performance of others.

Memory Virtualization
In the world of cloud computing, where data and applications are scattered across vast networks, one concept directs the behind-the-scenes performance: memory virtualization. It is the reason your cloud-based services work like a finely tuned symphony and you never run out of memory when browsing the net.

What Is Memory Virtualization?

Memory virtualization is like having a super smart organizer for your computer brain
(Running Memory -RAM). Imagine your computer brain is like a big bookshelf, and all
the apps and programs you installed or are running are like books.

Memory virtualization is the librarian who arranges these books so your computer can
easily find and use them quickly. It also ensures that each application gets a fair share of
the memory to run smoothly and prevents mess, which ultimately makes your computer
brain (RAM) more organized (tidy) and efficient.

In technical language, memory virtualization is a technique that abstracts, manages, and optimizes the physical memory (RAM) used in computer systems. It creates a layer of abstraction between the RAM and the software running on your computer. This layer enables efficient memory allocation to different processes, programs, and virtual machines.
Memory virtualization helps optimize resource utilization and secures the smooth
operations of multiple applications on shared physical memory (RAM) by ensuring each
application gets the required memory to work flawlessly.

*Note – Don’t confuse it with virtual memory! Virtual memory is like having a bigger
workspace (hard drive) to handle large projects, and memory virtualization is like an office
manager dividing up the shared resources, especially computer RAM, to keep things
organized and seamless.

How Is Memory Virtualization Useful In Our Daily Lives?

Basically, memory virtualization helps our computer systems to work fast and smoothly.
It also provides sufficient memory for all apps and programs to run seamlessly.

Memory virtualization, like a personal computing assistant, ensures everything stays organized and works properly, which is very important for the efficient working of our computers and smartphones. Whether browsing the web, working on Google documents, or using complex software, memory virtualization is the hero that provides us with a smooth and responsive computing experience in our daily lives.

Memory virtualization is essential for modern computing, especially in cloud computing, where multiple users and applications share the same physical hardware (like RAM and the system itself).

It helps in efficient memory management and allocation, isolation between applications (by providing each its required share of memory), and dynamic adjustment based on the running workloads of various applications. Without memory virtualization, it would be challenging to run multiple applications at the same time.

How Does Memory Virtualization Work In Cloud Computing?

You may be thinking all that is fine, but how does memory virtualization work in cloud
computing? It’s just part of the broader concept of resource virtualization, which includes
internet, storage, network, and many other virtualization techniques.

When memory virtualization takes place in cloud infrastructure, it goes through a process.
Key Elements Involved in Memory Virtualization
1. Abstraction of Physical Memory

Just as virtual memory (backed by the hard drive) abstracts physical memory (RAM) in
traditional computing, memory virtualization in cloud computing abstracts the physical
memory (RAM) underlying various Virtual Machines (VMs) to create a pool of resources
that can be allocated to a group of VMs.

For this abstraction of physical memory, cloud service providers use a hypervisor, also
known as a Virtual Machine Monitor (VMM), which abstracts and manages VM memory in the
cloud. This abstraction allows cloud users (VMs) to request and consume memory without
worrying about the underlying physical limits.

2. Resource Pooling

In cloud computing, there is a Cloud Data Center where multiple physical servers host
various Virtual Machines (VMs) and manage their dynamic workloads. Memory
virtualization pools the memory resources (storage) from the data center to create a
shared resource pool (Virtual Memory).

This pool can be allocated to different VMs and cloud users per their dynamic needs and
workload.

3. Dynamic Allocation

Cloud service providers use memory virtualization to allocate virtual memory to VMs and
Cloud users instantly on demand (According to Workload). It means cloud memory can
be dynamically assigned and reassigned based on the fluctuating workload.

This elasticity of cloud computing enables effective use of available resources, and cloud
users can scale up or down their cloud memory as needed. Additionally, cloud migration
services help in ensuring the seamless transfer of data and applications to the cloud,
enhancing the benefits of memory virtualization.
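The pooling and dynamic-allocation steps above can be sketched in a few lines. This is an illustrative model only (the class and method names are invented for this example, not any real hypervisor API): a shared memory pool grants RAM to VMs on demand and reclaims it when a workload shrinks.

```python
# Illustrative sketch (hypothetical names, not a real hypervisor API): a shared
# memory pool that dynamically grants and reclaims RAM for VMs on demand.

class MemoryPool:
    def __init__(self, total_mb):
        self.total_mb = total_mb      # physical RAM pooled from the data center
        self.allocations = {}         # vm_name -> MB currently granted

    def free_mb(self):
        return self.total_mb - sum(self.allocations.values())

    def scale(self, vm_name, requested_mb):
        """Scale a VM's memory up or down; grant only what the pool can supply."""
        current = self.allocations.get(vm_name, 0)
        delta = requested_mb - current
        if delta > self.free_mb():            # not enough free memory in the pool
            requested_mb = current + self.free_mb()
        self.allocations[vm_name] = requested_mb
        return requested_mb

pool = MemoryPool(total_mb=8192)
pool.scale("vm-web", 2048)     # workload spike: grant 2 GB
pool.scale("vm-db", 4096)      # grant 4 GB
pool.scale("vm-web", 1024)     # workload drops: shrink to 1 GB, freeing RAM
print(pool.free_mb())          # 3072 MB left in the shared pool
```

The key point the sketch captures is elasticity: memory freed by one VM immediately returns to the shared pool and can be reassigned to another.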

4. Isolation and Data Security

Memory virtualization ensures that the virtual memory allocated to one cloud user or VM
is isolated from others. This isolation is vital for data security and prevents one individual
from accessing another’s data or memory.

That’s why many sensitive IT companies prefer to purchase private cloud services to
prevent hacking and data breaches.

Importance of memory virtualization in cloud computing:

1. Memory virtualization allows cloud providers to use physical memory resources in the
most efficient way. Overcommitting memory allows the optimization of memory resources
and hardware.

2. This virtualization enables the dynamic allocation of cloud memory to cloud user
instances. This elasticity is crucial in cloud computing to manage varying workloads. It
allows cloud users to scale up and down memory resources as needed and promotes
flexibility and cost savings.
3. Allocating separate cloud memory for every single user prevents unauthorized access
and is a must for data security.

4. Memory virtualization is vital for handling a large number of users and workloads. It
ensures that scaling up or down memory can be done without manual intervention
whenever a VM is required.

5. Migration and live migration are important for load balancing, hardware maintenance,
and disaster recovery in cloud computing. Implementing reliable software migration
services is crucial for ensuring smooth transitions and maintaining system stability
during memory virtualization processes.

6. By optimizing virtual memory usage, memory virtualization maximizes physical memory
utilization and helps reduce the overall operational cost of the cloud.
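Point 1 above mentions overcommitting memory. The arithmetic behind it is simple: the sum of memory promised to VMs may exceed physical RAM, because VMs rarely use their full allocation at the same time. The numbers below are made up for illustration, not taken from any specific provider.

```python
# Illustrative overcommitment arithmetic (example numbers only).

physical_ram_gb = 64
vm_allocations_gb = [8, 8, 16, 16, 32]        # memory promised to each VM

promised = sum(vm_allocations_gb)             # 80 GB promised in total
overcommit_ratio = promised / physical_ram_gb # 80 / 64 = 1.25

# The host stays healthy as long as *actual* usage fits in physical RAM:
actual_usage_gb = [3, 4, 9, 10, 20]           # typical working sets
print(overcommit_ratio, sum(actual_usage_gb) <= physical_ram_gb)
# -> 1.25 True
```

An overcommit ratio above 1.0 means the provider sells more memory than physically exists, relying on statistical multiplexing of real workloads.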

Applications Of Memory Virtualization In The Digital World


• Cloud Computing: In shared cloud environments, memory virtualization
ensures that each virtual machine (VM) has an isolated memory and gets the
required memory whenever needed.
• High-Performance Computing (HPC): In HPC clusters, it ensures that
memory is efficiently allocated to multiple processes in parallel, enabling
seamless, complex scientific simulations and big data analysis.
• Data Centers: Large enterprises with heavy data loads require memory
virtualization to run multiple applications on a shared server. It simplifies
resource management where multiple teams and departments have varying memory
requirements and dynamic loads.
• Resource-Constrained Environments: When computers have limited physical
memory (RAM), memory virtualization helps optimize memory usage and prevent
resource contention, improving memory balancing and system performance.
• Help in Disaster Recovery: Memory virtualization enables the transfer of
memory between two data centers and maintains services during failure.
• Testing and Development of Applications: Used to simulate real-world
conditions and test application performance under varying memory configurations.
• IoT and Edge Computing: New edge applications and devices use memory
virtualization for efficient RAM allocation and isolation of cache for different apps
and websites. For example, when you use two different apps on your mobile device,
one app can’t access another app’s data without your permission.

I/O Virtualization
I/O Virtualization in cloud computing refers to the process of abstracting and managing
inputs and outputs between a guest system and a host system in a cloud environment. It
is a critical component of cloud infrastructure, enabling efficient, flexible, and scalable
data transmission between different system layers and hardware. This technology greatly
enhances the performance, scalability, and availability of cloud services, making it an
essential tool in the era of big data and high-performance computing.
• I/O virtualization involves managing the routing of I/O requests between virtual
devices and the shared physical hardware.
• Input/output virtualization involves abstracting the input and output processes
from physical hardware, allowing multiple virtual environments to share the same
physical resources.

Three ways to implement I/O virtualization:


1. Full device emulation
2. Para-virtualization
3. Direct I/O
Full device emulation: In full emulation, the I/O devices, CPU, and main memory are all
virtualized. The guest operating system accesses virtual devices rather than physical devices.
Para-virtualization: Software in the guest operating system running inside a virtual
machine uses “hypercalls” that communicate directly with the hypervisor. This provides
an interface very similar to software running natively on the host hardware.
Direct I/O: Direct I/O virtualization lets the VM access devices directly. It can achieve
close-to-native performance without high CPU costs.
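The three approaches differ mainly in the path a guest I/O request takes to reach hardware. The sketch below is purely conceptual (all function names are invented stand-ins, not real hypervisor calls); it shows which layer handles the request in each mode.

```python
# Conceptual sketch (hypothetical names, no real hypervisor API): the path a
# guest I/O request takes under each of the three virtualization approaches.

def handle_io(request, mode):
    if mode == "full-emulation":
        # Hypervisor traps the access and emulates a virtual device in
        # software; the guest never touches real hardware (flexible, slowest).
        return emulate_device(request)
    elif mode == "para-virtualization":
        # Guest driver knows it is virtualized and issues a hypercall straight
        # to the hypervisor, skipping the costly trap-and-emulate path.
        return hypercall(request)
    elif mode == "direct-io":
        # Device is assigned to the VM; the request goes to hardware with
        # close-to-native performance and little hypervisor involvement.
        return physical_device(request)

# Stand-in implementations so the sketch runs:
def emulate_device(r):   return f"emulated:{r}"
def hypercall(r):        return f"hypercall:{r}"
def physical_device(r):  return f"direct:{r}"

print(handle_io("read-block-42", "para-virtualization"))  # hypercall:read-block-42
```

In practice the trade-off is flexibility versus speed: full emulation supports unmodified guests, para-virtualization needs modified guest drivers, and direct I/O needs hardware support for device assignment.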

The Role of I/O Virtualization in Cloud Computing


▪ In a cloud computing environment, I/O virtualization enables multiple virtual
machines on a single physical host to share I/O devices. This is achieved by using
a virtual I/O controller that acts as an intermediary between the VMs and the
physical devices.
▪ The controller manages the I/O requests from the VMs and directs them to the
appropriate device. This process is transparent to the VMs, which operate as if
they have their own dedicated I/O devices.
▪ By abstracting the I/O devices, virtualization helps to improve resource utilization
and efficiency. It allows for more VMs to be hosted on a single physical server,
reducing hardware costs and power consumption.
▪ Additionally, it enables easy migration of VMs from one server to another, which
can be a significant advantage in terms of load balancing and fault tolerance.

Benefits of I/O Virtualization


✓ Performance: Improves system performance by reducing the overhead
associated with managing physical devices.
✓ Resource utilization: By allowing multiple VMs to share the same physical I/O
resources, virtualization helps to maximize the use of these resources, reducing
the need for additional hardware.
✓ Increased flexibility: With virtualization, resources can be dynamically
allocated and reallocated based on demand.

Challenges in I/O Virtualization


✓ Complexity of managing virtual I/O devices.
✓ Virtualization layer can introduce additional latency, which can impact the
performance of the system.
✓ Resource contention: If multiple VMs try to access the same physical device
at the same time, performance can suffer. This can be mitigated by techniques such as
resource scheduling and load balancing, but these solutions also add to the complexity
of the system.
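One of the mitigations named above, resource scheduling, can be illustrated with a simple round-robin dispatcher: the virtual I/O controller serves one pending request per VM per pass, so a heavy I/O user cannot starve a light one. This is an illustrative sketch, not how any particular hypervisor schedules I/O.

```python
# Sketch of round-robin I/O scheduling across VM request queues
# (illustrative only; real hypervisors use more sophisticated schedulers).

from collections import deque

def round_robin_dispatch(queues):
    """queues: dict of vm_name -> deque of pending I/O requests.
    Serve one request per VM per pass until all queues are empty."""
    order = []
    while any(queues.values()):
        for vm, q in queues.items():
            if q:
                order.append((vm, q.popleft()))
    return order

pending = {
    "vm-a": deque(["a1", "a2", "a3"]),   # heavy I/O user
    "vm-b": deque(["b1"]),               # light I/O user still gets fair turns
}
print(round_robin_dispatch(pending))
# [('vm-a', 'a1'), ('vm-b', 'b1'), ('vm-a', 'a2'), ('vm-a', 'a3')]
```

Note how "b1" is served after only one of vm-a's requests: fairness comes from interleaving, at the cost of some scheduling overhead.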

Virtualization Support and Disaster Recovery

What is virtualization support and disaster recovery in cloud computing?


Businesses back up their data and operations using offsite virtual machines (VMs) not
affected by physical disasters. With virtualization as part of the disaster recovery plan,
businesses automate some processes, recovering faster from a natural disaster.

Cloud-based backup and recovery capabilities help you back up and restore
business-critical data if it is compromised. Thanks to its high adaptability, cloud
technology enables efficient disaster recovery regardless of the nature or severity of
the failure. Data is kept in a virtual storage environment designed for high
availability. The service is available on demand, enabling companies of all sizes to
tailor Disaster Recovery (DR) solutions to their requirements.
Cloud Disaster Recovery (CDR) is based on a resilient platform that lets you recover
fully from a catastrophe and provides remote access to your systems in a secure virtual
environment.
For conventional DR, maintaining a secondary data center can be expensive and
time-consuming. CDR has changed all that by removing the requirement for a dedicated
secondary site and drastically reducing downtime. Information technology (IT)
departments can now use the cloud's elasticity to spin resources up and down instantly,
leading to faster recovery times at a fraction of the cost.
As corporations keep adding systems, software applications, and services to their
day-to-day operations, the associated risk of data loss rises significantly. A crisis
can happen at any time and leave a company devastated by huge data loss. When you
recognize what that can cost, it is evident why it makes good sense to establish a data
backup and recovery plan.
How does cloud disaster recovery work?
Cloud disaster recovery takes a very different approach from classical DR. Rather than
loading a standby data center with operating systems and patching them to the final
configuration used in production, cloud disaster recovery captures the entire server,
including the OS, applications, patches, and data, in a single software bundle or
virtual server.
The virtual server is then replicated or backed up to an off-site data center, or spun
up on a remote host in minutes. Because the virtual server is not hardware-dependent,
the OS, applications, patches, and data can be moved from one data center to another
much faster than with conventional DR methods.

How can RackWare assist you?


RackWare develops cloud management technology that helps businesses migrate
applications, provides disaster recovery and failback, and manages cloud storage.
The RackWare Management Module (RMM) gives companies IT flexibility by streamlining
disaster recovery and failback to any server. Several of its features are as follows:

o Single framework
A single centralized solution that enables replication, synchronization, integration,
and cloud-based disaster recovery.
o Widely compatible
It supports all physical, virtual, and cloud environments, including Hyper-V, and
handles cloud-agnostic workloads.
o Supports all apps
It supports all applications, along with their data and configuration, without
rewriting any of them.
o Prevents lock-in
RackWare reduces the risk of vendor lock-in by supporting physical-to-cloud,
data-center-to-cloud, and cloud-to-physical restore and disaster recovery regardless
of supplier.
o Automated disaster recovery testing
Automated DR testing helps a company reduce the time and labor costs of DR exercises
by up to 80 percent compared with manual testing.
o Personalized RTO/RPO
Provides flexibility to tailor RPO, RTO, and cost priorities to business requirements
through various pre-provisioned or adaptive methods.
o Dynamic provisioning
Dynamic provisioning considerably reduces the cost of DR target servers compared with
pre-provisioning: no compute resources are consumed until a failure occurs.
o Selective synchronization
Selective sync enables policies, security, and prioritization for mission-critical
applications and file systems.

Selecting a Cloud disaster recovery provider


When choosing a cloud disaster recovery provider, several factors must be considered:
reliability, location, security, compliance, and scalability.
First, a company must weigh the physical distance and latency to the CDR vendor:
placing the disaster recovery site too close raises the risk that a single physical
disaster affects both sites, but placing it too far away increases latency and network
congestion, making it harder to access DR content. Location can be particularly tricky
when DR content must be accessible from multiple global business locations. Next,
assess the reliability of the cloud DR provider. Even the cloud has downtime, and
service failure during recovery can be just as devastating for the business.

Also assess the scalability of the cloud DR offering. It must protect the selected
data, applications, and other resources, accommodate additional resources as required,
and provide sufficient performance even as other global customers use the facilities.
Finally, understand the security requirements of the disaster recovery content and
ensure that the vendor can provide the authentication, VPNs (virtual private networks),
encryption, and other tools needed to protect these vital resources.

Cloud disaster recovery methodologies

o Warm disaster recovery


Warm disaster recovery is a standby strategy in which duplicate data and applications
are kept with a cloud DR vendor and regularly updated to match the services and data in
the primary data center.
o Cold disaster recovery
Cold disaster recovery usually entails storing data or virtual machine (VM) images.
These resources are generally not usable until additional work is performed, such as
retrieving the stored data or loading the image into a virtual machine. Cold DR is
typically the simplest approach (often just storage) and the cheapest, but it takes the
longest to recover, leaving the organization with the most downtime in the event of a
disaster.
o Hot disaster recovery
Hot disaster recovery is traditionally described as a live, parallel deployment of data
and workloads running concurrently. The primary and backup data centers run the same
workloads and data in sync, with both sites sharing a portion of the overall traffic.
When a disaster happens, the remaining site continues to handle the load without
interruption, and consumers should be unaware of the disturbance. Although there is no
downtime with hot DR, it is the most complex and expensive methodology.
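The choice between these tiers is usually driven by two business metrics: RTO (Recovery Time Objective, how long you can afford to be down) and RPO (Recovery Point Objective, how much data loss you can tolerate). The sketch below illustrates the trade-off with example thresholds that are invented for this illustration, not an industry standard.

```python
# Illustrative decision sketch (example thresholds, not an industry standard):
# picking a DR tier from the business's recovery objectives.
# RTO = tolerable downtime; RPO = tolerable data loss, both in minutes.

def choose_dr_tier(rto_minutes, rpo_minutes):
    if rto_minutes < 5 and rpo_minutes < 5:
        return "hot"    # live parallel site: near-zero downtime, highest cost
    elif rto_minutes <= 240:
        return "warm"   # standby copies kept in sync, restored within hours
    else:
        return "cold"   # stored backups/images: cheapest, slowest to recover

print(choose_dr_tier(rto_minutes=1, rpo_minutes=0))       # hot
print(choose_dr_tier(rto_minutes=120, rpo_minutes=60))    # warm
print(choose_dr_tier(rto_minutes=1440, rpo_minutes=720))  # cold
```

The general pattern holds regardless of the exact thresholds: tighter RTO/RPO targets push toward hot DR and higher cost, looser targets allow warm or cold DR at lower cost.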

Advantages of Cloud disaster recovery


o Pay-as-you-go pricing
The pay-as-you-go model of cloud providers lets companies pay a recurring fee only for
the services and infrastructure they actually use, with charges adjusting as resources
are added or removed.
o Scalability and adaptability
o High dependability and geographical redundancy
o Testing is simple, and restoration is quick

Cloud disaster recovery service and vendors

The most obvious route for cloud disaster recovery is via the major public cloud
providers. Amazon Web Services (AWS) provides the CloudEndure Disaster Recovery
service, Azure offers Azure Site Recovery, and GCP (Google Cloud Platform) provides
Cloud Storage and Persistent Disk options for safeguarding valuable data.
Enterprise disaster recovery services can be built on all three major cloud providers.
Aside from the public clouds, many dedicated disaster recovery vendors now provide
DRaaS products, effectively offering access to dedicated clouds for DR workloads.
Among the top DRaaS vendors are:
o Iland
o Expedient
o IBM DRaaS
o Sungard AS
o TierPoint
o Bluelock
o Recovery Point Systems
Furthermore, more generic backup vendors are now providing DRaaS, such as:
o Acronis
o Carbonite
o Zerto
o Databarracks
o Arcserve UDP
o Unitrends
o Datto
