
Cloud Computing (Unit 2)

❖ Representational State Transfer (REST)

REST, short for Representational State Transfer, is a software architectural style designed to ensure interoperability between different Internet computer systems. It defines a set of constraints to be used when creating web services. A REST API is a simple, flexible way of accessing web services without heavy processing overhead.

Representational state transfer (REST) is a distributed system framework that uses Web protocols and technologies. The REST architecture involves client and server interactions built around the transfer of resources. REST is a convention for stateless client-server communication that is typically implemented using the HTTP protocol (using other protocols is also technically possible). REST itself is not a protocol; it is simply a set of conventions that strive to create simplicity and consistency in resource naming across different web-based applications and APIs.

Web services that follow the REST architectural style are known as RESTful web services. REST allows requesting systems to access and manipulate web resources using a uniform, predefined set of rules. Interaction in REST-based systems happens through the Internet's Hypertext Transfer Protocol (HTTP).

❖ RESTful

In a RESTful architecture, standard HTTP methods are used in combination with Uniform Resource Identifiers (URIs) to communicate requests and responses between a client and a server. Each URI describes a self-contained operation and contains all the information needed to satisfy the request. A RESTful API is an interface that two computer systems use to exchange information securely over the Internet. Most business applications have to communicate with other internal and third-party applications to perform various tasks.
● A RESTful system consists of:
✔ A client, which requests resources.

✔ A server, which holds the resources.

❖ How RESTful APIs work

A RESTful API breaks a transaction down into a series of small modules, each of which addresses one underlying part of the transaction. This modularity gives developers a lot of flexibility, but it can be challenging to design a REST API from scratch. Currently, several companies provide models for developers to use; the models provided by Amazon S3, the Cloud Data Management Interface (CDMI) and OpenStack Swift are the most popular.

A RESTful API uses commands to obtain resources. The state of a resource at any given timestamp is called a resource representation. A RESTful API uses the existing HTTP methods defined by RFC 2616 (HTTP/1.1), such as:

● GET to retrieve a resource;
● PUT to change the state of or update a resource, which can be an object, file or block;
● POST to create a resource; and
● DELETE to remove it.

Data formats the REST API supports include:

● application/json
● application/xml
● application/x-wbe+xml
● application/x-www-form-urlencoded
● multipart/form-data
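
To make these verbs concrete, here is a minimal sketch using Python's requests library. The base URL https://api.example.com, the /users resource and the returned id field are hypothetical stand-ins, not part of any real API.

```python
# Hedged sketch of the four HTTP methods against a hypothetical
# /users resource; the endpoint and response fields are assumptions.
import requests

BASE = "https://api.example.com"

# GET: retrieve a resource representation (the list of users)
resp = requests.get(f"{BASE}/users")
print(resp.status_code, resp.json())

# POST: create a new resource under the collection
resp = requests.post(f"{BASE}/users", json={"name": "Asha"})
new_id = resp.json()["id"]            # assumes the server returns the new ID

# PUT: change the state of / update that resource
requests.put(f"{BASE}/users/{new_id}", json={"name": "Asha K."})

# DELETE: remove it
requests.delete(f"{BASE}/users/{new_id}")
```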

❖ Elements of the REST Paradigm

The key elements of the REST architectural paradigm are the following:

Client – this is the consumer that sends a request for a resource to the server.

Server – this is the producer that provides an API for accessing its data and operations.

Resource – this is any content, e.g., a text file or an image, that the server returns to the client as a response.

REST-based architectures are characterized by requests and responses for bidirectional communication between a client and a server. In this type of communication, the client requests resources from the server, which sends responses back to the client.

❖ The REST Request Structure

A REST request comprises a URL, an HTTP method, request headers, and, optionally, a request body. In return, the server sends back a status code, response headers, and a body to the client. Here are the purposes of each element of a REST request.

HTTP method – this describes the operation to be performed on a resource.

Endpoint – this comprises a Uniform Resource Identifier (URI) that can locate the resource. The Uniform Resource Locator (URL) is the most common type of URI and represents the complete web address.

Headers – these store metadata relevant to both the client and the server, such as the name or IP address of the server, authentication credentials, an API key, and information about the response format.

Body – this is a piece of data sent along with the request to the server, typically used to add or edit data at the server.
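
As an illustration, the sketch below assembles each of these elements with Python's requests library and prints them; the URL, the API key and the JSON fields are placeholder assumptions.

```python
# Sketch of the parts of a REST request: method, endpoint, headers, body.
# The endpoint and the Bearer token are hypothetical placeholders.
import requests

req = requests.Request(
    method="POST",                                # HTTP method: the operation
    url="https://api.example.com/users",          # endpoint: URI locating the resource
    headers={                                     # headers: request metadata
        "Content-Type": "application/json",
        "Authorization": "Bearer <api-key>",
    },
    json={"name": "Ravi", "role": "admin"},       # body: data to add at the server
)
prepared = req.prepare()
print(prepared.method, prepared.url)
print(dict(prepared.headers))
print(prepared.body)

# The server replies with a status code, response headers and a body:
resp = requests.Session().send(prepared)
print(resp.status_code, resp.headers.get("Content-Type"), resp.text)
```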

❖ Architectural constraints used by REST

The six architectural constraints that make a web service RESTful are listed below.

❑ Uniform Interface – This is the key constraint that differentiates a REST API from a non-REST API. It suggests that there should be a uniform way of interacting with a given server irrespective of the device or type of application (website, mobile app). The four guiding principles of the uniform interface are:

✔ Resource-Based :- Individual resources are identified in requests. Example: /api/users.
✔ Manipulation of Resources Through Representations :- The client holds a representation of a resource that contains enough information to modify or delete the resource on the server, provided it has permission to do so. Example: a client typically receives a user ID when it requests a list of users, and then uses that ID to delete or modify that particular user.

✔ Self-descriptive Messages :- Each message includes enough information to describe how to process it, so that the server can easily analyze the request.

✔ Hypermedia as the Engine of Application State (HATEOAS) :- Each response should include links so that the client can easily discover other resources (see the sketch after this list).

❑ Stateless – The foundation of REST architecture is the stateless HTTP protocol. In a REST-based architecture, clients and servers should communicate with each other in a stateless way. Most importantly, state information can only be managed at the client, not at the server. In other words, each request is isolated and self-contained in REST architecture, and no client information is preserved between requests.

❑ Cacheable – Caching is a proven technique for enhancing the scalability and performance of an application. Every response should indicate whether it is cacheable and for how long it can be cached at the client side. The client can then serve subsequent requests from its cache, with no need to send the request to the server again. Well-managed caching partially or completely eliminates some client/server interactions, further improving availability and performance.

❑ Client-Server – In a RESTful architecture, the server and the client are clearly isolated from each other. While the server doesn't know the user interface, the client doesn't know the application's business logic or how the application persists data. You can change the server and the client independently of one another.

❑ Layered system – An application architecture needs to be composed of multiple layers. Each layer knows nothing about any layer other than the one immediately adjacent to it, and there can be many intermediate servers between the client and the end server.

❑ Code on demand – This is an optional constraint. According to it, servers can also provide executable code to the client. For example, a server can send Java applets or JavaScript to a client so that the code is executed on the client side.
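
As a sketch of the HATEOAS principle above, a response representing a user might carry links through which the client can discover related resources; all field and link names here are illustrative, not a fixed standard.

```python
# Illustrative HATEOAS-style response body, shown as a Python dict:
# the representation embeds links to related resources and actions.
user_response = {
    "id": 17,
    "name": "Asha",
    "_links": {
        "self":   {"href": "/api/users/17"},              # this resource
        "orders": {"href": "/api/users/17/orders"},       # discoverable resource
        "delete": {"href": "/api/users/17", "method": "DELETE"},
    },
}
```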

❖ Rules of REST API

There are certain rules which should be kept in mind while creating REST API endpoints.

✔ REST is based on resources (nouns) rather than actions (verbs). This means that the URI of a REST API should always end with a noun. Example: /api/users is a good example, but /api?type=users is a bad example of a REST API endpoint.
✔ HTTP verbs are used to identify the action. Some of the HTTP verbs are GET, PUT, POST, DELETE and PATCH.
✔ A web application should be organized into resources, such as users, and then use HTTP verbs like GET, PUT, POST and DELETE to operate on those resources. It should be clear to a developer what needs to be done just by looking at the endpoint and the HTTP method used.
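
The sketch below applies these rules with only Python's standard library: the endpoint is the noun /api/users, GET and POST identify the actions, and the GET response also marks itself cacheable as the constraints above require. The port, data and paths are assumptions for illustration.

```python
# Minimal noun-based endpoint (/api/users) using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = [{"id": 1, "name": "Asha"}]   # toy in-memory resource collection

class UsersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/users":             # noun-based URI, verb = GET
            body = json.dumps(USERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Cache-Control", "max-age=60")   # cacheable constraint
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        if self.path == "/api/users":             # same noun, verb = POST (create)
            length = int(self.headers.get("Content-Length", 0))
            new_user = json.loads(self.rfile.read(length))
            new_user["id"] = len(USERS) + 1
            USERS.append(new_user)
            self.send_response(201)               # 201 Created
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), UsersHandler).serve_forever()
```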

❖ Simple Object Access Protocol (SOAP)

In today's world, there is a huge number of applications built in different programming languages. For example, there could be a web application designed in Java, another in .Net and another in PHP.

Exchanging data between applications is crucial in today's networked world, but data exchange between these heterogeneous applications is complex, and so is the code needed to accomplish it. One method used to combat this complexity is to use XML (Extensible Markup Language) as the intermediate language for exchanging data between applications.

Simple Object Access Protocol (SOAP) is a protocol for implementing Web services. SOAP
features guidelines that allow communication via the Internet between two programs, even if
they run on different platforms, use different technologies and are written in different
programming languages.

❑ SOAP is a communication protocol designed to communicate via the Internet.
❑ SOAP can extend HTTP for XML messaging.
❑ SOAP provides data transport for Web services.
❑ SOAP can exchange complete documents or call a remote procedure.
❑ SOAP can be used for broadcasting a message.
❑ SOAP is platform- and language-independent.
❑ SOAP is the XML way of defining what information is sent and how.
❑ SOAP enables client applications to easily connect to remote services and invoke remote methods.

❖ SOAP messages are XML documents that comprise the following three basic building blocks:

✔ The SOAP Envelope encapsulates all the data in a message and identifies the XML document as a SOAP message.

✔ The Header element contains additional information about the SOAP message. This information could be authentication credentials, for example, which are used by the calling application.

✔ The Body element includes the details of the actual message that needs to be sent from the web service to the calling application. This data includes call and response information.

✔ The Fault element is an optional fourth building block. If a SOAP fault is generated, it is returned as an HTTP 500 error. Fault messages contain a fault code, string, actor and detail.

❖ How does SOAP work

SOAP requests are easy to generate, and their responses are easy to process. First, a client generates a request for a service using an XML document.

Next, the SOAP client sends the XML document to a SOAP server.

When the server receives the SOAP message, it sends the message as a service invocation to the requested server-side application.

A response containing the requested parameters, return values and data for the client is returned first to the SOAP request handler and then to the requesting client.

Both SOAP requests and responses are transported using HTTP or, for security, Hypertext Transfer Protocol Secure (HTTPS).
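
A hedged sketch of this flow in Python: the client wraps its request in a SOAP envelope (here a SOAP 1.2 envelope) and POSTs it over HTTP. The service URL, namespace and GetPrice operation are invented for illustration.

```python
# Sketch: build a SOAP envelope and send it over HTTP.
# The endpoint and the GetPrice operation are hypothetical.
import urllib.request

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <m:GetPrice xmlns:m="http://example.com/prices">
      <m:Item>Apples</m:Item>
    </m:GetPrice>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://example.com/soap-endpoint",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())   # the response is also a SOAP envelope
```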

❖ Advantages of SOAP

❑ Platform- and operating system-independent :- SOAP can be carried over a variety of protocols, enabling communication between applications written in different programming languages on both Windows and Linux.
❑ Works on the HTTP protocol :- Even though SOAP works with many different protocols, HTTP is the default protocol used by web applications.
❑ Can be transmitted through different network and security devices :- SOAP can easily pass through firewalls, where other protocols might require special accommodation.

❖ Disadvantages of SOAP

❑ No provision for passing data by reference :- This can cause synchronization issues if multiple copies of the same object are passed simultaneously.
❑ Speed :- The data structure of SOAP is based on XML. XML is largely human-readable, which makes it fairly easy to understand a SOAP message. However, it also makes the messages relatively large compared to the Common Object Request Broker Architecture (CORBA) and its remote procedure call (RPC) protocol, which can accommodate binary data. Because of this, CORBA and RPC are faster.
❑ Not as flexible as other methods :- Although SOAP is flexible, newer methods, such as RESTful architecture, can use XML, JavaScript Object Notation (JSON), YAML or any needed parser, which makes them more flexible than SOAP.
❑ WSDL dependent :- SOAP uses the Web Services Description Language (WSDL) and has no other mechanism for discovering the service.

❖ Difference between REST API and SOAP API

REST API | SOAP API
REST is an architectural style with flexible guidelines. | SOAP is a protocol with strict, standardized rules.
REST supports multiple data formats such as JSON, XML and YAML. | SOAP supports the XML format only.
REST messages are lightweight, so REST is generally faster. | SOAP messages are XML documents, which are relatively large and slower to process.
REST uses standard HTTP methods (GET, PUT, POST, DELETE) on resources. | SOAP describes its operations through WSDL.

❖ Services and Web Services

A web service is a standardized medium for communication between client and server applications on the WWW (World Wide Web). A web service is a software module designed to perform a specific set of tasks. Web services in cloud computing can be searched for over the network and invoked accordingly; when invoked, the web service provides its functionality to the client that invoked it.

A web service is also a set of open protocols and standards that allow data to be exchanged between different applications or systems. Web services can be used by software programs written in a variety of programming languages and running on a variety of platforms to exchange data over computer networks such as the Internet, in a way similar to inter-process communication on a single computer.

❑ The major difference between web service technology and other technologies such as J2EE, CORBA, and CGI scripting is its standardization: it is based on standardized XML, providing a language-neutral representation of data.
❑ Most web services transmit messages over HTTP, making them available as Internet-scale applications. In addition, unlike CORBA and J2EE, web services' use of HTTP as the tunneling protocol enables remote communication through firewalls and proxies.

❖ Enterprise Service Bus (ESB)

The Enterprise Service Bus (ESB) is a software architecture which connects all the services together over a bus-like infrastructure. It acts as the communication center in an SOA, linking multiple systems, applications and data and connecting multiple systems without disruption.

It makes it easy to change components or add components to an application. It is also a convenient place to enforce security and compliance requirements, log normal or exception conditions, and handle transaction performance monitoring.

❖ Publish-Subscribe Model

Publish/subscribe is a style of messaging application in which the providers of information (publishers) have no direct link to specific consumers of that information (subscribers), but the interactions between publishers and subscribers are controlled by pub/sub brokers.

In a publish/subscribe system, a publisher does not need to know who uses the
information (publication) that it provides, and a subscriber does not need to know who
provides the information that it receives as the result of a subscription. Publications
are sent from publishers to the pub/sub broker, subscriptions are sent from subscribers
to the pub/sub broker, and the pub/sub broker forwards the publications to the
subscribers.

The pub/sub broker ensures that messages are delivered to the correct subscribers. Note that there is a many-to-many relationship between publishers and subscribers.
❖ Working of Publish-Subscribe Model

The model consists of an input messaging channel used by the sender. The sender packages events into messages, using a known message format, and sends these messages via the input channel. The sender in this pattern is also called the publisher.

There is one output messaging channel per consumer; the consumers are known as subscribers. A mechanism copies each message from the input channel to the output channels of all subscribers interested in that message. This operation is typically handled by an intermediary such as a message broker or event bus, as sketched below.
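
The toy sketch below models this: publishers and subscribers only ever talk to the broker, which copies each message to every interested subscriber. The Broker class and the orders topic are illustrative.

```python
# Toy in-memory pub/sub broker: publishers and subscribers are decoupled.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        # Subscriptions are sent to the broker, not to any publisher.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The broker copies the message to every interested subscriber.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
broker.subscribe("orders", lambda m: print("billing got:", m))
broker.subscribe("orders", lambda m: print("shipping got:", m))
broker.publish("orders", {"order_id": 42})   # one publication, many subscribers
```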

❖ Benefits of Publish-Subscribe Model

✔ It decouples subsystems that still need to communicate. Subsystems can be managed independently, and messages can be properly managed even if one or more receivers are offline.
✔ It increases scalability and improves responsiveness of the sender. The sender can
quickly send a single message to the input channel, and then return to its core
processing responsibilities. The messaging infrastructure is responsible for ensuring
messages are delivered to interested subscribers.
✔ It improves reliability. Asynchronous messaging helps applications continue to run
smoothly under increased loads and handle intermittent failures more effectively.
✔ It allows for deferred or scheduled processing. Subscribers can wait to pick up
messages until off-peak hours, or messages can be routed or processed according to a
specific schedule.
✔ It enables simpler integration between systems using different platforms,
programming languages, or communication protocols, as well as between on-premises
systems and applications running in the cloud.
✔ It facilitates asynchronous workflows across an enterprise.

✔ It improves testability. Channels can be monitored and messages can be inspected or logged as part of an overall integration test strategy.
✔ It provides separation of concerns for your applications. Each application can focus on
its core capabilities, while the messaging infrastructure handles everything required to
reliably route messages to multiple consumers.
❖ Components of Web Service

WSDL (Web Services Description Language) :- If a web service can't be found, it can't be used. The client invoking the web service must know where the web service is located. The client application must also understand what the web service does in order to invoke the correct one. This is accomplished with WSDL, the Web Services Description Language. The WSDL file is an XML-based file that explains to the client application what the web service does. Using the WSDL document, the client application can understand where the web service is located and how to use it.

UDDI (Universal Description, Discovery, and Integration) :- UDDI is a standard for specifying, publishing and discovering a service provider's online services. It provides a specification that aids in hosting data about web services. UDDI provides a repository where WSDL files can be hosted, so that a client application can discover a WSDL file and learn about the various actions a web service offers. The client application thus has full access to the UDDI, which serves as a database of all WSDL files. Just as a telephone directory has the name, address and phone number of a particular individual, the UDDI registry holds the relevant information about the online service.

SOAP (Simple Object Access Protocol) :- SOAP is a transport-independent messaging protocol. SOAP is built on sending XML data in the form of SOAP messages; an XML document is attached to each message. Only the structure of the XML document, not its content, follows a pattern. The best thing about web services and SOAP is that everything travels over HTTP, the standard web protocol.

❖ How Does Web Service Work

The client sends a sequence of web service calls, in the form of requests, to the server that hosts the actual web service. These requests are made using remote procedure calls: calls to methods hosted by the relevant web service are known as Remote Procedure Calls (RPCs).
● Example: Flipkart offers a web service that displays prices for items offered on Flipkart.com. The front end or presentation layer can be written in .Net or Java, and the web service can be consumed from either programming language.
● The data exchanged between the client and the server, which is XML, is the most important part of a web service design. XML (Extensible Markup Language) is a simple intermediate language understood by various programming languages.
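
As a small sketch of such an RPC exchange, Python's standard xmlrpc modules carry method calls as XML over HTTP; the get_price method, the prices and the port are invented stand-ins for a service like the one in the Flipkart example.

```python
# Sketch of a remote procedure call carried as XML over HTTP.
# --- server side ---
from xmlrpc.server import SimpleXMLRPCServer

def get_price(item):
    prices = {"phone": 12999, "laptop": 45999}   # toy catalogue
    return prices.get(item, -1)

server = SimpleXMLRPCServer(("localhost", 9000))
server.register_function(get_price)
# server.serve_forever()            # uncomment to actually serve requests

# --- client side (could equally be written in Java, .Net, PHP, ...) ---
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:9000/")
# print(proxy.get_price("phone"))   # -> 12999 once the server is running
```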

❖ Web service Characteristics

❖ They are XML-based – Web services use XML to represent data at the representation and data transportation layers. Using XML eliminates any networking, operating system, or platform dependency, since XML is a common language understood by all.

❖ Loosely Coupled – Loosely coupled means that the client and the web service are not
bound to each other, which means that even if the web service changes over time, it
should not change the way the client calls the web service. Adopting a loosely
coupled architecture tends to make software systems more manageable and allows
simpler integration between different systems.

❖ Synchronous or Asynchronous functionality – Synchronicity refers to the binding of the client to the execution of the service. In synchronous operations, the client actually waits for the web service to complete an operation. Asynchronous operations allow a client to invoke a service and then execute other functions in parallel. This is one of the most common and preferred techniques for ensuring that other services are not stopped while a particular operation is being carried out.

❖ Ability to support Remote Procedure Calls (RPCs) – Web services enable clients to
invoke procedures, functions, and methods on remote objects using an XML-based
protocol. Remote procedures expose input and output parameters that a web service
must support.

❖ Supports Document Exchange – One of the key benefits of XML is its generic way
of representing not only data but also complex documents. These documents can be as
simple as representing a current address, or they can be as complex as representing an
entire book.

❖ Service Oriented Architecture (SOA)

Service-oriented architecture (SOA) is a type of software design that makes software components reusable using service interfaces that use a common communication language over a network.

A service is a self-contained unit of software functionality, or set of functionalities, designed to complete a specific task such as retrieving specified information or executing an operation. It contains the code and data integrations necessary to carry out a complete, discrete business function and can be accessed remotely and interacted with or updated independently. In other words, SOA integrates software components that have been separately deployed and maintained and allows them to communicate and work together to form software applications across different systems.

Service-oriented architecture is also described as a software architecture style that supports loosely coupled, distributable application components and incorporates discovery, data mapping, security and more. Service-oriented architecture has two main functions:

1) Create an architectural model that defines the goals of applications and the methods that will help achieve those goals.

2) Define implementation specifications linked through WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol) specifications.

❖ Major Roles Service Oriented Architecture (SOA)

There are two major roles within Service-oriented Architecture:

❑ Service provider :- The service provider is the maintainer of the service and the organization that makes one or more services available for others to use. To advertise services, the provider can publish them in a registry, together with a service contract that specifies the nature of the service, how to use it, the requirements for the service, and the fees charged.
❑ Service consumer :- The service consumer locates the service metadata in the registry and develops the required client components to bind to and use the service.
❖ Guiding Principles of Service Oriented Architecture (SOA)

❖ Interoperability :- Each service in SOA includes description documents that specify the functionality of the service and the related terms and conditions. Any client system can run a service, regardless of the underlying platform or programming language.
❖ Loose coupling :- Services in SOA should be loosely coupled, with as little dependency as possible on external resources such as data models or information systems. They should also be stateless, retaining no information from past sessions or transactions. This way, if you modify a service, it won't significantly impact the client applications and other services using it.
❖ Abstraction :- Clients or service users in SOA need not know the service's code logic or implementation details. Clients get the required information about what the service does and how to use it through service contracts and other service description documents.
❖ Service Discoverability :- Services can be discovered, usually in a service registry. We have previously seen this in the discussion of UDDI, which provides a registry that can contain information about a web service.
❖ Service Composability :- Services break large problems into smaller ones and can be combined to solve them.

❖ Reusability :- Designed as components, services can be reused more effectively, thus reducing development time and the associated costs.
❖ Autonomy :- Services have control over the logic they encapsulate and, from a service consumer's point of view, there is no need to know about their implementation.

❖ Elements of Service Oriented Architecture (SOA)

Service Oriented Architecture (SOA) is the pattern used in computer systems to design software in which applications provide services to other applications. This communication is done with the help of a protocol, and it happens over a network.

❑ Application Frontends :- Application frontends are the active players of an SOA. They initiate and control all activity of the enterprise systems. There are different types of application frontends, for example one with a graphical user interface, such as a Web application or a rich client that interacts directly with end users.

❑ Services :- A service is a software component of distinctive functional meaning that typically encapsulates a high-level business concept.

❑ Service Repository :- A service repository provides facilities to discover services and acquire all the information needed to use them, particularly if they must be discovered outside the functional and temporal scope of the project that created them. The service repository can provide additional information, such as physical location, information about the provider, contact persons, usage fees, technical constraints, security issues, and available service levels.
❑ Service Bus :- A service bus connects all participants of an SOA (services and application frontends) with each other. If two participants need to communicate (for example, if an application frontend needs to invoke some functionality of a basic service), the service bus makes it happen. The service bus is not necessarily composed of a single technology, but rather comprises a variety of products and concepts.

❑ Contract :- The service contract provides an informal specification of the purpose, functionality, constraints, and usage of the service. The form of this specification can vary, depending on the type of service.

❑ Implementation :- The service implementation physically provides the required business logic and appropriate data. It is the technical realization that fulfils the service contract. The service implementation consists of one or more artifacts such as programs, configuration data, and databases.

❑ Interface :- The functionality of the service is exposed by the service interface to clients that are connected to the service using a network.

❑ Business logic :- The business logic that is encapsulated by a service is part of its implementation. It is made available through service interfaces.

❑ Data :- A service can also include data. In particular, serving data is the purpose of a data-centric service.

❖ Characteristics of Service Oriented Architecture (SOA)

✔ SOA supports loose coupling everywhere in the project.

✔ SOA supports interoperability.

✔ SOA increases the quality of service.

✔ SOA supports vendor diversity.

✔ SOA promotes discovery and federation.

✔ SOA is location-transparent.

✔ SOA is a still-maturing but achievable idea.

❖ Horizontal Layers of Service Oriented Architecture (SOA)

• Consumer Interface Layer: the GUI-based layer through which end users reach the applications.
• Business Process Layer: a service layer that represents the business use cases and business processes.
• Services Layer: the many services that work together to create a whole enterprise in the service inventory.
• Service Component Layer: the components used to build the services, such as technological interfaces and functional and technical libraries.
• Operational Systems Layer: this layer contains the data models, technical patterns, data repository, etc.

❖ Vertical Layers of Service Oriented Architecture (SOA)

▪ Integration layer :- This layer enables the integration of services through the introduction of a reliable set of capabilities, such as intelligent routing, protocol mediation, and other transformation mechanisms.

▪ Quality of service :- This layer provides the capabilities required to monitor, manage, and maintain QoS attributes such as security, performance, and availability. This is a background process operating through sense-and-respond mechanisms and tools that monitor the health of SOA applications.

▪ Informational :- This layer is concerned with providing business-related information.

▪ Governance :- Also known as the IT strategy layer, it spans the horizontal layers so that they reach the required operational capability.

❖ Terminology of Service Oriented Architecture (SOA)

• Services - The services are the logical entities defined by one or more published interfaces.

• Service provider - A software entity that implements a service specification.

• Service consumer - Also called a requestor or client; it calls a service provider. A service consumer can be another service or an end-user application.

• Service locator - A service provider that acts as a registry. It is responsible for examining service provider interfaces and service locations.

• Service broker - A service provider that passes service requests on to one or more additional service providers.
❖ Advantages of Service Oriented Architecture (SOA)

✔ Service reusability :- In SOA, applications are made from existing services. Thus, services can be reused to make many applications.
✔ Easy maintenance :- As services are independent of each other, they can be updated and modified easily without affecting other services.
✔ Platform independence :- SOA allows a complex application to be made by combining services picked from different sources, independent of the platform.
✔ Availability :- SOA facilities are easily available to anyone on request.
✔ Reliability :- SOA applications are more reliable because it is easier to debug small services than huge bodies of code.
✔ Scalability :- Services can run on different servers within an environment, which increases scalability.

❖ Disadvantages of Service Oriented Architecture (SOA)

✔ High overhead :- A validation of the input parameters of services is done whenever services interact; this decreases performance because it increases load and response time.
✔ High investment :- A huge initial investment is required for SOA.
✔ Complex service management :- When services interact, they exchange messages for each task, and the number of messages may run into millions. It becomes a cumbersome task to handle such a large number of messages.

❖ Applications of Service Oriented Architecture (SOA)

1. SOA infrastructure is used by many armies and air forces to deploy situational awareness systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games, and they use built-in functions to run. For example, an app might need GPS, so it uses the built-in GPS functions of the device. This is SOA in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.
❖ Difference in SOA, SOAP, REST

SOA (Service-Oriented Architecture) is a set of guidelines for designing loosely coupled software systems. One of its goals is to allow for rapid business change.

SOAP (Simple Object Access Protocol) is a protocol (a set of rules) that allows web services to communicate with one another. It defines endpoints, message formats and transports (such as HTTP and XML).

REST (Representational State Transfer) is a set of architectural principles by which you can design Web services that focus on a system's resources, including how resource states are addressed and transferred over HTTP by a wide range of clients written in different languages.

❖ Virtualization

Virtualization is the "creation of a virtual (rather than actual) version of something,


such as a server, a desktop, a storage device, an operating system or network
resources". In other words, Virtualization is a technique, which allows to share a
single physical instance of a resource or an application among multiple customers and
organizations.

It involves using specialized software to create a virtual, software-created version of a computing resource rather than the actual version of that resource. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware. Virtualization is a fundamental part of cloud computing, especially in delivering Infrastructure as a Service (IaaS).

Virtualization creates a simulated, or virtual, computing environment as opposed to a physical environment. Virtualization often includes computer-generated versions of hardware, operating systems, storage devices, and more. This allows organizations to partition a single physical computer or server into several virtual machines. Each virtual machine can then interact independently and run different operating systems or applications while sharing the resources of a single host machine.

❖ Virtualization Terminology

❑ Host Machine – The physical machine that hosts one or more virtual machines. To
accomplish this, virtualization software such as a hypervisor is installed on the Host
Machine.
❑ Hypervisor – The software or firmware that manages virtual machines, allowing
them to interact directly with the underlying hardware. The hypervisor is a hardware
virtualization technique that allows multiple guest operating systems (OS) to run on a
single host system at the same time. The hypervisor is an operating platform which
manages and executes the Guest VM operating systems.
❑ VM Cluster – A collection of VM hosts that act as a single large host. If one of the hosts is removed, all of the VMs that it was running seamlessly continue running on the other hosts. A true VM cluster requires shared storage, such as a SAN/NAS device.
❑ P2V (Physical to Virtual) – Refers to the process of migrating operating systems,
applications and data from the hard disk of a physical server to a virtual machine.
❑ VM Snapshot – Preserves the memory state and data of a virtual machine at a given point in time, allowing the VM to be recovered to that point in time. Snapshots are NOT a way to do backups, but part of the technology used to create snapshots is used by backup software to do backups correctly.
❑ VM Checkpoint – Prior to making changes to your VM (software installation, etc) it's
wise to create a VM checkpoint, allowing you to return to a previous state. While it is
not a full backup of a VM, a checkpoint captures data and memory state of the virtual
machine.
❑ VM Replication – The ability to replicate virtual machines at the server virtualization
level using replication software. Provides redundancy for quick VM recovery and
reduced downtime in the event of failure or disaster.
❑ VM Backup – Virtual machine backup can be performed multiple different ways
depending on the backup software and the type of hypervisor that the VM guest
resides on. The backup software guards against data loss and can be used to recover
files in the event of hardware failure or other disaster.
❑ VM Single File Restore – The ability to restore a single file from a virtual machine
rather than restoring the entire machine. Without this feature, VM backup software
would need to be installed on both the host and the guests in order to retrieve a single
file from a virtual machine.
❑ Virtual Machine (Guest VM) – A self-contained software emulation of a machine, which does not physically exist but shares the resources of an underlying physical machine. It runs its own operating system, applications, processes, etc.

❖ How Does Virtualization Work?

First, IT professionals create multiple virtual machines (VMs) modeled after a single physical machine. VMs, also referred to as virtual instances, virtual computers or virtual versions, are software emulations of a physical computer system that run on shared hardware rather than dedicated local hardware.

To create multiple VMs, IT professionals use software known as a hypervisor. Even though all of these VMs connect to one main computer system, each VM has the capability to perform the same tasks as the physical machine, because each VM has its own operating system.

Once the VMs exist, companies can use them to run multiple operating systems on a single server or host. The hypervisor software supports the VMs by allocating computing resources to each one as needed, which reduces the amount of computing power companies need to perform multiple tasks simultaneously.
❖ Types of Virtualization

1. Application Virtualization :- Application virtualization gives a user remote access to an application from a server. The server stores all personal information and other characteristics of the application, yet the application can still be run on a local workstation through the Internet. Using application virtualization software, IT admins can set up remote applications on a server and deliver the apps to an end user's computer.
2. Network Virtualization :- This is the ability to run multiple virtual networks, each with a separate control and data plane, co-existing together on top of one physical network. The virtual networks can be managed by individual parties that may need to keep their traffic confidential from each other. Network virtualization provides a facility to create and provision virtual networks (logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security) within days or even weeks. This type of virtualization improves network speed, scalability, and reliability.
3. Desktop Virtualization :- Desktop virtualization allows a user's OS to be stored remotely on a server in the data centre. It allows the user to access their desktop virtually, from any location, on a different machine. Users who want a specific operating system other than Windows Server will need a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates and patches, along with the ability to fix systems and add security protocols across all virtual desktops at once.
4. Storage Virtualization :- Storage virtualization is an array of servers that are managed by a virtual storage system. The servers aren't aware of exactly where their data is stored, and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance and a continuous suite of advanced functions for backup and recovery purposes, despite changes, breakdowns and differences in the underlying equipment.
5. Server Virtualization :- This is a kind of virtualization in which the masking of server resources takes place. Here, the central server (physical server) is divided into multiple different virtual servers by changing the identity numbers and processors, so each virtual server can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost through the deployment of main server resources into sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure cost, etc.
6. Data Virtualization :- This is the kind of virtualization in which data is collected from various sources and managed in a single place, without the user needing to know technical details such as how the data is collected, stored and formatted. The data is arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders and users through various cloud services.
7. Hardware Virtualization :- When the virtual machine software, or virtual machine manager (VMM), is installed directly on the hardware system, it is known as hardware virtualization. The main job of the hypervisor is to control and monitor the processor, memory and other hardware resources. After virtualizing the hardware system, we can install different operating systems on it and run different applications on those OSes. Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling physical servers.
8. Operating System Virtualization :- When the virtual machine software, or virtual machine manager (VMM), is installed on a host operating system instead of directly on the hardware, it is known as operating system virtualization. Operating system virtualization is mainly used for testing applications on different OS platforms.
9. Data Centre Virtualization :- This virtualization option abstracts a data centre's hardware into software, enabling an administrator to apply it to multiple virtual data centres. Clients can then access their infrastructure as a service (IaaS) running on the same physical hardware. This offers a route into cloud computing, since organizations can create a data centre environment without purchasing infrastructure hardware.
10. GPU virtualization:- Graphical processing units (GPUs) improve overall computing
performance by managing heavy graphic and mathematical processing. GPU
virtualization enables multiple VMs to use a GPU's processing power for high-
intensity applications, such as artificial intelligence (AI) and video.

❖ How does virtualization work in cloud computing?

Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the cloud, such as applications, but with virtualization users actually share the infrastructure.

A major use of virtualization technology is to provide standard versions of applications to cloud users. When the next version of an application is released, the cloud provider has to supply that latest version to its cloud users, and in practice this is difficult because it is expensive.

To overcome this problem, virtualization technology is used. With virtualization, all the servers and software applications required by other cloud providers are maintained by third parties, and the cloud providers pay them on a monthly or annual basis.
❖ Characteristics of Virtualization

1. Resource Distribution :- Whether from a single computer or a network of connected servers, virtualization lets users create a unique computing environment from one host machine. This lets an administrator restrict the number of active users, scale down power consumption and keep control simple.
2. Isolation :- Virtualization software provides self-contained virtual machines. These VMs give guest users (not just individuals, but also instances of applications, operating systems, and devices) an isolated virtual environment. This isolated environment not only protects sensitive information but also allows guest users to stay connected.
3. Availability :- Virtualization software provides a number of features that are not available on physical servers. These features help increase uptime, availability and fault tolerance, among other things, and help users avoid the downtime that undermines user efficiency and productivity and creates security threats and safety hazards.
4. Aggregation :- Since virtualization allows several devices to share the resources of a single machine, it can also be deployed the other way around: to join multiple devices into a single powerful host. Aggregation requires cluster management software in order to connect a homogeneous group of computers or servers into a unified resource centre.
5. Authenticity and Security :- Virtualization platforms ensure continuous uptime by automatically balancing the load when an excessive number of servers run across multiple host machines, in order to prevent service interruptions.

❖ Implementation Levels of Virtualization

A traditional computer runs with a host operating system specially tailored to its hardware architecture. After virtualization, different user applications managed by their own operating systems (guest OSes) can run on the same hardware, independent of the host OS. This is often done by adding additional software, called a virtualization layer, known as the hypervisor or virtual machine monitor (VMM). The VMs sit above this layer, where applications run with their own guest OS over virtualized CPU, memory, and I/O resources. The main function of the virtualization software layer is to virtualize the physical hardware of the host machine into virtual resources to be used exclusively by the VMs.
❑ The Five Levels of Implementing Virtualization

✔ Instruction Set Architecture (ISA) Level :- ISA virtualization works through ISA emulation. It is used to run legacy code that was written for a different hardware configuration; such code runs on a virtual machine that emulates the ISA. With this, binary code that originally needed additional layers to run becomes capable of running on x86 machines, and it can also be tweaked to run on x64 machines. With ISA, it is possible to make the virtual machine hardware-agnostic.
✔ Hardware Abstraction Level (HAL) :- As the name suggests, this level performs virtualization at the hardware level. It uses a bare-metal hypervisor for its functioning. This level helps form the virtual machine and manages the hardware through virtualization. It enables the virtualization of each hardware component, such as I/O devices, processors and memory. This way, multiple users can use the same hardware with numerous instances of virtualization at the same time.
✔ Operating System Level :- At the operating system level, the virtualization model creates an abstract layer between the applications and the OS. It is like an isolated container on the physical server and operating system that utilizes the hardware and software. Each of these containers functions like a server. This level of virtualization is used when there are several users and no one wants to share the hardware. Every user gets a dedicated virtual environment with a dedicated virtual hardware resource, so there is no question of any conflict.
✔ Library Level :- OS system calls are lengthy and cumbersome, which is why applications opt for APIs from user-level libraries. Most of the APIs provided by systems are rather well documented; hence, library-level virtualization is preferred in such scenarios. Library interface virtualization is made possible by API hooks, which control the communication link between the system and the applications.
✔ Application Level :- Application-level virtualization is used when there is a desire to virtualize only one application; it is the last of the implementation levels of virtualization in cloud computing. One does not need to virtualize the entire environment of the platform. This is generally used when you run virtual machines that use high-level languages. The application sits above the virtualization layer, which in turn sits above the host platform.

❖ Virtualization Structures

In general, there are three typical classes of VM architecture, distinguished by where the virtualization layer sits in the machine stack. Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating system. In this case, the virtualization layer is responsible for converting portions of the real hardware into virtual hardware.

Therefore, different operating systems, such as Linux and Windows, can run on the same physical machine simultaneously. Depending on the position of the virtualization layer, there are several classes of VM architecture, namely the hypervisor architecture, para-virtualization, and host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor); both terms refer to the same virtualization layer.

❑ Hypervisor :- The hypervisor supports hardware-level virtualization on bare-metal devices like the CPU, memory, disk and network interfaces. The hypervisor software sits directly between the physical hardware and the OS; this virtualization layer is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending on its functionality, a hypervisor can assume a micro-kernel architecture, like Microsoft Hyper-V, or a monolithic hypervisor architecture, like VMware ESX for server virtualization.
❑ Binary Translation with Full Virtualization :- Depending on the implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization. Full virtualization does not need to modify the host OS; it relies on binary translation to trap and virtualize the execution of certain sensitive, non-virtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used, and a virtualization software layer is built between the host OS and the guest OS.
❑ Para-Virtualization with Compiler Support :- Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs that require substantial OS modifications in user applications. Performance degradation is a critical issue for a virtualized system: no one wants to use a VM if it is much slower than a physical machine. The virtualization layer can be inserted at different positions in the machine software stack; para-virtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel.

❖ Xen Architecture

Xen is an open-source hypervisor program developed at Cambridge University. Xen is a microkernel hypervisor, which separates policy from mechanism: the Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0.

Xen just provides a mechanism by which a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is kept rather small. Xen provides a virtual environment located between the hardware and the OS. The core components of a Xen system are the hypervisor, kernel, and applications.

❖ Virtualization Tools and Mechanisms

✔ Qemu :- This virtualization tool is used to perform virtualization on OSes such as Linux and Windows. It is a renowned open-source emulator that offers fast emulation with the help of dynamic translation, and it has several useful commands for managing virtual machines (VMs). Qemu is a major open-source tool supporting various hardware architectures.
✔ OpenVZ :- This is also an open-source virtualization tool, built on the control-group concept. OpenVZ provides container-based virtualization (CBV) for the Linux platform. It allows several distributed execution environments, called Virtual Environments (VEs) or containers, to share a single operating system kernel. It provides superior performance and scalability compared with other virtualization tools.
✔ Docker :- Docker is open source. It relies on containers to automatically distribute Linux applications. All the necessities, such as code, runtime, system tools, and system libraries, are included in a Docker container. Docker used the Linux Containers (LXC) library until version 0.9.
✔ Xen :- This is the most common open-source virtualization tool supporting both full virtualization (FV) and para-virtualization (PV). Xen is an extremely well-known virtualization solution, initially developed at Cambridge University. The Xen hypervisor is the layer that resides directly on the hardware, beneath any OS. It is responsible for CPU scheduling and memory partitioning across the different VMs.
✔ Vagrant :- Vagrant is an open-source virtualization tool developed by HashiCorp and written in Ruby, but it can be used in projects written in other programming languages such as PHP, Python, Java, C#, and JavaScript. It is a command-line tool that provides a framework and configuration format for creating, managing and distributing virtualized development environments. These environments can live on your computer or in the cloud, and are portable between Linux, Mac OS X, and Windows.

✔ Host-Based Virtualization :- A host-based virtual machine is an instance of a desktop operating system that runs on a centralized server. Access and control are provided to the user by a client device connected over a network. Multiple host-based virtual machines can run on a single server.

With host-based virtual machines, data is contained on the server, server resources can be allocated to users as needed, users can work from a variety of clients in different locations, and all of the virtual machines can be managed centrally. However, the client device must always be connected to the server in order to access the virtual machine, and when a single server is compromised, many users can be affected.

The host-based approach appeals to many host machine configurations. Compared to the hypervisor/VMM architecture, however, the performance of the host-based architecture may be low: when an application requests hardware access, it involves four layers of mapping, which downgrades performance significantly. Though the host-based architecture has flexibility, the performance is too low to be useful in practice.

✔ Para-virtualization :- Para-virtualization is the category of CPU virtualization that uses hypercalls for operations, handling instructions at compile time. In para-virtualization, the guest OS is not completely isolated; it is partially isolated from the virtualization layer and hardware by the virtual machine. VMware and Xen are some examples of para-virtualization. The cost of maintaining para-virtualized OSes is high, because they may require deep OS kernel modifications.
✔ Full Virtualization :- Full virtualization was introduced by IBM in 1966. It was the first software solution for server virtualization and uses binary translation and direct-approach techniques. In full virtualization, the guest OS is completely isolated from the virtualization layer and hardware by the virtual machine. Microsoft and Parallels systems are examples of full virtualization.

❖ Difference between Full Virtualization and Paravirtualization

The differences between Full Virtualization and Paravirtualization are as follows:

1. In full virtualization, virtual machines permit the execution of instructions with an
unmodified guest OS running in an entirely isolated way. In paravirtualization, a
virtual machine does not implement full isolation of the OS, but instead provides a
different API that is utilized once the OS is modified.
2. Full virtualization is less secure. Paravirtualization is more secure than full
virtualization.
3. Full virtualization uses binary translation and a direct approach as techniques for
operations. Paravirtualization uses hypercalls at compile time for operations.
4. Full virtualization is slower in operation than paravirtualization. Paravirtualization
is faster in operation than full virtualization.
5. Full virtualization is more portable and compatible. Paravirtualization is less
portable and compatible.
6. Examples of full virtualization are Microsoft and Parallels systems. Examples of
paravirtualization are Microsoft Hyper-V, Citrix Xen, etc.
7. Full virtualization supports all guest operating systems without modification. In
paravirtualization, the guest operating system has to be modified, and only a few
operating systems support it.
8. In full virtualization, the guest operating system issues hardware calls. In
paravirtualization, the guest operating system communicates directly with the
hypervisor using drivers.
9. Full virtualization is less streamlined compared to paravirtualization.
Paravirtualization is more streamlined.
10. Full virtualization provides the best isolation. Paravirtualization provides less
isolation compared to full virtualization.
❖ CPU Virtualization
CPU virtualization involves a single CPU acting as if it were two separate CPUs.
In effect, this is like running two separate computers on a single physical machine.
Perhaps the most common reason for doing this is to run two different operating
systems on one machine. With CPU virtualization, all the virtual machines act as
physical machines and share the host's resources, as if each had its own virtual
processors. Physical resources are shared out to each virtual machine as hosting
requests arrive. Finally, each virtual machine gets a share of the single CPU
allocated to it, the single processor acting as a dual processor.
❖ Why CPU Virtualization is Important?

CPU virtualization is important in many ways, and its usefulness is widespread in the
cloud computing industry. The advantages of using CPU virtualization are stated below:
● Using CPU virtualization, overall performance and efficiency are improved to a
great extent, because the virtual machines work on a single CPU and share its
resources as if multiple processors were running at the same time. This saves cost.
● Because CPU virtualization uses virtual machines to run separate operating
systems on a single shared system, it also helps maintain security. The machines are
kept separate from each other, so a cyber-attack or software glitch on one machine
cannot damage or affect another.
● It works purely with virtual machines and hardware resources. A single server
holds all the computing resources, and processing is done based on the CPU's
instructions, which are shared among all the systems involved. Since less hardware is
required and fewer physical machines are used, costs are much lower and time is saved.
● It provides reliable backup of computing resources, since data is stored on and
shared from a single system. It offers reliability to users who depend on a single
system and provides greater data-retrieval options for users.
● It also offers fast deployment options, so that services reach clients without any
hassle while maintaining atomicity. Virtualization ensures the desired data reaches the
desired clients through the medium, checks whether any constraints exist, and removes
them quickly.
❖ Hardware-Assisted CPU Virtualization

This technique attempts to simplify virtualization, because full virtualization and
paravirtualization are complicated. Intel and AMD add an additional privilege mode
level (some people call it Ring -1) to x86 processors. Operating systems can therefore
still run at Ring 0 while the hypervisor runs at Ring -1. All the privileged and sensitive
instructions are trapped by the hypervisor automatically. This technique removes the
difficulty of implementing the binary translation of full virtualization. It also lets the
operating system run in VMs without modification.
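
As a small practical check (a sketch assuming a Linux host, where /proc/cpuinfo lists the "vmx" flag for Intel VT-x and "svm" for AMD-V), hardware-assisted virtualization support can be detected like this:

# Detect hardware-assisted virtualization support on a Linux host.
# Assumption: /proc/cpuinfo exposes "vmx" (Intel VT-x) or "svm" (AMD-V)
# among the CPU flags when hardware assistance is present.
def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx", "svm"} & flags   # which assist flags are present
    return set()

found = hardware_virtualization_flags()
print("hardware-assisted virtualization:", ", ".join(found) if found else "not found")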
❖ Memory Virtualization

Virtual memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of main memory. The addresses a program may use
to reference memory are distinguished from the addresses the memory system uses to
identify physical storage sites, and program-generated addresses are translated
automatically to the corresponding machine addresses. The size of virtual storage is
limited by the addressing scheme of the computer system and by the amount of
secondary memory available, not by the actual number of main storage locations. It is
a technique implemented using both hardware and software. It maps the memory
addresses used by a program, called virtual addresses, to physical addresses in
computer memory.
This means a two-stage mapping process must be maintained by the guest OS and the
VMM, respectively: virtual memory to physical memory, and physical memory to
machine memory. Furthermore, MMU virtualization should be supported,
transparently to the guest OS. The guest OS continues to control the mapping of
virtual addresses to the physical memory addresses of its VMs, but the guest OS
cannot directly access the actual machine memory. The VMM is responsible for
mapping the guest physical memory to the actual machine memory. The figure shows
this two-level memory mapping procedure.
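
To make the two-stage mapping concrete, here is a toy sketch, not how a real MMU or VMM is implemented: two dictionaries stand in for the guest page table and the VMM's physical-to-machine map, and all page numbers are invented for illustration.

# Toy model of two-stage memory mapping in a virtualized system.
# Stage 1 (guest OS):  guest virtual page  -> guest "physical" page
# Stage 2 (VMM):       guest physical page -> actual machine page
guest_page_table = {0x1: 0xA, 0x2: 0xB}      # maintained by the guest OS
vmm_p2m_table    = {0xA: 0x7F, 0xB: 0x3C}    # maintained by the VMM

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]   # stage 1
    return vmm_p2m_table[guest_physical]                    # stage 2

print(hex(translate(0x1)))   # 0x7f: the guest never sees this machine address

The guest OS only ever manipulates the first table; the second table, and therefore the actual machine memory, remains under the exclusive control of the VMM.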
❖ I/O Device Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices
and the shared physical hardware. At the time of this writing, there are three ways to
implement I/O virtualization: full device emulation, para-virtualization, and direct I/O.
Full device emulation is the first approach for I/O virtualization. Generally, this
approach emulates well-known, real-world devices.
All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA, are replicated in software. This software is
located in the VMM and acts as a virtual device. The I/O access requests of the guest
OS are trapped in the VMM, which interacts with the I/O devices.
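
The following toy sketch illustrates this trap-and-emulate idea; the port number, register layout, and class names are invented for illustration and do not describe any particular VMM.

# Toy sketch of full device emulation: the VMM intercepts ("traps") the
# guest's I/O accesses and services them entirely in software.
class EmulatedSerialPort:
    """A software stand-in for a well-known real-world device."""
    def __init__(self):
        self.output = []

    def write(self, register, value):
        if register == 0x0:                  # treat 0x0 as the data register
            self.output.append(chr(value))

class ToyVMM:
    def __init__(self):
        self.devices = {0x3F8: EmulatedSerialPort()}   # port -> virtual device

    def trap_io_write(self, port, register, value):
        # The guest's I/O instruction lands here instead of on real hardware.
        self.devices[port].write(register, value)

vmm = ToyVMM()
for ch in "hi":
    vmm.trap_io_write(0x3F8, 0x0, ord(ch))   # guest writes are emulated
print("".join(vmm.devices[0x3F8].output))    # -> hi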
Some of the major advantages of I/O virtualization are listed below:

✔ Flexibility: Since I/O virtualization abstracts the upper-layer protocols from the
underlying physical connections, it offers greater flexibility, better utilization, and
faster provisioning in comparison with conventional NIC and HBA cards.
✔ Cost minimization: I/O virtualization involves using fewer cables, cards, and
switch ports without compromising network I/O performance.
✔ Increased density: I/O virtualization increases the practical density of I/O by
allowing more connections to exist in a given space.
✔ Minimizing cables: I/O virtualization helps reduce the multiple cables needed to
connect servers to storage and the network.
❖ Virtualization Support and Disaster Recovery
Disaster recovery is an organization’s method of regaining access and functionality to
its IT infrastructure after events like a natural disaster, cyber-attack, or even business
disruptions related to the COVID-19 pandemic. A variety of disaster recovery (DR)
methods can be part of a disaster recovery plan. Disaster recovery (DR) relies upon the
replication of data and computer processing in an off-premises location not affected by
the disaster. When servers go down because of a natural disaster, equipment failure or
cyber-attack, a business needs to recover lost data from a second location where the
data is backed up.
Today, most industries are moving toward virtualization, and disaster recovery is no
exception. Virtualization brings great flexibility to disaster recovery and dramatically
reduces costs compared with traditional physical measures. Virtual disaster recovery
refers to the use of virtualized workloads for disaster recovery planning and failover.
To achieve this, organizations need to regularly replicate workloads to an offsite
virtual disaster recovery site. Virtual disaster recovery is a combination of storage and
server virtualization that helps create a more effective means of disaster recovery and
backup. It is now popular in many enterprise systems because of the many ways it
helps to mitigate risk.
The general idea of virtual disaster recovery is that combining server and storage
virtualization allows companies to store backups in places that are not tied to their own
physical location. This protects data and systems from fires, floods and other types of
natural disasters, as well as other emergencies. Many vendor systems feature redundant
design with availability zones, so that if data in one zone is compromised, another zone
can keep backups alive.
❑ Recover Data from Any Hardware :- With a secure virtualized environment, your
team is free to use a virtual platform that can be accessed on any hardware with
security protocols. This eliminates the issue of redundant hardware as you can install
the virtual desktop on any existing device.
❑ Backup and restore full images :- When your system is completely virtualized,
each of your server's files is encapsulated in a single image file. An image is
basically a single file that contains all of a server's files, including system files,
programs, and data, in one location. These images make managing your systems
easy: backups become as simple as duplicating the image file, and restores are
reduced to mounting the image on a new server.
❑ Run other workloads on standby hardware :- A key benefit of virtualization is
reducing the hardware needed by using your existing hardware more efficiently.
This frees up systems that can now run other tasks or serve as hardware redundancy.
Combine this with features like VMware's High Availability, which restarts a virtual
machine on a different server when the original hardware fails; for a more robust
disaster recovery plan, you can use Fault Tolerance, which keeps both servers in
sync with each other, leading to zero downtime if a server should fail.
❑ Easily copy system data to recovery site :- Having an offsite backup is a huge
advantage if something happens to your location, whether a natural disaster, a power
outage, or a burst water pipe; it is reassuring to have all your information at an
offsite location. Virtualization makes this easy: each virtual machine's image can be
copied to the offsite location, and with an easily customizable automation process
(sketched below), it adds no extra strain or man-hours for the IT department.
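
As a minimal sketch of such automation (the directory paths and the .img extension are hypothetical assumptions; real deployments typically use vendor replication tools or delta-based copies), a scheduled script might replicate each image to a mounted offsite share:

# Sketch: replicate VM image files to an offsite location.
# Paths and the .img extension are illustrative assumptions.
import shutil
from pathlib import Path

IMAGE_DIR = Path("/var/lib/vm-images")       # hypothetical local image store
OFFSITE_DIR = Path("/mnt/offsite-replica")   # hypothetical offsite mount

def replicate_images():
    OFFSITE_DIR.mkdir(parents=True, exist_ok=True)
    for image in IMAGE_DIR.glob("*.img"):
        target = OFFSITE_DIR / image.name
        # Copy only new or updated images to avoid needless transfers.
        if not target.exists() or target.stat().st_mtime < image.stat().st_mtime:
            shutil.copy2(image, target)      # copy2 preserves timestamps
            print("replicated", image.name)

replicate_images()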
❖ Disaster Recovery in Cloud Computing
Cloud disaster recovery is a service that enables the backup and recovery of remote
machines on a cloud-based platform. Cloud disaster recovery is primarily an
infrastructure-as-a-service (IaaS) solution that backs up designated system data on a
remote offsite cloud server. It helps meet the recovery point objective (RPO) and
recovery time objective (RTO) in case of a disaster or system restore; for example, if
backups are replicated every four hours, the achievable RPO is at most four hours of
lost data.
Cloud disaster recovery (CDR) is simple to configure and maintain, as opposed to
conventional alternatives. Most cloud DR offerings also provision backup and
recovery for critical server machines that host enterprise-level applications like
MS-SQL, Oracle, etc.
❖ Cloud Disaster Recovery Methodologies

Cloud disaster recovery must provide scalability. It must protect specific
information, applications, and other assets while also accommodating additional
resources as required, and it must deliver sufficient performance even as other
customers worldwide use the same facilities. There are three basic DR strategies:
warm, cold, and hot.
❑ Warm disaster recovery :- Warm disaster recovery is a standby strategy in which
copies of data and systems are stored with a cloud DR vendor and regularly updated
with the services and information in the primary data center. However, the redundant
assets do not do any processing. When a disaster happens, the warm DR capability at
the DR vendor can be activated, which is usually as simple as starting a virtual
machine and rerouting domain names and traffic to the DR assets. Although recovery
times can be quite short, the protected workloads still experience some downtime.
❑ Cold disaster recovery :- Cold disaster recovery usually entails storing data or
virtual machine (VM) images. These resources are generally unusable unless
additional work is performed, such as retrieving the stored data or loading the image
into a virtual machine. Cold DR is typically the simplest method (often just storage)
and the cheapest, but it takes the longest to recover, leaving the organization with the
most downtime in the event of a disaster.
❑ Hot disaster recovery :- Hot disaster recovery is traditionally described as a
real-time parallel deployment of data and workloads running concurrently. Both the
primary and backup data centres execute the same tasks and data in sync, with each
site handling a fraction of the overall traffic. When a disaster happens, the remaining
site continues to handle the load without interruption, and users should be unaware of
the disturbance. Although there is no downtime with hot DR, it is the most complex
and expensive methodology.
❖ Advantages of Virtualization
• Minimize Servers - Virtualization minimizes the number of servers an organization
needs, letting it cut down on the heat build-up associated with a server-heavy
datacentre. The less physical “clutter” your datacentre has, the less money and
resources you need to funnel into heat dissipation.
• Reduce Hardware - When it comes to saving money, minimizing hardware is key.
With virtualization, organizations are able to reduce their hardware usage and, most
importantly, reduce maintenance, downtime, and electricity costs over time.
• Quick Redeployments - Virtualization makes redeploying a new server simple and
quick. Should a server die, virtual machine snapshots can come to the rescue within
minutes.
• Simpler Backups - Backups are far simpler with virtualization. Your virtual machine
can perform backups and take snapshots throughout the day, so you always have the
most current data available. Plus, you can move your VMs between servers, and they
can be redeployed quicker.
• Reduce Costs & Carbon Footprint - As you virtualize more of your datacentre,
you’re inevitably reducing your datacentre footprint and your carbon footprint as a
whole. On top of supporting the planet, reducing your datacentre footprint also cuts
down dramatically on hardware, power, and cooling costs.
• Better Testing - You’re better equipped to test and re-test in a virtualized
environment than a hardware-driven one. Because VMs keep snapshots, you can
revert to a previous one should you make an error during testing.
• Run Any Machine on Any Hardware - Virtualization provides an abstraction layer
between software and hardware. In other words, VMs are hardware-agnostic, so you
can run any machine on any hardware. As a result, you don’t have the tie-down
associated with vendor lock-in.
• Effective Disaster Recovery - When your datacentre relies on virtual instances,
disaster recovery is far less painful, and you end up facing much shorter, infrequent
downtime. You can use recent snapshots to get your VMs up and running, or you may
choose to move those machines elsewhere.
• Cloudify Your Datacentre - Virtualization can help you “cloudify” your datacentre.
A fully or mostly virtualized environment mimics that of the cloud, getting you set up
for the switch to cloud. In addition, you can choose to deploy your VMs in the cloud.
❖ Disadvantages of Virtualization

▪ Data Can Be at Risk – Working on virtual instances on shared resources means
that our data is hosted on third-party infrastructure, which leaves it in a vulnerable
position. A hacker could attack our data or attempt unauthorized access. Without a
security solution, our data is under constant threat.
▪ Learning New Infrastructure – As organizations shift from in-house servers to
the cloud, they need skilled staff who can work with the cloud easily. They must
either hire new IT staff with the relevant skills or train existing staff in those skills,
which increases the company's costs.
▪ High Initial Investment – While it is true that virtualization reduces companies'
running costs, it is also true that the cloud demands a high initial investment. It
offers numerous services that may not be required, and an inexperienced
organization setting itself up in the cloud may purchase unnecessary services it does
not even need.
▪ Security – Data is a crucial asset for every organization. Data security is often
questioned in a virtualized environment, since the servers are managed by
third-party providers. It is therefore important to choose the virtualization solution
wisely so that it provides adequate protection.
▪ Availability – The primary concern many have with virtualization is what will
happen to their work should their assets become unavailable. If an organization
cannot connect to its data for an extended period of time, it will struggle to compete
in its industry. And since availability is controlled by third-party providers, the
ability to stay connected is not in one's own control with virtualization.