Cloud computing Unit-2
REST, short for Representational State Transfer, is an architectural style designed to ensure interoperability between different computer systems on the Internet. It defines a set of constraints to be used when creating web services, and a REST API is a simple, flexible way of accessing such services.
REST is a convention for stateless client-server communication that is typically implemented using HTTP (other protocols are also technically possible). REST itself is not a protocol; it is a set of conventions that strive for simplicity and consistency in resource naming across different web-based applications and APIs. The architecture is built around client-server interactions that transfer representations of resources.
Web services that follow the REST architectural style are known as RESTful web services. They allow requesting systems to access and manipulate web resources using a uniform, predefined set of rules. Interaction in REST-based systems happens through the Internet's Hypertext Transfer Protocol (HTTP).
❖ RESTful
In a RESTful architecture, standard HTTP methods are used in combination with Uniform
Resource Identifiers (URIs) to communicate requests and responses between a client and a
server. Each URI describes a self-contained operation and contains all the information
needed to satisfy the request. A RESTful API is an interface that two computer systems use to
exchange information securely over the Internet. Most business applications have to
communicate with other internal and third-party applications to perform various tasks.
● A RESTful system consists of:
✔ a client, which requests resources, and
✔ a server, which holds the resources and responds to those requests.
A RESTful API breaks down a transaction to create a series of small modules. Each module
addresses an underlying part of the transaction. This modularity provides developers with a
lot of flexibility, but it can be challenging for developers to design their REST API from
scratch. Currently, several companies provide models for developers to use; the models
provided by Amazon S3, Cloud Data Management Interface (CDMI) and OpenStack Swift
are the most popular.
A RESTful API uses commands to obtain resources. The state of a resource at any given
timestamp is called a resource representation. A RESTful API uses the existing HTTP
methods defined by RFC 2616, such as GET, POST, PUT, and DELETE, and the data
exchanged in requests and responses can be encoded in media types such as:
● application/json
● application/xml
● application/x-wbe+xml
● application/x-www-form-urlencoded
● multipart/form-data
Client – this is the consumer that sends a request for a resource to the server.
Server – this is the producer that provides an API for accessing its data and
operations.
Resource – this is any content, e.g., a text file or an image, that the server returns to the
client as a response.
In this type of communication, the client requests resources from the server, which
sends responses back to the client.
A REST request comprises a URL, an HTTP method, request headers, and optionally a
request body. The server sends back a status code, response headers, and a response body
to the client in return. Here are the purposes of each of the elements of a REST request.
Endpoint – this comprises a Uniform Resource Identifier (URI) that locates the resource. A
Uniform Resource Locator (URL) is the most common type of URI and represents the
complete web address.
Headers – these are used to store metadata relevant to both the client and the server, such as
the name or IP address of the server, authentication credentials, API keys, and information
about the response format.
Body – this represents a piece of data sent along with the request to the server, typically
used to add or edit data on the server. A request sketch follows below.
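As a sketch of these elements in practice, here is a minimal REST request in Python; the endpoint, the user resource, and the API-key header are hypothetical placeholders, and the third-party requests library is assumed to be installed.

import requests  # third-party HTTP client, assumed available

# Hypothetical endpoint: the URI locates the "users" resource.
url = "https://api.example.com/api/users/42"

# Headers carry metadata such as the desired response format and an API key.
headers = {
    "Accept": "application/json",
    "X-API-Key": "my-secret-key",  # hypothetical authentication header
}

# GET carries no body; the server returns a status code, headers, and a body.
response = requests.get(url, headers=headers)
print(response.status_code)                  # e.g. 200 on success
print(response.headers.get("Content-Type"))  # response metadata
print(response.json())                       # parse the JSON response body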
There are six architectural constraints that make a web service RESTful: Uniform Interface,
Cacheable, Client-Server, Stateless, Layered System, and the optional Code on Demand. The
first three are described below.
❑ Uniform Interface – It is the key constraint that differentiates a REST API from a
non-REST API. It suggests that there should be a uniform way of interacting with a
given server irrespective of device or type of application (website, mobile app). The
guiding principles of the Uniform Interface are listed below, after the constraints.
❑ Cacheable – Caching is a proven technique for enhancing the scalability and
performance of an application. Every response should indicate whether it is cacheable
and for how long it may be cached on the client side. The client can then serve
subsequent requests from its cache without contacting the server again. Well-managed
caching partially or completely eliminates some client-server interactions, further
improving availability and performance.
❑ Client-Server – In a RESTful architecture, the server and the client are clearly
isolated from each other. While the server doesn't know the user interface, the client
doesn't know the application's business logic or how the application persists data. You
can change the server and the client independently of one another.
✔ REST is based on resources (nouns) rather than actions (verbs). This means that the
URI of a REST API should end with a noun: /api/users is a good example, while
/api?type=users is a bad example of a REST API URI.
✔ HTTP verbs are used to identify the action. Some of the HTTP verbs are GET, PUT,
POST, DELETE, and PATCH.
✔ A web application should be organized into resources (such as users) and should then
use HTTP verbs such as GET, PUT, POST, and DELETE to manipulate those resources.
As a developer, it should then be clear what needs to be done just by looking at the
endpoint and the HTTP method used, as the sketch below illustrates.
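Here is a minimal sketch of this resource-plus-verb organization using Flask (a third-party Python web framework); the /api/users resource and its in-memory store are hypothetical examples, not part of any real service.

from flask import Flask, jsonify, request

app = Flask(__name__)
users = {1: {"name": "Asha"}}  # hypothetical in-memory resource store

# The endpoint is a noun (/api/users); the HTTP verb identifies the action.
@app.route("/api/users", methods=["GET"])
def list_users():
    return jsonify(users)

@app.route("/api/users", methods=["POST"])
def create_user():
    new_id = max(users) + 1
    users[new_id] = request.get_json()  # request body carries the new data
    return jsonify({"id": new_id}), 201

@app.route("/api/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id):
    users.pop(user_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()  # serves on http://127.0.0.1:5000 by default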
In today's world, there are a huge number of applications built in different programming
languages. For example, there could be a web application designed in Java, another in
.NET, and another in PHP.
Exchanging data between applications is crucial in today's networked world, but data
exchange between such heterogeneous applications is complex, and so is the code required
to accomplish it. One method used to combat this complexity is to use XML (Extensible
Markup Language) as the intermediate language for exchanging data between applications,
as in the sketch below.
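To make this concrete, here is a small sketch of data exchange via XML using Python's standard-library parser; the order document shown is a made-up example that a Java, .NET, or PHP application could equally well produce or consume.

import xml.etree.ElementTree as ET

# A hypothetical XML document serving as the neutral intermediate format
# between applications written in different languages.
xml_data = """<order>
    <id>1001</id>
    <customer>Ravi</customer>
    <total currency="INR">450.00</total>
</order>"""

root = ET.fromstring(xml_data)                # parse the document
print(root.find("id").text)                   # -> 1001
print(root.find("customer").text)             # -> Ravi
print(root.find("total").attrib["currency"])  # -> INR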
Simple Object Access Protocol (SOAP) is a protocol for implementing Web services. SOAP
features guidelines that allow communication via the Internet between two programs, even if
they run on different platforms, use different technologies and are written in different
programming languages.
❑ SOAP enables client applications to easily connect to remote services and invoke
remote methods.
❖ SOAP messages are XML documents that are composed of the following three
basic building blocks:
✔ The SOAP Envelope encapsulates all the data in a message and identifies the XML
document as a SOAP message.
✔ The Header element contains additional information about the SOAP message. This
information could be authentication credentials, for example, which are used by the
calling application.
✔ The Body element includes the details of the actual message that need to be sent from
the web service to the calling application. This data includes call and response
information.
✔ The fault message is an optional fourth building block. If a SOAP fault is generated, it
is returned as an HTTP 500 error. Fault messages contain a fault code, string, actor
and detail.
SOAP requests are easy to generate, and responses are easy to process. First, a client
generates a request for a service using an XML document.
Both SOAP requests and responses are transported using Hypertext Transfer Protocol
Secure (HTTPS) or a similar protocol such as HTTP, as in the sketch below.
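As a hedged sketch, a SOAP request can be sent as an XML envelope over HTTP POST; the endpoint URL, namespace, and GetPrice operation here are hypothetical placeholders, and the third-party requests library is assumed.

import requests  # third-party HTTP client, assumed available

# A minimal SOAP 1.1 envelope; the service, namespace, and operation
# are made-up placeholders for illustration only.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetPrice xmlns="http://example.com/prices">
      <Item>Apples</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/soap-endpoint",             # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/GetPrice", # hypothetical action
    },
)
print(response.status_code)  # a SOAP Fault would arrive as HTTP 500
print(response.text)         # the XML response body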
❖ Advantages of SOAP
✔ Platform and language independence :- SOAP allows two programs to communicate
even if they run on different platforms, use different technologies, and are written in
different programming languages.
✔ Works over standard Internet protocols :- because SOAP messages are typically
carried over HTTP or HTTPS, they can pass through firewalls and proxies.
✔ Standardized error handling :- SOAP fault messages report errors in a structured,
predictable way.
❖ Disadvantages of SOAP
❑ No provision for passing data by reference :- This can cause synchronization issues
if multiple copies of the same object are passed simultaneously.
❑ Speed :- The data structure of SOAP is based on XML. XML is largely human-
readable, which makes it fairly easy to understand a SOAP message. However, it
also makes the messages relatively large compared to the Common Object Request
Broker Architecture (CORBA) and its remote procedure call (RPC) protocol, which
can accommodate binary data. Because of this, CORBA and RPC are faster.
❑ Not as flexible as other methods :- Although SOAP is flexible, newer methods, such
as RESTful architecture, use XML, JavaScript Object Notation, YAML or any parser
needed, which makes them more flexible than SOAP.
❑ WSDL dependent :- SOAP uses Web Services Description Language (WSDL) and
doesn't have any other mechanism to discover the service.
A web service is a software module that is intended to carry out a specific set of
functions. Web services in cloud computing can be found and invoked over the
network. The web service would be able to deliver functionality to the client that
invoked the web service. A web service is a set of open protocols and standards that
allow data to be exchanged between different applications or systems. Web services
can be used by software programs written in a variety of programming languages and
running on a variety of platforms to exchange data via computer networks such as the
Internet in a similar way to inter-process communication on a single computer.
❑ The major difference between web service technology and other technologies such as
J2EE, CORBA, and CGI scripting is its standardization, since it is based on
standardized XML, providing a language-neutral representation of data.
❑ Most web services transmit messages over HTTP, making them available as Internet-
scale applications. In addition, unlike CORBA and J2EE, using HTTP as the
tunneling protocol enables web services to communicate remotely through firewalls
and proxies.
The Enterprise Service Bus (ESB) is a software architecture that connects all the
services together over a bus-like infrastructure. It acts as the communication centre in
the SOA, linking multiple systems, applications, and data sources and connecting
multiple systems without disruption.
It makes it easy to change components or add additional components to an application.
It also makes for a convenient place to enforce security and compliance requirements,
log normal or exception conditions and even handle transaction performance
monitoring.
❖ Publish-Subscribe Model
In a publish/subscribe system, a publisher does not need to know who uses the
information (publication) that it provides, and a subscriber does not need to know who
provides the information that it receives as the result of a subscription. Publications
are sent from publishers to the pub/sub broker, subscriptions are sent from subscribers
to the pub/sub broker, and the pub/sub broker forwards the publications to the
subscribers.
The pub/sub broker ensures that messages are delivered to the correct subscribers.
Note that there is a many-to-many relationship between publishers and subscribers.
❖ Working of Publish-Subscribe Model
The sender uses an input messaging channel. The sender packages events into
messages, using a known message format, and sends these messages via the input
channel; the sender in this pattern is also called the publisher.
There is one output messaging channel per consumer, and the consumers are known as
subscribers. A mechanism copies each message from the input channel to the output
channels of all subscribers interested in that message. This operation is typically
handled by an intermediary such as a message broker or event bus, as in the sketch below.
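The following minimal in-memory sketch (with a hypothetical "orders" topic and handlers, not any real broker product) shows a broker copying each published message to every interested subscriber.

from collections import defaultdict

# A toy in-memory publish/subscribe broker.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a subscriber callback for a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Copy the message to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
# Two subscribers to one topic; any number of publishers could also publish
# to "orders", giving the many-to-many relationship described above.
broker.subscribe("orders", lambda m: print("billing got:", m))
broker.subscribe("orders", lambda m: print("shipping got:", m))
broker.publish("orders", {"id": 1, "item": "book"})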
SOAP (Simple Object Access Protocol) :- SOAP is a transport-independent messaging
protocol built on sending XML data in the form of SOAP messages. An XML document
is attached to each message; only the structure of the XML document, not its content,
follows a pattern. A convenient feature of web services and SOAP is that everything is
sent through HTTP, the standard web protocol.
The client sends a sequence of web service calls as requests to the server that hosts the
actual web service. These requests are made using Remote Procedure Calls (RPC),
that is, calls to methods hosted by the relevant web service.
● Example: Flipkart offers a web service that displays prices for items offered on
Flipkart.com. The front end or presentation layer can be written in .NET or Java, but
either programming language can communicate with the web service.
● The data that is exchanged between the client and the server is XML, which is the
most important part of a web service design. XML (Extensible Markup Language) is a
simple intermediate language that is understood by various programming languages.
❖ They are XML-based – Web services use XML to represent data at the
representation and data transportation layers. Using XML eliminates any networking,
operating system, or platform dependency, since XML is the common language
understood by all.
❖ Loosely Coupled – Loosely coupled means that the client and the web service are not
bound to each other, which means that even if the web service changes over time, it
should not change the way the client calls the web service. Adopting a loosely
coupled architecture tends to make software systems more manageable and allows
simpler integration between different systems.
❖ Ability to support Remote Procedure Calls (RPCs) – Web services enable clients to
invoke procedures, functions, and methods on remote objects using an XML-based
protocol. Remote procedures expose input and output parameters that a web service
must support; see the XML-RPC sketch after this list.
❖ Supports Document Exchange – One of the key benefits of XML is its generic way
of representing not only data but also complex documents. These documents can be as
simple as representing a current address, or they can be as complex as representing an
entire book.
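As a sketch of the RPC idea referenced above, here is a client invoking a remote method through XML-RPC, a lightweight XML-based RPC protocol supported by Python's standard library; the server URL and get_price method are hypothetical, and this illustrates the RPC style rather than SOAP itself.

import xmlrpc.client  # standard-library XML-RPC client

# Hypothetical server; the call below is marshalled into an XML request,
# sent over HTTP, and the XML response is unmarshalled into a Python value.
proxy = xmlrpc.client.ServerProxy("http://example.com/RPC2")

try:
    price = proxy.get_price("Apples")  # hypothetical remote method
    print("Remote method returned:", price)
except xmlrpc.client.Fault as fault:
    print("Remote fault:", fault.faultString)  # structured remote error
except OSError as err:
    print("Transport error:", err)             # e.g. connection refused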
1) Create an architectural model that defines the goals of applications and the methods
that will help achieve those goals.
❑ Service provider :- The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract
that specifies the nature of the service, how to use it, the requirements for the service,
and the fees charged.
❑ Service consumer :- The service consumer can locate the service metadata in the
registry and develop the required client components to bind to and use the service.
Service Oriented Architecture (SOA) is a pattern used in computer systems to design
software in which an application provides services to other applications. This
communication is done with the help of a protocol and happens over a network.
❑ Business logic :- The business logic that is encapsulated by a service is part of its
implementation. It is made available through service interfaces.
❑ Data :- A service can also include data. In particular, it is the purpose of a data-
centric service.
✔ SOA is location-transparent
• Consumer Interface Layer: a GUI-based application through which end users access
the applications.
• Business Process Layer: a service layer that represents the business use cases and
business processes.
• Services Layer: many services work together to create a whole enterprise in the
service inventory.
• Service Component Layer: the components used to build the services, such as
technological interfaces and functional and technical libraries.
• Operational Systems Layer: this layer contains the data models, technical patterns,
data repositories, etc.
▪ Integration layer :- This layer enables the integration of services through the
introduction of a reliable set of capabilities, such as intelligent routing, protocol
mediation, and other transformation mechanisms
▪ Quality of service :- This layer provides the capabilities required to monitor, manage,
and maintain QoS such as security, performance, and availability. This is a
background process through sense-and-respond mechanisms and tools that monitor
the health of SOA applications
▪ Informational :- this layer is concerned with providing business-related
information.
• Services - The services are the logical entities defined by one or more published
interfaces.
• Service broker - It is a service provider that passes service requests to one or more
additional service providers.
❖ Advantages of Service Oriented Architecture (SOA)
✔ Service reusability :- In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
✔ Easy maintenance :- As services are independent of each other they can be updated
and modified easily without affecting other services.
✔ Platform independent :- SOA allows making a complex application by combining
services picked from different sources, independent of the platform.
✔ Availability :- SOA facilities are easily available to anyone on request.
✔ Reliability :- SOA applications are more reliable because it is easy to debug small
services rather than huge codes.
✔ Scalability :- Services can run on different servers within an environment; this
increases scalability.
1. SOA infrastructure is used by many armies and air forces to deploy situational
awareness systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games and they use inbuilt functions to run. For example,
an app might need GPS so it uses the inbuilt GPS functions of the device. This is SOA
in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and
content.
❖ Difference in SOA, SOAP, REST
SOA (Service Oriented Architecture) is an architectural pattern in which an application
provides services to other applications over a network, as described above.
SOAP (Simple Object Access Protocol) is a protocol (set of rules) that allows web
services to communicate with one another. It defines endpoints, message formats, and
transports (such as HTTP and XML).
REST (Representational State Transfer) is a set of architectural principles by which
you can design web services that focus on a system's resources, including how
resource states are addressed and transferred over HTTP by a wide range of clients
written in different languages.
❖ Virtualization
❖ Virtualization Terminology
❑ Host Machine – The physical machine that hosts one or more virtual machines. To
accomplish this, virtualization software such as a hypervisor is installed on the Host
Machine.
❑ Hypervisor – The software or firmware that manages virtual machines, allowing
them to interact directly with the underlying hardware. The hypervisor is a hardware
virtualization technique that allows multiple guest operating systems (OS) to run on a
single host system at the same time. The hypervisor is an operating platform which
manages and executes the Guest VM operating systems.
❑ VM Cluster – A collection of VM hosts that act as a single large host. If one of the
hosts is removed, all of the VMs that the host was running seamlessly continue
running on the other hosts. A true VM cluster requires shared storage such as a
SAN/NAS device.
❑ P2V (Physical to Virtual) – Refers to the process of migrating operating systems,
applications and data from the hard disk of a physical server to a virtual machine.
❑ VM Snapshot – Preserves the memory state and data of a virtual machine at a given
point in time, allowing the VM to be recovered to that point. Snapshots are NOT a
way to do backups, but part of the technology used to create snapshots is used by
backup software to do backups correctly.
❑ VM Checkpoint – Prior to making changes to your VM (software installation, etc) it's
wise to create a VM checkpoint, allowing you to return to a previous state. While it is
not a full backup of a VM, a checkpoint captures data and memory state of the virtual
machine.
❑ VM Replication – The ability to replicate virtual machines at the server virtualization
level using replication software. Provides redundancy for quick VM recovery and
reduced downtime in the event of failure or disaster.
❑ VM Backup – Virtual machine backup can be performed multiple different ways
depending on the backup software and the type of hypervisor that the VM guest
resides on. The backup software guards against data loss and can be used to recover
files in the event of hardware failure or other disaster.
❑ VM Single File Restore – The ability to restore a single file from a virtual machine
rather than restoring the entire machine. Without this feature, VM backup software
would need to be installed on both the host and the guests in order to retrieve a single
file from a virtual machine.
❑ Virtual Machine (Guest VM) – A self-contained software emulation of a machine,
which does not physically exist but shares the resources of an underlying physical
machine. It runs its own operating system, applications, processes, etc.
First, IT professionals create multiple virtual machines (VMs) modeled after a single
physical machine. VMs, also referred to as virtual instances, virtual computers, or
virtual versions, are emulations of a physical computer system that rely on a software
layer instead of dedicated local hardware.
Once the VMs exist, companies can use them to run multiple operating systems on a
single server or host. The hypervisor software supports the VMs by allocating computing
resources to each one as needed, which reduces the amount of computing power
companies need to perform multiple tasks simultaneously.
❖ Types of Virtualization
Virtualization plays a very important role in cloud computing technology. Normally, in
cloud computing, users share the data present in the cloud, such as applications; with
the help of virtualization, users also share the underlying infrastructure.
The main usage of virtualization technology is to provide applications with standard
versions for cloud users. When the next version of an application is released, the cloud
provider has to supply the latest version to its users, and virtualization makes this
practical, since upgrading every physical machine separately would be far more expensive.
A traditional computer runs with a host operating system specially tailored for its
hardware architecture. After virtualization, different user applications managed by
their own operating systems (guest OS) can run on the same hardware, independent of
the host OS. This is often done by adding a layer of software called a virtualization
layer, known as the hypervisor or virtual machine monitor (VMM). In the typical
architecture diagram, the VMs are shown in the upper boxes, where applications run
with their own guest OS over the virtualized CPU, memory, and I/O resources. The
main function of this software layer is to virtualize the physical hardware of the host
machine into virtual resources to be used exclusively by the VMs.
❑ The Five Levels of Implementing Virtualization
✔ Instruction Set Architecture Level (ISA) :- ISA virtualization can work through ISA
emulation. This is used to run many legacy codes that were written for a different
configuration of hardware. These codes run on any virtual machine using the ISA.
With this, a binary code that originally needed some additional layers to run is now
capable of running on the x86 machines. It can also be tweaked to run on the x64
machine. With ISA, it is possible to make the virtual machine hardware agnostic.
✔ Hardware Abstraction Level (HAL) :- As the name suggests, this level helps
perform virtualization at the hardware level. It uses a bare hypervisor for its
functioning. This level helps form the virtual machine and manages the hardware
through virtualization. It enables virtualization of each hardware component such as
I/O devices, processors, memory, etc. This way multiple users can use the same
hardware with numerous instances of virtualization at the same time.
✔ Operating System Level :- At the operating system level, the virtualization model
creates an abstract layer between the applications and the OS. It is like an isolated
container on the physical server and operating system that utilizes hardware and
software. Each of these containers functions like servers. When there are several
users, and no one wants to share the hardware, then this is where the virtualization
level is used. Every user will get his virtual environment using a virtual hardware
resource that is dedicated. In this way, there is no question of any conflict.
✔ Library Level :- OS system calls are lengthy and cumbersome, which is why
applications often opt for APIs from user-level libraries. Most of the APIs provided by
systems are rather well documented, so library-level virtualization is preferred in
such scenarios. Library interfacing virtualization is made possible by API hooks,
which control the communication link from the system to the applications (see the
sketch after this list).
✔ Application Level :- Application-level virtualization is used when there is a desire to
virtualize only one application; it is the last of the implementation levels of
virtualization in cloud computing. One does not need to virtualize the entire
environment of the platform. This is generally used for applications written in
high-level languages that run on a virtual machine (such as the JVM): the
virtualization layer sits as an application program on top of the operating system,
and the virtualized application runs above that layer.
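To illustrate the idea of an API hook mentioned at the library level, here is a toy Python sketch that intercepts a library call and delegates to the original. This is purely conceptual: real library-level virtualization (for example, WINE) hooks native library APIs, not Python functions.

import time

_real_time = time.time  # keep a reference to the original API

def hooked_time():
    """Intercept the call, then delegate to the real implementation."""
    print("hook: time.time() intercepted")
    return _real_time()

time.time = hooked_time   # install the hook

now = time.time()         # the caller goes through the hook transparently
print("caller received:", now)

time.time = _real_time    # remove the hook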
❖ Virtualization Structures
With a virtualization layer in place, different operating systems such as Linux and
Windows can run on the same physical machine simultaneously. Depending on the
position of the virtualization layer, there are several classes of VM architectures,
namely the hypervisor architecture, para-virtualization, and host-based
virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor);
both terms describe the same virtualization layer.
❖ Xen Architecture
Xen does not include any device drivers natively; it just provides a mechanism by
which a guest OS can have direct access to the physical devices. As a result, the size
of the Xen hypervisor is kept rather small. Xen provides a virtual environment located
between the hardware and the OS. The core components of a Xen system are the
hypervisor, kernel, and applications.
✔ Qemu :- This virtualization tool is used to run virtualized OSs such as Linux and
Windows. It is a renowned open-source emulator that offers fast emulation with the
help of dynamic translation, and it has several useful commands for managing
virtual machines (VMs). Qemu is a major open-source tool supporting various
hardware architectures.
✔ OpenVZ :- This is also an open-source virtualization tool, built on the control-group
concept. OpenVZ provides container-based virtualization (CBV) for the Linux
platform. It allows several isolated execution environments, called Virtual
Environments (VEs) or containers, over a single operating system kernel, and it
provides superior performance and scalability compared with other virtualization
tools.
✔ Docker :- Docker is open source. It relies on containers to automatically deploy
Linux applications. All the necessities, such as code, runtime, system tools, and
system libraries, are included in Docker containers. Docker used the Linux
Containers (LXC) library until version 0.9.
✔ Xen :- This is the most common open-source virtualization tool supporting both
full virtualization (FV) and para-virtualization (PV). Xen is an extremely well-known
virtualization solution, initially developed at the University of Cambridge. The Xen
hypervisor is the layer that resides directly on the hardware beneath any OS; it is
responsible for CPU scheduling and memory partitioning across the different VMs.
✔ Vagrant :- Vagrant is an open-source virtualization tool developed by HashiCorp
and written in Ruby, but it can be used in projects written in other programming
languages such as PHP, Python, Java, C#, and JavaScript. It is a command-line tool
that provides a framework and configuration format for creating, managing, and
distributing virtualized development environments. These environments can live on
your computer or in the cloud and are portable between Linux, Mac OS X, and
Windows.
In a host-based virtual machine setup, data is contained on the server, server resources
can be allocated to users as needed, users can work from a variety of clients in
different locations, and all of the virtual machines can be managed centrally. However,
the client device must always be connected to the server in order to access the virtual
machine, and when a single server is compromised, many users can be affected.
Comparing the two main approaches: full virtualization provides the best isolation,
while para-virtualization provides less isolation than full virtualization.
❖ Virtualization CPU
CPU virtualization involves a single CPU acting as if it were two separate CPUs.
In effect, this is like running two separate computers on a single physical machine.
Perhaps the most common reason for doing this is to run two different operating
systems on one machine. In CPU virtualization, all the virtual machines act as
physical machines and share the host's resources as if each had its own virtual
processors. Physical resources are shared among the virtual machines as hosting
requests arrive. Finally, each virtual machine gets a share of the single CPU
allocated to it, with the single processor acting as a dual processor.
CPU virtualization is important in lots of ways, and its usefulness is widespread in
the cloud computing industry. The main advantages of using CPU virtualization are
stated below:
● With CPU virtualization, overall performance and efficiency improve to a great
extent, because multiple virtual machines work on a single CPU, sharing its resources
as if multiple processors were in use at the same time. This saves cost and money.
● As CPU virtualization uses virtual machines to run separate operating systems on a
single shared system, it also helps maintain security. The machines are kept separate
from each other, so a cyber-attack or software glitch on one machine cannot damage
or affect another.
● It works purely with virtual machines and hardware resources. A single server holds
all the computing resources, and processing is done based on the CPU's instructions,
shared among all the systems involved. Since the hardware requirement is lower and
fewer physical machines are used, cost is greatly reduced and time is saved.
● It provides good backup of computing resources, since data is stored and shared
from a single system. It provides reliability for users dependent on a single system
and greater data-retrieval options for users.
● It also offers fast deployment options, so that services reach clients without any
hassle, and it maintains atomicity. Virtualization ensures the desired data reaches the
desired clients through the medium, checks whether any constraints are present, and
removes them quickly.
❖ Virtualization Memory
Memory virtualization requires a two-stage mapping process maintained by the guest
OS and the VMM, respectively: virtual memory to physical memory, and physical
memory to machine memory. Furthermore, MMU virtualization should be supported
and should be transparent to the guest OS. The guest OS continues to control the
mapping of virtual addresses to the physical memory addresses of its VM, but the
guest OS cannot directly access the actual machine memory. The VMM is responsible
for mapping the guest physical memory to the actual machine memory, as the
two-level memory mapping sketch below illustrates.
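As a toy sketch of this two-stage mapping (all page numbers here are made up), the guest page table performs the first translation and the VMM's table performs the second.

# Stage 1: the guest OS maps virtual pages to "physical" pages.
guest_page_table = {0: 7, 1: 3}   # guest virtual page -> guest physical page
# Stage 2: the VMM maps guest physical pages to actual machine pages.
vmm_page_table   = {7: 42, 3: 19} # guest physical page -> machine page

def translate(virtual_page):
    """Translate a guest virtual page to a machine page in two stages."""
    guest_physical = guest_page_table[virtual_page]  # stage 1: guest OS
    machine = vmm_page_table[guest_physical]         # stage 2: VMM
    return machine

# The guest believes page 0 lives at guest-physical page 7; the VMM
# transparently backs it with machine page 42.
print(translate(0))  # -> 42
print(translate(1))  # -> 19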
❖ Virtualization I/O Devices
I/O virtualization involves managing the routing of I/O requests between virtual devices
and the shared physical hardware. At the time of this writing, there are three ways to
implement I/O virtualization: full device emulation, para-virtualization, and direct I/O.
Full device emulation is the first approach for I/O virtualization. Generally, this
approach emulates well-known, real-world devices.
✔ Flexibility: Since I/O virtualization abstracts the upper-layer protocols from the
underlying physical connections, it offers greater flexibility, utilization, and faster
provisioning in comparison with ordinary NIC and HBA cards.
✔ Cost minimization: I/O virtualization involves using fewer cables, cards, and switch
ports without compromising network I/O performance.
✔ Increased density: I/O virtualization increases the practical density of I/O by
allowing more connections to exist in a given space.
✔ Minimizing cables: The I/O virtualization helps to reduce the multiple cables needed
to connect servers to storage and network.
The general idea of virtual disaster recovery is that combining server and storage
virtualization allows companies to store backups in places that are not tied to their own
physical location. This protects data and systems from fires, floods and other types of
natural disasters, as well as other emergencies. Many vendor systems feature redundant
design with availability zones, so that if data in one zone is compromised, another zone
can keep backups alive.
❑ Recover Data from Any Hardware :- With a secure virtualized environment, your
team is free to use a virtual platform that can be accessed on any hardware with
security protocols. This eliminates the issue of redundant hardware as you can install
the virtual desktop on any existing device.
❑ Backup and restore full images :- When your system is completely virtualized,
each of your server's files is encapsulated in a single image file. An image is
basically a single file that contains all of a server's files, including system files,
programs, and data, all in one location. These images make managing your systems
easy: backups become as simple as duplicating the image file, and restores are
reduced to simply mounting the image on a new server.
❑ Easily copy system data to recovery site :- Having an offsite backup is a huge
advantage if something were to happen at your location, whether a natural disaster,
a power outage, or a burst water pipe. Virtualization makes this easy by copying
each virtual machine's image to the offsite location, and with a customizable
automation process it doesn't add any more strain or man-hours to the IT
department.
Cloud disaster recovery is a service that enables the backup and recovery of remote
machines on a cloud-based platform. Cloud disaster recovery is primarily an
infrastructure as a service (IaaS) solution that backs up designated system data on a
remote offsite cloud server. It provides updated recovery point objective (RPO) and
recovery time objective (RTO) in case of a disaster or system restore.
❖ Advantages of Virtualization
✔ Efficient use of resources :- multiple virtual machines share a single physical
machine's CPU, memory, and I/O, reducing the amount of hardware needed.
✔ Lower cost :- fewer physical machines, cables, and switch ports translate into
reduced expense.
✔ Isolation and security :- virtual machines are kept separate from one another, so a
fault or attack in one does not affect the others.
✔ Easier backup and disaster recovery :- full-image backups, snapshots, and offsite
replication are straightforward, as described above.
✔ Fast deployment :- virtual machines and environments can be created, copied, and
moved quickly.
❖ Disadvantages of Virtualization
❖ High Initial Investment – It is true that virtualization reduces costs for companies,
but it is also true that the cloud requires a high initial investment. Providers offer
numerous services that are not always required, and when an unskilled organization
tries to set itself up in the cloud, it may purchase unnecessary services that it does
not even need.
❖ Availability – The primary concern many have with virtualization is what will
happen to their work should their assets not be available. If an organization cannot
connect to its data for an extended period of time, it will struggle to compete in its
industry. And since availability is controlled by third-party providers, the ability to
stay connected is not in one's own control with virtualization.