CC Unit 02
Lecture #9 and 10
Contents
• REST and Systems of Systems
• Web Services
• Publish, Subscribe Model
REST and Systems of Systems
• What is REST?
• REpresentational State Transfer (REST) is a software architectural style that defines a set of
constraints for creating web services. Web services that follow the REST architectural style are
called RESTful web services; they provide interoperability between computer systems on the web.
The REST architectural style describes six constraints.
• Uniform Interface: The uniform interface defines the interface between client and server. It
simplifies and decouples the architecture, which enables each part to evolve independently. The uniform
interface has four guiding principles:
• Resource-based: Individual resources are identified in requests using URIs as resource identifiers. The resources themselves
are conceptually separate from the representations returned to the client. For example, the server does not send its database;
it sends a representation of some database records, expressed as HTML, XML, or JSON depending on the request and the
implementation details.
• Manipulation of resources through representations: When a client holds a representation of a resource, including any
associated metadata, it has enough information to modify or delete the resource on the server.
• Self-descriptive messages: Each message contains enough information to describe how to process it. For
example, the parser to invoke can be specified by the Internet media type (known as the MIME type).
• Hypermedia As The Engine Of Application State (HATEOAS): Clients deliver state via query-string parameters,
body content, request headers, and the requested URI. Services deliver state to clients via response codes,
response headers, and body content. This is called hypermedia (hyperlinks within hypertext).
• In addition to the above, HATEOAS means that, where necessary, links are contained in the returned body (or
headers) to supply the URIs for retrieving the related objects (see the sketch below).
• The uniform interface that every REST service provides is fundamental to its design.
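As a concrete illustration of HATEOAS, a hypermedia-style representation can embed links that tell the client which related resources and state transitions are available. The sketch below is illustrative only; the /orders/17 resource, its fields, and the link relations are hypothetical, not part of any specific API.

```python
import json

# A hypothetical JSON representation of an "order" resource.
# The "links" section is what makes the response hypermedia-driven (HATEOAS):
# the client discovers the URIs for related actions instead of hard-coding them.
order_representation = {
    "id": 17,
    "status": "processing",
    "total": 49.99,
    "links": [
        {"rel": "self",    "href": "/orders/17"},
        {"rel": "cancel",  "href": "/orders/17/cancellation"},
        {"rel": "payment", "href": "/orders/17/payment"},
    ],
}

# A client follows the link whose relation it needs,
# rather than constructing the URI itself.
cancel_link = next(link["href"] for link in order_representation["links"]
                   if link["rel"] == "cancel")
print(json.dumps(order_representation, indent=2))
print("To cancel, the client would request:", cancel_link)
```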
• Stateless
• Stateless means that no client state persists on the server between requests. Each request
itself contains all the state required to handle it, whether as query-string parameters, headers,
the request body, or part of the URI. The URI identifies the resource, and the body carries the
state (or state change) of that resource. After the server does its processing, the relevant
pieces of state are sent back to the client through the response headers, status code, and
response body.
• Most of us in the industry are accustomed to programming within a container that gives us the
concept of a "session", which maintains state across multiple HTTP requests. In REST, the client
must include in each request all the information the server needs to fulfil it, resending state
as necessary across multiple requests. Statelessness enables greater scalability because the
server does not maintain, update, or communicate any session state. The resource state is the
data that defines a resource representation.
• For example, the data stored in a database. Application state, by contrast, is data that may
vary by client and per request. Resource state is the same for every client who requests it
(a minimal request sketch follows below).
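A minimal sketch of what "all state travels with the request" looks like in practice. The endpoint URL, the API token, and the query parameters below are hypothetical; the point is only that the server needs nothing beyond what this single request carries, with no server-side session.

```python
import urllib.parse
import urllib.request

# Hypothetical RESTful endpoint: every request is self-contained.
base_url = "https://api.example.com/orders"

# State needed to serve the request travels as query-string parameters...
params = urllib.parse.urlencode({"status": "shipped", "page": 2})

# ...and as headers (e.g. credentials), never as a server-side session.
request = urllib.request.Request(
    f"{base_url}?{params}",
    headers={
        "Authorization": "Bearer <token>",   # placeholder credential
        "Accept": "application/json",        # desired representation
    },
)

# Because each request is complete in itself, any server instance can serve it,
# which is what makes statelessness a scalability property.
# response = urllib.request.urlopen(request)   # uncomment against a real API
```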
• Client-server
• The client-server constraint separates clients from servers. This
separation of concerns means, for example, that clients are not
concerned with data storage, which remains internal to each server,
improving the portability of client code. Servers are not concerned with
the user interface or user state, which makes servers simpler and more
scalable. Servers and clients can be replaced and developed
independently as long as the interface between them is unchanged.
• Layered system
A client cannot ordinarily tell whether it is connected directly to the
end server or to an intermediary along the way. Intermediary servers
improve system scalability by enabling load balancing and providing
shared caches. Layers can also enforce security policies.
• Cacheable
As on the World Wide Web, clients can cache responses. Responses must
therefore, implicitly or explicitly, define themselves as cacheable or
non-cacheable to prevent clients from reusing stale or inappropriate
data in further requests. Well-managed caching eliminates some
client-server interactions, improving scalability and performance
(see the sketch after this constraint).
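A brief sketch of how a response can declare itself cacheable. This assumes the Flask framework purely for illustration; the route and the one-hour lifetime are arbitrary choices, not part of REST itself.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/products/<int:product_id>")
def get_product(product_id):
    # Hypothetical resource lookup.
    representation = {"id": product_id, "name": "sample product"}
    response = jsonify(representation)
    # Explicitly mark the response as cacheable for one hour, so clients
    # and intermediaries may reuse it without another round trip.
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response

# app.run()  # start a local development server to try it out
```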
• Code on Demand (optional)
Servers can temporarily extend or customize the functionality of a client
by transferring logic that the client executes. Examples include compiled
components such as Java applets and client-side scripts such as JavaScript.
Compliance with these constraints enables any distributed
hypermedia system to gain desirable emergent properties such
as performance, scalability, simplicity, modifiability, visibility,
portability, and reliability.
What is a system of systems?
• A system of systems (SoS) is the collection of multiple, independent systems in context as part of a
larger, more complex system. A system is a group of interacting, interrelated and interdependent
components that form a complex and unified whole.
Web Services
Lecture #11-12
Contents
• Web Services
• Basics of Virtualization
Web Services in Cloud Computing
• A web service is a standardized method for propagating messages between client and server
applications on the World Wide Web.
• A web service is a software module that aims to accomplish a specific set of tasks.
• Web services can be found and implemented over a network in cloud computing.
• The web service would be able to provide the functionality to the client that invoked the web
service.
• A web service is a set of open protocols and standards that allow data exchange between different
applications or systems.
• Any software, application, or cloud technology that uses a standardized web protocol (HTTP or
HTTPS) to connect, interoperate, and exchange data messages over the Internet, usually in XML
(Extensible Markup Language), is considered a web service.
• XML, the format of the data exchanged between the client and the server, is the most important part of web
service design. XML (Extensible Markup Language) is a simple, intermediate language understood
by various programming languages; it is a counterpart to HTML.
• As a result, when programs communicate with each other, they use XML. It forms a common
platform for applications written in different programming languages to communicate with each
other.
• Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between
applications. The data is sent over standard HTTP. A SOAP message is the data sent from a web service
to an application, and it is nothing more than an XML document. Because the content is written in
XML, the client application that calls the web service can be built in any programming language
(a minimal sketch follows below).
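A minimal sketch of what a SOAP message looks like and how a client might post it over plain HTTP. The service URL, namespace, and operation name (GetPrice) are hypothetical; only the envelope structure and the use of HTTP plus XML reflect the description above.

```python
import urllib.request

# A SOAP message is just an XML document wrapped in an Envelope/Body structure.
# The namespace, operation (GetPrice), and item name here are made up.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetPrice xmlns="http://example.com/prices">
      <Item>Apples</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "https://example.com/price-service",          # hypothetical endpoint
    data=soap_body.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    method="POST",
)
# response = urllib.request.urlopen(request)      # send it to a real service
```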
Features of Web Service
• (a) XML-based: A web service's information representation and data transport layers employ XML.
Using XML removes any networking, operating system, or platform binding, so web-service-based
applications are highly interoperable at their core.
• (b) Loosely coupled: A consumer of a web service is not tied directly to that service. The web
service interface can change over time without compromising the client's ability to interact with
the service. A tightly coupled system, in contrast, means that the client and server logic are
inextricably linked, so if one interface changes, the other must be updated.
• A loosely coupled architecture makes software systems more manageable and allows simpler
integration between different systems.
• (c) Ability to be synchronous or asynchronous: Synchronicity refers to the binding of the client to the
execution of the service. In synchronous invocation, the client is blocked and must wait for the service
to complete its operation before continuing. Asynchronous operations allow the client to invoke a
service and then continue with other tasks (a small sketch follows below).
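A small sketch contrasting the two invocation styles using Python's standard thread pool. The call_web_service function is a hypothetical stand-in for any remote call; the delay only shows that the asynchronous client keeps working while the call is in flight.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_web_service(name: str) -> str:
    """Stand-in for a remote web-service call that takes a while."""
    time.sleep(2)
    return f"result of {name}"

# Synchronous invocation: the client blocks until the service finishes.
result = call_web_service("sync-operation")
print("synchronous:", result)

# Asynchronous invocation: the client submits the task and keeps working.
with ThreadPoolExecutor() as pool:
    future = pool.submit(call_web_service, "async-operation")
    print("doing other work while the service runs...")
    print("asynchronous:", future.result())   # collect the result when needed
```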
Features of Web Service
• (d) Coarse-grained: Object-oriented systems, such as Java, expose their services through individual methods. An individual
method is too fine-grained an operation to be useful at the enterprise level. Building a Java application from scratch requires
developing several fine-grained methods, which are then composed into a coarse-grained service that is consumed by the client
or by another service.
• Businesses, and the interfaces they expose, should be coarse-grained. Web services provide a natural way to
define coarse-grained services that expose access to substantial business logic.
• (e) Supports remote procedure calls: Consumers can use XML-based protocols to call procedures, functions, and
methods on remote objects exposed by web services. A web service must support the input and output framework of the
remote system.
• Over the years, enterprise-wide component development with Enterprise JavaBeans (EJBs) and .NET components has become
more prevalent in architectural and enterprise deployments, and several RPC techniques are used to allocate and access these
components.
• A web service can support RPC by providing services of its own, equivalent to those of a traditional component, or by
translating incoming invocations into an invocation of an EJB or .NET component.
• (f) Supports document exchange: One of the most attractive features of XML is its generic way of representing not only data
but also complex documents, and web services support exchanging such documents directly.
Basics of Virtualization
Modern computing is more efficient due to virtualization
Let's think of it like this
• Have you ever wished you could clone yourself?
• If you could, would you be more efficient? Would you do more?
• Virtualization enables computers to be more efficient in a similar
fashion
• Computers that use virtualization optimize the available compute
resources
Let's ponder this...
• Do you use a smartphone, laptop or home computer?
• Smartphones, laptops or home computers are hardware
• Similar to how your brain controls your actions, software controls
hardware
• There are different types of software that control computer actions
What is a VM?
• Virtualization creates virtual hardware by cloning physical hardware
• The hypervisor uses virtual hardware to create a virtual machine (VM)
• A VM is a set of files
• With a hypervisor and VMs, one computer can run multiple OS
simultaneously
Terminologies
• Host Operating System: The operating system via which the Virtual
Machines are run. For Type 1 Hypervisors, as in Hyper-V, the hypervisor
itself is the Host OS which schedules the virtual machines and allocates
memory. For Type 2 hypervisors, the OS on which the hypervisor
applications run is the Host OS.
• Guest Operating System: The operating system that uses the virtualized
hardware. It can be either fully virtualized or para-virtualized. An
enlightened guest OS knows that it is running in a virtualized environment,
which can improve performance.
• Virtual Machine Monitor: VMM is the application that virtualizes hardware
for a specific virtual machine and executes the guest OS with the
virtualized hardware.
Concepts
• Categories
• Modified Guest OS
• Para-virtualization.
• Unmodified Guest OS
• Binary Translations
• Hardware assisted
Full virtualization
• In this scenario, the guest is completely abstracted from the underlying
hardware by the virtualization layer. In this technique the guest OS is unaware
that it is a guest, and the hypervisor translates all OS calls on the fly. It provides
flexibility, and no hardware assistance or guest modification is required.
• The advantages of full virtualization are that the emulation layer isolates
VMs from the host OS and from each other. It also controls individual VM
access to system resources, preventing an unstable VM from impacting
system performance.
• It also provides total VM portability: by emulating a consistent set of
system hardware, VMs can transparently move between
hosts with dissimilar hardware without any problems. Products that
support this type of virtualization include VMware, Microsoft, and KVM.
Para Virtualization
• It is an enhancement of virtualization technology in which a guest OS
is recompiled prior to installation inside a virtual machine. In para-
virtualization, the guest OS is modified to enable communication with
the hypervisor to improve performance and efficiency.
• Its advantages are that the guest system comes closer to native
performance than a fully virtualized guest, and it does not require
the latest virtualization CPU support. It also allows the interface to
the virtual machine to differ somewhat from that of the
underlying hardware.
• VMware and Xen support this type of virtualization.
Hardware-assisted Virtualization
• It enables full virtualization by using features of the computer's physical
hardware to support the software that creates and manages virtual
machines. In this technique of virtualization the guest OS runs unmodified
and no API changes are made: sensitive calls are trapped by the hypervisor
in hardware. Support for this was added to x86 processors in 2006
(Intel VT-x and AMD-V).
• Products supporting hardware-assisted virtualization include VMware,
Xen, Microsoft, and Parallels.
• There is additionally a mix of para-virtualization and full virtualization
called hybrid virtualization, where parts of the guest use para-virtualization
for certain hardware drivers while the host uses full virtualization for other
features. This frequently delivers superior performance on the guest without
requiring the guest to be completely para-virtualized.
Comparisons
• Parameter: Full Virtualization | Para Virtualization | Hardware-Assisted Virtualization
• Generation: 1st | 2nd | 3rd
• Performance: Good | Better in certain cases | Fair
• Used by: VMware, Microsoft, KVM | VMware, Xen | VMware, Xen, Microsoft, Parallels
• Guest OS modification: Unmodified | Modified to issue hypercalls | Unmodified
• Guest OS hypervisor independent?: Yes | No (XenLinux runs only on the Xen hypervisor) | Yes
• Technique: Direct execution | Hypercalls | Exit to root mode on privileged instructions
• Compatibility: Excellent | Poor | Excellent
Types of Virtualization
Lecture #13,14
Contents
• Types of Virtualization
• Implementation Levels of Virtualization
Types of Virtualization
• Apart from hardware virtualization, other types of
virtualization include:
• Application Virtualization
• Data Virtualization
• Desktop Virtualization
• Network Virtualization
• Server Virtualization
• Storage Virtualization
Application virtualization
• The process of installing an application
on a central server (single computer
system) that can virtually be operated
on multiple systems is known as
application virtualization. For end
users, the virtualized application works
exactly like a native application
installed on a physical machine. With
application virtualization, it’s easier for
organizations to update, maintain, and
fix applications centrally. Admins can
control and modify access permissions
to the application without logging in to
the user’s desktop.
• Virtualizing an app allows for seamless use by the end user, making it possible
for an employee to work remotely with the same key programs installed in the
office. When virtualized, apps work in what is called a sandbox, an environment
that runs separately from the operating system. While operating in this
sandbox, any changes the app makes appear to happen in the operating system,
even though the app is actually drawing its resources from the sandbox.
• There are two distinct kinds of application virtualization:
• Remote applications run on a server that mimics the user desktop and can
be accessed by authorized users regardless of their location.
• Streaming apps run just one instance on the server and provide local access
to the app.
• Remote app streaming is the more popular approach, thanks to the extended
reach it grants.
• With just one instance of the app to manage and fix, an organization’s IT
professionals can save time and effort through app virtualization compared to
installing the app on each user’s computer.
Data Virtualization
• Data virtualization is a data
management approach. It retrieves,
segregates, manipulates, and delivers
data without requiring the data's technical specifications.
• Any technical details of the data like its
exact location and formatting
information are not needed to access
it. It allows the application to get a
singular view of the overall data with
real-time access.
• Data virtualization software helps with
data warehouse management and
eliminates latency. It also provides
users with on-demand integration,
quick analysis, and real-time search
and reports capabilities.
Desktop virtualization
• Creating a virtual desktop infrastructure, or VDI, makes it
possible to work and store files in locations that everyone
in your team can easily access no matter where they
work.
• Desktop virtualization allows people to access multiple
applications and operating systems (OS) on a single
computer because the applications and OSs are installed
on virtual machines that run on a server in the data
centre.
• When it comes to desktop virtualization, there are two
main methods: local and remote. Local and remote
desktop virtualization are both possible depending on
the business needs.
• Remote desktop virtualization is more robust and popular
in the marketplace, with users running operating systems
and applications accessed from a server located inside a
secure data center.
Network virtualization
• Network virtualization helps manage and monitor the entire computer
network as a single administrative entity. Admins can keep track of
various elements of the network infrastructure, such as routers and
switches, from a single software-based administrator's console.
Network virtualization helps network optimization for data transfer
rates, flexibility, reliability, security, and scalability. It improves the
overall network’s productivity and efficiency. It becomes easier for
administrators to allocate and distribute resources conveniently and
ensure high and stable network performance.
Server virtualization
Server virtualization is a process of
partitioning the resources of a single
server into multiple virtual servers. These
virtual servers can run as separate
machines. Server virtualization allows
businesses to run multiple independent
OSs (guests or virtual) all with different
configurations using a single (host) server.
The process also saves the hardware cost
involved in keeping a host of physical
servers, so businesses can make their
server infrastructure more streamlined.
Storage virtualization
• Storage virtualization performs resource
abstraction in a way that the multiple physical
storage arrays are virtualized as a single
storage pool with direct and independent
access.
• The storage virtualization software aggregates
and manages storage in various storage arrays
and serves it to applications whenever needed.
• The centralized virtual storage increases
flexibility and availability of resources needed.
This data virtualization and centralization is
easily manageable from a central console. It
allows users to manage and access multiple
arrays as a single storage unit.
Implementation Levels of Virtualization
Introduction
In recent times, it is not sufficient to run just a single piece of software on a machine.
Today, professionals look to test their software and programs across various platforms.
However, there are challenges here because of various constraints. This gives rise to the concept of
virtualization.
Virtualization lets users create several platform instances, which can host various applications and
operating systems.
A simple yet striking example of virtualization is your PC or your laptop.
Implementation Levels of Virtualization in Cloud Computing
1) Instruction Set Architecture (ISA) Level
• For basic emulation, an interpreter is needed that interprets the source instructions and converts them into a
format the hardware can read, which then allows processing. This is the first of the five implementation levels
of virtualization in cloud computing.
2) Hardware Abstraction Level (HAL)
• True to its name, HAL performs virtualization at the level of the hardware. It makes use of a
hypervisor to function. The virtual machine is formed at this level and manages the
hardware through the virtualization process. It allows the virtualization of each of the hardware components,
such as the input-output devices, the memory, and the processor.
• Multiple users can share the same hardware and run multiple virtualization instances at the
very same time. This level is mostly used in cloud-based infrastructure.
3) Operating System Level
• At the level of the operating system, the virtualization model creates an abstract layer
between the operating system and the applications. It provides isolated containers on the operating system and
the physical server, which share the underlying software and hardware. Each container then functions like a
separate server.
• This virtualization level is used when there are several users and none of them wants to share hardware
with the others. Every user gets their own virtual environment with a dedicated virtual hardware resource,
so there is no question of any conflict.
4) Library Level
• Working with the operating system directly can be cumbersome, and this is when applications use APIs from libraries at the
user level. These APIs are well documented, which is why the library virtualization level is preferred in these
scenarios. API hooks make this possible, as they control the communication link from the application to the
system.
5) Application Level
• Application-level virtualization is used when there is a desire to virtualize only a single application; it is the
last of the implementation levels of virtualization in cloud computing. One does not need to virtualize the
entire platform environment.
• This is generally used when you run virtual machines that execute high-level languages. The application sits
above the virtualization layer, which in turn sits on top of the operating system.
• It lets programs written in high-level languages, compiled for the application-level virtual machine, run
seamlessly (see the sketch below).
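Python itself is a convenient illustration of application-level virtualization: source code is compiled to bytecode that runs on the CPython virtual machine rather than directly on the hardware, much like Java bytecode on the JVM. The snippet below simply inspects that bytecode with the standard dis module.

```python
import dis

def add(a, b):
    return a + b

# The function was compiled to bytecode for the CPython process VM,
# not to native machine instructions; 'dis' prints those VM instructions.
dis.dis(add)
```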
Virtualization Structures
Lecture # 15,16
Contents
• Virtualization Structures
• Tools and Mechanisms
Virtualization Structures
Virtualization is performed with the help of a hypervisor or VMM.
Hypervisors are of two types - Type 1 and Type 2
• A Type 1 hypervisor runs directly on the host hardware. It does not require
a separate operating system underneath it and is called a bare-metal or
native hypervisor. It lies between the guest OSs and the hardware.
This type of hypervisor is suited to the enterprise.
• A Type 2 hypervisor runs on top of a host OS and is called a hosted
hypervisor. This implies that it has no direct access to the
hardware, and the VMs running on this type are managed by the
Virtual Machine Monitor (VMM).
Tools and Mechanisms
THE TOOLS OF THE VIRTUALIZATION
• The most critical and well-known virtualization tools, for instance VMware, OpenVZ, and Xen, are
illustrated in detail below.
VMware:
• VMware is a virtual machine monitor that assists in executing an unmodified OS as a host- or user-level application.
• An OS running under VMware may be stopped, reinstalled, restarted, or even crash without having any
influence on the applications running on the host CPU.
• VMware provides separation of the guest OS from the actual host OS. As a consequence, if the guest OS
fails, the physical hardware and the hosting computer do not suffer from the failure.
• VMware creates the illusion of standard hardware inside the virtual machine 'VM'.
Hence, VMware can execute numerous unmodified OSs simultaneously on a single
hardware engine by running each OS inside its own virtual machine.
• Unlike a simulator, the virtual machine executes most code
directly on the physical hardware, without any software interpreting it.
Xen:
• Xen is the most common open-source virtualization tool; it supports both full
virtualization 'FV' and para-virtualization 'PV'.
• Xen is an extremely well-known virtualization solution, initially developed at the
University of Cambridge.
• It is the only bare-metal solution that is available as open source.
• It contains several elements that cooperate to supply the virtualization environment,
comprising the Xen Hypervisor 'XH', the Domain-0 guest (shortened to Dom-0), and Domain-U
guests (shortened to Dom-U), which can be either PV guests or FV guests.
• The Xen Hypervisor 'XH' is the layer that resides directly on the hardware,
underneath any OS.
• It is responsible for CPU scheduling and memory partitioning among the different VMs.
• It delegates the administration of the Domain-U guests 'Dom-U' to the Domain-0 guest
'Dom-0'.
QEMU
• This virtualization tool is used to perform virtualization within
OSs such as Linux and Windows. It is a renowned
open-source emulator that offers fast emulation with the aid of
dynamic translation. It has several valuable commands for managing
virtual machines 'VM'. QEMU is a major open-source tool supporting
various hardware architectures; indeed, it is an example of native
virtualization 'NV'.
OpenVZ
• OpenVZ is another open-source virtualization tool; it relies on the
control group (cgroup) concept. OpenVZ provides container-based
virtualization 'CBV' for the Linux platform. It allows several
isolated execution environments, named Virtual Environments 'VEs' or
containers, on top of a single shared operating system kernel. It also
provides superior performance and scalability compared with other
virtualization tools.
Docker
• Docker is open source. It relies on containers to automatically
distribute Linux applications. All the necessities, such as code, the runtime,
system tools, and system libraries, are included in Docker
containers. Docker used the Linux Containers (LXC) library until version
0.9; after that version, Docker uses libcontainer to access the
virtualization capabilities provided by the Linux kernel. It is used to
implement isolated containers via a high-level application programming
interface (API). A separate guest operating system is not required by Docker:
a container uses the same Linux kernel as the host but runs with its
user space isolated from the host OS. Docker containers are natively
available and compatible only with Linux (a small usage sketch follows below).
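A minimal sketch of driving Docker from Python, assuming a local Docker Engine is running and the Docker SDK for Python (the third-party "docker" package) is installed; the image and command are arbitrary example choices.

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Run a throwaway container: image, command, and the removal afterwards
# are all just example choices for illustration.
output = client.containers.run(
    "alpine",
    ["echo", "hello from a container"],
    remove=True,
)
print(output.decode().strip())
```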
Kernel-Based Virtual Machine (KVM)
• KVM is also open source and requires CPU virtualization extensions
(Intel VT or AMD-V).
• It uses full virtualization 'FV' for Linux on x86, including the
virtualization extensions. The KVM kernel component is included in
Linux, while the KVM user-space components are included in the Quick
Emulator (QEMU).
• However, for some devices KVM also supports the para-virtualization 'PV'
mechanism. Using KVM, an end user can turn Linux into a hypervisor that
can run multiple, isolated virtual environments called guests.
• The main limitation of KVM is that it cannot perform emulation itself. Instead,
it exposes the KVM interface, and a user-space program (QEMU) sets up the virtual
machine's address space and feeds it simulated input/output (a small sketch for
checking hardware support follows below).
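Since KVM depends on the CPU's hardware virtualization extensions, a quick way to check a Linux host is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. This is a small, Linux-specific, read-only sketch.

```python
def hardware_virtualization_available(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    try:
        with open(cpuinfo_path) as f:
            flags = f.read()
    except OSError:
        return False  # not a Linux host, or /proc unavailable
    # The "flags" lines list space-separated CPU feature flags.
    return " vmx" in flags or " svm" in flags

if __name__ == "__main__":
    ok = hardware_virtualization_available()
    print("KVM-capable CPU detected" if ok else "no vmx/svm flag found")
```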
Virtualization Mechanisms
There are several virtualization mechanisms:
Full Virtualization ‘FV’
• In this form, the hardware interface delivered by the hypervisor or 'VMM'
is nearly identical to the one afforded by the physical hardware. This
means that, to offer virtualization, there is no need to alter the
operating systems 'OSs' and applications as long as they are
compatible with the original hardware. This type can be further
categorized into sub-forms such as bare-metal virtualization
'BMV' and hosted virtualization 'HV'.
Para Virtualization ‘PV’
• This form contrasts with the previous type, full
virtualization 'FV', because in para-virtualization 'PV' the
running guest OS needs to be altered. With this method, the guest OS
knows that it is running in a virtualized environment. The major benefit
of this machinery is to diminish the virtualization overheads and offer
superior performance. Xen is an example of para-virtualization.
Operating System-Level Virtualization 'OSLV'
• In this kind, the kernel of an OS permits several
separate user-space instances. These instances run on top of the
host OS, work with a group of libraries that interact with the
applications, and allow the applications to run as if on a machine
dedicated to their use. This form of virtualization mechanism is also
referred to as container-based virtualization 'CBV'.
Application Virtualization ‘AV’
• In this type of virtualization, an end user is allowed to run an
application from the server locally with the assistance of native
resources, without requiring installation of the complete application on
the local computer. It also provides a separate virtualization
environment to each end user, which acts as a layer between the
application and the host OS. The most well-known instance of this form
is the Java Virtual Machine 'JVM', which acts as an intermediary between
the OS and the Java application code 'JAC'.
Desktop Virtualization ‘DV’
• Desktop virtualization is the concept of splitting the logical desktop from
the physical appliance; it is counted as a form of hardware virtualization.
Virtual Desktop Infrastructure 'VDI' is the main sub-type of this form.
Instead of interacting with the host computer directly through peripherals
such as a keyboard, mouse, and monitor, the end user interacts with the
host computer from another desktop or a mobile device over a network
connection such as the Internet. Moreover, the host computer becomes a
server that is able to host several virtual machines concurrently for more
than one end user.
Network Virtualization ‘NV’
• Network virtualization is used to unite both hardware 'HW' and software 'SW'
assets into a virtual network presented as a distinct group of assets. It
assists in obtaining superior infrastructure utilization by reusing a
logical or physical asset for several other network consumers such as hosts,
virtual machines 'VMs', and routers. It also assists in diminishing
costs by sharing network assets.
Virtualization of CPU – Memory – I/O Devices
Lecture #17
Contents
• Virtualization of CPU Memory I/O Devices
• Virtualization Support and Disaster Recovery
Hardware Support
CPU Virtualization
Memory Virtualization
I/O Virtualization
Virtualization Support and Disaster Recovery
Virtual disaster recovery vs. physical disaster recovery