Cloud Computing Unit 1
UNIT - 1
TOPIC:- 1. Introduction to Service Oriented Architecture, Web Services, Basic Web Services Architecture,
Introduction to SOAP, WSDL and UDDI; RESTful services: Definition, Characteristics, Components, Types; Software as
a Service, Platform as a Service, Organizational scenarios of clouds, Administering & Monitoring cloud services,
benefits and limitations, Study of a Hypervisor.
Services might aggregate information and data retrieved from other services or create
workflows of services to satisfy the request of a given service consumer. This practice is
known as service orchestration. Another important interaction pattern is service
choreography, which is the coordinated interaction of services without a single point of
control.
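The orchestration pattern described above can be sketched in code: a single orchestrator calls other services in a defined workflow and aggregates their replies into one response. The service names and functions below are hypothetical stand-ins for real network services.

```python
# Minimal sketch of service orchestration: the orchestrator is the single
# point of control, composing results from two mocked services.

def inventory_service(item_id):
    # Mock service: returns the stock level for an item.
    stock = {"A100": 5, "B200": 0}
    return stock.get(item_id, 0)

def pricing_service(item_id):
    # Mock service: returns the unit price for an item.
    prices = {"A100": 19.99, "B200": 4.50}
    return prices.get(item_id)

def order_quote_orchestrator(item_id, quantity):
    # The orchestrator calls the other services and aggregates their replies.
    available = inventory_service(item_id)
    price = pricing_service(item_id)
    return {
        "item": item_id,
        "in_stock": available >= quantity,
        "total": round(price * quantity, 2),
    }

quote = order_quote_orchestrator("A100", 2)
print(quote)  # {'item': 'A100', 'in_stock': True, 'total': 39.98}
```

In choreography, by contrast, there would be no `order_quote_orchestrator`: each service would react to events from the others without a central coordinator.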
Web Services:
Any software, application, or cloud technology that uses a standardized Web protocol (HTTP or
HTTPS) to connect, interoperate, and exchange data messages over the Internet, usually in XML
(Extensible Markup Language), is considered a Web service.
Web services allow programs developed in different languages to communicate with each other as
client and server by exchanging data. A client invokes a web service by submitting an XML
request, to which the service responds with an XML response.
These requests are made using remote procedure calls: calls to the methods hosted by the
respective web service are known as Remote Procedure Calls (RPC). Example: Flipkart provides a web
service that displays the prices of items offered on Flipkart.com. The front end or presentation layer
can be written in .NET or Java, but the web service can be invoked from either, regardless of
programming language.
XML, the data format exchanged between the client and the server, is the most important part of web
service design. XML (Extensible Markup Language) is a simple, intermediate language understood by
various programming languages. Like HTML, it is a markup language, but it is designed to describe
data rather than presentation.
As a result, when programs communicate with each other, they use XML. It forms a common platform
for applications written in different programming languages to communicate with each other.
Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between
applications. The data is sent over standard HTTP. A SOAP message is the data sent from a web
service to an application, and it consists entirely of an XML document. Because the content is
written in XML, the client application that calls the web service can be built in any programming
language.
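A SOAP message is just an XML document, so any language can build or parse one. The sketch below constructs a minimal SOAP 1.1 envelope and parses it with Python's standard XML library; the `GetPrice` operation and the `example.com` namespace are illustrative, not a real service.

```python
import xml.etree.ElementTree as ET

# A minimal SOAP 1.1 envelope for a hypothetical GetPrice operation.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/prices">
      <Item>laptop</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

# Because the payload is plain XML, any language can parse it.
root = ET.fromstring(soap_request)
ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/",
      "p": "http://example.com/prices"}
item = root.find("./soap:Body/p:GetPrice/p:Item", ns)
print(item.text)  # laptop
```

In practice this envelope would be POSTed over HTTP to the service's endpoint, and the response would be another SOAP envelope parsed the same way.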
(a) XML-based: A web service's information representation and data transport layers employ XML.
Using XML removes any networking, operating system, or platform binding, so web-service-based
applications are highly interoperable at the middle level.
(b) Loosely Coupled: The consumer of a web service is not tied directly to that service provider. The
service provider's interface may change over time without affecting the consumer's ability to interact
with the service. A tightly coupled system, by contrast, binds the client and server logic inextricably,
so that if one interface changes, the other must be updated.
A loosely coupled architecture makes software systems more manageable and easier to integrate
across different systems.
(c) Ability to be synchronous or asynchronous: Synchronicity refers to how the client is bound to the
execution of the service. In synchronous invocation, the client is blocked and must wait for the
service to complete its operation before continuing, receiving the result as soon as the service
finishes. Asynchronous operations allow the client to initiate a task and continue with other work,
retrieving the result later. Asynchronous capability is a key factor in enabling loosely coupled
systems.
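The two invocation styles can be sketched side by side. Here a thread pool simulates an asynchronous call; the `slow_service` function is a hypothetical stand-in for a remote web service.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_service(x):
    # Stand-in for a remote web service call.
    time.sleep(0.1)
    return x * 2

# Synchronous invocation: the client blocks until the result arrives.
sync_result = slow_service(21)

# Asynchronous invocation: the client submits the call and keeps working,
# collecting the result later via a future.
with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_service, 21)
    other_work = sum(range(1000))  # the client is not blocked here
    async_result = future.result()  # result retrieved later

print(sync_result, async_result)  # 42 42
```

The asynchronous client did useful work (`other_work`) while the service was still running, which is exactly what loose coupling requires.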
(d) Coarse-Grained: Object-oriented systems, such as Java, expose their services through individual
methods, but at the enterprise level a single method is too fine-grained an operation to be useful.
Building a Java application from the ground up therefore involves creating several fine-grained
methods that are then composed into a coarse-grained service consumed by the client.
Services, and the interfaces they expose, should be coarse-grained. Building web services is
an easy way to define coarse-grained services that wrap substantial enterprise business logic.
(e) Supports remote procedural calls: Consumers can use XML-based protocols to call procedures,
functions, and methods on remote objects that use web services. A web service must support the input
and output framework of the remote system.
Over the years, enterprise-wide component development with Enterprise JavaBeans (EJBs) and .NET
components has become more prevalent in architectural and enterprise deployments, and several RPC
techniques are used to both distribute and access them.
A web service can support RPC by providing services of its own, similar to a traditional component,
or by translating incoming invocations into an invocation of an EJB or .NET component.
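Python's standard library includes XML-RPC support, which makes the idea of an RPC riding on an XML payload concrete: a method call is marshalled into an XML document, sent over the wire, and unmarshalled back into a call on the other side. The method name `getPrice` below is hypothetical; no network is involved in this sketch.

```python
import xmlrpc.client

# Marshal a remote procedure call into XML, as a client would before
# sending it over HTTP.
request_xml = xmlrpc.client.dumps(("laptop",), methodname="getPrice")
# request_xml is an XML document containing <methodCall> and the parameters.

# At the receiving end, the same XML is decoded back into the call.
params, method = xmlrpc.client.loads(request_xml)
print(method, params)  # getPrice ('laptop',)
```

The server would dispatch `method` to the matching procedure, build a response the same way, and return it as XML.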
(f) Supports document exchange: One of XML's most attractive features is its ability to represent not
only simple data but also complex documents and entities, which web services can exchange directly.
The architecture of web service interacts among three roles: service provider, service
requester, and service registry. The interaction involves the three operations: publish, find, and bind.
These operations and roles act upon the web services artifacts. The web service artifacts are the web
service software module and its description.
The service provider hosts a network-accessible software module (the web service). It defines a
service description for the web service and publishes it to a service requestor or service registry. The
service requestor uses a find operation to retrieve the service description, either locally or from the
service registry, and then uses that description to bind to the service provider and invoke the web
service implementation.
The following figure illustrates the operations, roles, and their interaction.
o Service Provider
o Service Requestor
o Service Registry
Service Provider
From an architectural perspective, it is the platform that hosts the services.
Service Requestor
Service requestor is the application that is looking for and invoking or initiating an interaction with a
service. The browser plays the requester role, driven by a consumer or a program without a user
interface.
Service Registry
Service requestors find service and obtain binding information for services during development.
Operations in a Web Service Architecture
Three operations take place among these web service roles:
Publish: In the publish operation, a service description must be published so that a service requester
can find the service.
Find: In the find operation, the service requestor retrieves the service description directly. It can be
involved in two different lifecycle phases for the service requestor:
o At design time, to retrieve the service's interface description for program development.
o At runtime, to retrieve the service's binding and location description for invocation.
Bind: In the bind operation, the service requestor invokes or initiates an interaction with the service at
runtime using the binding details in the service description to locate, contact, and invoke the service.
Artifacts in a Web Service Architecture
o Service
o Service Registry
Service: A service is an interface described by a service description, and its implementation is the
service itself. A service is a software module deployed on network-accessible platforms provided by
the service provider. It interacts with a service requestor, and sometimes also functions as a
requestor itself, using other Web Services in its implementation.
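The publish, find, and bind operations can be modeled with a toy in-memory registry. Real registries (e.g. UDDI) are networked services and descriptions are WSDL documents; here both are simplified to Python objects, and the `currency-converter` service is purely illustrative.

```python
# A toy in-memory service registry sketching publish, find, and bind.

class ServiceRegistry:
    def __init__(self):
        self._descriptions = {}

    def publish(self, name, description):
        # Provider publishes a service description under a name.
        self._descriptions[name] = description

    def find(self, name):
        # Requestor retrieves the description (or None if absent).
        return self._descriptions.get(name)

registry = ServiceRegistry()

# Provider side: the implementation plus its published description.
def convert(amount):
    return round(amount * 0.9, 2)

registry.publish("currency-converter",
                 {"endpoint": convert, "input": "amount", "output": "converted"})

# Requestor side: find the description, then bind (invoke via its details).
desc = registry.find("currency-converter")
result = desc["endpoint"](100)
print(result)  # 90.0
```

The requestor never hard-codes the provider: it discovers the binding details at runtime from the registry, which is the point of the find/bind split.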
Phases of Web Service Development
o Requirements Phase
o Analysis Phase
o Design Phase
o Coding Phase
o Test Phase
o Deployment Phase
Requirements Phase
The objective of the requirements phase is to understand the business requirements and translate them
into web service requirements. The requirement analyst should perform requirement elicitation (the
practice of researching and discovering the requirements of the system from the user, customer, and
other stakeholders). The analyst should interpret, consolidate, and communicate these requirements to
the development team. The requirements should be stored in a centralized repository where they can
be viewed, prioritized, and mined for interactive features.
Analysis Phase
The purpose of the analysis phase is to refine the web service requirements and translate them into
conceptual models that the technical development team can understand. It also defines the high-level
structure and identifies the web service interface contracts.
Design Phase
In this phase, the detailed design of the web services is done. The designers define the web service
interface contracts that were identified in the analysis phase. The defined web service interface
contract identifies the elements and the corresponding data types, as well as the mode of interaction
between web services and clients.
Coding Phase
The coding and debugging phase is quite similar to that of other software component-based
development. The main difference lies in the creation of additional web service interface wrappers,
the generation of WSDL, and client stubs.
Test Phase
In this phase, the tester performs interoperability testing between the platform and the client's program.
Testing is also conducted to ensure that the web service can bear the maximum load and stress. Other
tasks, such as profiling the web service application and inspecting the SOAP messages, should also be
performed in the test phase.
Deployment Phase
The purpose of the deployment phase, which executes after the testing phase, is to ensure that the
web service is properly deployed in the distributed system. The primary task of the deployer is to
ensure that the web service has been properly configured and managed. Optional tasks, such as
specifying and registering the web service with a UDDI registry, are also done in this phase.
In the above figure, the top most layers build upon the capabilities provided by the lower layers. The
three vertical towers represent the requirements that are applied at every level of the stack. The text on
the right represents technologies that apply at that layer of the stack. A web service protocol stack
typically stacks four protocols:
o Transport Protocol
o Messaging Protocol
o Description Protocol
o Discovery Protocol
(Service) Transport Protocol: The network layer is the foundation of the web service stack. It is
responsible for transporting messages between network applications. HTTP is the standard network
protocol for Internet-available web services, but other network protocols such as SMTP,
FTP, and BEEP (Block Extensible Exchange Protocol) are also supported.
(XML) Messaging Protocol: It is responsible for encoding messages in a common XML format so that
they can be understood at either end of a network connection. SOAP is the chosen XML messaging
protocol because it supports all three operations: publish, find, and bind.
(Service) Description Protocol: It is used for describing the public interface to a specific web service.
WSDL is the standard for XML-based service description. WSDL describes the interface and
mechanics of service interaction. The description is necessary to specify the business context, quality
of service, and service-to-service relationship.
(Service) Discovery Protocol: It centralizes service descriptions into a common registry so that web
services can publish their location and description, making it easy to discover which services are
available on the network.
The first three layers of the stack are required to provide or use any web service. The simplest stack
consists of HTTP for the network layer, SOAP for the XML-based messaging layer, and WSDL for
the service description layer. These three layers provide interoperability and enable web services to
leverage the existing internet infrastructure, creating a low cost of entry to a global environment.
The bottom three layers of the stack identify technologies for compliance and interoperability; the
next two layers, Service Publication and Service Discovery, can be implemented with a range of
solutions.
What is REST?
REpresentational State Transfer (REST) is a software architectural style that defines constraints for
creating web services. Web services that follow the REST architectural style are called RESTful web
services. REST enables interoperable communication between computer systems on the web. The
REST architectural style describes six constraints.
1. Uniform Interface
The Uniform Interface defines the interface between client and server. It simplifies and decouples
the architecture, enabling each part to evolve independently. The Uniform Interface has four guiding
principles:
o Resource-based: Individual resources are identified in requests using URIs as resource identifiers.
The resources themselves are distinct from the representations returned to the client. For
example, the server does not send its database but rather a representation of some database
records, expressed as HTML, XML, or JSON depending on the request and the implementation
details.
o Self-Descriptive Messages: Each message contains enough information to describe how to
process it. For example, the parser to invoke can be specified by the Internet media type
(known as the MIME type).
o A uniform interface shared by all REST services is fundamental to the design.
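The resource-based principle above — one resource, several possible representations — can be sketched in code. The `record` resource and `represent` helper below are illustrative; a real server would select the representation from the request's `Accept` header.

```python
import json
import xml.etree.ElementTree as ET

# The resource (modeled here as a dict standing in for a database record)
# is distinct from the representations sent to the client.
record = {"id": 7, "name": "Alice"}  # hypothetical resource state

def represent(resource, media_type):
    if media_type == "application/json":
        return json.dumps(resource)
    if media_type == "application/xml":
        user = ET.Element("user")
        for key, value in resource.items():
            ET.SubElement(user, key).text = str(value)
        return ET.tostring(user, encoding="unicode")
    raise ValueError("unsupported media type")

print(represent(record, "application/json"))  # {"id": 7, "name": "Alice"}
print(represent(record, "application/xml"))   # <user><id>7</id><name>Alice</name></user>
```

Either representation describes the same underlying resource; the client never receives the resource itself, only a rendering of it.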
2. Client-server
A uniform interface separates clients from servers. This separation of concerns means, for example,
that clients are not concerned with data storage, which remains internal to each server, improving the
portability of client code; and servers are not concerned with the user interface or user state, making
servers simpler and more scalable. Servers and clients can be replaced and developed independently
as long as the interface between them is not changed.
3. Stateless
Stateless means that the state of the service does not persist between subsequent requests and
responses: each request itself contains all the state required to handle it. That state can be carried in a
query-string parameter, the entity body, or a header, or as part of the URI. The URI identifies the
resource, and the state (or state change) of that resource travels in the message. After the server
performs the appropriate state processing, the pieces that matter are sent back to the client through
headers, status codes, and the response body.
o Most of us in the industry are accustomed to programming within a container, which gives
us the concept of a "session" that maintains state across multiple HTTP requests.
In REST, the client must include all information needed for the server to fulfil the request,
resending state as necessary across multiple requests. Statelessness enables greater scalability
because the server does not maintain, update, or communicate any session state. The resource
state is the data that defines a resource representation.
For example, the data stored in a database is resource state. Application state, by contrast, is data
that may vary per client and per request; the resource state is the same for every client who requests it.
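Statelessness can be made concrete with a handler that is a pure function of the request: everything it needs (an auth token, a page number) arrives in the request itself, and nothing is remembered between calls. The field names, token, and item list below are all illustrative.

```python
# Statelessness sketch: every request carries all the state the server needs,
# so there is no session stored between calls.

VALID_TOKENS = {"abc123"}
ITEMS = ["apple", "banana", "cherry", "date"]

def handle_request(request):
    # Authentication state comes with each request, not from a session.
    if request.get("token") not in VALID_TOKENS:
        return {"status": 401, "body": "unauthorized"}
    # Pagination state also comes with each request.
    page, size = request.get("page", 0), 2
    return {"status": 200, "body": ITEMS[page * size:(page + 1) * size]}

# Two independent requests; the server remembers nothing in between.
print(handle_request({"token": "abc123", "page": 0}))  # first two items
print(handle_request({"token": "abc123", "page": 1}))  # next two items
```

Because no per-client session exists, any server replica can handle any request, which is what makes stateless services easy to scale out.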
4. Layered system
A client cannot ordinarily tell whether it is connected directly to the end server or to an intermediary
along the way. Intermediate servers improve system scalability by enabling load balancing and
providing shared caches. Layers can also enforce security policies.
5. Cacheable
On the World Wide Web, clients can cache responses. Responses must therefore, implicitly or
explicitly, define themselves as cacheable or non-cacheable to prevent clients from reusing stale or
inappropriate data for further requests. Well-managed caching eliminates some client-server
interactions, improving scalability and performance.
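The effect of cacheability can be sketched with a toy client-side cache: a cacheable response is reused until its max-age expires, so repeated requests never reach the server. The `fetch` helper and `loader` callback below are illustrative, not a real HTTP client.

```python
import time

# Sketch of response caching: reuse a stored response while it is fresh.
_cache = {}

def fetch(url, loader, max_age=60):
    now = time.time()
    if url in _cache:
        body, stored_at = _cache[url]
        if now - stored_at < max_age:
            return body, "cache hit"
    body = loader(url)          # the real client-server interaction
    _cache[url] = (body, now)
    return body, "cache miss"

calls = []
def loader(url):
    calls.append(url)           # count how often the "server" is contacted
    return f"response for {url}"

print(fetch("/items", loader))  # ('response for /items', 'cache miss')
print(fetch("/items", loader))  # ('response for /items', 'cache hit')
print(len(calls))               # 1 -- the second request never hit the server
```

Real HTTP expresses the same idea declaratively through `Cache-Control` headers (e.g. `max-age`) rather than application code.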
Software as a Service (SaaS)
Business Services - SaaS providers offer various business services to help start up a business. SaaS
business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship
Management), billing, and sales.
Social Networks - As we all know, social networking sites are used by the general public, so social
networking service providers use SaaS for their convenience and handle the general public's
information.
Mail Services - To handle the unpredictable number of users and the load on e-mail services, many
e-mail providers offer their services using SaaS.
2. One to Many
SaaS services are offered as a one-to-many model, meaning that a single instance of the application is
shared by multiple users.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin
clients.
7. API Integration
SaaS services easily integrate with other software or services through standard APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider over an internet connection, so no
client-side software installation is required.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end user, interacting
with the application may involve greater latency than with a local deployment. The SaaS model is
therefore not suitable for applications that demand response times in milliseconds.
1. Programming languages
PaaS providers provide various programming languages for the developers to develop the applications.
Some popular programming languages provided by PaaS providers are Java, PHP, Ruby, Perl, and
Go.
2. Application frameworks
PaaS providers provide application frameworks to simplify application development.
Some popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla,
WordPress, Spring, Play, Rack, and Zend.
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis to
communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and deploy the
applications.
Advantages of PaaS
There are the following advantages of PaaS -
1) Simplified Development
PaaS allows developers to focus on development and innovation without worrying about infrastructure
management.
2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC and an internet
connection to start building applications.
4) Instant community
PaaS vendors frequently provide online communities where the developer can get the ideas to share
experiences and seek advice from others.
5) Scalability
Applications deployed can scale from one to thousands of users without any changes to the
applications.
2) Data Privacy
Corporate data, whether critical or not, is private, so if it is not located within the walls of the
company, there can be a risk in terms of data privacy.
In this article, we are going to explore some of the standard ‘Monitoring as a Service’ tools
with their detailed specifications. So, let’s get started:
1. Amazon CloudWatch
Amazon CloudWatch allows us to completely monitor the tech stack of our application and
infrastructure. It notifies us with alarms, logs, etc, and helps us to take necessary actions
which thereby reduces the Mean Time to Resolution (MTTR). It also monitors the EC2
instances, Dynamo tables, etc. It is best suited for applications hosted in AWS. The logs,
alerts, and troubleshooting of these applications can be done easily using Amazon
CloudWatch. Amazon CloudWatch does not charge for the first 50 metrics on a single dashboard; if
that limit is exceeded, the user is charged an additional amount. Amazon CloudWatch can be accessed
using the Command Line Interface, APIs, or the AWS Console.
2. Azure Monitor
It collects, monitors, and takes necessary actions on the data of the devices and instances in
the Azure and on-premises environment. It is very efficient and identifies and resolves
problems in seconds. It simply collects the data from various sources and stores it as logs.
This data can later be used for logs, analysis, security checks, notifications, etc. The main
advantage of it is that it not only reports the issue to the user but also provides the solution to
solve the issue.
3. AppDynamics
AppDynamics is another cloud monitoring tool that is used for monitoring every aspect of
the application. It can monitor the business transactions, transaction snapshots, tires, and
nodes, etc. It also monitors the full technology stack of the application from the database to
the server. The architecture of AppDynamics is simpler and is controlled by a central
management server known as the controller. AppDynamics was founded in 2008 by a former Wily
Technology employee and has since been acquired by Cisco. AppDynamics holds rank 9 in the
Cloud 100 list published by Forbes.
5. Solarwinds
The software company was founded by Donald Yonce and David Yonce in Tulsa in 1999. It is
customizable and intelligent to use. It is not as visually attractive as other tools, but it gets
the job done without any problems. It can support up to 1200 applications and systems. It
allows us to monitor components through PowerShell, REST APIs, etc. It also has
configurations for Windows and Linux, which leads to faster performance.
6. ManageEngine
ManageEngine was founded by Zoho Corporation. It is an infrastructure monitoring tool
with real-time monitoring of networks. It has customizable dashboards for users, more
than 70 metrics for VMware and more than 40 metrics for Hyper-V, and inbuilt fault
monitoring and alerting. The drawback is that it has no hosted version. It manages
computers across various domains and allows bandwidth checks too. It is available in both
a free edition and a premium edition; pricing starts from 495 dollars for the on-premises
edition and 645 dollars for the cloud version.
7. Zabbix
Zabbix was founded by Alexei Vladishev. It is one of the most popular open-source
infrastructure monitoring tools in the market. It is available on multiple platforms like
Windows, Unix, Linux, etc. It can send notifications on various streams like SMS, email,
script alerts, webhooks, etc. The main advantage is it is open source and has a strong
community for support. Zabbix allows APIs, access controls/permissions, activity
dashboard, audit trails, data visualization, CPU monitoring, and a lot more features.
8. Nagios
Nagios was created by Ethan Galstad and is yet another famous monitoring tool. It
periodically runs checks on all the important aspects of the system. It is available as
both an open-source and paid enterprise solution. It is Linux-based. The architecture of
Nagios can be extended through plugins. It is open-source and gives us full access to source
code. It is more popular and is used by companies like Uber, Twitch, Dropbox, Fiverr,
9GAG, Zalando, etc.
9. Site 24×7
It is also a monitoring tool that inspects the servers, network containers, and visualization
platforms. It runs on both Windows and Linux servers. It easily monitors more than 60
metrics for servers. It also provides plugin integrations for MySQL and Apache. It also
provides website services like HTTP, DNS servers, etc. Site 24×7 monitoring allows us to use
APIs, Baseline managers, Email monitoring, email alerts, event logs, mail server monitoring,
reporting & statistics, SLA, and much more.
10. Datadog
Datadog infrastructure monitoring was founded by Olivier Pomel and Alexis Le-Quoc. It
monitors both cloud and on-premises infrastructures. It provides visibility into the state of the
components we are using. It allows us to use consolidated dashboards giving us visibility into
the infrastructure. It has a customizable Datadog API. It has more than 400 vendor-backed
integrations, giving us deep insight into our IT stack. It has a broad user base: it is
used by more than 800 companies and 2000 developers. With the help of Datadog
infrastructure monitoring, we can monitor the performance and well-being of the entire IT
infrastructure.
2) Improved collaboration
Cloud applications improve collaboration by allowing groups of people to quickly and easily share
information in the cloud via shared storage.
3) Excellent accessibility
The cloud allows us to quickly and easily access stored information anywhere in the world, at any
time, using an internet connection. An internet cloud infrastructure increases organizational
productivity and efficiency by ensuring that our data is always accessible.
5) Mobility
Cloud computing allows us to easily access all cloud data via mobile.
8) Data security
Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced
features related to security and ensures that data is securely stored and handled.
1) Internet Connectivity
As you know, in cloud computing, all data (images, audio, video, etc.) is stored in the cloud, and we
access it over an internet connection. If you do not have good internet connectivity, you cannot
access this data; there is no other way to reach data stored in the cloud.
2) Vendor lock-in
Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face problems when
transferring their services from one vendor to another. As different vendors provide different platforms,
that can cause difficulty moving from one cloud to another.
3) Limited Control
As we know, cloud infrastructure is completely owned, managed, and monitored by the service
provider, so the cloud users have less control over the function and execution of services within a cloud
infrastructure.
4) Security
Although cloud service providers implement the best security standards to store important
information, before adopting cloud technology you should be aware that you will be sending all your
organization's sensitive information to a third party, i.e., a cloud computing service provider. While
sending data to the cloud, there is a chance that your organization's information could be
intercepted by hackers.
What is a hypervisor?
A hypervisor, also known as a virtual machine monitor or VMM, is a piece of software that allows us
to build and run virtual machines (VMs).
A hypervisor allows a single host computer to support multiple virtual machines (VMs) by sharing
resources including memory and processing.
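The resource-sharing idea can be illustrated with a toy model: a hypervisor tracks how much of the host's memory each VM has been given and refuses to exceed the physical total. Real hypervisors schedule CPU and memory at the hardware level (and often support overcommitment); this sketch and all its names are purely illustrative bookkeeping.

```python
# Toy model of a hypervisor sharing a host's memory among VMs.

class Hypervisor:
    def __init__(self, total_memory_mb):
        self.total = total_memory_mb
        self.vms = {}

    def allocated(self):
        return sum(self.vms.values())

    def create_vm(self, name, memory_mb):
        # This simple model refuses to overcommit the physical host.
        if self.allocated() + memory_mb > self.total:
            raise MemoryError("not enough host memory")
        self.vms[name] = memory_mb

    def destroy_vm(self, name):
        self.vms.pop(name, None)

host = Hypervisor(total_memory_mb=8192)
host.create_vm("web", 2048)
host.create_vm("db", 4096)
print(host.allocated())  # 6144
```

Destroying a VM returns its share to the pool, which is how a single physical server can serve many workloads over time.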
Kinds of hypervisors
There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also known as
"hosted"). A type 1 hypervisor functions as a light operating system that operates directly on the host's
hardware, while a type 2 hypervisor functions as a software layer on top of an operating system, similar
to other computer programs.
Since they are isolated from the attack-prone operating system, bare-metal hypervisors are extremely
stable.
Furthermore, they are usually faster and more powerful than hosted hypervisors. For these purposes,
the majority of enterprise businesses opt for bare-metal hypervisors for their data center computing
requirements.
While hosted hypervisors run inside the OS, they can be topped with additional (and different)
operating systems.
Hosted hypervisors have higher latency than bare-metal hypervisors, which is a major
disadvantage. This is because communication between the hardware and the hypervisor
must pass through the OS's extra layer.
The Type 1 hypervisor
The Type 1 hypervisor is also known as a native or bare-metal hypervisor.
It replaces the host operating system, and the hypervisor schedules VM services directly to the
hardware.
The type 1 hypervisor is very much commonly used in the enterprise data center or other server-based
environments.
Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM was merged into the
Linux kernel in 2007, so any reasonably modern Linux kernel already includes it.
Benefits of hypervisors
Using a hypervisor to host several virtual machines has many advantages:
o Speed: The hypervisors allow virtual machines to be built instantly unlike bare-metal servers.
This makes provisioning resources for complex workloads much simpler.
o Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical
machine often allow for more effective use of a single physical server.
o Flexibility: Since the hypervisor separates the OS from the underlying hardware, the
software no longer relies on particular hardware devices or drivers; bare-metal hypervisors
thus enable operating systems and their related applications to operate on a variety of hardware
types.
o Portability: Multiple operating systems can run on the same physical server thanks to
hypervisors (host machine). The hypervisor's virtual machines are portable because they are
separate from the physical computer.
As an application requires more computing power, virtualization software allows it to access additional
machines without interruption.
Container vs hypervisor
Containers and hypervisors both help systems run faster and more efficiently, but they do so in very
different ways, which is what sets them apart.
The Hypervisors:
o Using virtual machines, an operating system can operate independently from the underlying
hardware.
Containers:
o Ensure that a program does not depend on a specific operating system to run.
o They only need a container engine to run on any platform or on any operating system.
o Are incredibly versatile since an application has everything it requires to operate within a
container.