MC4203 - Cloud Computing Technologies
UNIT-I
DISTRIBUTED SYSTEMS
INTRODUCTION TO DISTRIBUTED SYSTEMS
A distributed system is one in which components located at networked computers communicate and coordinate their actions only by passing messages.
A distributed system consists of a collection of autonomous computers linked by a computer network and
equipped with distributed system software.
This software enables computers to coordinate their activities and to share the resources of the system
hardware, software, and data.
Three significant characteristics distinguish distributed systems from centralized systems:
1. Concurrency of components - different system components do work at once and handle
communication by passing messages.
2. Lack of global clock - in distributed systems each system has its own clock. Systems might
somewhat synchronize their clocks sometimes but they most likely will not have the same time.
3. Independent failures of components - in a distributed system one component might fail due to
some unforeseen circumstances.
Examples of application domains where distributed systems are used:
Finance and commerce: eCommerce, e.g. Amazon and eBay; PayPal; online banking and trading
The information society: web information and search engines, ebooks, Wikipedia; social networking: Facebook and MySpace
Creative industries and entertainment: online gaming, music and film in the home, user-generated content, e.g. YouTube, Flickr
Healthcare: health informatics, online patient records, monitoring patients
Education: e-learning, virtual learning environments; distance learning
Transport and logistics: GPS in route finding systems, map services: Google Maps, Google Earth
Science: the Grid as an enabling technology for collaboration between scientists
Environmental management: sensor technology to monitor earthquakes, floods or tsunamis
Common Characteristics
Certain common characteristics can be used to assess distributed systems
Resource Sharing
Ability to use any hardware, software or data anywhere in the system.
Resource manager controls access, provides naming scheme and controls concurrency.
A resource-sharing model (e.g. client/server or object-based) describes how resources are provided, how they are used, and how provider and user interact with each other.
Openness
Openness is concerned with extensions and improvements of distributed systems.
Detailed interfaces of components need to be published.
New components have to be integrated with existing components.
Differences in data representation of interface types on different processors have to be resolved.
Concurrency
Components in distributed systems are executed in concurrent processes.
Components access and update shared resources (e.g. variables, databases, device drivers).
Integrity of the system may be violated if concurrent updates are not coordinated.
Scalability
Adaptation of distributed systems to accommodate more users and to respond faster (this is the hard one).
Usually done by adding more and/or faster processors.
Components should not need to be changed when scale of a system increases.
Design components to be scalable
Fault Tolerance
Distributed systems must maintain availability even at low levels of hardware/software/network
reliability.
Fault tolerance is achieved by recovery and redundancy.
Transparency
Distributed systems should be perceived by users and application programmers as a whole rather
than as a collection of cooperating components.
Software layers
The concept of layering is a familiar one and is closely related to abstraction.
In a layered approach, a complex system is partitioned into a number of layers, with a given layer
making use of the services offered by the layer below.
A given layer therefore offers a software abstraction, with higher layers being unaware of
implementation details, or indeed of any other layers beneath them.
A platform for distributed systems and applications consists of the lowest-level hardware and software
layers.
These low-level layers are implemented independently in each computer, bringing the system’s
programming interface up to a level that facilitates communication and coordination between processes.
Intel x86/Windows, Intel x86/Solaris, Intel x86/Mac OS X, Intel x86/Linux and ARM/Symbian are
major examples.
Middleware is software that allows a level of programming beyond processes and message passing.
System Architecture:
Client-server model
The system is structured as a set of processes, called servers, that offer services to the users, called
clients.
The client-server model is usually based on a simple request/reply protocol, implemented with
send/receive primitives or using remote procedure calls (RPC) or remote method invocation (RMI):
The client sends a request (invocation) message to the server asking for some service;
The server does the work and returns a result (e.g. the data requested) or an error code if the work could
not be performed.
A server can itself request services from other servers; thus, in this new relation, the server itself acts
like a client.
Peer-to-peer
All processes (objects) play similar role.
Processes (objects) interact without particular distinction between clients and servers.
The pattern of communication depends on the particular application.
A large number of data objects are shared; any individual computer holds only a small part of the
application database.
Processing and communication loads for access to objects are distributed across many computers and
access links.
This is the most general and flexible model.
Problems with peer-to-peer: high complexity, due to the need to cleverly place individual objects, to retrieve them, and to maintain a potentially large number of replicas.
REMOTE INVOCATION
The remote procedure call (RPC) approach extends the common programming abstraction of the
procedure call to distributed environments, allowing a calling process to call a procedure in a remote
node as if it is local.
Remote Method Invocation (RMI) is similar to RPC but for distributed objects, with added benefits in
terms of using object-oriented programming concepts in distributed systems and also extending the
concept of an object reference to the global distributed environments, and allowing the use of object
references as parameters in remote invocations
REQUEST-REPLY PROTOCOLS
Typical client-server interactions – request-reply communication is synchronous because the client
process blocks until the reply arrives.
Asynchronous request-reply communication – an alternative that may be useful in situations where
clients can afford to retrieve replies later.
doOperation: used by clients to invoke a remote operation; its arguments specify the server, the operation, and the operation's arguments (marshalled into the request message); it returns the reply as a byte array (which is then unmarshalled).
getRequest: used by a server process to acquire a service request.
sendReply: used by the server to send the reply message back to the client.
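As a rough illustration, the client side of such a protocol can be sketched in Java over UDP as follows (the timeout and buffer size are assumptions made for this example; the server side mirrors it, using receive for getRequest and send for sendReply):

    import java.net.*;
    import java.util.Arrays;

    // Sketch of the client side of a request-reply exchange over UDP.
    public class RequestReplyClient {
        public byte[] doOperation(InetAddress server, int port, byte[] request)
                throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(2000); // wait at most 2 s for the reply
                socket.send(new DatagramPacket(request, request.length, server, port));
                byte[] buffer = new byte[8192];
                DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                socket.receive(reply); // blocks until the reply arrives or timeout
                return Arrays.copyOf(buffer, reply.getLength());
            }
        }
    }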
Message identifiers
Any scheme that involves the management of messages to provide additional properties such as reliable
message delivery or request-reply communication requires that each message have a unique message
identifier by which it may be referenced.
A message identifier consists of two parts:
requestID – increasing sequence of integers by the sender
server process identifier – e.g. internet address and port
Timeouts
There are various options as to what doOperation can do after a timeout.
The simplest option is to return immediately from doOperation with an indication to the client that the
doOperation has failed.
To compensate for the possibility of lost messages, doOperation sends the request message repeatedly until either it gets a reply or it can reasonably assume that the server has failed.
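A retry wrapper around the doOperation sketch above might look like this (maxRetries and the failure exception are illustrative choices, not part of any standard):

    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    // Sketch: retransmit the request on each timeout, up to maxRetries
    // times, before concluding that the server is unavailable.
    public class RetryingClient extends RequestReplyClient {
        public byte[] doOperationWithRetry(InetAddress server, int port,
                                           byte[] request, int maxRetries)
                throws Exception {
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    return doOperation(server, port, request);
                } catch (SocketTimeoutException e) {
                    // request or reply may have been lost: try again
                }
            }
            throw new Exception("no reply after " + (maxRetries + 1) + " attempts");
        }
    }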
History
For servers that require retransmission of replies without re-execution of operations, a history may be
used.
The term ‘history’ is used to refer to a structure that contains a record of (reply) messages that have
been transmitted.
A problem associated with the use of a history is its memory cost. A history will become very large
unless the server can tell when the messages will no longer be needed for retransmission.
HTTP: later versions
Persistent connections – connections that remain open over a series of request-reply exchanges.
A client may receive a message from the server saying that the connection is closed while it is in the middle of sending requests; in that case the browser resends the requests without user involvement.
Requests and replies are marshalled into messages as ASCII text strings, but resources can be
represented as byte sequences and may be compressed.
Multipurpose Internet Mail Extensions (MIME) is a standard for sending multipart data containing, for example, text, images, and sound in email messages.
HTTP methods
GET: Requests the resource whose URL is given as its argument.
HEAD: identical to GET, but does not return any data; instead it returns all the information about the data
POST: data supplied in the body of the request, action may change data on the server
PUT: Requests that the data supplied in the request is stored with the given URL as its identifier.
DELETE: deletes the resource identified by the given URL
OPTIONS: server supplies the client with a list of methods it allows to be applied to the given URL.
TRACE: The server sends back the request message. Used for diagnostic purposes
Message contents
The Request message specifies the name of a method, the URL of a resource, the protocol version, some
headers and an optional message body.
A Reply message specifies the protocol version, a status code and ‘reason’, some headers and an
optional message body
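For example, a minimal GET request and its reply might look like this (the host name and sizes are illustrative):

    GET /index.html HTTP/1.1
    Host: www.example.org

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 1270

    (message body: the HTML of the requested page)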
RPC call semantics
Implementations of doOperation provide different delivery guarantees based on three fault-tolerance measures:
Retry request message
Duplicate filtering
Retransmission of results
Maybe semantics
With maybe semantics, the remote procedure call may be executed once or not at all.
When no fault-tolerance measures are applied, it can suffer from:
Omission failures
Crash failures
At-least-once semantics
It can be achieved by retransmission of request messages.
Types of failures
Crash failures – when the server containing the remote procedure fails
Arbitrary failures – in cases when the request message is retransmitted, the remote server may
receive it and execute the procedure more than once, possibly causing wrong values stored or
returned.
At-most-once semantics:
At-most-once semantics can be achieved by using all of the fault-tolerance measures.
The caller receives either a result or an exception
Transparency
The originators of RPC aimed to make remote procedure calls as much like local procedure calls as possible, with no distinction in syntax between a local and a remote procedure call.
The current consensus is that remote calls should be made transparent in the sense that the syntax of a
remote call is the same as that of a local invocation, but that the difference between local and remote
calls should be expressed in their interfaces.
Implementation of RPC
The software components required to implement RPC are:
Stub procedure behaves like a local procedure to the client, but instead of executing the call, it marshals
the procedure identifier and the arguments into a request message, which it sends via its communication
module to the server
The dispatcher selects one of the server stub procedures according to the procedure identifier in the
request message.
The server stub procedure then unmarshals the arguments in the request message, calls the
corresponding service procedure and marshals the return values for the reply message.
RPC generally implemented over request-reply protocol
RPC may be implemented with one of several choices of invocation semantics; at-least-once or at-most-once is generally chosen.
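As an illustration, a hypothetical client stub in Java might look like the following; the payroll service, its procedure identifier, and the wire format (an int procedure id followed by the argument) are invented for this sketch:

    import java.io.*;
    import java.net.Socket;

    // To the caller, getSalary looks like a local call, but the stub
    // marshals a procedure identifier and the argument into a request
    // message and unmarshals the result from the reply.
    public class PayrollStub {
        private static final int PROC_GET_SALARY = 1;
        private final String host;
        private final int port;

        public PayrollStub(String host, int port) { this.host = host; this.port = port; }

        public double getSalary(int employeeId) throws IOException {
            try (Socket socket = new Socket(host, port);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {
                out.writeInt(PROC_GET_SALARY); // marshal procedure identifier
                out.writeInt(employeeId);      // marshal argument
                out.flush();
                return in.readDouble();        // unmarshal result from reply
            }
        }
    }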
Object model
Object oriented languages allow programmers to define objects whose instance variables can be
accessed directly.
But for use in a distributed object system, an object’s data should be accessible only via its methods.
Object references: to invoke a method object’s reference and method name are given
Interfaces: definition of the signatures of a set of methods without their implementation
Actions: initiated by an object invoking a method in another object
Exceptions: a block of code may be defined to throw an exception; another block catches the exception
A language such as Java, that can detect automatically when an object is no longer accessible recovers
the space and makes it available for allocation to other objects. This process is called garbage collection.
Distributed objects
Distributed object systems may adopt the client-server architecture
An RMI is usually carried out in a client-server fashion:
The client sends the RMI request in a message to the server
The server executes the invoked method of the object
The server returns the result to the client in another message
Two fundamental concepts in distributed system for remote invocation are Remote Object Reference
and Remote Interface
Remote object reference: an identifier that can be used throughout a distributed system to refer to a particular unique remote object. It may be passed as an argument or result of a remote method invocation.
Remote interface: specifies which of the object's methods can be invoked remotely.
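Java RMI makes these two concepts concrete: a remote interface extends java.rmi.Remote, and a client obtains a remote object reference through which it invokes methods. A minimal sketch (the Hello service name and method are made up for the example):

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // The remote interface: only the methods declared here can be
    // invoked remotely.
    interface Hello extends Remote {
        String sayHello(String name) throws RemoteException;
    }

    // Client: obtains a remote object reference from the RMI registry
    // and invokes a method through it as if it were local.
    public class HelloClient {
        public static void main(String[] args) throws Exception {
            Hello hello = (Hello) Naming.lookup("rmi://localhost/HelloService");
            System.out.println(hello.sayHello("world"));
        }
    }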
Actions
An action initiated by a method invocation may result in further invocations on methods in other objects located in different processes or computers.
Remote invocations may also lead to the instantiation of new objects, e.g., objects M and N in the following figure.
Garbage Collection
Distributed garbage collection is generally achieved by cooperation between the existing local garbage collector and an added module that carries out a form of distributed garbage collection.
Exceptions
RMI should be able to raise exceptions such as timeouts that are due to distribution as well as those
raised during the execution of the method invoked
Implementation of RMI
Communication module
Responsible for transferring request and reply messages between the client and server; it uses only three fields of the messages: message type, requestId, and remote reference.
Communication modules are together responsible for providing a specified invocation semantics.
Servants
A servant is an instance of a class that provides the body of a remote object.
It is the servant that eventually handles the remote requests passed on by the corresponding skeleton.
Servants live within a server process. They are created when remote objects are instantiated and remain
in use until they are no longer needed, finally being garbage collected or deleted.
Dispatcher
A server has one dispatcher and one skeleton for each class representing a remote object.
The dispatcher receives request messages from the communication module. It uses the operationId to
select the appropriate method in the skeleton, passing on the request message.
Skeleton:
A skeleton method unmarshals the arguments in the request message and invokes the corresponding
method in the servant.
It waits for the invocation to complete and then marshals the result, together with any exceptions, in a
reply message to the sending proxy’s method.
Factory methods
The remote object interfaces cannot include constructors.
Servants are created either in the initialization section or in methods in a remote interface designed for
that purpose.
The term factory method is sometimes used to refer to a method that creates servants, and a factory
object is an object with factory methods.
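A small sketch in Java RMI style (the Bank and Account interfaces are invented for this example):

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Since remote interfaces cannot include constructors, newAccount is
    // a factory method: it creates a servant on the server and returns a
    // remote reference to it.
    interface Bank extends Remote {
        Account newAccount(String owner) throws RemoteException; // factory method
    }

    interface Account extends Remote {
        void deposit(double amount) throws RemoteException;
    }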
Binder
A binder in a distributed system is a separate service that maintains a table containing mappings from
textual names to remote object references.
Server threads
To avoid the execution of one remote invocation delaying the execution of another, servers generally
allocate a separate thread for the execution of each remote invocation.
Object location
A location service helps clients to locate remote objects from their remote object references.
GROUP COMMUNICATION
Group communication offers a service whereby a message is sent to a group and then this message is
delivered to all members of the group.
In this action, the sender is not aware of the identities of the receivers
Key areas of application:
Reliable dissemination of information to potentially large numbers of clients
Support for collaborative applications
Support for a range of fault-tolerance strategies
Support for system monitoring and management
In overlapping groups, entities (processes or objects) may be members of multiple groups; non-overlapping groups imply that membership does not overlap.
ORDERED MULTICAST
Reliability in one-to-one communication has two properties: integrity (the message received is the same
as the one sent, and no messages are delivered twice) and validity (any outgoing message is eventually
delivered).
As well as reliability guarantees, group communication demands extra guarantees in terms of the relative
ordering of messages delivered to multiple destinations.
Ordering is not guaranteed by underlying interprocess communication primitives.
Group communication services offer ordered multicast, with the option of one or more of the following
properties:
FIFO ordering: first-in-first-out (FIFO) (or source ordering) – if a sender sends one message before another, they will be delivered in this order at all group processes (see the sketch after this list)
Causal ordering: – if a message happens before another message in the distributed system, this so-called causal relationship will be preserved in the delivery of the associated messages at all processes
Total ordering: – if a message is delivered before another message at one process, the same order
will be preserved at all processes
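A minimal sketch of FIFO-ordered delivery (the types, and the assumption that sequence numbers start at 0, are made up for the example): each sender numbers its messages, and the receiver holds back any message that arrives early, delivering each sender's messages strictly in sequence order.

    import java.util.*;

    public class FifoReceiver {
        private final Map<String, Integer> nextExpected = new HashMap<>();
        private final Map<String, Map<Integer, String>> holdBack = new HashMap<>();

        // Called when a multicast message (sender, seqNo, payload) arrives.
        public void onReceive(String sender, int seqNo, String payload) {
            holdBack.computeIfAbsent(sender, s -> new HashMap<>()).put(seqNo, payload);
            int expected = nextExpected.getOrDefault(sender, 0);
            Map<Integer, String> pending = holdBack.get(sender);
            // Deliver all consecutive messages now available from this sender.
            while (pending.containsKey(expected)) {
                deliver(sender, pending.remove(expected));
                expected++;
            }
            nextExpected.put(sender, expected);
        }

        private void deliver(String sender, String payload) {
            System.out.println("delivered from " + sender + ": " + payload);
        }
    }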
TIME ORDERING
Time is an important and interesting issue in distributed systems, for several reasons.
First, time is a quantity we often want to measure accurately.
Second, algorithms that depend upon clock synchronization have been developed for several problems in distributed systems.
These include maintaining the consistency of distributed data, checking the authenticity of a request sent
to a server and eliminating the processing of duplicate updates.
A process is a sequence of totally ordered events, i.e., for any event a and b in a process, either a comes
before b or b comes before a.
A system is distributed if the time it takes to send a message from one process to another is significant
compared to the time interval between events in a single process
Two approaches for building clocks: physical and logical clocks.
Physical Clocks:
Each machine has its own local clock.
Clock synchronization algorithms run periodically to keep them synchronized with each other within
some bounds.
Useful for giving a consistent view of “current time” across all nodes within some bounds, but cannot always order events.
Logical Clocks:
Use the notion of causality to order events
Useful for ordering events, but not for giving a consistent view of “current time” across all nodes
Internal Synchronization:
Requires the clocks of the nodes to be synchronized to within a pre-specified bound
However, the clock times may not be synchronized to any external time reference, and can vary
arbitrarily from any such reference
External Synchronization
Requires the clocks to be synchronized to within a prespecified bound of an external reference clock
Cristian’s Algorithm
One node acts as the time server
All other nodes send a message periodically to the time server asking for the current time
Time server replies with its time to the client node
Client node sets its clock to the reply
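A minimal sketch of Cristian's algorithm, where requestTimeFromServer stands in for the real message exchange with the time server: the client estimates the one-way delay as half the measured round-trip time and sets its clock to the server's time plus that estimate.

    // Sketch of a Cristian-style client.
    public class CristianClient {
        public long synchronizedTime() {
            long t0 = System.currentTimeMillis();      // request sent
            long serverTime = requestTimeFromServer(); // server's clock value
            long t1 = System.currentTimeMillis();      // reply received
            long roundTrip = t1 - t0;
            return serverTime + roundTrip / 2;         // compensate for transit delay
        }

        private long requestTimeFromServer() {
            // Placeholder: in a real system this sends a message to the
            // time server and returns the time contained in its reply.
            return System.currentTimeMillis();
        }
    }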
Berkeley Algorithm
Centralized as in Cristian’s, but the time server is active
Time server asks for time of other nodes at periodic intervals
Other nodes reply with their time
Time server averages the times and sends the adjustments needed to each machine
Adjustments may be different for different machines
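The averaging step can be sketched as follows (a simplified version that ignores message transit delays):

    // The time server averages all collected clock values (including its
    // own) and computes the adjustment each machine must apply.
    public class Berkeley {
        public static long[] adjustments(long[] clockValues) {
            long sum = 0;
            for (long t : clockValues) sum += t;
            long average = sum / clockValues.length;
            long[] adjust = new long[clockValues.length];
            for (int i = 0; i < clockValues.length; i++) {
                adjust[i] = average - clockValues[i]; // may differ per machine
            }
            return adjust;
        }
    }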
UNIT – II
CLOUD COMPUTING BASICS
DEFINE CLOUD COMPUTING (2 MARKS)
EXPLAIN ABOUT CLOUD COMPUTING. (8 MARKS)
The National Institute of Standards and Technology (NIST) defines cloud computing as a "pay-per-use
model for enabling available, convenient and on-demand network access to a shared pool of configurable
computing resources that can be rapidly provisioned and released with minimal management effort or
service provider interaction."
Cloud computing means on demand delivery of IT resources via the internet with pay as you go pricing.
It provides IT infrastructure at low cost.
The term cloud came from a network diagram used by network engineers to represent the location of various network devices and their interconnections. The shape of this diagram was like a cloud.
Why Cloud Computing?
Large and small businesses today thrive on their data, and they spend a huge amount of money to maintain this data.
It requires strong IT support and a storage hub. Not all businesses can afford the high cost of in-house IT infrastructure and backup support services. For them, cloud computing is a cheaper solution.
Cloud computing decreases the hardware and software demand from the user's side.
The only thing the user must be able to run is the cloud computing system's interface software, which can be as simple as a web browser; the cloud network takes care of the rest.
The rapid growth of flash memory and solid-state drives (SSDs) also impacts future systems.
Eventually, power consumption, cooling and packaging will limit large system development.
High-bandwidth networking increases the capability of building massively distributed systems. Most data centers are using Gigabit Ethernet in their server clusters.
Clouds are designed to be per-usage metered and billed, elastic, customizable, and self-service.
Self-Service
Consumers of cloud computing services expect on-demand, nearly instant access to resources.
To support this expectation, clouds must allow self-service access so that customers can request,
customize, pay, and use services without intervention of human operators.
Elasticity
Cloud computing gives the illusion of infinite computing resources available on demand.
Therefore users expect clouds to rapidly provide resources in any quantity at any time.
In particular, it is expected that the additional resources can be:
(a) Provisioned, possibly automatically, when an application load increases
(b) Released when load decreases (scale up and down).
Customization
In a multi-tenant cloud a great disparity between user needs is often the case. Thus, resources rented from
the cloud must be highly customizable.
In the case of infrastructure services, customization means allowing users to deploy specialized virtual
appliances and to be given privileged (root) access to the virtual servers.
Other service classes (PaaS and SaaS) offer less flexibility and are not suitable for general-purpose computing, but are still expected to provide a certain level of customization.
ELASTICITY IN CLOUD
Elasticity is defined as the ability of a system to add and remove resources to adapt to the load variation
in real time.
Elasticity is a dynamic property for cloud computing.
Elasticity is the degree to which a system is able to adapt to workload changes by provisioning and
deprovisioning resources in an autonomic manner, such that at each point in time the available resources
match the current demand as closely as possible.
Elasticity is built on top of scalability
It can be considered an automation of the concept of scalability, aiming to optimize resource use as well and as quickly as possible at any given time.
Another term associated with elasticity is efficiency, which characterizes how well cloud resources are utilized as the system scales up or down.
Efficiency is the amount of resources consumed for processing a given amount of work: the lower this amount, the higher the efficiency of the system.
Elasticity also introduces an important new factor: speed.
Classification
Elasticity solutions can be arranged in different classes based on:
Scope
Policy
Purpose
Method
Scope
Elasticity can be implemented on any of the cloud layers.
Most commonly, elasticity is achieved on the IaaS level, where the resources to be provisioned are virtual
machine instances.
On the PaaS level, elasticity consists of scaling containers or databases, for instance.
Finally, both PaaS and IaaS elasticity can be used to implement elastic applications, be it for private use
or in order to be provided as a SaaS
The elasticity actions can be applied either at the infrastructure or application/platform level.
Google App Engine and Azure elastic pool are examples of elastic Platform as a Service (PaaS).
Application Map: The elasticity controller must have a complete map of the application components and
instances.
Code embedded: The elasticity controller is embedded in the application source code.
Policy
Elastic solutions can be either manual or automatic.
A manual elastic solution provides users with tools to monitor their systems and add or remove resources, but leaves the scaling decision to them.
Automatic mode:
All the actions are done automatically, and this could be classified into reactive and proactive modes.
Reactive mode:
The elasticity actions are triggered based on certain thresholds or rules; the system reacts to the load and triggers actions to adapt to changes accordingly.
An elastic solution is reactive when it scales a posteriori, based on a monitored change in the system.
These are generally implemented by a set of Event-Condition-Action rules.
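Such a rule can be sketched as a simple threshold check; the thresholds, the metric, and the scaling actions below are illustrative assumptions, not any particular provider's API:

    // Event-Condition-Action sketch: on each monitoring event, compare
    // average CPU utilization against thresholds and scale out or in.
    public class ReactiveScaler {
        private static final double SCALE_UP_CPU   = 0.80; // 80% average CPU
        private static final double SCALE_DOWN_CPU = 0.20; // 20% average CPU

        public void evaluate(double avgCpuUtilization, int instanceCount) {
            if (avgCpuUtilization > SCALE_UP_CPU) {
                provisionInstances(1);                        // scale out
            } else if (avgCpuUtilization < SCALE_DOWN_CPU && instanceCount > 1) {
                releaseInstances(1);                          // scale in
            }
        }

        private void provisionInstances(int n) { /* call the cloud API */ }
        private void releaseInstances(int n)   { /* call the cloud API */ }
    }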
Proactive mode:
This approach implements forecasting techniques, anticipates the future needs and triggers actions
based on this anticipation.
A predictive or proactive elasticity solution uses its knowledge of either recent history or load
patterns inferred from longer periods of time in order to predict the upcoming load of the system and
scale according to it.
Purpose
An elastic solution can have many purposes.
The first one to come to mind is naturally performance, in which case the focus should be put on their
speed.
Another purpose for elasticity can also be energy efficiency, where using the minimum amount of
resources is the dominating factor.
Other solutions intend to reduce the cost by multiplexing either resource providers or elasticity methods
Elasticity has different purposes such as improving performance, increasing resource capacity, saving
energy, reducing cost and ensuring availability.
Method
Vertical elasticity, changes the amount of resources linked to existing instances on-the-fly.
This can be done in two manners.
The first method consists of explicitly redimensioning a virtual machine instance, i.e., changing the quota of physical resources allocated to it.
This is however poorly supported by common operating systems as they fail to take into account changes
in CPU or memory without rebooting, thus resulting in service interruption.
The second vertical scaling method involves VM migration: moving a virtual machine instance to another
physical machine with a different overall load changes its available resources
Horizontal scaling is the process of adding/removing instances, which may be located at different
locations.
Load balancers are used to distribute the load among the different instances.
Vertical scaling is the process of modifying resources (CPU, memory, storage or both) size for an
instance at run time.
It gives more flexibility for the cloud systems to cope with the varying workloads
ON-DEMAND PROVISIONING
Resource Provisioning means the selection, deployment, and run-time management of software and
hardware resources for ensuring guaranteed performance for applications.
Resource provisioning is an important and challenging problem in large-scale distributed systems such as cloud computing environments.
There are many resource provisioning techniques, both static and dynamic, each one having its own advantages and also some challenges.
The resource provisioning techniques used must meet Quality of Service (QoS) parameters like availability, throughput, response time, security, and reliability, thereby avoiding Service Level Agreement (SLA) violations.
Over-provisioning and under-provisioning of resources must be avoided.
Another important constraint is power consumption.
The ultimate goal of the cloud user is to minimize cost by renting the resources and from the cloud
service provider’s perspective to maximize profit by efficiently allocating the resources.
In order to achieve these goals, the cloud user has to request the cloud service provider to make a provision for the resources either statically or dynamically.
Static Provisioning :
For applications that have predictable and generally unchanging demands/workloads, it is possible to use
“static provisioning" effectively.
With advance provisioning, the customer contracts with the provider for services.
The provider prepares the appropriate resources in advance of start of service.
The customer is charged a flat fee or is billed on a monthly basis.
Dynamic Provisioning:
In cases where demand by applications may change or vary, “dynamic provisioning" techniques have
been suggested whereby VMs may be migrated on-the-fly to new compute nodes within the cloud.
The provider allocates more resources as they are needed and removes them when they are not.
The customer is billed on a pay-per-use basis.
When dynamic provisioning is used to create a hybrid cloud, it is sometimes referred to as cloud bursting.
APPLICATIONS
Cloud service providers provide various applications in the field of art, business, data storage and backup
services, education, entertainment, management, social networking, etc.
The most widely used cloud computing applications are given below -
Art Applications:
Cloud computing offers various art applications for quickly and easily designing attractive cards, booklets, and images. Some of the most commonly used cloud art applications are given below:
Moo
Moo is one of the best cloud art applications. It is used for designing and printing business cards,
postcards, and mini cards.
Vistaprint
Vistaprint allows us to easily design various printed marketing products such as business cards,
postcards, booklets, and wedding invitation cards.
Business Applications
Business applications are based on cloud service providers.
Today, every organization requires cloud business applications to grow its business. The cloud also ensures that business applications are available to users 24*7.
There are the following business applications of cloud computing -
MailChimp
MailChimp is an email publishing platform which provides various options to design,
send, and save templates for emails.
Paypal
Paypal offers the simplest and easiest online payment mode using a secure internet account. Paypal
accepts the payment through debit cards, credit cards, and also from Paypal account holders.
Mozy
Mozy provides powerful online backup solutions for our personal and business data. It automatically schedules a backup each day at a specific time.
Google G Suite
Google G Suite is one of the best cloud storage and backup applications. It includes Google Calendar,
Docs, Forms, Google+, Hangouts, as well as cloud storage and tools for managing cloud apps.
The most popular app in the Google G Suite is Gmail. Gmail offers free email services to users.
Education Applications
Cloud computing in the education sector has become very popular. It offers various online distance learning platforms and student information portals to students.
The advantage of using the cloud in the field of education is that it offers strong virtual classroom environments, ease of accessibility, secure data storage, scalability, greater reach for students, and minimal hardware requirements for the applications.
There are the following education applications offered by the cloud -
Google Apps for Education
Google Apps for Education is the most widely used platform for free web-based email, calendar,
documents, and collaborative study.
Entertainment Applications
Entertainment industries use a multi-cloud strategy to interact with the target audience. Cloud
computing offers various entertainment applications such as online games and video conferencing.
Online games
Today, cloud gaming has become one of the most important entertainment media. It offers various online games that run remotely from the cloud. The best cloud gaming services are Shadow, GeForce Now, Vortex, Project xCloud, and PlayStation Now.
Management Applications
Cloud computing offers various cloud management tools which help admins to manage all types of
cloud activities, such as resource deployment, data integration, and disaster recovery.
These management tools also provide administrative control over the platforms, applications, and
infrastructure.
Some important management applications are -
Toggl
Toggl helps users to track the time allocated to a particular project.
Evernote
Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one convenient place. It is available in both a free and a paid version.
It supports platforms like Windows, macOS, Android, iOS, browsers, and Unix.
Social Applications
Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook, Twitter, LinkedIn, etc.
There are the following cloud based social applications -
Facebook
Facebook is a social networking website which allows active users to share files, photos, videos, status updates, and more with their friends, relatives, and business partners using the cloud storage system.
On Facebook, we will always get notifications when our friends like and comment on the posts.
Twitter
Twitter is a social networking and microblogging site. It allows users to follow high-profile celebrities, friends, and relatives, and to receive news. Users send and receive short posts called tweets.
BENEFITS
There are some clear business benefits to building applications in the cloud. A few of these are listed
here:
Almost Zero Upfront Infrastructure Investment.
Just-in-Time Infrastructure.
More Efficient Resource Utilization.
Usage-Based Costing
Reduced Time to Market
Some of the technical benefits of cloud computing include:
Automation
Auto-scaling
Proactive Scaling
More Efficient Development Life Cycle
Improved Testability
Disaster Recovery and Business Continuity
“Overflow” the Traffic to the Cloud
Three components play a major role in the operation of cloud computing; their responsibilities can be described as follows:
Clients
Clients in cloud computing are, in general, the same as clients in Local Area Networks (LANs).
They are typically the desktop computers on users' desks, but may also be laptops, mobiles, or tablets to enhance mobility.
Clients are responsible for the interaction that drives the management of data on cloud servers.
Datacentre
It is an array of servers that houses the subscribed application.
Progress in the IT industry has brought the concept of virtualizing servers, where software may be installed through the utilization of various instances of virtual servers.
This approach streamlines the process of managing dozens of virtual servers on multiple physical servers.
Distributed Servers
These are servers that are housed at other locations, so the physical servers might not all be housed in the same place. Even though a distributed server and the physical server appear to be in different locations, they perform as if they are close to each other.
Application
The other main component is the cloud application: cloud computing in the form of software architecture. Cloud applications work as a service that operates over both the hardware and software architecture.
Further, cloud computing has many other components:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
Distributed Computing
Distributed computing is different from parallel computing, even though the principle is the same.
Distributed computing is a field that studies distributed systems. Distributed systems are systems that
have multiple computers located in different locations.
These computers in a distributed system work on the same program. The program is divided into different
tasks and allocated to different computers.
The computers communicate with the help of message passing. Upon completion of computing, the result
is collated and presented to the user.
Resource Sharing
In systems implementing parallel computing, all the processors share the same memory.
They also share the same communication medium and network. The processors communicate with each
other with the help of shared memory.
Distributed systems, on the other hand, have their own memory and processors.
Synchronization
In parallel systems, all the processes share the same master clock for synchronization. Since all the
processors are hosted on the same physical system, they do not need any synchronization algorithms.
In distributed systems, the individual processing systems do not have access to any central clock.
Hence, they need to implement synchronization algorithms.
CLOUD SERVICES
The cloud can provide exactly the same technologies as “traditional” IT infrastructure. These services are accessible through a cloud management interface layer, which provides access over a REST/SOAP API or a management console website.
As an example, let’s consider Amazon Web Services (AWS).
Amazon Elastic Compute Cloud (EC2) is a key web service that provides facilities to create and manage virtual machine instances with operating systems running inside them.
Amazon Relational Database Service (RDS) provides MySQL and Oracle database services in the cloud.
Amazon S3 is a redundant and fast cloud storage service that provides public access to files over the Internet.
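As a small illustration using the AWS SDK for Java (v1), the sketch below stores and reads back an object in S3; it assumes credentials and a default region are configured in the environment, and the bucket and key names are made up:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class S3Example {
        public static void main(String[] args) {
            // Build a client from the default credential/region chain.
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            // Store a small text object, then read it back.
            s3.putObject("my-example-bucket", "hello.txt", "Hello from the cloud!");
            System.out.println(s3.getObjectAsString("my-example-bucket", "hello.txt"));
        }
    }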
The three major cloud computing offerings are
Software as a Service(SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
IaaS
IaaS (Infrastructure as a Service) is one of the fundamental service models of cloud computing.
It provides access to computing resources in a virtualized environment, “the cloud”, over the Internet.
It provides computing infrastructure like virtual server space, network connections, bandwidth, load balancers, and IP addresses.
The pool of hardware resources is extracted from a multitude of servers and networks, usually distributed across numerous data centers. This provides redundancy and reliability to IaaS.
IaaS (Infrastructure as a Service) is a complete package for computing. For small-scale businesses looking to cut costs on IT infrastructure, IaaS is one of the solutions.
A lot of money is spent annually on maintenance, which a business owner can save for other expenses by using IaaS.
Benefits:
Full control over computing resources through administrative access to VMs
Flexible and efficient renting of computer hardware.
Portability, interoperability with legacy applications.
Issues:
Compatibility with legacy security vulnerabilities
Virtual Machine Sprawl
Robustness of VM Level isolation
PaaS
Platform as a Service (PaaS) provides a platform and environment that allow developers to build applications and services.
This service is hosted in the cloud and accessed by the users via internet.
To understand it in simple terms, compare it with painting a picture: you are provided with paint colors, paint brushes, and paper, and you just have to draw a beautiful picture using those tools.
PaaS services are constantly updated, with new features added. Software developers, web developers, and businesses can benefit from PaaS.
It provides platform to support application development.
It includes software support and management services, storage, networking, deploying, testing,
collaborating, hosting and maintaining applications.
Benefits:
Lower administrative overhead
Lower total cost of Ownership
Scalable Solutions
More current system software
Issues:
Lack of Portability between PaaS clouds
Event-based processor scheduling.
SaaS
SaaS (Software as a Service) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network (the internet).
SaaS is becoming an increasingly prevalent delivery model as the underlying technologies that support service-oriented architecture (SOA) and web services mature.
Through internet this service is available to user anywhere in the world.
Traditionally, software applications needed to be purchased upfront and then installed on your computer. SaaS users, on the other hand, subscribe to the software instead of purchasing it, usually on a monthly basis, via the internet.
Anyone who needs access to a particular piece of software can subscribe as a user, whether it is one or two people or thousands of employees in a corporation.
SaaS is compatible with all internet enabled devices.
Benefits:
Modest Software tools
Efficient use of software Licenses
Centralized management and data
Platform responsibilities managed by providers
Multitenant Solutions
Issues:
Browser based risks
Network dependence
Lack of portability between SaaS clouds
EUCALYPTUS
Eucalyptus is an open-source software platform for implementing private and hybrid clouds with AWS-compatible interfaces.
Eucalyptus architecture:
Images - An image is a fixed collection of software modules, system software, application software,
and configuration information that is started from a known baseline
Instances - When an image is put to use, it is called an instance
IP addressing - Eucalyptus instances can have public and private IP addresses. An IP address is
assigned to an instance when the instance is created from an image.
Security - TCP/IP security groups share a common set of firewall rules. This is a mechanism to
firewall off an instance using IP address and port block/allow functionality.
Networking - There are three networking modes. In Managed Mode Eucalyptus manages a local
network of instances, including security groups and IP addresses.
Access Control - A user of Eucalyptus is assigned an identity, and identities can be grouped together
for access control
Components:
The Cloud Controller (CLC):
It is a Java program that offers EC2-compatible interfaces, as well as a web interface to the outside
world.
The VMware Broker overlays existing ESX/ESXi hosts and transforms Eucalyptus Machine Images
(EMIs) to VMware virtual disks.
OPEN NEBULA
OpenNebula is a cloud computing platform for managing heterogeneous distributed data center
infrastructures.
The OpenNebula platform manages a data center's virtual infrastructure to build private, public and
hybrid implementations of infrastructure as a service.
OPENSTACK:
OpenStack is a free and open-source software platform for cloud computing, mostly deployed as
infrastructure-as-a-service, whereby virtual servers and other resources are made available to
customers.
Compute (Nova):
OpenStack Compute (Nova) is a cloud computing fabric controller, which is the main part of
an IaaS system.
Networking (Neutron):
OpenStack Networking (Neutron) is a system for managing networks and IP addresses.
Block storage (Cinder):
OpenStack Block Storage (Cinder) provides persistent block-level storage devices for use with
OpenStack compute instances.
Identity (Keystone):
OpenStack Identity (Keystone) provides a central directory of users mapped to the OpenStack
services they can access.
Image (Glance):
OpenStack Image (Glance) provides discovery, registration, and delivery services for disk and server images. Stored images can be used as templates.
ANEKA
Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing.
It acts as a framework for building customized applications
Aneka is a market-oriented cloud development and management platform with rapid application development and workload distribution capabilities.
CLOUDSIM:
EXPLAIN ABOUT CLOUDSIM IN DETAIL (PART B)
CloudSim is a framework for modeling and simulation of cloud computing infrastructures and
services.
It helps tune the bottlenecks before real-world deployment.
The basic components of CloudSim are:
Datacenter:
Data center is used to model the core services at the system level of a cloud infrastructure.
Host:
This component is used to assign processing capabilities (specified in millions of instructions per second, MIPS, that the processor can perform), memory, and a scheduling policy that allocates processing cores to the virtual machines in the list of virtual machines managed by the host.
Virtual Machines:
This component manages the allocation of virtual machines to different hosts, so that processing cores can be scheduled (by the host) to virtual machines.
This configuration depends on the particular application; the default policy for the allocation of virtual machines is “first-come, first-served”.
Datacenter broker:
The responsibility of a broker is to mediate between users and service providers, depending on the quality-of-service requirements that the user specifies.
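A condensed sketch of how these components fit together in a CloudSim 3.x program (one datacenter with one host, one VM, and one cloudlet; all parameter values are arbitrary examples):

    import java.util.*;
    import org.cloudbus.cloudsim.*;
    import org.cloudbus.cloudsim.core.CloudSim;
    import org.cloudbus.cloudsim.provisioners.*;

    public class CloudSimExample {
        public static void main(String[] args) throws Exception {
            CloudSim.init(1, Calendar.getInstance(), false); // users, calendar, trace

            // One host with a single 1000-MIPS core and 2 GB of RAM.
            List<Pe> peList = new ArrayList<>();
            peList.add(new Pe(0, new PeProvisionerSimple(1000)));
            List<Host> hostList = new ArrayList<>();
            hostList.add(new Host(0, new RamProvisionerSimple(2048),
                    new BwProvisionerSimple(10000), 1000000, peList,
                    new VmSchedulerTimeShared(peList)));
            DatacenterCharacteristics ch = new DatacenterCharacteristics(
                    "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
            new Datacenter("Datacenter_0", ch,
                    new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

            // The broker mediates between the user and the datacenter.
            DatacenterBroker broker = new DatacenterBroker("Broker");
            Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                    "Xen", new CloudletSchedulerTimeShared());
            Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300,
                    new UtilizationModelFull(), new UtilizationModelFull(),
                    new UtilizationModelFull());
            cloudlet.setUserId(broker.getId());
            broker.submitVmList(Collections.singletonList(vm));
            broker.submitCloudletList(Collections.singletonList(cloudlet));

            CloudSim.startSimulation();
            CloudSim.stopSimulation();
            System.out.println("Finished: " + broker.getCloudletReceivedList().size()
                    + " cloudlet(s) completed");
        }
    }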
UNIT – III
CLOUD ARCHITECTURE AND DESIGN
An Internet cloud is envisioned as a public cluster of servers provisioned on demand to perform
collective web services or distributed applications using data-center resources.
Cloud Platform Design Goals
Enabling Technologies for Clouds
A Generic Cloud Architecture
Cloud Platform Design Goals
Scalability
Virtualization
Efficiency
Reliability
Security
Cloud management receives the user request and finds the correct resources.
Cloud calls the provisioning services which invoke the resources in the cloud.
Cloud management software needs to support both physical and virtual machines
Enabling Technologies for Clouds
Cloud users are able to demand more capacity at peak demand, reduce costs, experiment with new
services, and remove unneeded capacity.
Service providers can increase system utilization via multiplexing, virtualization and dynamic resource
provisioning.
Clouds are enabled by the progress in hardware, software and networking technologies
A Generic Cloud Architecture
The Internet cloud is envisioned as a massive cluster of servers.
Servers are provisioned on demand to perform collective web services using data-center resources.
The cloud platform is formed dynamically by provisioning or deprovisioning servers, software, and
database resources.
Servers in the cloud can be physical machines or VMs.
User interfaces are applied to request services. The cloud computing resources are built into the data
centers.
Data centers are typically owned and operated by a third-party provider. Consumers do not need to
know the underlying technologies
In a cloud, software becomes a service.
Cloud demands a high degree of trust of massive amounts of data retrieved from large data centers.
The software infrastructure of a cloud platform must handle all resource management and maintenance
automatically.
Software must detect the status of each node server joining and leaving.
Cloud computing providers such as Google and Microsoft have built a large number of data centers.
Each data center may have thousands of servers.
ARCHITECTURAL DESIGN CHALLENGES
Challenge 1: Service Availability and Data Lock-in Problem
Service Availability
Service Availability in Cloud might be affected because of:
Single Point Failure:
Distributed Denial of Service
Single Point Failure
Depending on a single service provider might result in a single point of failure.
Even if a single service provider has multiple data centers located in different geographic regions, it may have common software infrastructure and accounting systems.
Distributed Denial of Service (DDoS) Attacks
Cyber criminals attack target websites and online services and make those services unavailable to users.
A DDoS attack tries to overwhelm the service with more traffic than the server or network can accommodate, making it unavailable to users.
Data Lock-in
It is a situation in which a customer using the services of one provider cannot move to another service provider because the technologies used by the first provider are incompatible with those of other providers.
This makes the customer dependent on a single vendor for services and unable to use the services of another vendor.
Challenge 2: Data Privacy and Security Concerns
Cloud services are prone to attacks because they are accessed over the internet.
Security is provided by:
Storing encrypted data in the cloud (a minimal sketch of client-side encryption follows this list).
Firewalls and filters.
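A minimal sketch of the first measure, encrypting data on the client before it leaves for the cloud, using the third-party Python `cryptography` package; the upload call is a hypothetical placeholder for any provider's API.

```python
# Client-side encryption before cloud upload (pip install cryptography).
# upload_to_cloud() is a hypothetical placeholder, not a real provider API.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key OUTSIDE the cloud
fernet = Fernet(key)

plaintext = b"sensitive business record"
ciphertext = fernet.encrypt(plaintext)   # this is what actually leaves your machine

# upload_to_cloud("bucket/records.bin", ciphertext)   # provider-specific call

assert fernet.decrypt(ciphertext) == plaintext  # only the key holder can read it
```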
Cloud environment attacks include
1. Guest Hopping: Virtual machine hyper jumping (VM jumping) is an attack method that exploits a hypervisor's weakness, allowing one virtual machine (VM) to be accessed from another.
2. Hijacking: Hijacking is a type of network security attack in which the attacker takes control of a communication session.
3. VM Rootkit: A collection of malicious (harmful) computer software designed to enable access to a computer that is not otherwise allowed.
Interoperability
Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and extensible format
for the packaging and distribution of VMs.
OVF defines a transport mechanism for VMs that can be applied to different virtualization platforms.
Standardization
Cloud standardization should give a virtual machine the ability to run on any virtualization platform.
"pay-per-use model for enabling available, convenient and on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction."
Architecture
Architecture consists of 3 tiers
Cloud Deployment Model
Cloud Service Model
Essential Characteristics of Cloud Computing
Deployment Models
Private cloud
Community cloud
Public cloud
Hybrid cloud
Service Models
Software as a Service (Saas)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
Each actor is an entity (a person or an organization) that participates in a transaction or process and/or performs tasks in cloud computing.
Cloud Carrier:
The mediator that provides connectivity and transport of cloud services between cloud service providers and cloud consumers.
It allows access to the services of the cloud through Internet networks, telecommunication, and other
access devices. Network and telecom carriers or a transport agent can provide distribution.
Cloud Broker:
An organization or a unit that manages the performance, use, and delivery of cloud services by
enhancing specific capability and offers value-added services to cloud consumers.
It combines and integrates various services into one or more new services.
There are major three services offered by a cloud broker:
1. Service Intermediation.
2. Service Aggregation.
3. Service Arbitrage.
Cloud Auditor:
A cloud auditor can assess the security controls in the information system to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements of the system.
There are three major roles of Cloud Auditor which are mentioned below:
1. Security Audit.
2. Privacy Impact Audit.
3. Performance Audit.
Cloud Consumer:
A cloud consumer is the end user who browses or utilizes the services provided by Cloud Service Providers (CSPs) and sets up service contracts with the cloud provider.
The cloud consumer pays per use of the service provisioned.
Cloud consumers use Service-Level Agreements (SLAs) to specify the technical performance requirements to be fulfilled by a cloud provider.
Public cloud
In the public cloud, systems and services are accessible to the general public. For example, Google,
IBM, Microsoft etc.
Public cloud is open to all. Hence, it may be less secure.
This cloud is suitable for information which is not sensitive.
Advantages:
Public cloud is less expensive than the private or hybrid cloud because it shares the same resources among many customers.
It is easy to combine a public cloud with a private cloud, which gives customers a flexible approach.
It is reliable because it provides a large number of resources from various locations; if one resource fails, another is employed.
Private cloud
In the private cloud, systems and services are accessible within an organization.
This cloud is operated only in a particular organization. It is managed internally or by third party.
Advantages:
Private cloud is highly secure because resources come from a distinct pool dedicated to one organization.
Compared to the public cloud, the private cloud gives more control over its resources and hardware because it is accessed only within the boundary of an organization.
Disadvantages:
Private cloud is very difficult to deploy globally, and it can be accessed only locally.
A private cloud costs more than a public cloud.
Hybrid cloud
Hybrid cloud is a mixture of public and private cloud.
In hybrid cloud, critical activities are conducted using Private cloud and the non-critical activities are
conducted using Public cloud.
Advantages:
It is scalable because it gives the features of both public and private cloud.
It gives secure resources because of Private cloud and scalable resources because of Public cloud.
The cost of the Hybrid cloud is less as compared to Private cloud.
Disadvantages:
In hybrid cloud, networking becomes complicated because both Private and Public cloud are available.
Community cloud
Community cloud enables the system and services which are accessible by group of organizations.
It shares the infrastructure between several organizations from a specific community.
It is managed internally and operated by several organizations or by the third party or combination of
them.
Advantages:
In Community cloud, cost is low as compared to Private cloud.
Community cloud gives an infrastructure to share cloud resources and capabilities between several
organizations.
This cloud is more secure than the Public cloud but less secured than the Private cloud.
CLOUD MODELS: IAAS, PAAS AND SAAS
The cloud can provide exactly the same technologies as "traditional" IT infrastructure. These services are accessible through a cloud management interface layer, which provides access over a REST/SOAP API or a management console website.
As an example, let’s consider Amazon Web Services (AWS).
Amazon Elastic Compute Cloud (EC2) is a key web service that provides a facility to create and manage virtual machine instances with operating systems running inside them.
Amazon Relational Database Service (RDS) provides MySQL and Oracle database services in the cloud.
Amazon S3 is a redundant and fast cloud storage service that provides public access to files over the Internet. A short SDK sketch follows.
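A sketch of how these services can be driven programmatically with AWS's official Python SDK, boto3; the bucket name, file names, and AMI ID are placeholders, and configured AWS credentials are assumed.

```python
# Driving AWS from code with boto3 (pip install boto3).
# Bucket, key, and AMI ID below are placeholders.

import boto3

# Amazon S3: store a file in redundant cloud storage
s3 = boto3.client("s3")
s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")

# Amazon EC2: launch a virtual machine instance
ec2 = boto3.resource("ec2")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)
```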
The three major cloud computing offerings are
Software as a Service(SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
IaaS
IaaS (Infrastructure as a Service) is one of the fundamental service models of cloud computing.
It provides access to computing resources in a virtualized environment ("the cloud") over the internet.
It provides computing infrastructure such as virtual server space, network connections, bandwidth, load balancing, and IP addresses.
The pool of hardware resources is extracted from multiple servers and networks, usually distributed across numerous data centers. This gives IaaS redundancy and reliability.
For small scale business looking for cutting cost on IT infrastructure, IaaS is one of the solutions.
A lot of money is spent annually on maintenance that a business owner could save by using IaaS.
Benefits:
Full control over computing resources through administrative access to VMs
Flexible and efficient renting of computer hardware.
Portability, interoperability with legacy application.
Issues:
Compatibility with legacy security vulnerabilities
Virtual Machine Sprawl
Robustness of VM Level isolation
PaaS
Platform as a Service, referred to as PaaS, provides a platform and environment that allow developers to build applications and services.
This service is hosted in the cloud and accessed by users via the internet.
To understand it in simple terms, compare this with painting a picture: you are provided with paint colors, brushes, and paper, and you just have to draw a beautiful picture using those tools.
PaaS services are constantly updated, with new features added. Software developers, web developers, and businesses can all benefit from PaaS.
Benefits:
Lower administrative overhead
Lower total cost of Ownership
Scalable Solutions
More current system software
Issues:
Lack of Portability between PaaS clouds
Event-based processor scheduling.
SaaS
SaaS, or Software as a Service, is a distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network (the internet).
SaaS is becoming an increasingly prevalent delivery model as the underlying technologies that support service-oriented architecture (SOA) and web services mature.
Through the internet, this service is available to users anywhere in the world.
Instead of purchasing the software, the customer subscribes to it, usually on a monthly basis, via the internet.
Anyone who needs access to a particular piece of software can subscribe as a user, whether it is one or two people or thousands of employees in a corporation.
Benefits:
Modest Software tools
Efficient use of software Licenses
Centralized management and data
Platform responsibilities managed by providers
Multitenant Solutions
Issues:
Browser based risks
Network dependence
Lack of portability between SaaS clouds
CLOUD STORAGE PROVIDERS
Introduction
Cloud storage is a flexible and convenient new-age solution to store data. In the past, data was stored on hard drives and external storage devices such as floppy disks, thumb drives, and compact discs.
More recently, cloud storage has become a popular medium of choice for enterprises as well as
individuals.
All the data is stored remotely without taking up physical space in your home or office or exhausting the
megabytes on your computer.
In other words, cloud storage is a service that lets you transfer data over the Internet and store it in an
offsite storage system maintained by a third party.
How data are stored in cloud?
Cloud storage lets you store data on hosted servers. The remote servers are managed and owned by
hosting companies. You can access your data via the Internet.
With so many cloud storage providers that have flooded the market today, the size and maintenance of
cloud storage systems can be quite different based on the provider.
Leading cloud storage providers like Microsoft Azure, Amazon Web Services (AWS), and Google Cloud maintain gigantic data centers that store data from all over the world.
Benefits
Accessibility: Data stored on the cloud can be accessed on-the-go, anytime, and anywhere. All you
need is an internet connection.
Mobility: Cloud storage providers even offer applications that work with various devices such as
mobile phones and tablets.
Synchronization: You have the option to sync all your files across all devices so that you have the
most current available to you all the time, creating a single source of truth.
Collaboration: Cloud storage services come with features that allow multiple people to collaborate on
a single file even if they are spread across various locations across the world.
Cost-Saving: Cloud storage providers generally require you to pay only for the amount of storage you
use, which prevents businesses from over-investing into their storage needs.
Scalable: Cloud storage providers offer various plans that can quickly scale your data storage capacity
to meet the growing needs of your business.
Low Maintenance: The responsibility of upkeeping of the storage lies with the cloud storage
provider.
Space-Saving: Servers and even other forms of physical storage devices such as hard disks and USBs
require space. This situation does not arise with cloud storage.
Reduced Carbon Footprint: Even a small data center requires servers, networks, power, cooling,
space, and ventilation, all of which can contribute significantly to energy consumption and CO2
emissions. Switching to cloud computing, therefore, can drastically reduce your energy consumption
levels.
Security: Cloud storage providers typically keep two to three backup copies of your data on servers located in different places globally.
Types of Cloud Storage
Public Cloud Storage
Suitable for unstructured data, public cloud storage is offered by third-party cloud storage providers
over the open Internet.
They may be available for free or on a paid basis. Users are usually required to pay for only what they
use.
Private Cloud Storage
A private cloud allows organizations to store data in their environment.
The infrastructure is hosted on-premises. It offers many benefits that come with a public cloud service
such as self-service and scalability.
Hybrid Cloud Storage
Hybrid cloud allows data and applications to be shared between a public and a private cloud.
Google Drive:
Features:
It supports most file formats- photos, drawings, videos, recordings, etc.
Up to 15 GB of free storage capacity.
You need a Google account to get started.
Upload files from your phone and tablet too and not just your computer.
Share data and collaborate on them without an email attachment.
pCloud:
Features:
Free storage up to 10 GB
TLS/SSL encryption for data security
Supports file management from the web, desktop, and also the mobile
Multiple file-sharing options.
Saves versions of files for specific amounts of time
Back up your photos from social media (Facebook, Instagram, and Picasa)
Microsoft OneDrive:
Features:
Works with Microsoft Office and Office 365 suite
Built-in Windows file syncing and restoration features
Integrated into the file explorer, automatically jumpstarting online backup.
Dropbox:
Features:
Store files in one central space
Create and share anything with Dropbox Paper: rough drafts, videos, and images, as well as code and sound
Suitable for freelancers, solo workers, teams, and businesses alike
Admin controls to simplify team management tasks
MediaFire:
Features:
Start with 10GB of free space which can be increased through usual activities such as referrals.
Supports files up to 4GB large
Unlimited downloads
Automated photo and video syncing and streaming
OpenDrive:
Features:
Comes with tools for data management, project and workflow, and branding
Available for personal use with features like online storage, online backup, file syncing, online file
sharing, and file hot linking, etc.
Box:
Features:
Work online with others, share folders and co-edit with Microsoft Office 365 or Box Notes.
Large files can be shared straight from the Box.
IDrive:
Features:
Real-time, automatic backups
Entire drive including the OS and settings can be backed up
Backup from unlimited devices from a single account
Backup storage remains unimpacted by sync storage
256-bit AES file encryption
You can only delete files manually or run an Archive Cleanup
Deleted files can entirely be recovered within 30 days.
iCloud:
Features:
It is Apple’s proprietary cloud storage solution
Collaborate with Pages, Numbers, Keynote, and Notes
Pick up every conversation from where it was left off
It works seamlessly if you change your phone
ENABLING TECHNOLOGIES FOR THE INTERNET OF THINGS
“The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital
machines, objects, animals or people that are provided with unique identifiers and the ability to transfer
data over a network without requiring human-to-human or human-to-computer interaction.”
IoT (Internet of Things) enabling technologies are:
1. Wireless Sensor Network
2. Cloud Computing
3. Big Data Analytics
4. Communications Protocols
5. Embedded System
Cloud Computing:
It provides us the means to access applications as utilities over the internet; the "cloud" refers to something present at remote locations.
With cloud computing, users can access resources from anywhere over the internet: databases, web servers, storage, any device, and any software.
Characteristics:
1. Broad network access
2. On demand self-services
3. Rapid scalability
4. Measured service
5. Pay-per-use
Communications Protocols:
Communication protocols allow devices to exchange data over the network. Multiple protocols often
describe different aspects of a single communication.
A group of protocols designed to work together is known as a protocol suite; when implemented in
software they are a protocol stack.
They are used in:
1. Data encoding
2. Addressing schemes (both illustrated in the sketch below)
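As an illustration, the sketch below publishes a JSON-encoded sensor reading over MQTT, a protocol widely used in IoT, using the Eclipse paho-mqtt client (1.x API); the broker address and topic are placeholders.

```python
# Publishing a sensor reading over MQTT with paho-mqtt
# (pip install "paho-mqtt<2" for the 1.x Client API shown here).

import json
import paho.mqtt.client as mqtt

reading = {"device_id": "sensor-42", "temp_c": 21.7}   # data encoding: JSON
topic = "factory/line1/temperature"                     # addressing: topic hierarchy

client = mqtt.Client()
client.connect("broker.example.com", 1883, keepalive=60)  # placeholder broker
client.publish(topic, json.dumps(reading), qos=1)         # at-least-once delivery
client.disconnect()
```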
Embedded Systems:
It is a combination of hardware and software used to perform special tasks.
It includes a microcontroller or microprocessor, memory, networking units, and storage devices.
It collects data and sends it to the internet.
Examples:
1. Digital camera
2. DVD player, music player
3. Industrial robots
4. Wireless Routers etc.
Predictive maintenance
Predictive maintenance consists of detecting the need for a machine to be maintained before a crisis takes place and production has to be stopped urgently.
It is therefore among the main reasons to implement a data acquisition, analysis, and management system.
Pinpoint inventories
The use of Industrial IoT systems allows for the automated monitoring of inventory, certifying
whether plans are followed and issuing an alert in case of deviations.
It is yet another essential Industrial IOT application to maintain a constant and efficient workflow.
Quality control
Another entry among the most important IIoT applications is the ability to monitor the quality of
manufactured products at any stage.
This information is vital when studying the efficiency of the company and applying the necessary
changes in case failures are detected.
IoT in Healthcare
IoT is undoubtedly transforming the healthcare industry by redefining the space of devices and people
interaction in delivering healthcare solutions.
IoT has applications in healthcare that benefit patients, families, physicians, hospitals and insurance
companies.
IoT in Security
IoT security covers both physical device security and network security, and impacts the processes,
technologies, and measures necessary to protect IoT devices and networks.
It spans industrial machines, smart energy grids, building automation systems, entertainment devices,
and more, including devices that often aren’t designed for network security.
IoT device security must protect systems, networks, and data from a broad spectrum of IoT security
attacks, which target four types of vulnerabilities:
1. Communication attacks on the data transmitted between IoT devices and servers.
2. Lifecycle attacks on the IoT device as it changes hands from user to maintenance.
3. Attacks on the device software.
4. Physical attacks, which directly target the chip in the device.
IoT in Retail
Location tracking
The Internet of Things solves one of the biggest issues in retail — lack of delivery reliability.
The technology is capable of increasing operational efficiencies and improving logistic transparency.
The German supermarket chain Feneberg Lebensmittel uses IoT technology to get visibility on its
goods and employees’ movements, both within the warehouse and in transit.
Smart shelves:
As a stocker walks around the shop with a digital shopping list on their smartphone, the cell phone will
vibrate in case a needed product is on the shelf nearby.
For a better visibility, a shelf in need of more merchandise will even light up.
Smart shelves have three common elements — an RFID tag, an RFID reader, and an antenna.
UNIT IV
SERVICE ORIENTED ARCHITECTURE (SOA)
A Service-Oriented Architecture (SOA) is a design pattern for building distributed systems that deliver services to other applications through a protocol. It is only a concept and is not limited to any programming language or platform.
What is Service?
A service is a well-defined, self-contained function that represents a unit of functionality. A service can exchange information with another service.
It is not dependent on the state of another service. It uses a loosely coupled, message-based
communication model to communicate with applications and other services.
Service Connections
The figure given below illustrates the service-oriented architecture. Service consumer sends a service
request to the service provider, and the service provider sends the service response to the service
consumer.
The service connection is understandable to both the service consumer and service provider.
Service-Oriented Terminologies
Services - The services are the logical entities defined by one or more published interfaces.
Service consumer - It can be called a requestor or client that calls a service provider. A service consumer can be another service or an end-user application.
Service provider - It is a software entity that implements a service specification.
Characteristics of SOA
The services have the following characteristics:
They are loosely coupled.
Functional aspects
Transport - It transports service requests from the service consumer to the service provider, and service responses from the provider back to the consumer.
Business Process - It represents a group of services called in a particular sequence, associated with particular rules, to meet a business requirement.
Service Registry - It contains the descriptions of data used by service providers to deliver services to consumers.
Security - It represents the set of protocols required for identification and authorization.
Transaction - It provides the surety of a consistent result. This means that if a group of services is used to complete a business function, either all of them complete or none of them complete.
Management - It defines the set of attributes used to manage the services.
Advantages of SOA
Easy to integrate - In a service-oriented architecture, the integration is a service specification that hides the underlying implementation details.
Reliable - As services are small in size, it is easier to test and debug them.
WEB SERVICES
Web services are XML-centered data exchange systems that use the internet for A2A (application-
to-application) communication and interfacing.
These processes involve programs, messages, documents, and/or objects.
Web Services = XML + transport protocol (such as HTTP)
A key feature of web services is that applications can be written in various languages and are still able to
communicate by exchanging data with one another via a web service between clients and servers.
A client invokes a web service by sending a request via XML, and the service then responds with an XML response.
Web services are also often associated with SOA (Service-Oriented Architecture).
UDDI (Universal Description, Discovery, and Integration) is an XML-based standard for detailing,
publishing, and discovering web services. It’s basically an internet registry for businesses around the
world. The goal is to streamline digital transactions and e-commerce among company systems.
SOAP is an XML-based web service protocol for exchanging data and documents over HTTP or SMTP (Simple Mail Transfer Protocol). It allows independent processes operating on disparate systems to communicate using XML.
REST provides communication and connectivity between devices and the internet for API-based tasks. Most RESTful services use HTTP as the supporting protocol. A short sketch contrasting the two styles follows.
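A sketch contrasting the two styles with Python's `requests` library; the URLs and the SOAP body are illustrative only, not a real service.

```python
# Contrasting REST and SOAP calls (pip install requests).
# api.example.com and the SOAP body are placeholders.

import requests

# REST: plain HTTP verbs, typically JSON payloads
resp = requests.get("https://api.example.com/orders/42")
print(resp.json())

# SOAP: an XML envelope POSTed over HTTP
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetOrder xmlns="http://example.com/orders"><Id>42</Id></GetOrder>
  </soap:Body>
</soap:Envelope>"""
resp = requests.post(
    "https://api.example.com/soap",
    data=soap_body,
    headers={"Content-Type": "application/soap+xml"},
)
print(resp.text)  # an XML response envelope
```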
Here are some well-known web services that use markup languages:
Web template
JSON-RPC
JSON-WSP
Web Services Description Language (WSDL)
Web Services Conversation Language (WSCL)
Web Services Flow Language (WSFL)
Web Services Metadata Exchange (WS-MetadataExchange)
XML Interface for Network Services (XINS)
BASICS OF VIRTUALIZATION
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a
desktop, a storage device, an operating system or network resources".
In other words, Virtualization is a technique, which allows to share a single physical instance of a
resource or an application among multiple customers and organizations.
It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded.
EMULATION
Emulation, as the name suggests, is a technique in which a virtual machine simulates the complete hardware in software.
Many virtualization techniques were developed in, or inherited from, the emulation technique.
It is very useful when designing software for various systems.
It simply allows us to use the current platform to access an older application, data, or operating system.
TYPES OF VIRTUALIZATION
Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
After virtualization of the hardware system, we can install different operating systems on it and run different applications on those OSes.
Usage:
Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling a physical server.
Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple
servers on the demand basis and for balancing the load.
Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network storage
devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
Levels of Abstraction
There are 5 levels of abstraction, They are
1) Instruction Set Architecture Level
2) Hardware Abstraction Level
3) Operating System Level
4) Library Support Level
5) User-Application Level
Instruction Set Architecture Level(ISA)
With this approach, it is possible to run a large amount of legacy binary code written for various
processors on any given new hardware host machine.
Hardware Abstraction Level
The idea is to virtualize a computer’s resources, such as its processors, memory, and I/O devices.
Operating System Level
This refers to an abstraction layer between traditional OS and user applications.
OS-level virtualization creates isolated containers on a single physical server, and the OS instances utilize the hardware and software in data centers.
The containers behave like real servers.
Library Support Level
Virtualization with library interfaces is possible by controlling the communication link between applications and the rest of the system through APIs.
User-Application Level
Virtualization at the application level virtualizes an application as a VM.
Application-level virtualization is also known as process-level virtualization.
VIRTUALIZATION STRUCTURE
Before virtualization, the operating system manages the hardware.
After virtualization, a virtualization layer is inserted between the hardware and the operating system.
In such a case, the virtualization layer is responsible for converting portions of the real hardware into
virtual hardware.
Depending on the position of the virtualization layer, there are several classes of VM architectures,
namely
The hypervisor architecture,
Para-virtualization, and
Host-based virtualization.
Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization
The hypervisor provides hypercalls for the guest OSes and applications.
Xen is an open-source hypervisor program developed at Cambridge University. Xen is a microkernel hypervisor, which separates the policy from the mechanism.
The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0.
Xen does not include any device drivers.
It just provides a mechanism by which a guest OS can have direct access to the physical devices.
As a result, the size of the Xen hypervisor is kept rather small.
The guest OS, which has control ability, is called Domain 0, and the others are called Domain U.
Domain 0 is designed to access hardware directly and manage devices.
Para-Virtualization
A para-virtualized VM provides special APIs requiring substantial OS modifications in user
applications.
Figure illustrates the concept of a para-virtualized VM architecture. The guest operating systems are
paravirtualized.
The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3.
The lower the ring number, the higher the privilege of instruction being executed.
The OS is responsible for managing the hardware and the privileged instructions to execute at Ring
0, while user-level applications run at Ring 3.
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS.
This host OS is still responsible for managing the hardware.
The guest OSes are installed and run on top of the virtualization layer.
First, the user can install this VM architecture without modifying the host OS.
Second, the host-based approach appeals to many host machine configurations.
Compared to the hypervisor/VMM architecture, the performance of the host-based architecture may
also be low.
RV tools from Robware.net:
For VMware environments, this handy little free application is written in Microsoft .NET and leverages the VMware SDKs to collect information from vCenter servers and ESX/ESXi hosts.
V Control:
vControl is a multi-hypervisor, web-based self-provisioning and VM management tool for Citrix XenServer, Microsoft Hyper-V, and VMware ESX/ESXi.
CPU Virtualization
Unprivileged instructions of VMs run directly on the host machine for higher efficiency. Other
critical instructions should be handled carefully for correctness and stability.
The critical instructions are divided into three categories,
Privileged instructions
Control sensitive instruction
Behaviour –Sensitive instructions
Privileged instructions:
Privileged instructions execute in a privileged mode and will be trapped if executed outside this mode.
Control-sensitive instructions:
Control-sensitive instructions attempt to change the configuration of resources used.
Behavior-sensitive instructions:
Behavior-sensitive instructions have different behaviors depending on the configuration of
resources, including the load and store operations over the virtual memory.
Memory Virtualization
Virtual memory virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the
VMM, respectively:
virtual memory to physical memory
physical memory to machine memory.
The guest OS cannot directly access the actual machine memory.
The VMM is responsible for mapping the guest physical memory to the actual machine memory.
Figure shows the two-level memory mapping procedure.
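The toy Python model below illustrates the two-stage mapping, with plain dictionaries standing in for page tables; real VMMs use hardware-assisted shadow or nested page tables, so this is purely conceptual.

```python
# Toy model of two-stage memory mapping. Dictionaries stand in for page
# tables; addresses and mappings are invented for illustration.

guest_page_table = {0x1000: 0x5000}   # guest virtual  -> guest "physical" (stage 1, guest OS)
vmm_page_table   = {0x5000: 0x9000}   # guest physical -> machine memory   (stage 2, VMM)

def translate(guest_virtual_addr):
    page, offset = guest_virtual_addr & ~0xFFF, guest_virtual_addr & 0xFFF
    guest_physical = guest_page_table[page]   # stage 1: maintained by the guest OS
    machine = vmm_page_table[guest_physical]  # stage 2: maintained by the VMM
    return machine | offset

print(hex(translate(0x1234)))   # -> 0x9234; the guest never sees this machine address
```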
I/O Virtualization:
I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware.
At the time of this writing, there are three ways to implement I/O virtualization:
Full device emulation,
Para-virtualization,
Direct I/O.
Full device emulation is the first approach for I/O virtualization.
Generally, this approach emulates well-known, real-world devices.
DESKTOP VIRTUALIZATION
Desktop virtualization is technology that lets users simulate a workstation load to access a desktop from
a connected device remotely or locally.
This separates the desktop environment and its applications from the physical client device used to
access it.
Desktop virtualization is a key element of digital workspaces and depends on application virtualization.
3. Desktop-as-a-Service (DaaS)
DaaS’s functionality is similar to that of VDI: users access their desktops and apps from any
endpoint device or platform.
However, in VDI, you have to purchase, deploy, and manage all the hardware components yourself.
In DaaS, though, you outsource desktop virtualization to a third party to help you develop and
operate virtual desktops.
Client Virtualization:
Client virtualization is a virtual machine (VM) environment in the user's machine.
Also called "endpoint virtualization," the user's computer hosts multiple VMs, each of which
contains an operating system and set of applications.
Business capabilities
As an organization, you need to ask yourself whether you have adequate expertise, resources, and the
compelling need to mount VDI, RDS, or DaaS.
Simply put, would it be better to consume desktop virtualization "as a service," or to implement it yourself as VDI or RDS?
Cost
Cost is always a primary concern whether you’re implementing VDI, RDS, or DaaS.
However, with DaaS, you execute all the desktop workloads in the cloud.
Infrastructure control
When it comes to VDI and RDS deployments, your IT admins have absolute control in terms of
updating the infrastructure, including securing network services.
DaaS deployment, on the other hand, takes away infrastructure control from the organization to a
cloud vendor.
Geography
When comparing VDI, RDS and DaaS, an organization must determine the location of its data and
where its users reside.
DaaS deployments make sense if you want to support multiple users in different places.
If the data center is miles away from users, RDS and VDI deployments may hurt the end-user
experience.
Agility and elasticity
If you need a desktop virtualization solution that is easier to set up and run, then DaaS is your go-to
solution.
For example, if you would like to accommodate seasonal or contract workers on your infrastructure, it makes sense to select DaaS over RDS and VDI, which take time to set up.
SERVER VIRTUALIZATION
Server virtualization is the process of dividing a physical server into multiple unique and isolated virtual
servers by means of a software application.
Each virtual server can run its own operating systems independently.
Key Benefits of Server Virtualization:
Higher server availability
Cheaper operating costs
Eliminate server complexity
Increased application performance
Deploy workload quicker
Google App Engine is an example of Platform as a Service (PaaS).
Google App Engine provides web app developers and enterprises with access to Google's scalable hosting and tier-1 Internet service.
Google App Engine provides a scalable runtime based on the Java and Python programming languages.
Applications in Google App Engine store data in Google BigTable.
Applications in Google App Engine use the Google query language (GQL).
If an application is not compatible with Google App Engine, it needs to be made compatible; not all applications are supported by Google App Engine.
Google App Engine also removes some system administration and development tasks to make it easier to write scalable applications.
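A minimal sketch of an App Engine (standard environment) Python application using Flask; it assumes an accompanying app.yaml declaring a Python runtime, and deployment with the real `gcloud app deploy` command.

```python
# main.py: minimal App Engine standard-environment app using Flask.
# Assumes an app.yaml next to this file containing: runtime: python39
# Deploy with: gcloud app deploy

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Google App Engine!"

# App Engine's default entrypoint serves the WSGI object named 'app' in main.py;
# locally you can try it with: flask --app main run
```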
MICROSOFT AZURE
To enable .NET developers to extend their applications into the cloud, Microsoft has created a set of
.NET services, which it now refers to as the Windows Azure Platform.
Azure includes:
1. Azure AppFabric: A virtualization service that creates an application hosting environment.
2. Storage: A high capacity non-relational storage facility.
3. Compute: A set of virtual machine instances.
4. SQL Azure Database: A cloud-enabled version of SQL Server.
5. Dallas: A database marketplace based on SQL Azure Database code.
6. Dynamics CRM: An xRM (Anything Relationship Management) service based on Microsoft Dynamics.
7. SharePoint Services: A document and collaboration service based on SharePoint.
8. Windows Live Services: A collection of services that runs on Windows Live, which can be used
in applications that run in the Azure cloud.
Windows Azure Platform can be viewed in a sense as the next Microsoft operating system, the first one
that is a cloud OS.
Azure is both Microsoft's Infrastructure as a Service (IaaS) web hosting service and a Platform as a Service (PaaS).
An application on Azure architecture can run locally, run in the cloud, or some combination of both.
Applications on Azure can be run as applications, as background processes or services, or as both.
Six main elements are part of Windows Azure:
1. Application: This is the runtime of the application that is running in the cloud.
2. Compute: This is the load-balanced Windows server computation and policy engine that allows you to create and manage virtual machines that serve in either a Web role or a Worker role.
3. Storage: This is a non-relational storage system for large-scale storage.
4. Fabric: This is the Windows Azure Hypervisor, which is a version of Hyper-V that runs on
Windows Server 2008.
5. Config: This is a management service.
6. Virtual machines: These are instances of Windows that run the applications and services that are
part of a particular deployment.
Architecture:
The architecture of Federated Cloud consists of three basic components:
1. Cloud Exchange
The Cloud Exchange acts as a mediator between cloud coordinator and cloud broker.
The demands of the cloud broker are mapped by the cloud exchange to the available services provided
by the cloud coordinator.
The cloud exchange keeps track of the current cost, demand patterns, and available cloud providers, and this information is periodically refreshed by the cloud coordinator.
2. Cloud Coordinator
The cloud coordinator assigns the resources of the cloud to the remote users based on the quality of
service they demand and the credits they have in the cloud bank.
The cloud enterprises and their membership are managed by the cloud controller.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator, analyzes the Service-level agreement and the
resources offered by several cloud providers in cloud exchange.
Cloud broker finalizes the most suitable deal for their client.
Properties of Federated Cloud:
1. In the federated cloud, the users can interact with the architecture either centrally or in a
decentralized manner. Decentralized interaction permits the user to interact directly with the clouds
in the federation.
2. Federated cloud can be practiced with various niches like commercial and non-commercial.
3. The visibility of a federated cloud assists the user to interpret the organization of several clouds in
the federated environment.
4. Federated cloud can be monitored in two ways. MaaS (Monitoring as a Service) provides
information that aids in tracking contracted services to the user. Global monitoring aids in
maintaining the federated cloud.
5. The providers who participate in the federation publish their offers to a central entity. The user
interacts with this central entity to verify the prices and propose an offer.
6. The marketing objects like infrastructure, software, and platform have to pass through federation
when consumed in the federated cloud.
UNIT V
MICROSERVICES AND DEVOPS
DEFINING MICROSERVICES
Microservices are self-contained software components that are no more than 100 lines of code
Microservices are independently deployable processes communicating asynchronously using lightweight
mechanisms focused on specific business capabilities running in an automated but platform- and
language-independent environment
Microservices are mini web servers offering a small REST-based HTTP API that accepts and returns
JSON documents
A microservice is an independent software component that takes no more than one iteration to build and
deploy
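As a concrete illustration of the "mini web server offering a small REST-based HTTP API that accepts and returns JSON" definition above, here is a minimal Flask sketch; the service, its route, and the in-memory store are illustrative, not a prescribed design.

```python
# A tiny self-contained microservice: a small REST/JSON HTTP API (pip install flask).

from flask import Flask, jsonify, request

app = Flask(__name__)
ratings = {}   # in-memory store; a real service would own its own database

@app.route("/ratings/<item_id>", methods=["POST"])
def add_rating(item_id):
    ratings.setdefault(item_id, []).append(request.get_json()["stars"])
    return jsonify(ok=True), 201

@app.route("/ratings/<item_id>", methods=["GET"])
def get_rating(item_id):
    stars = ratings.get(item_id, [])
    return jsonify(item=item_id, average=sum(stars) / len(stars) if stars else None)

if __name__ == "__main__":
    app.run(port=5001)   # one small, independently deployable process
```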
The term "micro web services" was first used by Dr. Peter Rogers during a conference on cloud
computing in 2005.
In a monolithic architecture, the software is a single application, distributed on a CD-ROM and released once a year with the newest updates.
Examples are Photoshop CS6 or Microsoft Office 2008.
Architecture:
Microservices is an architectural style wherein all the components of the system are put into individual components, which can be built, deployed, and scaled individually.
These microservices communicate with each other using an Application Program Interface(API).
After the Microservices communicate within themselves, they deploy the static content to a cloud-based
storage service that can deliver them directly to the clients via Content Delivery Networks (CDNs).
Chained or Chain of Responsibility Pattern
The Chained or Chain of Responsibility design pattern produces a single output which is a combination of multiple chained outputs.
So, if you have three services lined up in a chain, the request from the client is first received by Service A.
Then, this service communicates with the next service, B, and collects data. Finally, the second service communicates with the third service to generate the consolidated output.
All these services use synchronous HTTP requests and responses for messaging; a sketch of the chain follows.
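A minimal sketch of Service A in such a chain, using Flask and `requests`; the downstream service URLs are hypothetical placeholders.

```python
# Service A in a chained pattern: receives the client request, then calls
# Service B synchronously over HTTP. URLs are hypothetical placeholders.

import requests
from flask import Flask, jsonify

app = Flask(__name__)   # this process is Service A

@app.route("/order/<order_id>")
def order(order_id):
    # synchronous HTTP call to the next service in the chain
    b = requests.get(f"http://service-b.local/enrich/{order_id}").json()
    # Service B would itself call Service C before answering, e.g.:
    #   c = requests.get(f"http://service-c.local/stock/{order_id}").json()
    return jsonify(order=order_id, details=b)   # consolidated output to the client
```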
So, the system can either have a database per service or a shared database.
Database-per-service and shared-database approaches can each be used to solve different problems.
Branch Pattern
Branch microservice design pattern is a design pattern in which you can simultaneously process the
requests and responses from two or more independent microservices.
This design pattern extends the Aggregator design pattern and provides the flexibility to produce
responses from multiple chains or single chain.
Circuit Breaker Pattern
The Circuit Breaker design pattern is used to stop the request and response process if a service is not working.
With the help of this pattern, the client invokes a remote service via a proxy.
This proxy basically behaves as a circuit breaker: when the number of failures crosses a threshold, the circuit breaker trips for a particular time period (see the sketch below).
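A toy circuit-breaker proxy in Python is sketched below; the threshold and reset timing are illustrative, and a production system would use a hardened library rather than this minimal class.

```python
# Minimal circuit-breaker proxy. Thresholds and timings are illustrative only.

import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after   # seconds the circuit stays open
        self.failures = 0
        self.opened_at = None

    def call(self, remote_fn, *args):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")  # skip the remote call
            self.opened_at = None        # trial period: let one call through
            self.failures = 0
        try:
            result = remote_fn(*args)
            self.failures = 0            # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()   # trip the breaker
            raise
```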
Publish - A provider informs the broker (service registry) about the existence of the web service by
using the broker's publish interface to make the service accessible to clients
Find - The requestor consults the broker to locate a published web service
Bind - With the information it gained from the broker(service registry) about the web service, the
requestor is able to bind, or invoke, the web service.
You probably have an active logged-in user, where the user can log out or choose to manage their
account. That suggests a microservice with responsibility for users.
You’ll have an advertising service, because that’s part of the business model for the newspaper.
The article-page service pulls in content from the adverts, user, and article services.
You need to send out all the requests at the same time and combine the responses once they come in.
You need to develop some abstractions around message sending and receiving so that you can make the
transportation of messages between services uniform.
The article-page service emits a message that announces the event that an article has been read, but
article-page doesn’t care how many people receive it or need a response.
In this case, the analytics and recommend services don’t consume messages sent to them.
As your system grows over time, the dependency tree of services also grows, both in breadth and depth.
Security
Microservices are often deployed across multi-cloud environments, resulting in increased risk and loss of
control and visibility of application components—resulting in additional vulnerable points.
Due to its distributed framework, setting up access controls and administering secured authentication to
individual services poses not only a technical challenge but also increases the attack surface
substantially.
Testing:
The testing phase of any software development lifecycle (SDLC) is increasingly complex for
microservices-based applications.
Communication
Independently deployed microservices act as miniature standalone applications that communicate with
each other. To achieve this, you have to configure infrastructure layers that enable resource sharing
across services.
SOA VS MICROSERVICE
SOA: Follows a "share-as-much-as-possible" architecture approach. MSA: Follows a "share-as-little-as-possible" architecture approach.
SOA: Importance is on business functionality reuse. MSA: Importance is on the concept of "bounded context".
SOA: They have common governance and standards. MSA: They focus on people collaboration and freedom of other options.
SOA: Uses an Enterprise Service Bus (ESB) for communication. MSA: Uses a simple messaging system.
SOA: They support multiple message protocols. MSA: They use lightweight protocols such as HTTP/REST.
SOA: Multi-threaded with more overheads to handle I/O. MSA: Usually single-threaded, with Event Loop features for non-blocking I/O handling.
SOA: Maximizes application service reusability. MSA: Focuses on decoupling.
SOA: Traditional relational databases are more often used. MSA: Modern relational databases are more often used.
SOA: A systematic change requires modifying the monolith. MSA: A systematic change is to create a new service.
SOA: DevOps / Continuous Delivery is becoming popular, but not yet mainstream. MSA: Strong focus on DevOps / Continuous Delivery.
The above methods help us standardize the way in which actions are performed on various applications having different interfaces.
Also, with the help of these methods, you as a developer can easily understand the effect of the actions taken across the different interfaces.
Now, even if one microservice does not work, the application will not go down. Instead, only that particular feature will stop working; once the microservice is up again,
the APIs can process requests and send the required responses back to the client.
Microservices vs API:
Microservices: An architectural style through which you can build applications in the form of small autonomous services.
API: A set of procedures and functions which allow the consumer to use the underlying service of an application.
Architecture Overview
This is the process that you will follow to stand up microservices and safely transition the application's traffic away from the monolith.
Deployed Monolith:
This is the starting configuration: the monolithic Node.js app running in a container on Amazon ECS.
Start Microservices
Using the three container images you built and pushed to Amazon ECR in the previous module, you will
start up three microservices on your existing Amazon ECS cluster.
Configure Target Groups
Like in Module 2, you will add a target group for each service and update the ALB Rules to connect the
new microservices.
DevOps is a combination of two words: software Development and Operations.
This allows a single team to handle the entire application lifecycle, from development to testing,
deployment, and operations.
DevOps helps you to reduce the disconnection between software developers, quality assurance (QA)
engineers, and system administrators.
DevOps promotes collaboration between Development and Operations team to deploy code to
production faster in an automated & repeatable way.
OVERVIEW OF DEVOPS
Key Features
Here are some key features of DevOps architecture, such as:
Automation
Automation can reduce time consumption, especially during the testing and deployment phase.
Productivity increases and releases are made quicker through automation, which helps catch bugs quickly so that they can be fixed easily.
Collaboration
The development and operations teams collaborate as a single DevOps team, which improves the cultural model as the teams become more productive, strengthening accountability and ownership.
The teams share their responsibilities and work closely in sync, which in turn makes the deployment to
production faster.
Integration
Applications need to be integrated with other components in the environment.
The integration phase is where the existing code is combined with new functionality and then tested.
Configuration management
It ensures that the application interacts only with those resources that are concerned with the environment in which it runs.
Configuration files are not hard-coded into the application; the configuration that is external to the application is kept separate from the source code.
Advantages
1. DevOps is an excellent approach for quick development and deployment of applications.
2. It responds faster to the market changes to improve business growth.
3. DevOps escalate business profit by decreasing software delivery time and transportation costs.
4. DevOps clears the descriptive process, which gives clarity on product development and delivery.
5. It improves customer experience and satisfaction.
6. DevOps simplifies collaboration and places all tools in the cloud for customers to access.
7. DevOps means collective responsibility, which leads to better team engagement and productivity.
Disadvantages
1. DevOps professionals or expert developers are less available.
2. Developing with DevOps is expensive.
3. Adopting new DevOps technology in industry is hard to manage in a short time.
4. Lack of DevOps knowledge can be a problem in the continuous integration of automation projects.
HISTORY OF DEVOPS
In 2009, the first conference named DevOpsDays was held in Ghent, Belgium. Belgian consultant Patrick Debois founded the conference.
In 2012, the state of DevOps report was launched and conceived by Alanna Brown at Puppet.
In 2014, the annual State of DevOps report was published by Nicole Forsgren, Jez Humble, Gene Kim, and others. They found that DevOps adoption was accelerating in 2014 as well.
In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA (DevOps Research and Assessment).
In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published "Accelerate: Building and Scaling High
Performing Technology Organizations".
Build
Without DevOps, the cost of the consumption of the resources was evaluated based on the pre-defined
individual usage with fixed hardware allocation.
With DevOps, the usage of cloud and the sharing of resources come into the picture, and the build depends on the user's need, with a mechanism to control the usage of resources or capacity.
Code:
Good practices such as using Git ensure that code is written for the business, help track changes, provide notification about the reason behind differences between the actual and the expected output, and, if necessary, allow reverting to the original code.
Test
The application will be ready for production after testing.
Manual testing consumes more time, both for the testing itself and for moving the code to production.
The testing can be automated, which decreases the time for testing so that the time to deploy the code to
production can be reduced as automating the running of the scripts will remove many manual steps.
Plan
DevOps use Agile methodology to plan the development. With the operations and development team in
sync, it helps in organizing the work to plan accordingly to increase productivity.
Monitor
Continuous monitoring is used to identify any risk of failure. Also, it helps in tracking the system
accurately so that the health of the application can be checked.
Monitoring becomes easier with services whose log data can be monitored through many third-party tools such as Splunk.
Deploy
Many systems can support the scheduler for automated deployment.
The cloud management platform enables users to capture accurate insights and view the optimization
scenario, analytics on trends by the deployment of dashboards.
Operate
DevOps changes the traditional approach of developing and testing separately.
The operation team interacts with developers, and they come up with a monitoring plan which serves the
IT and business requirements.
Release
Deployment to an environment can be done by automation. But when the deployment is made to the
production environment, it is done by manual triggering.
Many processes involved in release management commonly deploy to the production environment manually, to lessen the impact on customers.
Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during
the planning phase.
And the developers begin developing the code for the application. There are no DevOps tools that are
required for planning, but there are several tools for maintaining the code.
Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which developers are required to commit changes to the source code more frequently.
Then every commit is built, which allows early detection of problems if they are present.
Building code involves not only compilation but also unit testing, integration testing, code review, and packaging.
Continuous Testing
In this phase, the developed software is continuously tested for bugs.
For constant testing, automation testing tools such as TestNG, JUnit, Selenium, etc are used.
These tools allow QAs to test multiple code bases thoroughly in parallel, ensuring that there are no flaws in the functionality.
In this phase, Docker containers can be used to simulate the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire testing phase can
automate with the help of a Continuous Integration tool called Jenkins.
Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where
important information about the use of the software is recorded and carefully processed to find out trends
and identify problem areas.
Usually, the monitoring is integrated within the operational capabilities of the software application.
Continuous Feedback
The application development is consistently improved by analyzing the results from the operations of the
software.
This is carried out by placing the critical phase of constant feedback between the operations and the
development of the next version of the current software application.
Continuous Deployment
In this phase, the code is deployed to the production servers. Also, it is essential to ensure that the code
is correctly used on all the servers.
The new code is deployed continuously, and configuration management tools play an essential role in
executing tasks frequently and quickly.
Here are some popular tools which are used in this phase, such as Chef, Puppet, Ansible,
and SaltStack.
Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, allowing the organization to continuously accelerate its overall time to market.
ADOPTION OF DEVOPS
People in DevOps
DevOps Culture:
DevOps is a cultural movement; it’s all about people. An organization may adopt the most efficient
processes or automated tools possible, but they’re useless without the people who eventually must execute
those processes and use those tools. Building a DevOps culture, therefore, is at the core of DevOps adoption.
Building a DevOps culture requires the leaders of the organization to work with their teams to create an
environment and culture of collaboration and sharing.
DevOps team
The arguments for and against having a separate DevOps team are as old as the concept itself. Some
organizations, such as Netflix, don’t have separate development and operations teams; instead, a single
“NoOps” team owns both sets of responsibilities. Other organizations have succeeded with DevOps liaison
teams, which resolve any conflicts and promote collaboration. Such a team may be an existing tools group
or process group, or it may be a new team staffed by representatives of all teams that have a stake in the
application being delivered.
If you choose to have a DevOps team, your most important goal is to ensure that it functions as a center of
excellence that facilitates collaboration without adding a new layer of bureaucracy or becoming the team
that owns addressing all DevOps related problems — a development that would defeat the purpose of
adopting a DevOps culture.
DEVOPS TOOLS
Here are some of the most popular DevOps tools, with brief explanations:
Puppet
Puppet is the most widely used DevOps tool. It allows the delivery and release of the technology
changes quickly and frequently.
Features
1. Real-time context-aware reporting.
2. Model and manage the entire environment.
3. It eliminates manual work for the software delivery process.
4. It helps the developer to deliver great software quickly.
Ansible
Ansible is a leading DevOps tool. Ansible is an open-source IT engine that automates application
deployment, cloud provisioning, intra-service orchestration, and other IT needs.
It makes it easier for DevOps teams to scale automation and speed up productivity.
Features
1. It is an easy-to-use, open-source tool for deploying applications.
2. It helps in avoiding complexity in the software development process.
3. It eliminates repetitive tasks.
4. It manages complex deployments and speeds up the development process.
Docker
Docker is a high-end DevOps tool that allows you to build, ship, and run distributed applications on
multiple systems.
It also helps to assemble applications quickly from their components, and it is well suited for container
management.
Features
It makes system configuration easier and faster.
It increases productivity.
It provides containers that are used to run the application in an isolated environment.
It routes incoming requests for published ports on available nodes to an active container; the
connection keeps working even if no task is running on the node that received the request.
It allows saving secrets into the swarm itself.
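To illustrate the isolated-environment feature, here is a minimal sketch using the Docker SDK for Python
(the third-party docker package). The alpine image is a standard public image; treat the snippet as an
illustrative sketch, not as the only way to drive Docker.

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon
# Run a command in an isolated container and capture its output;
# remove=True deletes the container automatically when it exits.
output = client.containers.run("alpine:latest",
                               "echo hello from a container",
                               remove=True)
print(output.decode().strip())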
Nagios
Nagios is one of the more useful tools for DevOps. It can detect errors and help rectify them with the
help of network, infrastructure, server, and log monitoring systems.
Features
It provides complete monitoring of desktop and server operating systems.
The network analyzer helps to identify bottlenecks and optimize bandwidth utilization.
It helps to monitor components such as services, applications, operating systems, and network protocols.
It also provides complete monitoring of Java Management Extensions (JMX).
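Nagios checks are simply executables that follow a small exit-code convention: 0 = OK, 1 = WARNING,
2 = CRITICAL, 3 = UNKNOWN. The minimal disk-usage check below illustrates the idea; the threshold
values are arbitrary examples, not Nagios defaults.

#!/usr/bin/env python3
# Minimal Nagios-style check: the exit code tells Nagios the state.
import shutil
import sys

WARN, CRIT = 80, 90  # percent-used thresholds (arbitrary examples)

usage = shutil.disk_usage("/")
percent_used = usage.used * 100 // usage.total

if percent_used >= CRIT:
    print(f"CRITICAL - disk {percent_used}% full")
    sys.exit(2)
elif percent_used >= WARN:
    print(f"WARNING - disk {percent_used}% full")
    sys.exit(1)
print(f"OK - disk {percent_used}% full")
sys.exit(0)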
Chef
Chef is a useful tool for achieving scale, speed, and consistency. Chef is a cloud-based, open-source
technology.
Chef has got its convention for different building blocks, which are required to manage and automate
infrastructure.
Features
It maintains high availability.
It can manage multiple cloud environments.
It uses the popular Ruby language to create a domain-specific language.
Chef does not make any assumptions about the current state of a node; it uses its own mechanism
to determine the current state of the machine.
Jenkins
Jenkins is a DevOps tool for monitoring the execution of repeated tasks. Jenkins is software that enables
continuous integration.
It helps to integrate project changes more efficiently by finding the issues quickly.
Features
Jenkins increases the scale of automation.
It can be easily set up and configured via a web interface.
It supports continuous integration and continuous delivery.
It requires little maintenance and has a built-in GUI tool for easy updates.
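Jenkins is usually configured through its web interface or a Jenkinsfile, but it also exposes a REST API.
As a sketch, the third-party python-jenkins library can trigger a build remotely; the server URL,
credentials, and job name below are hypothetical placeholders.

import jenkins  # third-party client: pip install python-jenkins

# Hypothetical server URL, credentials, and job name.
server = jenkins.Jenkins("http://jenkins.example.com:8080",
                         username="admin", password="api-token")
server.build_job("my-app-pipeline")            # queue a build
info = server.get_job_info("my-app-pipeline")
print("Next build number:", info["nextBuildNumber"])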
Git
Git is an open-source distributed version control system that is freely available for everyone.
It is designed to handle projects of all sizes with speed and efficiency. It was developed to coordinate
work among programmers.
Features
It is a free open source tool.
It allows distributed development.
It supports pull requests.
It enables a faster release cycle.
Git is very scalable.
It is very secure and completes tasks very fast.
SaltStack
SaltStack is a lightweight DevOps tool for configuration management and remote execution. It gives
real-time visibility into errors, logs, and more, directly from the workstation.
SaltStack is an ideal solution for intelligent orchestration of the software-defined data center.
Features
It eliminates messy configuration and data changes.
It can trace details of all types of web requests.
It allows us to find and fix bugs before production.
It provides secure access and configures image caches.
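Salt also exposes a Python API for orchestration. The sketch below uses salt.client.LocalClient, which
must run on the Salt master (typically as root); the minion target pattern and state name ("webserver")
are hypothetical examples.

import salt.client  # available on a Salt master: pip install salt

# LocalClient talks to connected minions from the master.
local = salt.client.LocalClient()
# Ping every connected minion; returns {minion_id: True, ...}
print(local.cmd("*", "test.ping"))
# Apply a (hypothetical) state called "webserver" to matching minions
print(local.cmd("web*", "state.apply", ["webserver"]))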
Splunk
Splunk is a tool to make machine data usable, accessible, and valuable to everyone. It delivers
operational intelligence to DevOps teams.
It helps companies to be more secure, productive, and competitive.
Features
It offers a next-generation monitoring and analytics solution.
It delivers a single, unified view of different IT services.
It extends the Splunk platform with purpose-built solutions for security.
It provides data-driven analytics with actionable insights.
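A common way for applications to feed machine data into Splunk is the HTTP Event Collector (HEC), which
accepts JSON events over HTTPS (port 8088 by default). The host and token below are hypothetical
placeholders.

import requests

# Hypothetical HEC endpoint and token.
url = "https://splunk.example.com:8088/services/collector/event"
headers = {"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"}
event = {"event": {"action": "deployment", "status": "success"},
         "sourcetype": "_json"}

# verify=False only because self-signed certs are common in lab setups.
resp = requests.post(url, json=event, headers=headers, verify=False)
resp.raise_for_status()  # a non-2xx status means Splunk rejected the event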
Selenium
Selenium is a portable software testing framework for web applications. It provides an easy interface for
developing automated tests.
Features
It is a free open source tool.
It supports testing on multiple platforms, such as Android and iOS.
It is easy to build a keyword-driven framework for a WebDriver.
It creates robust browser-based regression automation suites and tests.
Code
Many good practices can be applied in this phase: using a tool such as Git ensures that the code is
written for the business, helps track changes, makes it possible to understand the reason behind
differences between the actual and the expected output, and, if necessary, allows reverting to previously
developed code.
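For instance, the everyday Git operations behind this practice, committing a change and reverting it if
the output diverges from what was expected, can be scripted from Python; the commit message and the
commit hash in the final comment are hypothetical placeholders.

import subprocess

def git(*args):
    """Run a git command in the current repository and return its output."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

git("add", "-A")                         # stage all changes
git("commit", "-m", "Implement feature X")
print(git("log", "--oneline", "-3"))     # inspect recent history
# If the change misbehaves, undo it with an inverse commit, e.g.:
# git("revert", "--no-edit", "abc1234")  # "abc1234" is hypothetical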
Test
The application is ready for production after testing.
Manual testing consumes more time, both in the testing itself and in moving the code onward.
Testing can be automated, which decreases testing time and therefore the time to deploy the code to
production, since automating the running of test scripts removes many manual steps.
Plan
DevOps uses the Agile methodology to plan development. Keeping the operations and development teams in
sync helps organize the work and plan accordingly, which increases productivity.
Monitor
Continuous monitoring is used to identify any risk of failure. It also helps in tracking the system
accurately so that the health of the application can be checked.
Monitoring becomes easier when log data can be collected and analyzed through third-party tools such
as Splunk.
Deploy
Many systems support schedulers for automated deployment.
A cloud management platform enables users to capture accurate insights, view optimization scenarios,
and analyze trends through deployed dashboards.
Operate
DevOps changes the traditional approach of developing and testing separately.
The operations team interacts with the developers, and together they come up with a monitoring plan that
serves the IT and business requirements.
Release
Deployment to an environment can be automated, but deployment to the production environment is typically
triggered manually.
Many release-management processes deliberately keep production deployment as a manual step to lessen
the impact on customers.