
MC4203 – CLOUD COMPUTING TECHNOLOGIES

UNIT I DISTRIBUTED SYSTEMS


Introduction to Distributed Systems – characterization of Distributed systems – distributed
Architectural models – remote invocation – Request –Reply protocols – Remote procedure call –
Remote method Invocation – Group communication – Coordination in Group Communication –
Ordered Multicast – Time Ordering – PhysicalClock Synchronization – Logical Time and Logical
clocks

1.1 INTRODUCTION

i. Definition

A distributed system, also known as distributed computing, is a system with multiple
components located on different machines that communicate and coordinate actions in order to
appear as a single coherent system to the end-user.

A distributed computer system consists of multiple software components that are on multiple
computers, but run as a single system. The computers that are in a distributed system can be
physically close together and connected by a local network, or they can be geographically distant
and connected by a wide area network.

Overview

The machines that are a part of a distributed system may be computers, physical servers, virtual
machines, containers, or any other node that can connect to the network, have local memory,
and communicate by passing messages.

There are two general ways that distributed systems function:

1. Each machine works toward a common goal and the end-user views results as one
cohesive unit.
2. Each machine has its own end-user and the distributed system facilitates sharing
resources or communication services.

Although distributed systems can sometimes be obscure, they usually have three primary
characteristics: all components run concurrently, there is no global clock, and all components
fail independently of each other.

ii. Benefits and challenges of distributed systems

There are three reasons that teams generally decide to implement distributed systems:

 Horizontal Scalability—Since computing happens independently on each node, it
is easy and generally inexpensive to add additional nodes and functionality as
necessary.
 Reliability—Most distributed systems are fault-tolerant as they can be made up of
hundreds of nodes that work together. The system generally doesn’t experience any
disruptions if a single machine fails.
 Performance—Distributed systems are extremely efficient because workloads
can be broken up and sent to multiple machines.

Three more challenges you may encounter include:

 Scheduling—A distributed system has to decide which jobs need to run, when they
should run, and where they should run. Schedulers ultimately have limitations, leading
to underutilized hardware and unpredictable runtimes.
 Latency—The more widely your system is distributed, the more latency you can
experience with communications. This often leads to teams making tradeoffs
between availability, consistency, and latency.
 Observability—Gathering, processing, presenting, and monitoring hardware usage
metrics for large clusters is a significant challenge.

iii. Types of distributed systems


Distributed systems generally fall into one of four basic architecture models:
1. Client-server—Clients contact the server for data, then format it and display it to the
end-user. The end-user can also make a change from the client-side and commit it
back to the server to make it permanent.
2. Three-tier—Information about the client is stored in a middle tier rather than on the
client to simplify application deployment. This architecture model is most common
for web applications.
3. n-tier—Generally used when an application or server needs to forward requests
to additional enterprise services on the network.
4. Peer-to-peer—There are no additional machines used to provide services or manage
resources. Responsibilities are uniformly distributed among machines in the system,
known as peers, which can serve as either client or server.
iv. Distributed system characteristics:
• Fault-Tolerant
• providing services
• Recoverable
• Consistent
• Scalable
• Predictable Performance
• Secure
v. Handling failures
It is an important theme in distributed systems design. Failures fall into two obvious categories:
hardware and software.
 Hardware failures—Decreased heat production and power consumption of smaller
circuits, reduction of off-chip connections and wiring, and high-quality
manufacturing techniques have all played a positive role in improving hardware
reliability. Today, problems are most often associated with connections and
mechanical devices, i.e., network failures and drive failures.
 Software failures—Software failures are a significant issue in distributed systems. Even with rigorous
testing, software bugs account for a substantial fraction of unplanned downtime.
vi. Eight Fallacies
Everyone, when they first build a distributed system, makes the following eight assumptions.
These are so well-known in this field that they are commonly referred to as the "8 Fallacies".
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.
Latency: the time between initiating a request for data and the beginning of the actual data
transfer.
Bandwidth: a measure of the capacity of a communications channel. The higher a channel's
bandwidth, the more information it can carry.

1.2 CHARACTERIZATION OF DISTRIBUTED SYSTEMS

i. Definitions
 Distributed system (1) - A distributed system is one in which components located
at networked computers communicate and coordinate their actions only by passing messages.

ii. Characterization of a Distributed System

There are three significant characteristics that distinguish distributed systems from centralized
systems:
1. Concurrency of components – different system components do work at once and handle
communication (e.g. retrieving results or sending data) by passing messages.
2. Lack of global clock - in distributed systems each system has its own clock. Systems might
somewhat synchronize their clocks sometimes, but they most likely will not have the same time.
3. Independent failures of components - in a distributed system one component might fail due to
some unforeseen (or foreseen) circumstances. Depending on how the distributed system is managed, the
other components may keep on running or fail as well.

iii.Examples of small and large scale distributed systems

Small                                     Large
Small group of interconnected robots      Facebook/Google systems
Distributed database on few computers     WWW
Airplane control software                 Mobile networks / phones
File system on a laptop                   Large P2P systems
UT networked DS                           Grid computing

iv. Examples that appeared on the slides:

Finance and commerce: eCommerce, e.g. Amazon and eBay, PayPal, online banking and trading
The information society: Web information and search engines, ebooks, Wikipedia; social networking:
Facebook and MySpace
Creative industries and entertainment: online gaming, music and film in the home, user-generated
content, e.g. YouTube, Flickr
Healthcare: health informatics, online patient records, monitoring patients
Education: e-learning, virtual learning environments; distance learning
Transport and logistics: GPS in route finding systems, map services: Google Maps, Google Earth
Science: the Grid as an enabling technology for collaboration between scientists
Environmental management: sensor technology to monitor earthquakes, floods or tsunamis

v.Benefits and challenges of distributed systems


Benefits
 Speed
 Cost reduction
 Accessibility / resource management
 Redundancy
 Fault tolerance
Challenges
 Synchronization issues
 Security
 Complexity
 Infrastructure
 Resource Management
 Physical issues (e.g. the speed of light can play a role in data transfer speeds)
 Maintainability
 Programming languages
 Complexity of tools/frameworks
 Power consumption
 Cooling of systems
 Vulnerable to network failures
 Troubleshooting - physical and software problems

1.3 DISTRIBUTED ARCHITECTURE MODELS

Distributed system models are as follows:


1. Architectural Models
2. Interaction Models
3. Fault Models
i.Architectural Models

An architectural model describes how responsibilities are distributed between system components and
how these components are placed.
a) Client-server model
☞ The system is structured as a set of processes, called servers, that offer services to the
users, called clients.
 The client-server model is usually based on a simple request/reply protocol,
implemented with send/receive primitives or using remote procedure calls (RPC) or
remote method invocation (RMI):
 The client sends a request (invocation) message to the server asking for some service;
 The server does the work and returns a result (e.g. the data requested) or an error
code if the work could not be performed.

A server can itself request services from other servers; thus, in this new relation, the server itself
acts like a client.
b) Peer-to-peer
☞ All processes (objects) play a similar role.
 Processes (objects) interact without particular distinction between clients and servers.
 The pattern of communication depends on the particular application.
 A large number of data objects are shared; any individual computer holds only a
small part of the application database.
 Processing and communication loads for access to objects are distributed across
many computers and access links.
 This is the most general and flexible model.
 Peer-to-peer tries to solve some of the problems above.
 It distributes shared resources widely -> shares computing and communication loads.
☞ Problems with peer-to-peer:

 High complexity due to
o cleverly placing individual objects
o retrieving the objects
o maintaining a potentially large number of replicas.

ii.Interaction Model
The interaction model is for handling time, i.e. for process execution, message delivery, clock
drifts, etc.
 Synchronous distributed systems
Main features:
 Lower and upper bounds on execution time of processes can be set.
 Transmitted messages are received within a known bounded time.
 Drift rates between local clocks have a known bound.
Important consequences:
1. In a synchronous distributed system there is a notion of global physical time (with a
known relative precision depending on the drift rate).
2. Only synchronous distributed systems have a predictable behavior in terms of timing.
Only such systems can be used for hard real-time applications.
3. In a synchronous distributed system it is possible and safe to use timeouts in order to
detect failures of a process or communication link.
☞ It is difficult and costly to implement synchronous distributed systems.
 Asynchronous distributed systems
☞ Many distributed systems (including those on the Internet) are asynchronous.
- No bound on process execution time (nothing can be assumed about the speed, load, and reliability of
computers).
- No bound on message transmission delays (nothing can be assumed about the speed, load, and
reliability of interconnections).
- No bounds on drift rates between local clocks.
Important consequences:
1. In an asynchronous distributed system there is no global physical time. Reasoning can
be only in terms of logical time (see lecture on time and state).
2. Asynchronous distributed systems are unpredictable in terms of timing.
3. No timeouts can be used.
☞ Asynchronous systems are widely and successfully used in practice.
iii.Fault Models
☞ Failures can occur both in processes and communication channels. The reason can be both
software and hardware faults.
☞ Fault models are needed in order to build systems with predictable behavior in case of
faults (systems which are fault tolerant).
☞ Such a system will function according to the predictions only as long as the real faults behave
as defined by the “fault model”.

1.4 REMOTE INVOCATION


 request-reply communication: most primitive; minor improvement over underlying IPC primitives

o 2-way exchange of messages as in client-server computing
 RPC, RMI: mechanisms enabling a client to invoke a procedure/method from the server via
communication between client and server
 Remote Procedure Call (RPC): extension of conventional procedural programming model
o allow client programs to transparently call procedures in server programs running in separate
processes, and in separate machines from the client
 Remote Method Invocation (RMI): extension of conventional object oriented programming model
o allows objects in different processes to communicate i.e. an object in one JVM is able to invoke
methods in an object in another JVM
o extension of local method invocation: allows object in one process to invoke methods of an
object living in another process

Request-Reply Protocol

 most common exchange protocol for remote invocation

Operations

 doOperation(): send request to remote object, and returns the reply received
 getRequest(): acquire client request at server port
 sendReply(): sends reply message from server to client
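
A minimal sketch of these three primitives over UDP follows. The message layout (a 4-byte request ID followed by the marshalled payload), the class name RequestReply, and the fixed buffer size are assumptions made for illustration; a real request-reply layer would also handle timeouts, retries, duplicate filtering, and larger messages.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;

// Minimal sketch of the request-reply primitives over UDP (not fault tolerant).
public class RequestReply {
    private static final int MAX = 1024;

    // doOperation: send a request to the remote server and block for the reply.
    public static byte[] doOperation(DatagramSocket socket, InetSocketAddress server,
                                     int requestId, byte[] payload) throws Exception {
        ByteBuffer out = ByteBuffer.allocate(4 + payload.length);
        out.putInt(requestId).put(payload);                       // request ID + marshalled arguments
        socket.send(new DatagramPacket(out.array(), out.capacity(), server));

        DatagramPacket reply = new DatagramPacket(new byte[MAX], MAX);
        socket.receive(reply);                                    // blocks until the reply arrives
        return java.util.Arrays.copyOfRange(reply.getData(), 4, reply.getLength());
    }

    // getRequest: acquire a client request at the server port.
    public static DatagramPacket getRequest(DatagramSocket serverSocket) throws Exception {
        DatagramPacket request = new DatagramPacket(new byte[MAX], MAX);
        serverSocket.receive(request);
        return request;
    }

    // sendReply: send the reply message back to the client, echoing its request ID.
    public static void sendReply(DatagramSocket serverSocket, DatagramPacket request,
                                 byte[] result) throws Exception {
        int requestId = ByteBuffer.wrap(request.getData()).getInt();
        ByteBuffer out = ByteBuffer.allocate(4 + result.length);
        out.putInt(requestId).put(result);
        serverSocket.send(new DatagramPacket(out.array(), out.capacity(),
                request.getAddress(), request.getPort()));
    }
}
```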

Design issues

 timeouts: what to do when a request times out? how many retries?


 duplicate messages: how to discard?
o e.g. recognise successive messages with the same request ID and filter them
 lost replies: dependent on idempotency of server operations
 history: do servers need to send replies without re-execution? then history needs to be maintained
Exchange protocols
Different flavours of exchange protocols:
 request (R): no value to be returned from remote operation
o client needs no confirmation operation has been executed
e.g. sensor producing large amounts of data: may be acceptable for some loss
 request-reply (RR): useful for most client-server exchanges. Reply regarded as acknowledgement of
request
o subsequent request can be considered acknowledgement of the previous reply
 request-reply-acknowledge (RRA): acknowledgement of reply contains request id, allowing server to
discard entry from history

Fault tolerance

1.5 REQUEST/REPLY PROTOCOL

Communication Protocols for Remote Procedure Calls:


The following are the communication protocols that are used:
 Request Protocol
 Request/Reply Protocol
 The Request/Reply/Acknowledgement-Reply Protocol

i. Request Protocol:

 The Request Protocol is also known as the R protocol.

 It is used in Remote Procedure Call (RPC) when a request is made from the calling procedure to the
called procedure. After execution of the request, the called procedure has nothing to return and no
confirmation of the execution of the procedure is required.
 Because there is no acknowledgement or reply message, only one message is sent from client to
server.
 In most cases, asynchronous RPC with an unreliable transport protocol is used to implement
periodic update services. One of its applications is the Distributed System Window.

ii.Request/Reply Protocol:

 The Request-Reply Protocol is also known as the RR protocol.


 It works well for systems that involve simple RPCs.
 This protocol is based on using implicit acknowledgements instead of explicit
acknowledgements.
 To deal with failures, e.g. lost messages, a timeout-based retransmission technique is used with the RR
protocol.
 If a client does not get a response message within the predetermined timeout period, it retransmits the
request message.
 Exactly-once semantics is provided when servers hold responses in a reply cache that helps in
filtering duplicated request messages; reply messages are retransmitted without processing the
request again.
 If there is no mechanism for filtering duplicate messages, then at-least-once call semantics is used by the RR
protocol in combination with timeout-based retransmission.


iii. The Request/Reply/Acknowledgement-Reply Protocol:

 This protocol is also known as the RRA protocol (request/reply/acknowledge-reply).


 In the RR protocol, exactly-once semantics requires responses to be held in the reply cache of
servers, and the server cannot tell when a reply has been delivered, so cached replies accumulate.
 The RRA (Request/Reply/Acknowledgement-Reply) protocol is used to get rid of this drawback of
the RR (Request/Reply) protocol.


 In this protocol, the client acknowledges receipt of the reply message, and only when the server gets
back the acknowledgement from the client does it delete the information from its cache.
 Because the reply acknowledgement message may be lost at times, the RRA protocol requires unique
ordered message identities. This keeps track of the series of acknowledgements that have been sent.
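
To make the interplay between the RR reply cache and the RRA acknowledgement concrete, here is a rough sketch. The ReplyCache class, the clientId:requestId cache key, and the handle/acknowledge methods are illustrative assumptions, not part of any standard library.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a server-side reply cache: duplicate requests (identified by client ID
// plus request ID) are answered from the cache instead of being executed again,
// which is what gives the retransmission scheme exactly-once behaviour.
public class ReplyCache {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    public byte[] handle(String clientId, int requestId, byte[] request,
                         Function<byte[], byte[]> operation) {
        String key = clientId + ":" + requestId;
        byte[] cached = cache.get(key);
        if (cached != null) {
            return cached;                         // retransmit the saved reply, do not re-execute
        }
        byte[] reply = operation.apply(request);   // execute the operation exactly once
        cache.put(key, reply);                     // keep the reply for possible retransmission
        return reply;
    }

    // RRA variant: the client's acknowledgement lets the server evict the entry.
    public void acknowledge(String clientId, int requestId) {
        cache.remove(clientId + ":" + requestId);
    }
}
```

A server loop would call handle() for every incoming request and acknowledge() whenever a reply acknowledgement arrives, so cached replies do not accumulate indefinitely.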
1.6 REMOTE PROCEDURE CALL (RPC)
RPC is a communication technology that is used by one program to make a request to another program for
utilizing its service on a network without even knowing the network’s details. A function call or a subroutine
call are other terms for a procedure call.
It is based on the client-server concept. The client is the program that makes the request, and the server is the
program that provides the service. A Remote Procedure Call program very often uses the Interface
Definition Language (IDL), a specification language for describing a software component’s
Application Programming Interface (API). In this circumstance, IDL acts as an interface between machines
at either end of the connection, which may be running different operating systems and programming
languages.
Working Procedure for RPC Model:
 The process arguments are placed in a precise location by the caller when the procedure needs to be
called.
 Control is then passed to the body of the procedure, which contains a series of instructions.
 The procedure body is run in a newly created execution environment that has copies of the
calling instruction’s arguments.
 At the end, after the completion of the operation, control passes back to the calling point, along
with the result.

 A remote procedure call is made to procedures that are not within the caller’s
address space: both processes (caller and callee) have distinct address spaces, and the remote
procedure has no access to the data and variables of the caller’s environment.
 The caller and callee processes in RPC communicate to exchange information via a
message-passing scheme.
 On the server side, the first task is to extract the procedure’s parameters when a request
message arrives, then compute the result, send a reply message, and finally wait for the next call
message.
 Only one of the two processes is active at a certain point in time.
 The caller is not always required to be blocked.
 An asynchronous mechanism can be employed in RPC that permits the client to keep working
even if the server has not responded yet.
 In order to handle incoming requests, the server might create a thread that frees the server for
handling subsequent requests.
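
The sketch below illustrates this call flow with plain string arguments so the marshalling stays trivial. The class name TinyRpc, the wire format (two UTF strings for the call, one for the result), and the Map-based dispatch table are assumptions made for illustration, not a standard RPC library.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;
import java.util.function.Function;

// Sketch of the RPC flow over TCP: the client stub marshals the procedure name and
// argument into a call message, the server unmarshals them, dispatches to the matching
// procedure, and sends back the marshalled result.
public class TinyRpc {

    // Client stub: hides the message passing behind an ordinary-looking call.
    public static String callRemote(String host, int port,
                                    String procedure, String argument) throws Exception {
        try (Socket socket = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             DataInputStream in = new DataInputStream(socket.getInputStream())) {
            out.writeUTF(procedure);   // marshal the procedure name
            out.writeUTF(argument);    // marshal the argument
            out.flush();
            return in.readUTF();       // block until the result message arrives
        }
    }

    // Server loop: extract the parameters, run the procedure, reply, then wait for the next call.
    public static void serve(int port, Map<String, Function<String, String>> procedures)
            throws Exception {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket client = server.accept();
                     DataInputStream in = new DataInputStream(client.getInputStream());
                     DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                    String procedure = in.readUTF();
                    String argument = in.readUTF();
                    String result = procedures.getOrDefault(procedure, a -> "no such procedure")
                                              .apply(argument);
                    out.writeUTF(result);
                }
            }
        }
    }
}
```

A server could register, for example, procedures.put("upper", s -> s.toUpperCase()), and a client would then invoke callRemote(host, port, "upper", "hi"); the stub hides the socket I/O behind an ordinary-looking call.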

Types of RPC:
Callback RPC: In callback RPC, a P2P (peer-to-peer) paradigm is adopted between the participating processes. In
this way, a process provides both client and server functions, which is quite helpful. Callback RPC’s
features include:
 It addresses the problems encountered with interactive applications that are handled remotely.
 It provides a server for clients to use.
 Due to the callback mechanism, the client process may be delayed.
 Deadlocks need to be managed in callbacks.
 It promotes a peer-to-peer (P2P) paradigm among the processes involved.
RPC for Broadcast: A client’s request that is broadcast throughout the network and handled by all servers
that possess the method for handling that request is known as a broadcast RPC. Broadcast RPC’s features
include:
 You have the option of selecting whether or not the client’s request message ought to be broadcast.
 It also gives you the option of declaring broadcast ports.
 It helps in diminishing physical network load.
Batch-mode RPC: Batch-mode RPC enables the client to queue separate RPC requests in a transmission
buffer before sending them to the server in a single batch over the network. Batch-mode RPC’s features
include:
 It diminishes the overhead of requesting the server by sending requests all at once over the network.
 It is used for applications that require low call rates.
 It necessitates the use of a reliable transmission protocol.
Local Procedure Call Vs Remote Procedure Call:
 Remote Procedure Calls have disjoint address space i.e. different address space, unlike Local
Procedure Calls.
 Remote Procedure Calls are more prone to failures due to possible processor failure or
communication issues of a network than Local Procedure Calls.
 Because of the communication network, remote procedure calls take longer than local procedure
calls.
Advantages of Remote Procedure Calls:
 The technique of using procedure calls in RPC permits high-level languages to provide
communication between clients and servers.
 This method is like a local procedure call but with the difference that the called procedure is executed
on another process and a different computer.
 The thread-oriented model is also supported by RPC in addition to the process model.
 The RPC mechanism is employed to conceal the core message passing method.
 The amount of time and effort required to rewrite and develop the code is minimal.
 The distributed and local environments can both benefit from remote procedure calls.
 To increase performance, it omits several of the protocol layers.
 Abstraction is provided via RPC. For example, the user is not aware of the nature of message
passing in network communication.
 RPC empowers the utilization of applications in a distributed environment.
Disadvantages of Remote Procedure Calls:
 In Remote Procedure Calls, parameters are only passed by value, as pointer values are not allowed.
 It involves a communication system with another machine and another process, so this mechanism is
extremely prone to failure.
 The RPC concept can be implemented in a variety of ways, hence there is no standard.
 Due to its interaction-based nature, RPC offers no flexibility for hardware architecture.
 The cost of the process increases because of a remote procedure call.

1.7 REMOTE METHOD INVOCATION

 RMI is similar to RPC, but extended to distributed objects


 a calling object is able to invoke a method in a remote object
 underlying details are generally hidden from the user

Similarities: RMI and RPC

 support programming with interfaces
 constructed on top of Request-Reply protocols
 can offer range of call semantics
 similar level of transparency

Architectures

 client-server architecture: can be adopted for a distributed object system


o server manages objects
o clients invoke methods using RMI: the request is sent in a message to the server managing the object
o invocation carried out, and result returned to the client
o client/server in different processes enforces encapsulation: unauthorized method invocations are not
possible
 replicated architecture: objects can be replicated to improve fault tolerance and performance

Remote Objects

 remote object: an object able to receive and make local/remote invocations


 remote object references: other objects are able to invoke the methods of a remote object if they have
access to its remote object reference
o identifier used throughout distributed system to refer to particular, unique remote object
 remote interface: every remote object has a remote interface, specifying which of its methods can be
invoked remotely
o class of a remote object implements the methods of the remote interface
o CORBA Interface Definition Language: allows definition of remote interfaces. Clients don’t
need to use the same programming language as the remote object in order to invoke its methods remotely
o Java RMI: interfaces become remote interfaces by extending Remote


Distributed Actions

 actions can be performed on remote objects (i.e. in different processes):


o e.g. executing a remote method defined in the remote interface
o e.g. creating a new object in the target process
 actions are invoked using RMI

Garbage Collection

 if underlying language (e.g. Java) supports GC, any associated RMI system should allow GC of remote
objects
 Distributed GC: local, existing GC cooperates with additional module that counts references to do
distributed GC

Exceptions

 remote invocation may fail


o process may have crashed
o process may be too busy to reply
o result message may have been lost
 need to have all usual exceptions for local invocations
 extra exceptions for remote invocation: e.g. timeouts
 CORBA IDL allows you to specify application-level exceptions

Implementation

 communication module: communicates messages (requests, replies) between client and server
o two cooperating modules implement the request-reply protocol
o responsible for implementing invocation semantics
 remote reference module: creates remote object references
o maintains the remote object table, which maps between local and remote object references
 remote object table: has an entry for each
o remote object reference held by the process
o local proxy
 entries get added to the remote object table when
o a remote object reference is passed for the first time
o a remote object reference is received, and an entry is not present in the table
 servant: an object in the process receiving the remote invocation
o instance of a class that provides the body of a remote object
o lives within a server process

RMI Software

 software layer between the application and the communication and object reference modules, composed of
proxies, dispatchers, and skeletons
 proxy: behaves like a local object to the invoker, making RMI transparent to clients
o usually an interface
o lives in the client
o instead of executing the invocation, it forwards it in a message to a remote object
o hides details of the remote object reference, marshalling, unmarshalling, and sending/receiving of messages from the client
o one proxy per remote object reference the process holds
 dispatcher: translates the method ID to the real method
o the server has one dispatcher and one skeleton for each class representing a remote object
o receives requests from the communication module
o uses the operationId to select the appropriate method in the skeleton
 skeleton: skeleton class of the remote object implementing the methods of the remote interface
o handles marshalling/unmarshalling
o skeleton methods unmarshal the arguments in the request, invoke the corresponding method in the servant, and
marshal the result in a reply message to the sending proxy’s method

Development

1. definition of interface for remote objects: defined using supported mechanism of the particular RMI
software
2. compile interface: generate proxy, dispatcher, skeleton classes
3. writing server: remote object classes are implemented and compiled with classes for dispatchers and
skeletons. Server is also responsible for creating/initialising objects, and registering them with the
binder.
4. writing client: client programs implement invoking code and contain proxies for all remote classes.
Binder used to lookup remote objects

Server and client program

 server contains
o classes for dispatchers, skeletons
o initialisation section for creating/initialising at least one servant
o code for registering servants with the binder
 client contains
o classes for all proxies of remote objects

Activation of Remote Objects

 some applications require information to survive for long periods, but it is not practical to keep it in
running processes indefinitely
 to avoid wasting resources from running all servers that manage remote objects simultaneously, servers
can be started whenever needed by clients
 activator: processes that start server processes to host remote objects
o registers passive objects available for activation
o starts named server processes and activates remote objects in them
o keeps track of locations of servers for remote objects that have already been activated
 active remote object: is one that is available for invocation in the process
 passive remote object: is not currently active, but can be made active. Contains
o implementation of the methods
o state in marshalled form
 activation: creating an active object from the corresponding passive object

Java RMI

 Java RMI extends Java object model to provide support for distributed objects,
 allowing objects to invoke methods on remote objects using the same syntax as for local invocations
 objects making remote invocations are aware their target is remote, as they must handle RemoteException
 the implementer of a remote object is aware the object is remote, as it extends the Remote interface

Developing a Java RMI server:

1. specify remote interface


2. implement Servant class
3. compile interface and servant classes
4. generate skeleton and stub classes
5. implement server
6. compile server

Developing a Java RMI client:

1. implement client program


2. compile client program
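
The fragment below is a minimal sketch of these steps using the standard java.rmi API; the names Hello, HelloServant, HelloServer, HelloClient, and the registry entry "Hello" are illustrative. On current Java versions the stub classes of step 4 are generated dynamically at export time, so no separate stub-compilation step is needed.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Step 1: the remote interface extends Remote; every method declares RemoteException.
interface Hello extends Remote {
    String greet(String name) throws RemoteException;
}

// Step 2: the servant class implements the remote interface.
class HelloServant implements Hello {
    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }
}

// Step 5: the server exports the servant and registers it with the RMI registry (the binder).
public class HelloServer {
    private static Hello servant;   // strong reference so the servant is not garbage collected

    public static void main(String[] args) throws Exception {
        servant = new HelloServant();
        Hello stub = (Hello) UnicastRemoteObject.exportObject(servant, 0); // 0 = any free port
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Hello", stub);
        System.out.println("Hello server ready");
    }
}

// Client: look up the remote object reference in the registry and invoke it like a local object.
class HelloClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        Hello hello = (Hello) registry.lookup("Hello");
        System.out.println(hello.greet("world"));   // remote method invocation
    }
}
```

Running HelloServer and then HelloClient on the same machine should print Hello, world on the client; the registry plays the role of the binder used to look up the remote object reference.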

Goals of RMI
Following are the goals of RMI −
 To minimize the complexity of the application.
 To preserve type safety.
 Distributed garbage collection.
 Minimize the difference between working with local and remote objects.

1.8 GROUP COMMUNICATION

Communication between two processes in a distributed system is required to exchange
various data, such as code or a file, between the processes. When one source process tries to
communicate with multiple processes at once, it is called group communication. A group
is a collection of interconnected processes with abstraction. This abstraction is to hide the
message passing so that the communication looks like a normal procedure call. Group
communication also helps processes from different hosts to work together and perform
operations in a synchronized manner, therefore increasing the overall performance of the
system.

Types of Group Communication in a Distributed System :


 Broadcast Communication :
When the host process tries to communicate with every process in a distributed system at the
same time. Broadcast communication comes in handy when a common stream of
information is to be delivered to each and every process in the most efficient manner possible.
Since it does not require any processing whatsoever, communication is very fast in
comparison to other modes of communication.
However, it does not support a large number of processes and cannot treat a specific
process individually.

A broadcast communication: P1 process communicating with every process in the system
 Multicast Communication :
When the host process tries to communicate with a designated group of processes in a
distributed system at the same time. This technique is mainly used to address the problem of a
high workload on the host system and redundant information from processes in the system.
Multicasting can significantly decrease the time taken for message handling.

A multicast communication: P1 process communicating with only a group of processes in the system
 Unicast Communication :
When the host process tries to communicate with a single process in a distributed system
at the same time. Although the same information may be passed to multiple processes, this
works best for two processes communicating, as it only has to deal with one specific process.
However, it leads to overheads, as it has to find the exact process and then exchange
information/data.


A unicast communication: P1 process communicating with only the P3 process
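
The sketch below contrasts the three modes in code. The GroupSender class, the Member record, and the host:port addressing are assumptions made for illustration; real group communication would rely on a group-membership service and, where available, network-level multicast support.

```java
import java.io.DataOutputStream;
import java.net.Socket;
import java.util.List;

// Sketch contrasting the three communication modes: unicast sends to one member,
// multicast to a chosen group, and broadcast to every known member of the system.
public class GroupSender {
    public record Member(String host, int port) {}

    private static void send(Member m, String message) throws Exception {
        try (Socket socket = new Socket(m.host(), m.port());
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeUTF(message);
        }
    }

    public static void unicast(Member target, String message) throws Exception {
        send(target, message);                      // one specific process
    }

    public static void multicast(List<Member> group, String message) throws Exception {
        for (Member m : group) send(m, message);    // a designated group of processes
    }

    public static void broadcast(List<Member> everyone, String message) throws Exception {
        multicast(everyone, message);               // every process in the system
    }
}
```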

1.9 COORDINATION IN DISTRIBUTED SYSTEMS

High Availability
Highly available systems are a set of computers that work, act, and appear as one computer
to the outside world.
Consensus
Consensus is a fundamental problem in fault-tolerant distributed systems: a set of processes
must agree on a single value.
Distributed Coordination Systems
There are many popular distributed coordination systems available, like Apache ZooKeeper
and etcd. All these systems provide the above-mentioned functionality of a distributed
coordination system without requiring users to worry about the consistency of the values
they are storing in those distributed stores.
1. Examples of distributed systems
Here are the four main distributed systems normal people actually use:
1. The vast collection of routing mechanisms that make up the Internet.
2. The domain name system (DNS). Essentially a large hierarchical database for
translating domain names like www.cs.yale.edu into IP addresses.
3. The World Wide Web. Structurally not all that different from the bottom layer of DNS,
with webservers replacing nameservers.

4. The SMTP-based email system. This is to packet routing what the web is to DNS: a
store-and-forward system implemented at a high level in the protocol stack rather than
down in the routers. Reliability is important.
2. Distributed coordination
For some distributed systems, it's necessary to get more coordination between components. The
classic example is banking.
3. Timestamps
We can clear up the confusion somewhat by assigning our own synthetic times, or timestamps, to all events.
Distributed mutual exclusion
There are several options:
Centralized coordinator
To obtain a lock, a process sends a request message to the coordinator. The coordinator marks
the lock as acquired and responds with a reply message.
Token passing
We give a unique token to some process initially. When that process is done with the lock (or if
it didn't need to acquire it in the first place), it passes the token on to some other process.
Timestamp algorithm
This is similar to the ticket machine approach used in delicatessens.
4. Distributed transactions
Distributed transactions go beyond a simple mutual exclusion approach and provide actual atomic
transactions. Here atomicity means that the transaction either occurs in full (i.e. every participant
updates its local state) or not at all (no local state changes).
5.Distributed commit protocol
Phase 1
1. Coordinator writes prepare(T) to its log.
2. Coordinator sends prepare(T) message to all the participants in T.
3. Each participant replies by writing fail(T) or ready(T) to its log.
4. Each participant then sends a message fail(T) or ready(T).
Phase 2
1. Coordinator waits to receive replies from all participants or until a timeout expires.
2. If it gets ready(T) from every participant, it may commit the transaction by writing
commit(T) to its log and sending commit(T) to all participants; otherwise, it writes and sends
abort(T).
3. Each participant records the message it received from the coordinator in its log. In the
case of an abort, it also undoes any changes it made to its state as part of the transaction.
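
The coordinator's side of this protocol can be sketched as follows. The Participant interface and its prepare/commit/abort methods are hypothetical stand-ins for the prepare(T), ready(T)/fail(T), commit(T), and abort(T) messages; durable logging and the timeout are reduced to comments.

```java
import java.util.List;

// Sketch of the coordinator's side of two-phase commit.
public class TwoPhaseCommitCoordinator {

    public interface Participant {
        boolean prepare(String txId);   // true corresponds to ready(T), false to fail(T)
        void commit(String txId);
        void abort(String txId);
    }

    public static boolean runTransaction(String txId, List<Participant> participants) {
        // Phase 1: write prepare(T) to the log, then ask every participant to vote.
        boolean allReady = true;
        for (Participant p : participants) {
            // A real coordinator would also decide to abort if a participant does not
            // answer before a timeout expires.
            if (!p.prepare(txId)) {
                allReady = false;
                break;
            }
        }

        // Phase 2: write commit(T) or abort(T) to the log, then tell every participant.
        if (allReady) {
            for (Participant p : participants) p.commit(txId);
        } else {
            for (Participant p : participants) p.abort(txId);
        }
        return allReady;
    }
}
```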

6. Agreement protocols
There are various ways to get around the FLP impossibility result; the most practical ones use mechanisms
that in the theoretical literature are modeled as abstract failure detectors but that in practice tend to
involve using timeouts.

1.10 ORDERED MULTICAST

i.FIFO-ORDERED MULTICAST

ii.TOTAL-ORDERED MULTICAST

1.11 TIME ORDERING

A total order is a binary relation that defines an order for every element in some set. Two
distinct elements are comparable when one of them is greater than the other. In a partially
ordered set, some pairs of elements are not comparable and hence a partial order doesn't
specify the exact order of every item.


i.Time and clocks and ordering of events in a distributed system

One thing “happened before” another is a relation that is codified by time, using physical clocks as a
mechanism to deliver time. So ordering of events is done using physical clocks in the real world. In a
distributed system, this notion of time needs to be understood more carefully.
Each node can establish its own local order of events. But how can one establish a coherent order of
events across multiple nodes?

ii. Partial ordering of events in a distributed system

One way to define an order of events in a distributed system would be to have a physical
clock. So a “happened before” event can be described using t1 < t2. But clocks are not
accurate, can drift, and are not necessarily synchronized in lock-step. So this paper takes
another approach to define the “happened before” relation. (The meaning of “partial order” will
become clear later.)
A distributed system can be defined as a collection of processes. A process essentially
consists of a queue of events (which can be anything — like an instruction, or a subprogram,
or anything meaningful) that has an a priori order. In this system, when processes communicate
with each other, the sending of a message is defined as an event. Let’s establish a more precise
definition of “happens before” (⇢) in such a system.

1. In the same process, if event a comes before event b, then a ⇢ b


2. When process i sends some message at a to process j and process j
acknowledges this message at b, then a ⇢ b
3. Happens before is transitive. a ⇢ b and b ⇢ c, then a ⇢ c.
4. Two events a and b are described as concurrent when a hasn’t happened before b and b
hasn’t happened before a. This condition generally reflects the fact that process i may not
have knowledge about all events that could have happened in process j. So we cannot
establish an authoritative order in the system; this also makes it clear that we have only a
partial order in the system. There could be a lot of events in the system for which no
meaningful order can be established without relying on physical time.
Taking some examples might be useful.
Partial ordering of events in the system. Vertical bars are processes. Wavy arrows are
messages. A dot is an event. Time elapses bottom up.

As you can see in the figure above, p1 ⇢ p2, as those are events in the same
process. q1 ⇢ p2 due to the sending of a message from process Q to process P. If we consider that
time elapses bottom up, q3 seems to happen before p3 if we were to consider physical
time. But in this system, we will call these events concurrent. Neither process knows
about the other's event; there is no causal relationship at this time. But what we can say is that
q3 ⇢ p4, because q3 ⇢ q5 and q5 ⇢ p4. Similarly, we can say that p1 ⇢ r3. Now, having gone through
some examples, it becomes clear that another way to envisage a ⇢ b is that event a can causally
affect event b. On a similar note, q3 and p3 mentioned above are not causally related.

1.12 PHYSICAL CLOCK SYNCHRONIZATION

A distributed system is a collection of computers connected via a high-speed communication
network. In a distributed system, the hardware and software components communicate and
coordinate their actions by message passing. Each node in a distributed system can share its
resources with other nodes. So, there is a need for proper allocation of resources to preserve the state
of resources and help coordinate between the several processes. To resolve such conflicts,
synchronization is used. Synchronization in distributed systems is achieved via clocks.

The physical clocks are used to adjust the time of nodes. Each node in the system can share its local
time with other nodes in the system. The time is set based on UTC (Coordinated Universal Time).
UTC is used as a reference time clock for the nodes in the system.

Clock synchronization can be achieved in 2 ways: External and Internal Clock
Synchronization.

1. External clock synchronization is the one in which an external reference clock is
present. It is used as a reference and the nodes in the system can set and adjust their time
accordingly.
2. Internal clock synchronization is the one in which each node shares its time with other
nodes and all the nodes set and adjust their times accordingly.
There are 2 types of clock synchronization algorithms: Centralized and Distributed.

 Centralized is the one in which a time server is used as a reference. The single time server
propagates its time to the nodes, and all the nodes adjust their time accordingly. Examples of
centralized algorithms are the Berkeley Algorithm, Passive Time Server, Active Time Server, etc.
 Distributed is the one in which there is no centralized time server present. Instead, the nodes
adjust their time by using their local time and then taking the average of the differences of time
with other nodes. Distributed algorithms overcome the issues of centralized algorithms such as
scalability and single point of failure. Examples of distributed algorithms are the Global Averaging
Algorithm, Localized Averaging Algorithm, NTP (Network Time Protocol), etc.
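
As a rough illustration of the centralized, Berkeley-style averaging step, the sketch below computes the adjustment each node should apply, assuming the time server has already polled every node for its clock value. Round-trip-time compensation and the rejection of outlier clocks are omitted, and the class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the averaging step used by internal, Berkeley-style synchronization:
// the time server polls each node for its clock value, averages the differences
// from its own clock, and sends each node the adjustment it should apply.
public class BerkeleyAverage {

    // reportedClocks maps node name -> that node's current clock (millis);
    // serverClock is the coordinator's own clock reading taken at the same moment.
    public static Map<String, Long> computeAdjustments(long serverClock,
                                                       Map<String, Long> reportedClocks) {
        long sumOfOffsets = 0;
        for (long clock : reportedClocks.values()) {
            sumOfOffsets += clock - serverClock;           // offset of each node from the server
        }
        // Average over all clocks, including the server's own (whose offset is 0).
        long averageOffset = sumOfOffsets / (reportedClocks.size() + 1);
        long targetTime = serverClock + averageOffset;     // the agreed "group" time

        Map<String, Long> adjustments = new HashMap<>();
        for (Map.Entry<String, Long> e : reportedClocks.entrySet()) {
            adjustments.put(e.getKey(), targetTime - e.getValue());  // shift each node must apply
        }
        return adjustments;
    }
}
```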

1.13 LOGICAL TIME AND LOGICAL CLOCKS


13.1 Logical Time
Let’s call this function Ci(a), a counter in process i for event a. There are no physical
clocks in the system. The clock function establishes the invariant that if event a happened
before event b, then C(a) < C(b). Using this function, a partial ordering of events in the
system can be established using the following two conditions:

C1: Ci(a) < Ci(b) if a happens before b in the same process i. This can be implemented using a
simple counter in the given process.

C2: When process i sends a message at event a and process j acknowledges the message at event
b, then Ci(a) < Cj(b).

These clock functions can be thought of as ticks that occur regularly in a process. Between any
two events, there needs to be at least one such tick. Each tick essentially increments the
number assigned to the previous tick. Across processes, when a message is sent, that message
needs to cross or touch a tick boundary to define the “happens before” event.

This can be implemented in the following way:


IR1: This one obeys C1. This can be done by incrementing Ci between any two
successive events in the same process.

IR2: This one implements C2. It can be done by sending Ci(a) as a timestamp to process j.
When process j acknowledges the receipt of this message at event b, it needs to set Cj(b).
Cj(b) will be set to a value ≥ the current Cj and also greater than Ci(a), the timestamp.
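
A minimal sketch of a Lamport clock implementing IR1 and IR2 follows; the class name LamportClock and its method names are illustrative. Each process would keep one instance and attach the returned values to its outgoing messages.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a Lamport logical clock implementing IR1 and IR2.
public class LamportClock {
    private final AtomicLong counter = new AtomicLong(0);

    // IR1: tick between any two successive local events.
    public long localEvent() {
        return counter.incrementAndGet();
    }

    // IR2 (sender side): a send is an event; its timestamp travels with the message.
    public long timestampForSend() {
        return counter.incrementAndGet();
    }

    // IR2 (receiver side): on receipt, advance past both the local clock and the
    // timestamp carried by the message, so that Cj(b) > Ci(a).
    public long onReceive(long messageTimestamp) {
        return counter.updateAndGet(current -> Math.max(current, messageTimestamp) + 1);
    }
}
```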

i. Total ordering of events in the system


So far the system of logical clocks has established a partial order of events in the system.
There are still events which are concurrent, and it would be useful to break ties, especially for
the locking problem described in the introduction — someone needs to get the lock. This
can be done by introducing an arbitrary process priority. In case of a tie, the process with
lower priority gets the event that happened before. More formally, a total order a ⇥ b (notice
the new type of arrow) can be defined as:


1. Ci(a) < Cj(b).


2. If Ci(a) = Cj(b), then use Pi < Pj.

These two conditions imply that ⇥ completes the partial relationship ⇢. If two events
are partially ordered then they are totally ordered already. While partial ordering is
unique in the given system of events, total ordering may not be.
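
One way to express this tie-breaking rule in code is a comparator over (timestamp, process ID) pairs; the TimestampedEvent record below is an illustrative structure, not something defined in the text.

```java
import java.util.Comparator;

// Sketch of the total order a ⇥ b: compare Lamport timestamps first, then break
// ties with an arbitrary but fixed process identifier.
public record TimestampedEvent(long lamportTime, int processId, String description) {

    public static final Comparator<TimestampedEvent> TOTAL_ORDER =
            Comparator.comparingLong(TimestampedEvent::lamportTime)   // Ci(a) < Cj(b)
                      .thenComparingInt(TimestampedEvent::processId); // tie-break: Pi < Pj
}
```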

ii. Distributed locks using total ordering


Consider the following problem which can be quite common in distributed systems. The
central idea of the problem is to access a shared resource, but only one process can access it
at any time. More formal conditions can be specified as:

1. A process which has been granted a resource must release it before any other process
can acquire it.
2. Resource access requests should obey the order in which requests are made.
3. If every process releases the resource it asked for, then eventually access is granted
for that resource.

One possible solution could be to introduce a centralized scheduler. While one issue is that it is
completely centralized, another is that ordering condition 2 may not work, meaning that event
order would not be obeyed. To address this issue, we can use total ordering based off of IR1 and
IR2. With this, every event is totally ordered in the system. As long as all the processes know
about requests made by other processes, the correct ordering can be enforced.
A decentralized solution can be designed such that each process keeps a queue of lock and
unlock operations.

1. Process i, asking for a resource lock, uses the current timestamp and puts
lock(T, Pi) in its queue. It also sends this message to all other processes.
2. All other processes put this message in their queue and send a response back with a
new timestamp Tn.
3. To unlock a resource, process i sends an unlock(T, Pi) message to all processes and
removes the lock(T, Pi) message from its own queue.
4. Process Pj, upon getting the unlock message, removes the lock(T, Pi) message from its
queue.
5. Process Pi is free to use the resource, i.e. gets its lock request granted, when it has the
lock(T, Pi) message in its queue with T enforcing the total order such that T is before
any other message in the queue. In addition, process Pi needs to wait until it has received
messages from all the processes in the system timestamped later than T.
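
The per-process bookkeeping for this decentralized scheme can be sketched as follows, reusing the TimestampedEvent record and its total order from the earlier sketch. Message transport, the Lamport clock updates, and failure handling are deliberately left out, and all names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListSet;

// Sketch of the per-process state for the decentralized lock described above.
// requestLock/onRequest/onRelease would be driven by the lock/unlock messages
// exchanged between processes, all tagged with Lamport timestamps.
public class LamportMutex {
    private final int myId;
    // Pending lock requests, kept in total (timestamp, processId) order.
    private final ConcurrentSkipListSet<TimestampedEvent> requestQueue =
            new ConcurrentSkipListSet<>(TimestampedEvent.TOTAL_ORDER);
    // Highest timestamp seen from every other process, needed for rule 5.
    private final Map<Integer, Long> latestTimestampFrom = new ConcurrentHashMap<>();

    public LamportMutex(int myId) { this.myId = myId; }

    // Step 1: queue our own lock(T, Pi); the caller also broadcasts it to all processes.
    public void requestLock(long timestamp) {
        requestQueue.add(new TimestampedEvent(timestamp, myId, "lock"));
    }

    // Step 2/4: bookkeeping for lock messages received from process `from`.
    public void onRequest(long timestamp, int from) {
        requestQueue.add(new TimestampedEvent(timestamp, from, "lock"));
        latestTimestampFrom.put(from, timestamp);
    }

    public void onRelease(long timestamp, int from) {
        requestQueue.removeIf(e -> e.processId() == from);   // drop that process's lock request
        latestTimestampFrom.put(from, timestamp);
    }

    // Step 5: we hold the lock when our request is first in the total order and every
    // other process has been heard from with a later timestamp.
    public boolean mayEnterCriticalSection() {
        TimestampedEvent head = requestQueue.isEmpty() ? null : requestQueue.first();
        if (head == null || head.processId() != myId) return false;
        return latestTimestampFrom.values().stream().allMatch(t -> t > head.lamportTime());
    }

    // Step 3: release removes our own entry; the caller broadcasts unlock(T, Pi).
    public void releaseLock() {
        requestQueue.removeIf(e -> e.processId() == myId);
    }
}
```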

13.2 Logical Clock in Distributed System


Logical clocks refer to implementing a protocol on all machines within your distributed system, so
that the machines are able to maintain a consistent ordering of events within some virtual timespan. A
logical clock is a mechanism for capturing chronological and causal relationships in a distributed
system. Distributed systems may have no physically synchronous global clock, so a logical clock allows
a global ordering of events from different processes in such systems.
Example :
Suppose we have more than 10 PCs in a distributed system and every PC is doing its own work, but
then how do we make them work together? The solution to this is a LOGICAL CLOCK.
Method-1:
To order events across processes, one approach is to try to sync the clocks.

This means that if one PC has a time of 2:00 pm then every PC should have the same time, which is not
really possible. Not every clock can sync at one time. So we can't follow this method.

Method-2:
Another approach is to assign timestamps to events.
Taking the example into consideration, this means if we assign the first place as 1, second place as 2,
third place as 3 and so on, then we always know that the first place will always come first and so
on. Similarly, if we give each PC its own individual number, then it will be organized in a way that the
1st PC will complete its process first, then the second, and so on.
BUT, timestamps will only work as long as they obey causality.
Causality
Causality is fully based on the HAPPENED BEFORE RELATIONSHIP.
 Taking a single PC only, if 2 events A and B occur one after the other, then TS(A) < TS(B). If A
has a timestamp of 1, then B should have a timestamp of more than 1; only then does the happened-before
relationship hold.
 Taking 2 PCs with event A in P1 (PC.1) and event B in P2 (PC.2), then also the condition will be
TS(A) < TS(B). Taking an example: suppose you are sending a message to someone at 2:00:00 pm, and
the other person is receiving it at 2:00:02 pm. Then it's obvious that TS(sender) < TS(receiver).

Properties Derived from Happen Before Relationship –


 Transitive Relation –
If TS(A) < TS(B) and TS(B) < TS(C), then TS(A) < TS(C)
 Causally Ordered Relation –
a->b, this means that a occurs before b, and if there is any change in a it will surely
reflect on b.
 Concurrent Event –
This means that not every process occurs one by one; some processes are made to happen
simultaneously, i.e., A || B.

MC4203 CLOUD COMPUTING TECHNOLOGIES UNIT II

UNIT II INTRODUCTION TO CLOUD COMPUTING


Cloud Computing Basics – Desired Features Of Cloud Computing – Elasticity In Cloud – On
Demand Provisioning – Applications – Benefits – Cloud Components-: Clients , Datacenters &
Distributed Servers – Characterization Of Distributed Systems – Distributed Architectural
Models – Principles Of Parallel And Distributed Computing – Applications Of Cloud Computing –
Benefits – Cloud Services- Open Source Cloud Software : Eucalyptus, Open Nebula, Open Stack,
Aneka, CloudSim.

CLOUD COMPUTING
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.
2.1 CLOUD COMPUTING BASICS
Cloud Computing is the delivery of computing services such as servers, storage, databases,
networking, software, analytics, intelligence, and more, over the Cloud (Internet). We can take any
required services on rent. ... The cloud computing services will be charged based on usage.

The cloud environment provides an easily accessible online portal that makes it handy for the user
to manage the compute, storage, network, and application resources.

i. Advantages of cloud computing


o Cost
o Speed
o Scalability
o Productivity
o Reliability
o Security

ii.Types of Cloud Computing

o Public Cloud: The cloud resources that are owned and operated by a third-party cloud
service provider are termed as public clouds. It delivers computing resources such as
servers, software, and storage over the internet
o Private Cloud: The cloud computing resources that are exclusively used inside a single
business or organization are termed as a private cloud. A private cloud may physically be
located on the company’s on-site datacentre or hosted by a third-party service provider.
o Hybrid Cloud: It is the combination of public and private clouds, which are bound
together by technology that allows data and applications to be shared between them. A hybrid
cloud provides flexibility and more deployment options to the business.

iii. Types of Cloud Services


1. Infrastructure as a Service (IaaS): In IaaS, we can rent IT infrastructures like servers


and virtual machines (VMs), storage, networks, operating systems from a cloud service
vendor.
2. Platform as a Service (PaaS): This service provides an on-demand environment for
developing, testing, delivering, and managing software applications. The developer is
responsible for the application, and the PaaS vendor provides the ability to deploy and
run it.
3. Software as a Service (SaaS): It provides a centrally hosted and managed software
services to the end-users. It delivers software over the internet, on-demand, and typically
on a subscription basis. E.g., Microsoft One Drive, Dropbox, WordPress, Office 365, and
Amazon Kindle.

iv. Reason for Cloud Computing

Small as well as large IT companies follow the traditional methods to provide the IT
infrastructure.

In that server room, there should be a database server, mail server, networking, firewalls, routers,
modem, switches, QPS (queries per second, i.e. how many queries or how much load will be handled by
the server), a configurable system, high network speed, and the maintenance engineers.

To establish such IT infrastructure, we need to spend lots of money. To overcome all these
problems and to reduce the IT infrastructure cost, Cloud Computing comes into existence.

v. Characteristics of Cloud Computing


The characteristics of cloud computing are given below:
1) Agility
2) High availability and reliability
3) High Scalability
4) Multi-Sharing
5) Device and Location Independence
6) Maintenance
7) Low Cost
8) Services in the pay-per-use mode

2.2 DESIRED FEATURES OF CLOUD COMPUTING


Following are the characteristics of Cloud Computing:
1. Resources Pooling
2. On-Demand Self-Service
3. Easy Maintenance
4. Large Network Access
5. Availability
6. Automatic System
7. Economical
8. Security
9. Pay as you go
10. Measured Service

2.3. ELASTICITY ON CLOUD


Cloud elasticity is one of the most important features of cloud computing, and a major selling
point for organizations to migrate from their on-premises infrastructure.

i.Cloud Elasticity
Cloud elasticity is the ability to rapidly and dynamically allocate cloud resources, including
compute, storage, and memory resources, in response to changing demands.


The goal of cloud elasticity is to avoid either over provisioning or under provisioning a particular
service or application. Over provisioning (i.e. allocating too many resources) results in higher
expenditures than necessary, while under provisioning (allocating too few resources) means that
not all users will be able to access the service.

ii. Cloud Elasticity vs. Cloud Scalability


Both cloud elasticity and cloud scalability are part of a larger concern about system adaptability,
i.e. the ability of a system to adapt to a changing environment.
Cloud scalability refers to the ability of a system to remain operational and responsive in
response to growth and gradual changes in user demand over time. As such, scalability tends to
be a strategic action that increases or decreases resource consumption on an as-needed basis.
Cloud scalability is useful for infrastructure or applications that undergo regular or predictable
changes in demand—for example, a costume website receiving most of its traffic in October
before Halloween.
Cloud elasticity, on the other hand, refers to the ability of a system to remain operational and
responsive during rapid and/or unexpected spikes in user demand. Elasticity is a tactical action
that ensures uninterrupted access, even during usage peaks. Cloud elasticity is a necessity for any
infrastructure or applications that may experience sudden bursts in popularity—for example,
websites for auctions or concert tickets that have large amounts of traffic in a very short period
of time.

iii. Cloud Elasticity Use Cases

Cloud elasticity is helpful in a wide variety of use cases. For example:

• E-commerce websites may have events such as sales, promotions, and the release of
special items that attract a much larger number of customers than usual. Cloud elasticity
helps these websites allocate resources appropriately during times of high demand so that
customers can still check out their purchases.
• Streaming services need to appropriately handle events such as the release of a popular
new album or TV series. Netflix, for example, claims that it can add “thousands of virtual
servers and petabytes of storage within minutes,” so that users can keep enjoying their
favorite entertainment no matter how many other people are watching.

iv. The Benefits of Cloud Elasticity


The benefits of cloud elasticity include:

• Flexibility
• “Pay as you go” cost model
• Fault tolerance and high availability



Elasticity refers to the dynamic allocation of cloud resources to projects, workflows, and
processes.
v. Cloud Elasticity
Cloud elasticity is the process by which a cloud provider will provision resources to an enterprise’s processes based on the needs of those processes.
Cloud providers have systems in place to automatically deliver or remove resources in order to provide just the right amount of assets for each project.

vi. The Pros of Cloud Elasticity


The pros of cloud elasticity include:
• High availability and reliability
• Facilitating growth
• Automation
• Cost-effectiveness
vii. The Cons of Cloud Elasticity
The cons of cloud elasticity include:
• Finding the right talent
• Learning curve
• Security issues
• Risk of vendor lock in


2.4 ON DEMAND PROVISIONING


i. On-Demand Self-Service
On-demand self-service means that a consumer can request and receive access to a service
offering, without an administrator or some sort of support staff having to fulfill the request
manually.
The request processes and fulfillment processes are all automated. This offers advantages for
both the provider and the consumer of the service.
Implementing user self-service allows customers to quickly procure and access the services they
want.
This is a very attractive feature of the cloud. It makes getting the resources you need very quick
and easy.
With traditional environments, requests often took days or weeks to be fulfilled, causing delays
in projects and initiatives.
User self-service is generally implemented via a user portal.
When implementing user self-service, you need to be aware of potential compliance and
regulatory issues.
It’s important that you understand which process can or cannot be automated in implementing
self-service in your environment.
On-Demand Self-Service
Cloud computing provides resources on demand, i.e. when the consumer wants it. This is made
possible by self-service and automation. Self-service means that the consumer performs all the
actions needed to acquire the service herself, instead of going through an IT department, for
example. The consumer’s request is then automatically processed by the cloud infrastructure,
without human intervention on the provider’s side.


On-demand self-service computing implies a high level of planning. For instance, a cloud
consumer can request a new virtual machine at any time, and expects to have it working in a
couple of minutes. The underlying hardware, however, might take 90 days to get delivered to the
provider. It is therefore necessary to monitor trends in resource usage and plan for future
situations well in advance.
Simple User Interfaces
The cloud provider can’t assume much specialized knowledge on the consumer’s part. In a
traditional enterprise IT setting, IT specialists process requests from business. They know, for
instance, how much RAM is going to be needed for a given use case. No such knowledge can be
assumed on the part of a cloud service consumer.
Policies
The high level of automation required for operating a cloud means that there is no opportunity
for humans to thoroughly inspect the specifics of a given situation and make an informed
decision for a request based on context.
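In practice, such decisions are therefore encoded in advance as policies that the automation applies to every request. A minimal Python sketch of an automated policy check (the limits and request fields are assumptions made purely for illustration):

```python
# Sketch: an automated policy check applied to a self-service request.
# The policy limits and request fields are invented for this example.
POLICY = {
    "max_vcpus": 8,
    "max_storage_gb": 500,
    "allowed_regions": {"us-east-1", "eu-west-1"},
}

def approve(request):
    """Return True if the request satisfies the policy, with no human review."""
    return (request["vcpus"] <= POLICY["max_vcpus"]
            and request["storage_gb"] <= POLICY["max_storage_gb"]
            and request["region"] in POLICY["allowed_regions"])

print(approve({"vcpus": 4, "storage_gb": 100, "region": "us-east-1"}))   # True
print(approve({"vcpus": 32, "storage_gb": 100, "region": "us-east-1"}))  # False
```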
Cloud On-Demand Self-Service
The first of the essential characteristics that I want to cover is on-demand self-service. The
definition from the NIST is, “A consumer can unilaterally provision computing capabilities, such
as server time and network storage, as needed automatically without requiring human interaction
with each service provider.”
Cloud On-Demand Self-Service Example – AWS EC2 Walkthrough
The easiest way to describe this is to show you how it actually works, so let’s have a look
at Amazon Web Services and I will automatically provision a virtual machine running in the
AWS cloud.
Cloud On-Demand Self-Service In Action
On the server (compute) side, it creates a virtual machine with the number of vCPUs and the amount of memory I selected. It also installs the Windows operating system for me, places it on a 30 GB boot drive because that is what I selected for storage, and configures my firewall rules as well.
On-Demand Self-Service Benefits
Without on-demand self-service, a request like this would traditionally be passed to different IT teams to configure all of those different settings.
The server team would physically rack up the server, install the operating system, do the
software patching and install any applications.
The networking team would configure the VLAN and IP subnet that the virtual machine is going
to be in and also the firewall rules to allow the incoming RDP connection.
The storage team would provision the 30 GB boot disk and attach it to this particular server.
That would all take a lot of time and be done as individual manual tasks by the different teams.
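As a rough illustration of the same provisioning done programmatically rather than through the portal, the sketch below uses the AWS SDK for Python (boto3) to request a Windows virtual machine with a 30 GB boot volume. It assumes boto3 is installed and AWS credentials are configured; the AMI ID, key pair, and security group are placeholders, not real identifiers.

```python
# Sketch: programmatic on-demand provisioning with the AWS SDK for Python (boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder Windows Server AMI
    InstanceType="t3.medium",                    # vCPUs and memory selected here
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # firewall rules (e.g. allow RDP)
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 30, "VolumeType": "gp3"},  # 30 GB boot drive
    }],
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```

Within a minute or two the instance is running, which is the same self-service experience the console walkthrough above describes.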
2.5 APPLICATIONS

Cloud service providers provide various applications in the field of art, business, data storage and
backup services, education, entertainment, management, social networking, etc.

The most widely used cloud computing applications are given below


1. Art Applications

Cloud computing offers various art applications for quickly and easily designing attractive cards, booklets, and images. Some of the most commonly used cloud art applications are given below:
i Moo
Moo is one of the best cloud art applications. It is used for designing and printing business cards,
postcards, and mini cards.
ii. Vistaprint
Vistaprint allows us to easily design various printed marketing products such as business cards,
Postcards, Booklets, and wedding invitations cards.
iii. Adobe Creative Cloud
Adobe creative cloud is made for designers, artists, filmmakers, and other creative professionals.
It is a suite of apps which includes PhotoShop image editing programming, Illustrator, InDesign,
TypeKit, Dreamweaver, XD, and Audition.

2. Business Applications

Business applications are based on cloud service providers. Today, every organization requires
the cloud business application to grow their business. It also ensures that business applications
are 24*7 available to users.

There are the following business applications of cloud computing -


i. MailChimp
MailChimp is an email publishing platform which provides various options to design,
send, and save templates for emails.
ii. Salesforce


Salesforce platform provides tools for sales, service, marketing, e-commerce, and more. It also
provides a cloud development platform.
iii. Chatter
Chatter helps us to share important information about the organization in real time.
iv. Bitrix24
Bitrix24 is a collaboration platform which provides communication, management, and social
collaboration tools.
v. Paypal
Paypal offers the simplest and easiest online payment mode using a secure internet account.
Paypal accepts the payment through debit cards, credit cards, and also from Paypal account
holders.
vi. Slack
Slack stands for Searchable Log of all Conversation and Knowledge. It provides a user-
friendly interface that helps us to create public and private channels for communication.
vii. Quickbooks
Quickbooks is built around the promise "Run Enterprise anytime, anywhere, on any device." It provides online accounting solutions for the business and allows more than 20 users to work simultaneously on the same system.
3. Data Storage and Backup Applications
Cloud computing allows us to store information (data, files, images, audios, and videos) on the
cloud and access this information using an internet connection. As the cloud provider is
responsible for providing security, so they offer various backup recovery application for
retrieving the lost data.
A list of data storage and backup applications in the cloud are given below -
i. Box.com
Box provides an online environment for secure content management,
workflow, and collaboration. It allows us to store different files such as Excel, Word, PDF, and
images on the cloud. The main advantage of using box is that it provides drag & drop service for
files and easily integrates with Office 365, G Suite, Salesforce, and more than 1400 tools.
ii. Mozy
Mozy provides powerful online backup solutions for our personal and business data. It automatically schedules a backup each day at a specific time.
iii. Joukuu
Joukuu provides the simplest way to share and track cloud-based backup files. Many users use
joukuu to search files, folders, and collaborate on documents.
iv. Google G Suite
Google G Suite is one of the best cloud storage and backup applications. It includes Google
Calendar, Docs, Forms, Google+, Hangouts, as well as cloud storage and tools for managing
cloud apps. The most popular app in the Google G Suite is Gmail. Gmail offers free email
services to users.
4. Education Applications
Cloud computing in the education sector has become very popular. It offers various online distance
learning platforms and student information portals to the students. The advantage of using


cloud in the field of education is that it offers strong virtual classroom environments, Ease of
accessibility, secure data storage, scalability, greater reach for the students, and minimal
hardware requirements for the applications.
There are the following education applications offered by the cloud -
i. Google Apps for Education
Google Apps for Education is the most widely used platform for free web-based email, calendar,
documents, and collaborative study.
ii. Chromebooks for Education
Chromebook for Education is one of Google's most important projects. It is designed to enhance innovation in education.
iii. Tablets with Google Play for Education
It allows educators to quickly implement the latest technology solutions into the classroom and
make it available to their students.
iv. AWS in Education
AWS cloud provides an education-friendly environment to universities, community colleges, and
schools.
5. Entertainment Applications
Entertainment industries use a multi-cloud strategy to interact with the target audience. Cloud
computing offers various entertainment applications such as online games and video
conferencing.
i. Online games
Today, cloud gaming has become one of the most important entertainment media. It offers various online games that run remotely from the cloud. The best-known cloud gaming services are Shadow, GeForce Now, Vortex, Project xCloud, and PlayStation Now.
ii. Video Conferencing Apps
Video conferencing apps provide a simple and instantly connected experience. They allow us to communicate with our business partners, friends, and relatives using cloud-based video conferencing. The benefits of video conferencing are that it reduces cost, increases efficiency, and removes interoperability problems.
6. Management Applications
Cloud computing offers various cloud management tools which help admins to manage all types
of cloud activities, such as resource deployment, data integration, and disaster recovery. These
management tools also provide administrative control over the platforms, applications, and
infrastructure.
Some important management applications are -
i. Toggl
Toggl helps users to track allocated time period for a particular project.
ii. Evernote
Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one
convenient place. It is available for both free as well as a paid version.
It uses platforms like Windows, macOS, Android, iOS, Browser, and Unix.
iii. Outright


Outright is used by management users for accounting purposes. It helps to track income, expenses, profits, and losses in a real-time environment.
iv. GoToMeeting
GoToMeeting provides Video Conferencing and online meeting apps, which allows you to
start a meeting with your business partners from anytime, anywhere using mobile phones or
tablets. Using GoToMeeting app, you can perform the tasks related to the management such as
join meetings in seconds, view presentations on the shared screen, get alerts for upcoming
meetings, etc.
7. Social Applications
Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook, Twitter, LinkedIn, etc.
There are the following cloud based social applications -
i. Facebook
Facebook is a social networking website which allows active users to share files, photos,
videos, status, more to their friends, relatives, and business partners using the cloud storage
system. On Facebook, we will always get notifications when our friends like and comment on
the posts.
ii. Twitter
Twitter is a social networking site. It is a microblogging system. It allows users to follow high
profile celebrities, friends, relatives, and receive news. It sends and receives short posts called
tweets.
iii. Yammer
Yammer is the best team collaboration tool that allows a team of employees to chat, share
images, documents, and videos.
iv. LinkedIn
LinkedIn is a social network for students, freshers and professionals.
2.6 BENEFITS

Cloud computing offers your business many benefits. It allows you to set up what is essentially a
virtual office to give you the flexibility of connecting to your business anywhere, any time. With
the growing number of web-enabled devices used in today's business environment (e.g.
smartphones, tablets), access to your data is even easier.

There are many benefits to moving your business to the cloud:


Reduced IT costs
Moving to cloud computing may reduce the cost of managing and maintaining your IT systems.
Rather than purchasing expensive systems and equipment for your business, you can reduce your
costs by using the resources of your cloud computing service provider. You may be able to
reduce your operating costs because:
• the cost of system upgrades, new hardware and software may be included in your
contract
• you no longer need to pay wages for expert staff
• your energy consumption costs may be reduced


• there are fewer time delays.


Scalability
Your business can scale up or scale down your operation and storage needs quickly to suit your
situation, allowing flexibility as your needs change.

Business continuity
Protecting your data and systems is an important part of business continuity planning. Whether
you experience a natural disaster, power failure or other crisis, having your data stored in the
cloud ensures it is backed up and protected in a secure and safe location.
Collaboration efficiency
Collaboration in a cloud environment gives your business the ability to communicate and share
more easily outside of the traditional methods. If you are working on a project across different
locations, you could use cloud computing to give employees, contractors and third parties access
to the same files.
Flexibility of work practices
Cloud computing allows employees to be more flexible in their work practices. For example, you
have the ability to access data from home, on holiday, or via the commute to and from work
(providing you have an internet connection). If you need access to your data while you are off-
site, you can connect to your virtual office, quickly and easily.
Access to automatic updates
Access to automatic updates for your IT requirements may be included in your service fee.
Depending on your cloud computing service provider, your system will regularly be updated
with the latest technology.

12 business benefits of cloud computing.

1. Cost Savings
2. Security
3. Flexibility
4. Mobility


5. Insight
6. Increased Collaboration
7. Quality Control
8. Disaster Recovery
9. Loss Prevention
10. Automatic Software Updates
11. Competitive Edge
12. Sustainability

2.7 CLOUD COMPONENTS


Components in a cloud refer to the platforms involved, such as the front end, the back end, and cloud-based delivery, together with the network that is used.

The basic components of cloud computing in a simple topology are divided into 3 (three) parts,
namely clients, datacenter, and distributed servers. The three basic components have specific
goals and roles in running cloud computing operations. The concept of the three components can
be described as follows:
• Clients in a cloud computing architecture are the exact same things that they are in a plain, old, everyday local area network (LAN): the devices that users interact with to manage their information on the cloud.
• Datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room in the basement of your building, or a room full of servers on the other side of the world that you access via the Internet. A growing trend in the IT world is virtualizing servers.
• Distributed Servers refers to server placement in different locations. The servers don't have to be housed in the same facility; often they are in geographically disparate locations. But to you, the cloud subscriber, these servers act as if they're humming away right next to each other.
• Other components of cloud computing include Cloud Applications, which describe cloud computing in terms of software architecture; the Cloud Platform, a service in the form of a computing platform that contains hardware infrastructure and software, usually hosting business applications and using PaaS services as their application infrastructure; Cloud Storage, which involves delivering data storage as a service; and Cloud Infrastructure, the delivery of computing infrastructure as a service.
Cloud Computing services have several components required, namely:
a. Cloud Clients, a computer or software specifically designed for the use of cloud computing
based services.
Example :
• Mobile - Windows Mobile, Symbian
• Thin Client - Windows Terminal Service, CherryPal
• Thick Client - Internet Explorer, FireFox, Chrome

b. Cloud Services, products, services and solutions that are used and delivered real-time via
internet media.
Example :
• Identity - OpenID, OAuth, etc.
• Integration - Amazon Simple Queue Service.
• Payments - PayPal, Google Checkout.
• Mapping - Google Maps, Yahoo! Maps.

c. Cloud Applications, applications that use Cloud Computing in software architecture so that
users don't need to install but they can use the application using a computer.
Example :
• Peer-to-peer - BitTorrent, SETI, and others.
• Web Application - Facebook.
• SaaS - Google Apps, SalesForce.com, and others

d. Cloud Platform, a service in the form of a computing platform consisting of hardware infrastructure and software. It usually hosts certain business applications and uses PaaS services as its business application infrastructure.
Example :
• Web Application Frameworks - Python Django, Ruby on Rails, .NET
• Web Hosting
• Proprietary - Force.com
e. Cloud Storage, involves the process of storing data as a service.
Example :
• Database - Google Big Table, Amazon SimpleDB.
• Network Attached Storage - Nirvanix CloudNAS, MobileMe iDisk.
f. Cloud Infrastructure, delivery of computing infrastructure as a service.
Example:
• Grid Computing - Sun Grid.


• Full Virtualization - GoGrid, Skytap.


• Compute - Amazon Elastic Compute Cloud
The 11 other main categories of cloud computing components are as follows:
• Storage-as-a-service - This refers to the disk space we use when we lack a storage platform and therefore request it as a service.
• Database-as-a-service - This component acts as a database directly from a remote server where
its functionality and other features work as if physical DB is present on the local machine.
• Information-as-a-service - Information that can be accessed remotely from anywhere is called Information-as-a-Service. It highlights the flexibility of accessing information remotely.
• Process-as-a-service - Unlike other components, this component combines various
resources such as data and services. This is mainly used for business processes where
various key services and information are combined to form a process.
• Application-as-a-service (AaaS) - As the name suggests, this is a complete package for
accessing and using applications. This is made to connect end users to the internet and
end users usually use browsers and the internet to access this service. This component is
the main front-end for end users
• Platform-as-a-service (PaaS) - In this component, the entire application development
process takes place including creating, implementing, storing, and testing the database.
• Integration-as-a-service - Mostly related to application components that have been built
but must be integrated with other applications. This helps in mediating between remote
servers and local machines.
• Security-as-a-service - Because security is what most people expect in the cloud, this is
one of the most needed components. There are three-dimensional security principles
found on cloud platforms.
• Management / governance-as-a-service (MaaS and GaaS) - This is related to cloud
management, such as resource utilization, virtualization, and server up and downtime
management.
• Testing-as-a-service (TaaS) - Using these components, remote-hosted applications are
tested in terms of design requirements, database functionality, and security measures
among other testing features.
• Infrastructure-as-a-service (IaaS) - This is a complete virtual consideration of
networks, servers, software, and hardware on cloud platforms. Users will not be able to
monitor the backend process, but they will be presented with a system that is fully
configured with all processes set up for direct use.
2.8 CLIENTS
A client/server network has three main components: workstations, servers, and the network devices that connect them.
Workstations are the computers that are subordinate to servers. They send requests to servers to
access shared programs, files and databases, and are governed by policies defined by servers.
A server "services" requests from workstations and can perform many functions as a central
repository of files, programs, databases and management policies. Network devices provide the


communication path for servers and workstations. They act as connectors and route data in and
out of the network.

2.9. DATA CENTERS

At its simplest, a data center is a physical facility that organizations use to house their critical
applications and data.
A data center's design is based on a network of computing and storage resources that enable the
delivery of shared applications and data.
The key components of a data center design include routers, switches, firewalls, storage systems,
servers, and application-delivery controllers.
What defines a modern data center?
Modern data centers are very different than they were just a short time ago. Infrastructure has
shifted from traditional on-premises physical servers to virtual networks that support applications
and workloads across pools of physical infrastructure and into a multicloud environment.
In this era, data exists and is connected across multiple data centers, the edge, and public and
private clouds. The data center must be able to communicate across these multiple sites, both on-
premises and in the cloud. Even the public cloud is a collection of data centers. When
applications are hosted in the cloud, they are using data center resources from the cloud provider.
Why are data centers important to business?
In the world of enterprise IT, data centers are designed to support business applications and
activities that include:
• Email and file sharing
• Productivity applications
• Customer relationship management (CRM)
• Enterprise resource planning (ERP) and databases
• Big data, artificial intelligence, and machine learning
• Virtual desktops, communications and collaboration services
What are the core components of a data center?
Data centers provide:
Network infrastructure. This connects servers (physical and virtualized), data center services,
storage, and external connectivity to end-user locations.
Storage infrastructure. Data is the fuel of the modern data center. Storage systems are used to
hold this valuable commodity.
Computing resources. Applications are the engines of a data center. These servers provide the
processing, memory, local storage, and network connectivity that drive applications.
How do data centers operate?
Data center services are typically deployed to protect the performance and integrity of the core
data center components.
Network security appliances. These include firewall and intrusion protection to safeguard the
data center.
Application delivery assurance. To maintain application performance, these mechanisms provide
application resiliency and availability via automatic failover and load balancing.


What is in a data center facility?


Data center components require significant infrastructure to support the center's hardware and
software. These include power subsystems, uninterruptible power supplies (UPS), ventilation,
cooling systems, fire suppression, backup generators, and connections to external networks.
What are the standards for data center infrastructure?
The most widely adopted standard for data center design and data center infrastructure is
ANSI/TIA-942. It includes standards for ANSI/TIA-942-ready certification, which ensures
compliance with one of four categories of data center tiers rated for levels of redundancy and
fault tolerance.
Tier1: Basic site infrastructure. A Tier 1 data center offers limited protection against physical
events. It has single-capacity components and a single, non redundant distribution path.
Tier2: Redundant-capacity component site infrastructure. This data center offers improved
protection against physical events. It has redundant-capacity components and a single, non
redundant distribution path.
Tier3: Concurrently maintainable site infrastructure. This data center protects against virtually
all physical events, providing redundant-capacity components and multiple independent
distribution paths. Each component can be removed or replaced without disrupting services to
end users.
Tier4: Fault-tolerant site infrastructure. This data center provides the highest levels of fault
tolerance and redundancy. Redundant-capacity components and multiple independent
distribution paths enable concurrent maintainability and one fault anywhere in the installation
without causing downtime.

Types of data centers


Many types of data centers and service models are available. Their classification depends on
whether they are owned by one or many organizations, how they fit (if they fit) into the topology
of other data centers, what technologies they use for computing and storage, and even their
energy efficiency.
There are four main types of data centers:
Enterprise data centers
These are built, owned, and operated by companies and are optimized for their end users. Most
often they are housed on the corporate campus.

Managed services data centers


These data centers are managed by a third party (or a managed services provider) on behalf of a
company. The company leases the equipment and infrastructure instead of buying it.


Colocation data centers


In colocation ("colo") data centers, a company rents space within a data center owned by others
and located off company premises. The colocation data center hosts the infrastructure: building,
cooling, bandwidth, security, etc., while the company provides and manages the components,
including servers, storage, and firewalls.

Cloud data centers


In this off-premises form of data center, data and applications are hosted by a cloud services
provider such as Amazon Web Services (AWS), Microsoft (Azure), or IBM Cloud or other
public cloud provider.

Distributed cloud
Distributed cloud enables a geographically distributed, centrally managed distribution of public
cloud services optimized for performance, compliance, and edge computing.

2.10 DISTRIBUTED SERVERS

The demand for distributed cloud and edge computing is driven primarily by Internet of Things
(IoT), artificial intelligence (AI), telecommunications (telco) and other applications that need to
process huge amounts of data in real time. But distributed cloud is also helping companies
surmount the challenges of complying with country- or industry-specific data privacy regulations
- and, more recently, providing IT services to employees and end-users redistributed by the
COVID-19 pandemic.

How distributed cloud works



You may have heard of distributed computing, in which application components are spread across different networked computers and communicate with one another through messaging or APIs, with the goal of improving overall application performance or maximizing computing efficiency.

Distributed cloud goes a giant step further by distributing a public cloud provider's entire
compute stack to wherever a customer might need it - on-premises in the customer's own data
center or private cloud, or off-premises in one or more public cloud data centers that may or may
not belong to the cloud provider.

In effect, distributed cloud extends the provider's centralized cloud with geographically
distributed micro-cloud satellites. The cloud provider retains central control over the operations,
updates, governance, security and reliability of all distributed infrastructure. And the customer
accesses everything - the centralized cloud services, and the satellites wherever they are located -
as a single cloud and manages it all from a single control plane. In this way, as industry analyst Gartner puts it, distributed cloud fixes what hybrid cloud and hybrid multicloud break.

2.11. CHARACTERIZATION OF DISTRIBUTED COMPUTING

Key characteristics of distributed systems

1. Resource sharing
2. Openness
3. Concurrency
4. Scalability
5. Fault Tolerance
6. Transparency

11.1 Resource Sharing


Resource:
hardware - disks and printers
software - files, windows, and data objects
Hardware sharing for:
convenience
reduction of cost
Data sharing for:
consistency - compilers and libraries
exchange of information - database
cooperative work - groupware

Resource Manager


-> Software module that manages a set of resources


-> Each resource requires its own management policies and methods
-> Client server model - server processes act as resource managers for a set of resources and a set
of clients
Object based model- resources are objects that can move
Object manager is movable.
Request for a task on an object is sent to the current manager.
Manager must be collocated with object.
Examples: ARGUS, Amoeba, Mach; migratory improvements - Arjuna, Clouds, Emerald

11.2 Openness
How it can be extended.
Open or closed with respect to
Hardware or software
Open
- published specifications and interfaces
- standardization of interfaces
UNIX was a relatively open operating system
C language readily available
System calls documented
New hardware drivers were easy to add
Applications were hardware independent
IPC allowed extension of services and resources

Open distributed system


Published specifications
Provision for uniform inter-process communications and published interfaces for access
Conformance of all vendors to published standard must be tested and certified if users are to be
protected from responsibility for resolving system integration problems

11.3 Concurrency
Multi-programming
Multi-processing
Parallel executions in distributed systems
1. Many users using the same resources, application interactions
2. Many servers responding to client requests

11.4 Scalability
How the system handles growth
Small system - two computers and a file server on a single network
Large system - current Internet
Scalability


- software should not change to support growth


- research area - for large, high-performance networks
Avoid centralization to support scalability
Choose your naming or numbering scheme carefully
Handle timing problems with caching and data replication

11.5 Fault Tolerance


Computers fail therefore we need:
Hardware redundancy
Software recovery
Increase in availability for services
Network is not normally redundant
Program recovery via the process group

11.6 Transparency
Transparency of

• access
• location
• concurrency
• replication
• failure
• migration
• performance
• scaling
Key characteristics of distributed systems
1. Resource sharing
2. Openness
3. Concurrency
4. Scalability
5. Fault tolerance
6. Transparency
Key design goals
• High performance
• Reliability
• Scalability
• Consistency
• Security
Basic design issues
• naming
• communications
• software structure
• workload allocation


• consistency maintenance
Naming
• communication identifier
• name service
• contextual resolution of name
• name mapping
• pure names Vs names with meaning
Communication
Reasons for communicating:
Transfer of data
Synchronization
Methods of communications
message passing - send and receive primitives
synchronous or asynchronous
blocking or non-blocking
mechanisms of message passing - channels, sockets, ports
client-server communication model
group multicast communication model

2.12. DISTRIBUTED ARCHITECURAL MODELS


In distributed architecture, components are presented on different platforms and several
components can cooperate with one another over a communication network in order to achieve a
specific objective or goal.
• In this architecture, information processing is not confined to a single machine rather it is
distributed over several independent computers.
• A distributed system can be demonstrated by the client-server architecture which forms
the base for multi-tier architectures; alternatives are the broker architecture such as
CORBA, and the Service-Oriented Architecture (SOA).
• There are several technology frameworks to support distributed architectures, including
.NET, J2EE, CORBA, .NET Web services, AXIS Java Web services, and Globus Grid
services.
• Middleware is an infrastructure that appropriately supports the development and
execution of distributed applications. It provides a buffer between the applications and
the network.
• It sits in the middle of system and manages or supports the different components of a
distributed system. Examples are transaction processing monitors, data convertors and
communication controllers etc.
Middleware as an infrastructure for distributed system


The basis of a distributed architecture is its transparency, reliability, and availability.


The following list describes the different forms of transparency in a distributed system −

1. Access − Hides the way in which resources are accessed and the differences in data representation.
2. Location − Hides where resources are located.
3. Technology − Hides different technologies, such as programming language and OS, from the user.
4. Migration / Relocation − Hides that resources in use may be moved to another location.
5. Replication − Hides that resources may be copied at several locations.
6. Concurrency − Hides that resources may be shared with other users.
7. Failure − Hides failure and recovery of resources from the user.
8. Persistence − Hides whether a resource (software) is in memory or on disk.


Advantages
• Resource sharing − Sharing of hardware and software resources.
• Openness − Flexibility of using hardware and software of different vendors.
• Concurrency − Concurrent processing to enhance performance.
• Scalability − Increased throughput by adding new resources.
• Fault tolerance − The ability to continue in operation after a fault has occurred.

Disadvantages
• Complexity − They are more complex than centralized systems.
• Security − More susceptible to external attack.
• Manageability − More effort required for system management.
• Unpredictability − Unpredictable responses depending on the system organization and
network load.

Centralized System vs. Distributed System


Criteria Centralized system Distributed System

Economics Low High

Availability Low High

Complexity Low High

Consistency Simple High

Scalability Poor Good

Technology Homogeneous Heterogeneous

Security High Low

12.1 Client-Server Architecture


The client-server architecture is the most common distributed system architecture which
decomposes the system into two major subsystems or logical processes −
• Client − This is the first process that issues a request to the second process i.e. the
server.
• Server − This is the second process that receives the request, carries it out, and sends a
reply to the client.


In this architecture, the application is modelled as a set of services that are provided by servers
and a set of clients that use these services. The servers need not know about clients, but the
clients must know the identity of servers, and the mapping of processors to processes is not
necessarily 1 : 1
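A minimal sketch of this request-reply interaction using plain TCP sockets in Python (the address, port, and message format are arbitrary choices for the example):

```python
# Minimal client-server request-reply sketch using TCP sockets.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050          # arbitrary choices for the example

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()           # wait for a client connection
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("reply to: " + request).encode())  # carry out the request

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                          # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))            # the client must know the server's identity
    cli.sendall(b"GET /resource")        # issue a request
    print(cli.recv(1024).decode())       # receive the server's reply
```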
Client-server Architecture can be classified into two models based on the functionality of the
client −

Thin-client model
In thin-client model, all the application processing and data management is carried by the
server. The client is simply responsible for running the presentation software.
• Used when legacy systems are migrated to client server architectures in which legacy
system acts as a server in its own right with a graphical interface implemented on a
client
• A major disadvantage is that it places a heavy processing load on both the server and the
network.

Thick/Fat-client model
In thick-client model, the server is only in charge for data management. The software on the
client implements the application logic and the interactions with the system user.
• Most appropriate for new C/S systems where the capabilities of the client system are
known in advance
• More complex than a thin client model especially for management. New versions of the
application have to be installed on all clients.


Advantages
• Separation of responsibilities such as user interface presentation and business logic
processing.
• Reusability of server components and potential for concurrency
• Simplifies the design and the development of distributed applications
• It makes it easy to migrate or integrate existing applications into a distributed
environment.
• It also makes effective use of resources when a large number of clients are accessing a
high-performance server.

Disadvantages
• Lack of heterogeneous infrastructure to deal with the requirement changes.
• Security complications.
• Limited server availability and reliability.
• Limited testability and scalability.
• Fat clients with presentation and business logic together.

12.2 Multi-Tier Architecture (n-tier Architecture)


Multi-tier architecture is a client–server architecture in which the functions such as presentation,
application processing, and data management are physically separated. By separating an
application into tiers, developers obtain the option of changing or adding a specific layer,
instead of reworking the entire application. It provides a model by which developers can create
flexible and reusable applications.


The most common form of multi-tier architecture is the three-tier architecture. A three-tier architecture is typically composed of a presentation tier, an application tier, and a data storage tier, and each tier may execute on a separate processor.

Presentation Tier
The presentation layer is the topmost level of the application, the part that users access directly, such as a web page or an operating system GUI (graphical user interface). The primary function of this layer is to translate tasks and results into something the user can understand. It communicates with the other tiers, placing results in the browser/client tier and exchanging data with the other tiers in the network.

Application Tier (Business Logic, Logic Tier, or Middle Tier)


Application tier coordinates the application, processes the commands, makes logical decisions,
evaluation, and performs calculations. It controls an application’s functionality by performing
detailed processing. It also moves and processes data between the two surrounding layers.

Data Tier
In this layer, information is stored and retrieved from the database or file system. The
information is then passed back for processing and then back to the user. It includes the data
persistence mechanisms (database servers, file shares, etc.) and provides API (Application
Programming Interface) to the application tier which provides methods of managing the stored
data.
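A minimal Python sketch of the three tiers as separate layers inside one small program (in a real deployment each tier would typically run on its own server; the data and the business rule are invented for illustration):

```python
# Sketch of a three-tier separation of concerns in a single program.

DATABASE = {"alice": 120.0, "bob": 75.5}            # data tier: stored records

def get_balance(user):                               # data tier: retrieval API
    return DATABASE[user]

def apply_interest(user, rate=0.05):                 # application tier: business logic
    return get_balance(user) * (1 + rate)

def render(user):                                    # presentation tier: user-facing output
    return f"{user.title()}'s projected balance: {apply_interest(user):.2f}"

print(render("alice"))
```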


Advantages
• Better performance than a thin-client approach and is simpler to manage than a thick-
client approach.
• Enhances the reusability and scalability − as demands increase, extra servers can be
added.
• Provides multi-threading support and also reduces network traffic.
• Provides maintainability and flexibility
Disadvantages
• Unsatisfactory Testability due to lack of testing tools.
• More critical server reliability and availability.

12.3 Broker Architectural Style


Broker Architectural Style is a middleware architecture used in distributed computing to
coordinate and enable the communication between registered servers and clients. Here, object
communication takes place through a middleware system called an object request broker
(software bus).
• Client and the server do not interact with each other directly. Client and server have a
direct connection to its proxy which communicates with the mediator-broker.
• A server provides services by registering and publishing their interfaces with the broker
and clients can request the services from the broker statically or dynamically by look-up.
• CORBA (Common Object Request Broker Architecture) is a good implementation
example of the broker architecture.

Components of Broker Architectural Style


The components of broker architectural style are discussed through following heads −
Broker
Broker is responsible for coordinating communication, such as forwarding and dispatching results and exceptions. It can be either an invocation-oriented service or a document- or message-oriented broker to which clients send a message.
• It is responsible for brokering the service requests, locating a proper server, transmitting
requests, and sending responses back to clients.
• It retains the servers’ registration information including their functionality and services
as well as location information.
• It provides APIs for clients to request, servers to respond, registering or unregistering
server components, transferring messages, and locating servers.
Stub
Stubs are generated at static compilation time and then deployed to the client side, where they are used as a proxy for the client. The client-side proxy acts as a mediator between the client and the broker and provides additional transparency; to the client, a remote object appears like a local one.
Skeleton
Skeleton is generated by the service interface compilation and then deployed to the server side,
which is used as a proxy for the server. Server-side proxy encapsulates low-level system-
specific networking functions and provides high-level APIs to mediate between the server and
the broker.
Bridge
A bridge can connect two different networks based on different communication protocols. It
mediates different brokers including DCOM, .NET remote, and Java CORBA brokers.
Bridges are optional component, which hides the implementation details when two brokers
interoperate and take requests and parameters in one format and translate them to another
format.
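A minimal Python sketch of the broker idea, in which servers register services by name and clients request them through the broker (the in-process registry and the echo service are simplified assumptions, not CORBA itself):

```python
# Sketch of the broker pattern: servers register, clients request by name.

class Broker:
    def __init__(self):
        self._registry = {}                 # service name -> handler

    def register(self, name, handler):      # servers publish their interface
        self._registry[name] = handler

    def request(self, name, *args):         # clients request a service by name
        handler = self._registry.get(name)
        if handler is None:
            raise LookupError(f"no server registered for '{name}'")
        return handler(*args)               # forward the request, return the reply

broker = Broker()
broker.register("echo", lambda text: "echo: " + text)   # server side
print(broker.request("echo", "hello"))                   # client side
```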


Broker implementation in CORBA


CORBA is an international standard for an Object Request Broker – a middleware to manage
communications among distributed objects defined by OMG (object management group).

12.4 Service-Oriented Architecture (SOA)


A service is a component of business functionality that is well-defined, self-contained, independent, published, and available to be used via a standard programming interface. Service-oriented architecture is a client/server design that supports a business-driven IT approach, in which an application consists of software services and software service consumers (also known as clients or service requesters).

Features of SOA
A service-oriented architecture provides the following features −
• Distributed Deployment − Expose enterprise data and business logic as loosely coupled, discoverable, structured, standards-based, coarse-grained, stateless units of functionality called services.


• Composability − Assemble new processes from existing services that are exposed at a desired granularity through well-defined, published, and standards-compliant interfaces.
• Interoperability − Share capabilities and reuse shared services across a network
irrespective of underlying protocols or implementation technology.
• Reusability − Choose a service provider and access to existing resources exposed as
services.

SOA Operation
The following figure illustrates how SOA operates −

Advantages
• Loose coupling of service-orientation provides great flexibility for enterprises to make use of all available service resources irrespective of platform and technology restrictions.
• Each service component is independent from other services due to the stateless service
feature.
• The implementation of a service will not affect the application of the service as long as
the exposed interface is not changed.
• A client or any service can access other services regardless of their platform, technology,
vendors, or language implementations.
• Reusability of assets and services, since clients of a service only need to know its public interfaces; this also simplifies service composition.
• SOA-based business application development is much more efficient in terms of time and cost.
• Enhances scalability and provides standard connections between systems.


• Efficient and effective usage of ‘Business Services’.
• Integration becomes much easier, with improved intrinsic interoperability.
• Abstracts complexity for developers and brings business processes closer to end users.

2.13 PRINCIPLES OF PARALLEL AND DISTRIBUTED COMPUTING

Difference between Parallel Computing and Distributed Computing

Parallel Computing:
In parallel computing, multiple processors perform multiple tasks assigned to them simultaneously. Memory in parallel systems can either be shared or distributed. Parallel computing provides concurrency and saves time and money.
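A minimal Python sketch of parallel computing on a single machine, using the standard multiprocessing module to split one task among several worker processes (the task and pool size are arbitrary choices):

```python
# Sketch: several processes share one task (summing squares) simultaneously.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:             # four workers run in parallel
        results = pool.map(square, range(10))   # the task is split among them
    print(sum(results))                         # partial results combined into one answer
```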

Distributed Computing:
In distributed computing, we have multiple autonomous computers which appear to the user as a single system. In distributed systems there is no shared memory, and computers communicate with each other through message passing. In distributed computing, a single task is divided among different computers.

What is Distributed Computing?

Distributed computing is different from parallel computing even though the principle is the same.

Distributed computing is a field that studies distributed systems. Distributed systems are systems
that have multiple computers located in different locations.

These computers in a distributed system work on the same program. The program is divided into
different tasks and allocated to different computers.

The computers communicate with the help of message passing. Upon completion of computing,
the result is collated and presented to the user.

Distributed Computing vs. Parallel Computing: A Quick Comparison

Having covered the concepts, let’s dive into the differences between them:

Number of Computer Systems Involved


Parallel computing generally requires one computer with multiple processors. Multiple
processors within the same computer system execute instructions simultaneously.

All the processors work towards completing the same task. Thus they have to share resources
and data.

Dependency Between Processes


In parallel computing, the tasks to be solved are divided into multiple smaller parts. These
smaller tasks are assigned to multiple processors.

Here the outcome of one task might be the input of another. This increases dependency between
the processors. We can also say, parallel computing environments are tightly coupled.

Some distributed systems might be loosely coupled, while others might be tightly coupled.

Which is More Scalable?


In parallel computing environments, the number of processors you can add is restricted. This is
because the bus connecting the processors and the memory can handle a limited number of
connections.

Distributed computing environments are more scalable. This is because the computers are
connected over the network and communicate by passing messages.
Resource Sharing
In systems implementing parallel computing, all the processors share the same memory.

They also share the same communication medium and network. The processors communicate
with each other with the help of shared memory.

Distributed systems, on the other hand, have their own memory and processors.

Synchronization
In parallel systems, all the processes share the same master clock for synchronization. Since all
the processors are hosted on the same physical system, they do not need any synchronization
algorithms.

In distributed systems, the individual processing systems do not have access to any central clock.
Hence, they need to implement synchronization algorithms.
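As one illustration of such a synchronization algorithm, the following minimal Python sketch implements a Lamport logical clock, a well-known way of ordering events when processes share no physical clock (the two-process exchange is simplified for the example):

```python
# Sketch of a Lamport logical clock for ordering events without a shared clock.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):                       # timestamp attached to an outgoing message
        self.time += 1
        return self.time

    def receive(self, msg_time):          # merge the sender's timestamp on receipt
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()                              # process A sends a message with timestamp t
print(b.receive(t))                       # process B's clock advances past t
```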

Where Are They Used?


Parallel computing is often used in places requiring higher and faster processing power. For
example, supercomputers.

Since there are no lags in the passing of messages, these systems have high speed and efficiency.

Distributed computing is used when computers are located at different geographical locations.

In these scenarios, speed is generally not a crucial matter. They are the preferred choice when
scalability is required.


Distributed Computing vs. Parallel Computing’s Tabular Comparison

All in all, we can say that both computing methodologies are needed. Both serve different
purposes and are handy based on different circumstances.

It is up to the user or the enterprise to make a judgment call as to which methodology to opt for.
Generally, enterprises opt for either one or both depending on which is efficient where. It is all
based on the expectations of the desired result.


2.14. APPLICATIONS OF CLOUD COMPUTING


• Art Applications
Applications of Cloud computing provide various types of art application services for designing
purposes which helps to create attractive designs for books, cards, and other images.
• File Storage Platform
There are various online file storage platforms such as MediaFire, Hotfile, and RapidShare, which are perfect examples of cloud-based applications that help host files such as documents, images, and videos.
• Image Editing Applications
These days we see many applications that provide free editing of pictures. These Cloud
Computing services have various features which include image resizing, editing, cropping,
special effects, etc including Graphic User Interface (GUI). These applications also provide
brightness and contrast editable features. They also provide high-level complex features that are
easy to use. Adobe Creative Cloud and Fotor are some of the popular examples.
• Data Storage Applications
Data Storage applications of the computer are also one of the options for Applications of Cloud
Computing. It is also one of the various cloud applications. These Applications of Cloud
Computing are created for security and ensuring data is backed up securely.
• Antivirus Applications
Various antivirus applications are also available for support service. These cloud application
services provide the smooth functioning of the system.
• Entertainment Applications
There are also entertainment applications that use a multi-cloud strategy to interact with a
targeted audience. The Applications of Cloud Computing services provide online gaming and
entertainment services. Project Atlas and Google Stadia are two entertainment applications
created using cloud computing.
• URL conversion Applications
Among the many cloud applications are URL-shortening services, such as the Twitter-related application bitly, which convert long URLs into short ones. The shortened URL redirects the user to the original website. This helps in microblogging and also protects the application from malware and hacking activity.
• Meeting Applications
Applications of Cloud Computing also provide go-to-meeting facilities such as video
conferencing and other online meeting apps. These are cloud application services that allow you
to start a meeting for personal and professional requirements. ‘GoTo Meeting’ and ‘Zoom’ are
some of the applications that facilitate the user with all features for smooth video conferencing.
• Presentation Applications
Certain software is available as a presentation service, allowing users to create slides and import PowerPoint presentations. 'SlideRocket' is one such application that helps the user create formal presentations; it provides both a free and a premium version.
• Social Media Applications


Multiple social media applications allow a large number of users to connect with each other every minute. Applications such as Facebook, Twitter, Yammer, and LinkedIn help connect users on a real-time basis. These applications allow sharing of videos, images, experiences, stories, and more.
• GPS Application
GPS features are another application area of cloud computing. These applications guide users with directions on a map and help them find locations on the internet. Services such as Google Maps and Yahoo Maps are cloud-backed applications of this kind; they are used by millions of people and are free to use.
• Accounting Application
Accounting software is one of the real-time applications of cloud computing that supports the accounting side of a business. Outright is one such application, used by larger enterprises for real-time day-to-day accounting; it helps track expenses, profits, and losses as they occur. KashFlow and Zoho Books are other examples of cloud accounting applications.
• Management Application
One of the popular cloud computing applications is 'Evernote'. This application lets the user save and share notes in a single place that can be referred to at any time and accessed from anywhere in the world. It falls under management applications and can be used for personal as well as professional purposes.
• e-Commerce Application
Another popular application area is e-commerce software. Cloud computing gives e-commerce businesses easy access to their applications and keeps operations running smoothly. Most large businesses prefer such software because it delivers workable solutions with less time and effort.
• Software as a Service (SaaS) Applications
Software as a Service (SaaS) applications form another class of cloud applications; companies such as FedEx and postal services also use cloud platforms for tracking shipments and managing the business.
2.15. BENEFITS

1. Omnipresence
The cloud is everywhere, and you cannot escape it. Its omnipresence allows easy access to functionality, trackable data, and transparency. It enables multiple users to work on the same project without any glitches, which in turn not only reduces cost but also builds a robust work model.

2. Data Security
Data security is perhaps one of the most significant factors that keep business owners up at night, and cloud computing can help alleviate this stress.
Strong encryption of files and databases can mitigate internal threats, and encrypted data can be transferred with a much lower chance of being compromised.


3. Cost Reduction
It is no secret that switching over to the cloud can be an expensive affair. However, if executed properly, the ROI from cloud deployment can outweigh the initial capital investment.

ROI can take the form of product longevity and better optimization of resources, to name a few. Once on the cloud, the need for additional software and plugins is removed, which helps save time as well as money.

4. Complete control
Organizations often face immense scrutiny in the case of lagging operations, data loss, and operational dysfunction. It is imperative that organizations have sufficient control over the ongoing activities of the company, which is assured by cloud computing.

In the case of highly sensitive data, cloud technology gives the user a bird's-eye view over the data, along with the power to track accessibility, usability, and deployability.

5. Flexibility
Focusing primarily on the in-house IT structure can lead either to a breakdown or to poor task execution. The lack of space optimization can lead to slow turnaround times and an inability to cater to customer requirements.

The ability to make and execute quick business decisions without worrying about the impact on the IT infrastructure makes the cloud platform one of the most desired options.

6. Mobility
The omnipresence of the cloud makes mobility its prime feature. With the help of a cloud connection, companies can connect remotely with ease over an array of devices such as smartphones, tablets, and laptops.

Remote accessibility enables quick turnaround times, instant solutions, and a constant connection. It is perfect for freelancers, remote employees, and organizations with offices in different locations and sectors.

7. Disaster Recovery
The biggest disaster a company can undergo is "loss of data." However, the cloud is a repository
for backed up data, which helps companies recover their lost data with ease and security.

8. Scalability
Scalability can be defined as the ability of the product to amplify its impact on the customers and
meet the demands posed. Scalability is one of the most sought-after attributes of cloud
computing that can help increase business.


9. Automated Updates on Software


Imagine working on an important project and being interrupted by a "Software update required" message. Nothing is more tedious than manual updates, but the cloud takes care of that as well. With automatic software updates and cyclic upgrades, users can focus their time and energy on their work.

10. Enhanced Collaboration


The points mentioned earlier guarantee ease of use and remote accessibility, which makes the cloud an instant hit with the technical world. The immediate connection leads to sound business opportunities and further enhances collaboration among team members. The cloud, being a fully visible platform, provides easy access to the tasks at hand.

11. Reduced Carbon Footprint/Sustainable


Since organizations have deployed cloud platforms, there has been a reduction in the use of physical tools and equipment. The cloud favours the deployment of virtual resources over physical ones and moves more activity to the digital level.

It cuts down on paper waste, electrical consumption, and computer-related emissions. After all, merely placing a recycling bin in your office is not going to solve the current environmental crisis.

12. Easy Management


Since the organizational data is stored in the cloud, it is easy to assess the work, utilities, and
tasks. Such a clear task path makes it easy to manage the services.

13. Loss Prevention


A crisis cannot be foretold, and one of the worst that can befall an organization is the loss of its data assets. If an organization is not leveraging the cloud platform, it is subjecting its data to undue risk. The cloud allows data to be automatically backed up in encrypted silos, which helps avoid data loss.

14. A Step Beyond


Several companies prefer to go old school. However, they miss out on several advantages that could give them a boost over their competitors. As discussed, remote accessibility results in higher productivity, trackability leads to an analytical approach to work, and the streamlined nature of the platform allows users to focus on a project and stay result-oriented.

15. Greater Insights


Thanks to the cloud's transparency, companies now have greater insight into their processes and can produce scalable results based on an analytical approach.

2.16.CLOUD SERVICES
The term "cloud services" refers to a wide range of services delivered on demand to companies and
customers over the internet. These services are designed to provide easy, affordable access to


applications and resources, without the need for internal infrastructure or hardware. From checking
email to collaborating on documents, most employees use cloud services throughout the workday,
whether they’re aware of it or not.
Cloud Service Models

There are the following three types of cloud service models -


1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

Infrastructure as a Service (IaaS)

IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed


over the internet. The main advantage of using IaaS is that it helps users to avoid the cost and
complexity of purchasing and managing the physical servers.

Characteristics of IaaS

There are the following characteristics of IaaS -

o Resources are available as a service
o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine (GCE), Rackspace, and Cisco Metacloud.

Platform as a Service (PaaS)

PaaS cloud computing platform is created for the programmer to develop, test, run, and manage the applications.

Characteristics of PaaS

There are the following characteristics of PaaS -

o Accessible to various users via the same development application.
o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be scaled up or down as per the organization's need.
o Support multiple languages and frameworks.
o Provides ability to "Auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.

Software as a Service (SaaS)

SaaS is also known as "on-demand software". It is software in which the applications are hosted by a cloud service provider. Users can access these applications with the help of an internet connection and a web browser.

Characteristics of SaaS

There are the following characteristics of SaaS -

o Managed from a central location
o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates. Updates are applied automatically.
o The services are purchased on a pay-as-per-use basis

Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, Slack, and GoToMeeting.
Difference between IAAS, PAAS and SAAS:

The comparison below shows the difference between IaaS, PaaS, and SaaS, basis by basis.

Stands for:
o IAAS - Infrastructure as a service.
o PAAS - Platform as a service.
o SAAS - Software as a service.

Uses:
o IAAS is used by network architects.
o PAAS is used by developers.
o SAAS is used by the end user.

Access:
o IAAS gives access to resources like virtual machines and virtual storage.
o PAAS gives access to the run time environment and to deployment and development tools for applications.
o SAAS gives access to the end user.

Model:
o IAAS is a service model that provides virtualized computing resources over the internet.
o PAAS is a cloud computing model that delivers tools that are used for the development of applications.
o SAAS is a service model in cloud computing that hosts software to make it available to clients.

Technical understanding:
o IAAS requires technical knowledge.
o PAAS requires some knowledge for the basic setup.
o SAAS requires no knowledge of the technicalities; the company handles everything.

Popularity:
o IAAS is popular among developers and researchers.
o PAAS is popular among developers who focus on the development of apps and scripts.
o SAAS is popular among consumers and companies, for uses such as file sharing, email, and networking.

Percentage rise:
o IAAS has around a 12% increment.
o PAAS has around a 32% increment.
o SAAS has about a 27% rise in the cloud computing model.

Usage:
o IAAS is used by skilled developers to develop unique applications.
o PAAS is used by mid-level developers to build applications.
o SAAS is used among the users of entertainment.

Cloud services:
o IAAS: Amazon Web Services, Sun, vCloud Express.
o PAAS: Facebook and the Google search engine.
o SAAS: MS Office Web, Facebook and Google Apps.

Enterprise services:
o IAAS: AWS Virtual Private Cloud.
o PAAS: Microsoft Azure.
o SAAS: IBM cloud analysis.

Outsourced cloud services:
o IAAS: Salesforce.
o PAAS: Force.com, Gigaspaces.
o SAAS: AWS, Terremark.

User controls:
o IAAS: Operating system, runtime, middleware, and application data.
o PAAS: Data of the application.
o SAAS: Nothing.

Others:
o IAAS is highly scalable and flexible.
o PAAS is highly scalable to suit different businesses according to resources.
o SAAS is highly scalable to suit small, mid and enterprise level businesses.
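To make the IaaS side of the comparison above concrete, the sketch below shows how a consumer can provision a virtual machine programmatically through an IaaS provider's API, here using the AWS SDK for Java (v1) against EC2. This is a minimal illustration, not a deployment script: the AMI ID is a placeholder, and the snippet assumes AWS credentials and a default region are already configured on the machine.

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class IaasProvisioningSketch {
    public static void main(String[] args) {
        // Build an EC2 client; credentials and region come from the standard
        // AWS credential/region provider chain configured on this machine.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Ask the IaaS layer for one small virtual machine.
        // "ami-0abcdef1234567890" is a placeholder image ID.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-0abcdef1234567890")   // placeholder AMI
                .withInstanceType("t2.micro")
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);

        // The provider returns the instance it created; from here on the
        // consumer manages the OS, runtime, middleware, and application data.
        for (Instance instance : result.getReservation().getInstances()) {
            System.out.println("Launched instance: " + instance.getInstanceId());
        }
    }
}

The same request shape, with different image and machine-type identifiers, applies to the other IaaS providers named in the comparison (for example Google Compute Engine or DigitalOcean) through their respective SDKs.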

1. Eucalyptus:

Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) is an open source software infrastructure that implements IaaS-style cloud computing.
The goal of Eucalyptus is to allow sites with existing clusters and server infrastructure to host a
cloud that is interface-compatible with Amazon's AWS and (soon) the Sun Cloud open API.


In addition, through its interfaces Eucalyptus is able to host cloud platform services such as
AppScale (an open source implementation of Google's AppEngine) and Hadoop, making it
possible to "mix and match" different service paradigms and configurations within the cloud.
Finally, Eucalyptus can leverage a heterogeneous collection of virtualization technologies within
a single cloud, to incorporate resources that have already been virtualized without modifying their
configuration.
The project has also hosted the Eucalyptus Public Cloud (EPC) as a free public cloud platform for experimental use; the EPC can be used in conjunction with commercial Web development services, such as RightScale, that target AWS.
Eucalyptus

Eucalyptus is an open source software platform for implementing Infrastructure as a Service (IaaS)
in a private or hybrid cloud computing environment.

The Eucalyptus cloud platform pools together existing virtualized infrastructure to create cloud
resources for infrastructure as a service, network as a service and storage as a service. The name
Eucalyptus is an acronym for Elastic Utility Computing Architecture for Linking Your Programs
To Useful Systems.

Eucalyptus was founded out of a research project in the Computer Science Department at the
University of California, Santa Barbara, and became a for-profit business called Eucalyptus
Systems in 2009. Eucalyptus Systems announced a formal agreement with Amazon Web Services
(AWS) in March 2012, allowing administrators to move instances between a Eucalyptus private
cloud and the Amazon Elastic Compute Cloud (EC2) to create a hybrid cloud. The partnership
also allows Eucalyptus to work with Amazon’s product teams to develop unique AWS-compatible
features.

Eucalyptus features include:

● Supports both Linux and Windows virtual machines (VMs).


● Application programming interface (API) compatible with the Amazon EC2 platform (see the sketch after the feature lists below).
● Compatible with Amazon Web Services (AWS) and Simple Storage Service (S3).
● Works with multiple hypervisors including VMware, Xen and KVM.
● Can be installed and deployed from source code or DEB and RPM packages.
● Internal processes communications are secured through SOAP and WS-Security.


● Multiple clusters can be virtualized as a single cloud.


● Administrative features such as user and group management and reports.

Version 3.3, which became generally available in June 2013, adds the following features:

● Auto Scaling
● Elastic Load Balancing
● CloudWatch
● Resource Tagging
● Expanded Instance Types
● Maintenance Mode
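Because Eucalyptus is API-compatible with the Amazon EC2 platform (as noted in the feature list above), existing AWS tooling can usually be pointed at a Eucalyptus front end simply by overriding the service endpoint. The sketch below reuses the AWS SDK for Java; the endpoint URL, signing region name, and credentials are placeholders for whatever a local Eucalyptus installation issues.

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;

public class EucalyptusClientSketch {
    public static void main(String[] args) {
        // Placeholder credentials issued by the local Eucalyptus cloud.
        BasicAWSCredentials creds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");

        // Point the standard EC2 client at the Eucalyptus compute endpoint
        // instead of Amazon's public endpoint (URL and region are placeholders).
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(creds))
                .withEndpointConfiguration(
                        new EndpointConfiguration("https://compute.eucalyptus.example:8773/", "eucalyptus"))
                .build();

        // From here the same EC2-style calls (describeInstances, runInstances, ...)
        // are served by the private Eucalyptus cloud.
        System.out.println("Reservations visible: "
                + ec2.describeInstances().getReservations().size());
    }
}

This endpoint-override pattern is one way the AWS compatibility described above can be exercised from client code.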


2. OpenNebula: A Free Solution for Building Clouds


OpenNebula is a free and open source software solution for building clouds and for data centre
virtualisation. It is based on open technologies and is distributed under the Apache License 2.
OpenNebula has features for scalability, integration, security and accounting. It offers cloud
users and administrators a choice of interfaces.

OpenNebula is an open source platform for constructing virtualised private, public and hybrid
clouds. It is a simple yet feature-rich, flexible solution to build and manage data centre
virtualisation and enterprise clouds. With OpenNebula, virtual systems can be administered and monitored centrally across different hypervisors and storage systems. When a component fails, OpenNebula takes care of the virtual instances on a different host system. The integration and automation of an existing heterogeneous landscape is highly flexible, without further hardware investments.

Benefits of OpenNebula
Its broad hypervisor support and platform-independent architecture make OpenNebula an ideal solution for heterogeneous computing centre environments.
The main advantages of OpenNebula are:

● It is 100 per cent open source and offers all the features in one edition.
● It provides control via the command line or Web interface, which is ideal for a variety
of user groups and needs.
● OpenNebula is available for all major Linux distributions, thus simplifying
installation.
● The long-term use of OpenNebula in large scale production environments has proven
its stability and flexibility.
● OpenNebula is interoperable and supports OCCI (Open Cloud Computing Interface)
and AWS (Amazon Web Services).
Key features of OpenNebula
OpenNebula has features for scalability, integration, security and accounting. The developers
also claim that it supports standardisation, interoperability and portability. It allows cloud users
and administrators to choose from several cloud interfaces. Figure 1 shows the important features
of OpenNebula.


Figure 1: Key features of OpenNebula

Why OpenNebula?

● Web interface or CLI – the choice is yours


By using the OpenNebula CLI or Web interface, you can keep track of activities at
any time. There is a central directory service through which you can add new users,
and those users can be individually entitled. Managing systems, configuring new
virtual systems or even targeting the right users and groups is easy in OpenNebula.
● Availability at all times
OpenNebula not only takes care of the initial provisioning; the high availability of its cloud environment also compares well with other cloud solutions. The central OpenNebula services can be configured for high availability, but this is not absolutely necessary: all systems continue to operate in their original condition and are automatically included again once the control processes are restored.
● Easy remote access
In virtual environments, one lacks the ability to directly access the system when there
are operational problems or issues with the device. Here, OpenNebula offers an easy
solution — using the browser, one can access the system console of the host system
with an integrated VNC server.
● Full control and monitoring
All host and guest systems are constantly monitored in OpenNebula, which keeps the
host and VM dashboards up to date at all times. Depending on the configuration, a virtual machine can be restarted if its host system fails, or migrated to a different system. If a data store with parallel access is used, systems can be moved onto other hardware while in operation, so the maintenance window can be minimised and often avoided completely.

● Open standards
OpenNebula is 100 per cent open source under the Apache License. By supporting
open standards such as OCCI and a host of other open architecture, OpenNebula
provides the security, scalability and freedom of a reliable cloud solution without
vendor lock-in, which involves considerable support and follow-up costs.


Figure 2: OpenNebula architecture

Figure 3: OpenNebula components

OpenNebula architecture
To control a VM’s life cycle, the OpenNebula core coordinates with the following three areas of


management:
1) Image and storage technologies — to prepare disk images
2) The network fabric — to provide the virtual network environment
3) Hypervisors — to create and control VMs
Through pluggable drivers, the OpenNebula core can perform the above operations. It also
supports the deployment of services.
VM placement decisions are taken by a separate scheduler component. It follows a rank scheduling policy, placing VMs on physical hosts according to the rank assigned by the scheduler; these ranks are computed using a ranking algorithm.
OpenNebula uses cloud drivers to interact with external clouds, and also integrates the core with
other management tools by using management interfaces.

Components of OpenNebula
Based on the existing infrastructure, OpenNebula provides various services and resources. You
can view the components in Figure 3.

● APIs and interfaces: These are used to manage and monitor OpenNebula components. They work as an interface for managing physical and virtual resources.
● Users and groups: These support authentication, and authorise individual users and
groups with the individual permissions.
● Hosts and VM resources: These are a key aspect of a heterogeneous cloud that is
managed and monitored, e.g., Xen, VMware.
● Storage components: These are the basis for centralised or decentralised template
repositories.
● Network components: These can be managed flexibly. Naturally, there is support for
VLANs and Open vSwitch.
The front-end

● The machine that has OpenNebula installed on it is known as the front-end machine,
which is also responsible for executing OpenNebula services.
● The front-end needs to have access to the image repository and network connectivity
to each node.
● OpenNebula’s services are listed below:
1. Management daemon (Oned) and scheduler (mm_sched)
2. Monitoring and accounting daemon (Onecctd)
3. Web interface server (Sunstone)
4. Cloud API servers (EC2- query or OCCI)
Virtualisation hosts


● To run the VMs, we require some physical machines, which are called hosts.
● The virtualisation sub-system is responsible for communicating with the hypervisor
and taking the required action for any node in the VM life cycle.
● During the installation, the admin account should be enabled to execute commands
with root privileges.
Storage
Data stores are used to handle the VM images, and each data store must be accessible by the
front-end, using any type of storage technology.
OpenNebula has three types of data stores:

● File data store – used to store plain files (not disk images)
● Image data store – repository for images only
● System data store – used to hold the running VM images
The image data store type depends on the storage technology used. There are three types of image data stores available:
● File system – stores VM images in file formats
● LVM – reduces the overhead of having the file system in place; the LVM is used to store virtual images instead of plain files
● Ceph – stores images using Ceph blocks
OpenNebula can handle multiple storage scenarios, either centralised or decentralised.
Networking
There must be at least two physical networks configured in OpenNebula:

● Service network – to access the hosts to monitor and manage hypervisors, and to
move VM images.
● Instance network – to offer network connectivity between the VMs across the
different hosts.
Whenever any VM gets launched, OpenNebula will connect its network interfaces to the bridge
described in the virtual network definition.
OpenNebula supports four types of networking modes:

● Bridged–where the VM is directly attached to the physical bridge in the hypervisor.


● VLAN–where the VMs are connected by using 802.1Q VLAN tagging.
● Open vSwitch–which is the same as VLAN but uses an open vSwitch instead of a
Linux bridge.
● VXLAN–which implements VLAN using the VXLAN protocol.


3. OPENSTACK
OpenStack is a cloud operating system that controls large pools of compute, storage, and
networking resources throughout a datacenter, all managed and provisioned through APIs with
common authentication mechanisms.
A dashboard is also available, giving administrators control while empowering their users to
provision resources through a web interface.
Beyond standard infrastructure-as-a-service functionality, additional components provide
orchestration, fault management and service management amongst other services to ensure high
availability of user applications.
THE OPENSTACK LANDSCAPE
OpenStack is broken up into services to allow you to plug and play components depending on your needs. The OpenStack map gives you an "at a glance" view of the OpenStack landscape, showing where those services fit and how they can work together.
What is OpenStack?

OpenStack is a set of software tools for building and managing cloud computing platforms for
public and private clouds. Backed by some of the biggest companies in software development
and hosting, as well as thousands of individual community members, many think that OpenStack
is the future of cloud computing. OpenStack is managed by the OpenStack Foundation, a non-
profit that oversees both development and community-building around the project.

Introduction to OpenStack

OpenStack lets users deploy virtual machines and other instances that handle different tasks for managing a cloud environment on the fly. It makes horizontal scaling easy, which means that tasks that benefit from running concurrently can serve more or fewer users on the fly simply by spinning up more instances.

How is OpenStack used in a cloud environment?

The cloud is all about providing computing for end users in a remote environment, where the
actual software runs as a service on reliable and scalable servers rather than on each end-user's
computer. Cloud computing can refer to a lot of different things, but typically the industry talks
about running different items "as a service"—software, platforms, and infrastructure. OpenStack
falls into the latter category and is considered Infrastructure as a Service (IaaS). Providing
infrastructure means that OpenStack makes it easy for users to quickly add new instances, upon which other cloud components can run. Typically, the infrastructure then runs a "platform" upon which a developer can create software applications that are delivered to the end users.

What are the components of OpenStack?

OpenStack is made up of many different moving parts. Because of its open nature, anyone can
add additional components to OpenStack to help it to meet their needs. But the OpenStack
community has collaboratively identified nine key components that are a part of the "core" of
OpenStack, which are distributed as a part of any OpenStack system and officially maintained by
the OpenStack community. (A short provisioning sketch in Java follows the component list below.)

● Nova is the primary computing engine behind OpenStack. It is used for deploying and managing
large numbers of virtual machines and other instances to handle computing tasks.

● Swift is a storage system for objects and files. Rather than the traditional idea of referring to files by their location on a disk drive, developers can instead refer to a unique identifier for the file or piece of information and let OpenStack decide where to store it. This makes scaling easy, as developers don't have to worry about the capacity of a single system

behind the software. It also allows the system, rather than the developer, to worry about how best
to make sure that data is backed up in case of the failure of a machine or network connection.

● Cinder is a block storage component, which is more analogous to the traditional notion of a
computer being able to access specific locations on a disk drive. This more traditional way of
accessing files might be important in scenarios in which data access speed is the most important
consideration.

● Neutron provides the networking capability for OpenStack. It helps to ensure that each of the
components of an OpenStack deployment can communicate with one another quickly and
efficiently.

● Horizon is the dashboard behind OpenStack. It is the only graphical interface to OpenStack, so
for users wanting to give OpenStack a try, this may be the first component they actually “see.”
Developers can access all of the components of OpenStack individually through an application
programming interface (API), but the dashboard provides system administrators a look at what is
going on in the cloud, and to manage it as needed.

● Keystone provides identity services for OpenStack. It is essentially a central list of all of the
users of the OpenStack cloud, mapped against all of the services provided by the cloud, which
they have permission to use. It provides multiple means of access, meaning developers can easily
map their existing user access methods against Keystone.

● Glance provides image services to OpenStack. In this case, "images" refers to images (or virtual
copies) of hard disks. Glance allows these images to be used as templates when deploying new
virtual machine instances.

● Ceilometer provides telemetry services, which allow the cloud to provide billing services to
individual users of the cloud. It also keeps a verifiable count of each user’s system usage of each
of the various components of an OpenStack cloud. Think metering and usage reporting.

● Heat is the orchestration component of OpenStack, which allows developers to store the
requirements of a cloud application in a file that defines what resources are necessary for that
application. In this way, it helps to manage the infrastructure needed for a cloud service to run.
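The components above cooperate whenever a user requests a new virtual machine: Keystone authenticates the request, Nova schedules the instance, Glance supplies the image, and Neutron attaches the network. The hedged sketch below drives that flow directly against the Compute (Nova) REST API using Java's built-in HTTP client; the endpoint URL, the previously obtained Keystone token, and the image/flavor/network IDs are all placeholders for a real deployment.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NovaBootSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: a token previously issued by Keystone and the
        // Nova (compute) endpoint of the OpenStack deployment.
        String token = System.getenv("OS_TOKEN");
        String novaEndpoint = "https://openstack.example:8774/v2.1";

        // Minimal "boot a server" body for the Compute API: Glance image,
        // Nova flavor and Neutron network are referenced by placeholder IDs.
        String body = "{\"server\": {"
                + "\"name\": \"demo-vm\","
                + "\"imageRef\": \"IMAGE_ID\","
                + "\"flavorRef\": \"FLAVOR_ID\","
                + "\"networks\": [{\"uuid\": \"NETWORK_ID\"}]"
                + "}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(novaEndpoint + "/servers"))
                .header("X-Auth-Token", token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A 202 Accepted status means Nova has scheduled the instance.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

In practice an SDK, the openstack command-line client, or the Horizon dashboard issues this same kind of request on the user's behalf.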

4. ANEKA

Aneka is a platform and a framework for developing distributed applications on the Cloud. It
harnesses the spare CPU cycles of a heterogeneous network of desktop PCs and servers or
datacenters on demand


Aneka includes an extensible set of APIs associated with programming models like MapReduce.

These APIs support different cloud models like a private, public, hybrid Cloud.

Manjrasoft focuses on creating innovative software technologies to simplify the development and deployment of private or public cloud applications. Its product Aneka plays the role of an application platform as a service for multiple cloud computing environments.

o Multiple Structures:
o Aneka is a software platform for developing cloud computing applications.
o In Aneka, cloud applications are executed.
o Aneka is a pure PaaS solution for cloud computing.
o Aneka is a cloud middleware product.
o Aneka can be deployed over a network of computers, a multicore server, a data center, a virtual cloud infrastructure, or a combination thereof.

The services hosted by Aneka containers can be classified into three major categories:

o Fabric Services
o Foundation Services
o Application Services

1. Fabric Services:

Fabric Services define the lowest level of the software stack that represents the Aneka container. They provide access to the resource-provisioning subsystem and to the monitoring features implemented in Aneka.

2. Foundation Services:

Foundation Services are the core services of the Aneka Cloud and define the infrastructure management features of the system. They are concerned with the logical management of the distributed system built on top of the infrastructure and provide ancillary services for delivering applications.

3. Application Services:


Application services manage the execution of applications and constitute a layer that varies
according to the specific programming model used to develop distributed applications on top of
Aneka.

There are two major components in the Aneka technology:

The SDK (Software Development Kit) includes the Application Programming Interface (API) and tools needed for the rapid development of applications. The Aneka API supports three popular cloud programming models: Tasks, Threads, and MapReduce;

And

A runtime engine and platform for managing the deployment and execution of applications on a
private or public cloud.

Aneka's potential as a Platform as a Service has been successfully harnessed by its users and customers in different areas, including engineering, life sciences, education, and business intelligence.

Architecture of Aneka

Aneka is a platform and framework for developing distributed applications on the Cloud. It uses
desktop PCs on-demand and CPU cycles in addition to a heterogeneous network of servers or
datacenters. Aneka provides a rich set of APIs for developers to transparently exploit such
resources and express the business logic of applications using preferred programming abstractions.

An Aneka-based computing cloud is a collection of physical and virtualized resources connected via a network, either the Internet or a private intranet. Services are divided into fabric, foundation, and execution services. Foundation services identify the core system of the Aneka middleware, which provides a set of infrastructure features to enable Aneka containers to perform specialized and specific tasks. Fabric services interact directly with nodes through the Platform Abstraction Layer (PAL) and perform hardware profiling and dynamic resource provisioning. Execution services deal directly with scheduling and executing applications in the Cloud.

One of the key features of Aneka is its ability to provide a variety of ways to express distributed
applications by offering different programming models; Execution services are mostly concerned
with providing the middleware with the implementation of these models. Additional services such as persistence and security are transversal to the entire stack of services hosted by the container.

At the application level, a set of different components and tools are provided to

o Simplify the development of applications (SDKs),


o Port existing applications to the Cloud, and


o Monitor and manage multiple clouds.

An Aneka-based cloud is formed by interconnected resources that are dynamically modified


according to user needs using resource virtualization or additional CPU cycles for desktop
machines.

Architecture

5. CLOUD SIM

CloudSim is an open-source framework, which is used to simulate cloud computing


infrastructure and services. It is developed by the CLOUDS Lab organization and is written
entirely in Java. It is used for modelling and simulating a cloud computing environment as a
means for evaluating a hypothesis prior to software development in order to reproduce tests
and results.

Benefits of Simulation over the Actual Deployment:

• No capital investment involved. With a simulation tool like CloudSim there is no


installation or maintenance cost.
• Easy to use and Scalable. You can change the requirements such as adding or deleting
resources by changing just a few lines of code.
• Risks can be evaluated at an earlier stage. In Cloud Computing utilization of real
testbeds limits the experiments to the scale of the testbed and makes the reproduction of
results an extremely difficult undertaking. With simulation, you can test your product
against test cases and resolve issues before actual deployment without any limitations.
• No need for try-and-error approaches. Instead of relying on theoretical and imprecise
evaluations which can lead to inefficient service performance and revenue generation, you
can test your services in a repeatable and controlled environment free of cost with
CloudSim.

Why use CloudSim?

Below are a few reasons to opt for CloudSim:


• Open source and free of cost, so it favours researchers/developers working in the field.
• Easy to download and set-up.
• It is more generalized and extensible to support modelling and experimentation.
• Does not require any high-specs computer to work on.
• Provides pre-defined allocation policies and utilization models for managing resources, and
allows implementation of user-defined algorithms as well.
• The documentation provides pre-coded examples for new developers to get familiar with
the basic classes and functions.
• Tackle bottlenecks before deployment to reduce risk, lower costs, increase performance,
and raise revenue.


CloudSim Architecture:

CloudSim Layered Architecture

CloudSim Core Simulation Engine provides interfaces for the management of resources such
as VM, memory and bandwidth of virtualized Datacenters.
CloudSim layer manages the creation and execution of core entities such as VMs, Cloudlets,
Hosts etc. It also handles network-related execution along with the provisioning of resources
and their execution and management.
User Code is the layer controlled by the user. The developer can write the requirements of the
hardware specifications in this layer according to the scenario.
Some of the most common classes used during simulation are listed below; a minimal usage sketch in Java follows the list:

• Datacenter: used for modelling the foundational hardware equipment of any cloud
environment, that is the Datacenter. This class provides methods to specify the functional
requirements of the Datacenter as well as methods to set the allocation policies of the VMs
etc.
• Host: this class executes actions related to management of virtual machines. It also defines
policies for provisioning memory and bandwidth to the virtual machines, as well as
allocating CPU cores to the virtual machines.
• VM: this class represents a virtual machine by providing data members defining a VM’s
bandwidth, RAM, mips (million instructions per second), size while also providing setter
and getter methods for these parameters.
• Cloudlet: a cloudlet class represents any task that is run on a VM, like a processing task, or
a memory access task, or a file updating task etc. It stores parameters defining the
characteristics of a task such as its length, size, mi (million instructions) and provides
methods similarly to VM class while also providing methods that define a task’s execution
time, status, cost and history.
• DatacenterBroker: is an entity acting on behalf of the user/customer. It is responsible for
functioning of VMs, including VM creation, management, destruction and submission of
cloudlets to the VM.
• CloudSim: this is the class responsible for initializing and starting the simulation
environment after all the necessary cloud entities have been defined and later stopping after
all the entities have been destroyed.
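These classes fit together in a fixed pattern: initialise the simulation engine, model hosts inside a Datacenter, create a DatacenterBroker, submit Vm and Cloudlet lists, then start and stop the simulation. The sketch below is a minimal scenario along those lines (one host, one VM, one cloudlet), assuming a CloudSim 3.x jar on the classpath; all capacities (MIPS, RAM, bandwidth, cloudlet length) are arbitrary illustrative numbers.

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSimSketch {
    public static void main(String[] args) throws Exception {
        // 1. Initialise the CloudSim core simulation engine (one user/broker).
        CloudSim.init(1, Calendar.getInstance(), false);

        // 2. Model the physical infrastructure: one host inside one datacenter.
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));            // a 1000-MIPS core
        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1_000_000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // 3. The broker acts on behalf of the user and submits VMs and cloudlets.
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        List<Vm> vmList = new ArrayList<>();
        vmList.add(new Vm(0, brokerId, 1000, 1, 512, 1000, 10_000, "Xen",
                new CloudletSchedulerTimeShared()));
        broker.submitVmList(vmList);

        List<Cloudlet> cloudletList = new ArrayList<>();
        Cloudlet cloudlet = new Cloudlet(0, 400_000, 1, 300, 300,
                new UtilizationModelFull(), new UtilizationModelFull(),
                new UtilizationModelFull());
        cloudlet.setUserId(brokerId);
        cloudlet.setVmId(0);
        cloudletList.add(cloudlet);
        broker.submitCloudletList(cloudletList);

        // 4. Run the simulation and report when the task finished.
        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            System.out.println("Cloudlet " + c.getCloudletId()
                    + " finished at simulated time " + c.getFinishTime());
        }
    }
}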

Features of CloudSim:

CloudSim provides support for simulation and modelling of:


1. Large scale virtualized Datacenters, servers and hosts.
2. Customizable policies for provisioning host to virtual machines.
3. Energy-aware computational resources.
4. Application containers and federated clouds (joining and management of multiple public
clouds).
5. Datacenter network topologies and message-passing applications.
6. Dynamic insertion of simulation entities with stop and resume of simulation.
7. User-defined allocation and provisioning policies



UNIT III CLOUD INFRASTRUCTURE

Cloud Architecture and Design – Architectural design challenges – Technologies for Network
based system - NIST Cloud computing Reference Architecture – Public, Private and Hybrid
clouds – Cloud Models : IaaS, PaaS and SaaS – Cloud storage providers - Enabling
Technologies for the Internet of Things – Innovative Applications of the Internet of Things.

Cloud infrastructure is a term used to describe the components needed for cloud computing,
which includes hardware, abstracted resources, storage, and network resources. Think of cloud
infrastructure as the tools needed to build a cloud. In order to host services and applications in
the cloud, you need cloud infrastructure.

3.1 CLOUD ARCHITECTURE AND DESIGN

As we know, cloud computing technology is used by both small and large organizations to store
the information in cloud and access it from anywhere at anytime using the internet connection.

Cloud computing architecture is a combination of service-oriented architecture and event-


driven architecture.

Cloud computing architecture is divided into the following two parts -

o Front End
o Back End

The below diagram shows the architecture of cloud computing -


Front End

The front end is used by the client. It contains client-side interfaces and applications that are
required to access the cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.

Back End

The back end is used by the service provider. It manages all the resources that are required to
provide cloud computing services. It includes a huge amount of data storage, security mechanism,
virtual machines, deploying models, servers, traffic control mechanisms, etc.

Note: Both front end and back end are connected to others through a network, generally using the
internet connection.

Components of Cloud Computing Architecture


There are the following components of cloud computing architecture -
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to
interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
A cloud service manages which type of service you access, according to the client's requirement.
Cloud computing offers the following three type of services:
i. Software as a Service (SaaS) – It is also known as cloud application services. Mostly, SaaS
applications run directly through the web browser, which means we do not need to download and install these applications. Some important examples of SaaS are given below –
Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar
to SaaS, but the difference is that PaaS provides a platform for software creation, but using SaaS,
we can access software over the internet without the need of any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. The provider manages the underlying infrastructure, while the user remains responsible for managing application data, middleware, and runtime environments.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount
of storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure
includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud computing
model.
7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.
8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate with each other.
3.2 ARCHITECTUAL DESIGN CHALLENGES
Cloud computing, an emergent technology, has placed many challenges in different aspects of
data and information handling. Some of these are shown in the following diagram:

Security and Privacy

Security and Privacy of information is the biggest challenge to cloud computing. Security and
privacy issues can be overcome by employing encryption, security hardware and security
applications.
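As one concrete illustration of the encryption point above, data can be encrypted on the client before it ever reaches the cloud provider, so the provider only stores ciphertext. The sketch below uses the standard Java javax.crypto API with AES-GCM; key management (where the key lives, how it is rotated and shared) is deliberately left out, although in practice it matters at least as much as the cipher itself.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class ClientSideEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key; in a real system this would come from a
        // key management service rather than being created ad hoc like this.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // A fresh random 12-byte IV (nonce) is required for every GCM encryption.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Encrypt the payload locally before uploading it to cloud storage.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(
                "sensitive business record".getBytes(StandardCharsets.UTF_8));

        // Only the ciphertext (plus the IV) would be sent to the provider.
        System.out.println("Ciphertext bytes: " + ciphertext.length);
    }
}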

Portability

Another challenge to cloud computing is that applications should be easily migrated from one cloud provider to another; there must not be vendor lock-in. However, this is not yet possible because each cloud provider uses different standard languages for its platform.

Interoperability

It means the application on one platform should be able to incorporate services from the other
platforms. It is made possible via web services, but developing such web services is very complex.
Computing Performance

Data-intensive applications on the cloud require high network bandwidth, which results in high cost. Low bandwidth does not meet the desired computing performance of cloud applications.

Reliability and Availability

It is necessary for cloud systems to be reliable and robust because most of the businesses are now
becoming dependent on services provided by third parties.
3.3 TECHNOLOGIES FOR NETWORK BASED SYSTEM
With the concept of scalable computing under our belt, it’s time to explore hardware, software,
and network technologies for distributed computing system design and applications. In particular,
we will focus on viable approaches to building distributed operating systems for handling massive
parallelism in a distributed environment.
1. Multicore CPUs and Multithreading Technologies
Consider the growth of component and network technologies over the past 30 years. They are
crucial to the development of HPC and HTC systems. In Figure 1.4, processor speed is measured
in millions of instructions per second (MIPS) and network bandwidth is measured in megabits
per second (Mbps) or gigabits per second (Gbps). The unit GE refers to 1 Gbps Ethernet
bandwidth.
1.1 Advances in CPU Processors
Today, advanced CPUs or microprocessor chips assume a multicore architecture with dual, quad,
six, or more processing cores. These processors exploit parallelism at ILP and TLP levels.
Processor speed growth is plotted in the upper curve in Figure 1.4 across generations of
microprocessors or CMPs. We see growth from 1 MIPS for the VAX 780 in 1978 to 1,800 MIPS
for the Intel Pentium 4 in 2002, up to a 22,000 MIPS peak for the Sun Niagara 2 in 2008. As the
figure shows, Moore’s law has proven to be pretty accurate in this case. The clock rate for these
processors increased from 10 MHz for the Intel 286 to 4 GHz for the Pentium 4 in 30 years.

1.2 Multicore CPU and Many-Core GPU Architectures


Multicore CPUs may increase from the tens of cores to hundreds or more in the future. But the
CPU has reached its limit in terms of exploiting massive DLP due to the aforementioned memory
wall problem. This has triggered the development of many-core GPUs with hundreds or more thin
cores. Both IA-32 and IA-64 instruction set architectures are built into commercial CPUs. Now,
x-86 processors have been extended to serve HPC and HTC systems in some high-end server
processors.
1.3 Multithreading Technology
Consider in Figure 1.6 the dispatch of five independent threads of instructions to four pipelined data paths (functional units) in each of the following five processor categories, from left to right: a four-issue superscalar processor, a fine-grain multithreaded processor, a coarse-grain multithreaded processor, a two-core CMP, and a simultaneous multithreaded (SMT) processor. These execution patterns closely mimic an ordinary program.
available instructions for an instruction data path at a particular processor cycle. More blank cells
imply lower scheduling efficiency. The maximum ILP or maximum TLP is difficult to achieve at
each processor cycle. The point here is to demonstrate your understanding of typical instruction
scheduling patterns in these five different micro-architectures in modern processors.
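The thread-level parallelism (TLP) discussed above is what ordinary application code exposes to a multicore or multithreaded processor: independent threads that the hardware can schedule onto separate cores or hardware thread contexts, while ILP is exploited inside each core. A minimal Java sketch of TLP, using a thread pool sized to the machine's core count, is shown below; the loop body is just placeholder work.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TlpSketch {
    public static void main(String[] args) throws Exception {
        // One worker thread per available core, so independent tasks can be
        // scheduled onto separate cores (or hardware threads) by the OS.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Submit independent tasks: this is thread-level parallelism (TLP);
        // instruction-level parallelism (ILP) is exploited inside each core.
        List<Future<Long>> results = new ArrayList<>();
        for (int t = 0; t < cores; t++) {
            final int id = t;
            results.add(pool.submit((Callable<Long>) () -> {
                long sum = 0;
                for (long i = 0; i < 50_000_000L; i++) {
                    sum += (i ^ id);          // placeholder work
                }
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();                 // wait for every thread to finish
        }
        pool.shutdown();
        System.out.println("Combined result from " + cores + " threads: " + total);
    }
}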

2. GPU Computing to Exascale and Beyond


A GPU is a graphics coprocessor or accelerator mounted on a computer’s graphics card or video
card. A GPU offloads the CPU from tedious graphics tasks in video editing applications. The
world’s first GPU, the GeForce 256, was marketed by NVIDIA in 1999. These GPU chips can
process a minimum of 10 million polygons per second, and are used in nearly every computer on
the market today. Some GPU features were also integrated into certain CPUs. Traditional CPUs
are structured with only a few cores. For example, the Xeon X5670 CPU has six cores. However,
a modern GPU chip can be built with hundreds of processing cores.
2.1 How GPUs Work
Early GPUs functioned as coprocessors attached to the CPU. Today, the NVIDIA GPU has been
upgraded to 128 cores on a single chip. Furthermore, each core on a GPU can handle eight threads
of instructions. This translates to having up to 1,024 threads executed concurrently on a single
GPU. This is true massive parallelism, compared to only a few threads that can be handled by a
conventional CPU. The CPU is optimized for latency caches, while the GPU is optimized to deliver
much higher throughput with explicit management of on-chip memory.
2.2 GPU Programming Model
Figure 1.7 shows the interaction between a CPU and GPU in performing parallel execution of
floating-point operations concurrently. The CPU is the conventional multicore processor with
limited parallelism to exploit. The GPU has a many-core architecture that has hundreds of simple
processing cores organized as multiprocessors. Each core can have one or more threads.
Essentially, the CPU’s floating-point kernel computation role is largely offloaded to the many-
core GPU. The CPU instructs the GPU to perform massive data processing. The bandwidth must
be matched between the on-board main memory and the on-chip GPU memory. This process is
carried out in NVIDIA’s CUDA programming using the GeForce 8800 or Tesla and Fermi GPUs.
We will study the use of CUDA GPUs in large-scale cluster computing in Chapter 2.

Example 1.1 The NVIDIA Fermi GPU Chip with 512 CUDA Cores
The Fermi GPU is a newer generation of GPU, first appearing in 2011. The Tesla or Fermi GPU
can be used in desktop workstations to accelerate floating-point calculations or for building large-
scale data centers.

2.3 Power Efficiency of the GPU


Bill Dally of Stanford University considers power and massive parallelism as the major benefits
of GPUs over CPUs for the future. By extrapolating current technology and computer architecture,
it was estimated that 60 Gflops/watt per core is needed to run an exaflops system (see Figure 1.10).
Power constrains what we can put in a CPU or GPU chip. Dally has estimated that the CPU chip
consumes about 2 nJ/instruction, while the GPU chip requires 200 pJ/instruction, which is one-tenth of that of the CPU. The CPU is optimized for latency in caches and memory, while the GPU
is optimized for throughput with explicit management of on-chip memory.
3. Memory, Storage, and Wide-Area Networking
3.1 Memory Technology
The upper curve in Figure 1.10 plots the growth of DRAM chip capacity from 16 KB in 1976 to
64 GB in 2011. This shows that memory chips have experienced a 4x increase in capacity every
three years. Memory access time did not improve much in the past. In fact, the memory wall
problem is getting worse as the processor gets faster.
3.2 Disks and Storage Technology
Beyond 2011, disks or disk arrays have exceeded 3 TB in capacity. The lower curve in Figure 1.10
shows the disk storage growth in 7 orders of magnitude in 33 years. The rapid growth of flash
memory and solid-state drives (SSDs) also impacts the future of HPC and HTC systems. The mortality rate of SSD is not bad at all.
3.3 System-Area Interconnects
The nodes in small clusters are mostly interconnected by an Ethernet switch or a local area
network (LAN). As Figure 1.11 shows, a LAN typically is used to connect client hosts to big
servers. A storage area network (SAN) connects servers to network storage such as disk
arrays. Network attached storage (NAS) connects client hosts directly to the disk arrays. All three
types of networks often appear in a large cluster built with commercial network components. If no
large distributed storage is shared, a small cluster could be built with a multiport Gigabit Ethernet
switch plus copper cables to link the end machines. All three types of networks are commercially
available.

3.4 Wide-Area Networking


The lower curve in Figure 1.10 plots the rapid growth of Ethernet bandwidth from 10 Mbps in
1979 to 1 Gbps in 1999, and 40 ~ 100 GE in 2011. It has been speculated that 1 Tbps network
links will become available by 2013. According to Berman, Fox, and Hey [6], network links with
1,000, 1,000, 100, 10, and 1 Gbps bandwidths were reported, respectively, for international,
national, organization, optical desktop, and copper desktop connections in 2006.
4. Virtual Machines and Virtualization Middleware
A conventional computer has a single OS image. This offers a rigid architecture that tightly couples
application software to a specific hardware platform. Some software running well on one machine
may not be executable on another platform with a different instruction set under a fixed
OS. Virtual machines (VMs) offer novel solutions to underutilized resources, application
inflexibility, software manageability, and security concerns in existing physical machines.
4.1 Virtual Machines
In Figure 1.12, the host machine is equipped with the physical hardware, as shown at the bottom
of the figure. An example is an x-86 architecture desktop running its installed Windows OS, as
shown in part (a) of the figure. The VM can be provisioned for any hardware system. The VM is
built with virtual resources managed by a guest OS to run a specific application. Between the VMs
and the host platform, one needs to deploy a middleware layer called a virtual machine monitor
(VMM).
4.2 VM Primitive Operations
The VMM provides the VM abstraction to the guest OS. With full virtualization, the VMM exports
a VM abstraction identical to the physical machine so that a standard OS such as Windows 2000
or Linux can run just as it would on the physical hardware. Low-level VMM operations are
indicated by Mendel Rosenblum [41] and illustrated in Figure 1.13.

• First, the VMs can be multiplexed between hardware machines, as shown in Figure 1.13(a).
• Second, a VM can be suspended and stored in stable storage, as shown in Figure 1.13(b).
• Third, a suspended VM can be resumed or provisioned to a new hardware platform, as shown
in Figure 1.13(c).
• Finally, a VM can be migrated from one hardware platform to another, as shown in Figure
1.13(d).
4.3 Virtual Infrastructures
Physical resources for compute, storage, and networking at the bottom of Figure 1.14 are mapped
to the needy applications embedded in various VMs at the top. Hardware and software are then
separated. Virtual infrastructure is what connects resources to distributed applications. It is a
dynamic mapping of system resources to specific applications.
5. Data Center Virtualization for Cloud Computing
In this section, we discuss basic architecture and design considerations of data centers. Cloud
architecture is built with commodity hardware and network devices. Almost all cloud platforms
choose the popular x86 processors. Low-cost terabyte disks and Gigabit Ethernet are used to build
data centers. Data center design emphasizes the performance/price ratio over speed performance
alone. In other words, storage and energy efficiency are more important than sheer speed
performance.
5.1 Data Center Growth and Cost Breakdown
A large data center may be built with thousands of servers. Smaller data centers are typically built
with hundreds of servers. The cost to build and maintain data center servers has increased over the
years. According to a 2009 IDC report (see Figure 1.14), typically only 30 percent of data center
costs goes toward purchasing IT equipment (such as servers and disks), 33 percent is attributed to
the chiller, 18 percent to the uninterruptible power supply (UPS), 9 percent to computer room
air conditioning (CRAC), and the remaining 7 percent to power distribution, lighting, and
transformer costs. Thus, about 60 percent of the cost to run a data center is allocated to
management and maintenance. The server purchase cost did not increase much with time. The
cost of electricity and cooling did increase from 5 percent to 14 percent in 15 years.
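
As a quick worked example, the cost shares quoted above can be applied to a hypothetical annual data-center budget; the $10 million figure below is an assumption chosen only for illustration.

# Applying the 2009 IDC cost breakdown quoted above to a hypothetical
# $10 million annual data-center budget (the budget figure is an assumption
# used only for illustration).
breakdown = {
    "IT equipment (servers, disks)": 0.30,
    "Chiller": 0.33,
    "Uninterruptible power supply (UPS)": 0.18,
    "Computer room air conditioning (CRAC)": 0.09,
    "Power distribution, lighting, transformers": 0.07,
}
budget = 10_000_000
for item, share in breakdown.items():
    print(f"{item}: ${share * budget:,.0f}")
print("Sum of quoted shares:", round(sum(breakdown.values()), 2))  # 0.97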
5.2 Low-Cost Design Philosophy
High-end switches or routers may be too cost-prohibitive for building data centers. Thus, using
high-bandwidth networks may not fit the economics of cloud computing. Given a fixed budget,
commodity switches and networks are more desirable in data centers. Similarly, using commodity
x86 servers is more desired over expensive mainframes. The software layer handles network traffic
balancing, fault tolerance, and expandability. Currently, nearly all cloud computing data centers
use Ethernet as their fundamental network technology.
5.3 Convergence of Technologies
Essentially, cloud computing is enabled by the convergence of technologies in four areas: (1) hardware
virtualization and multi-core chips, (2) utility and grid computing, (3) SOA, Web 2.0, and
WS mashups, and (4) autonomic computing and data center automation. Hardware virtualization
and multicore chips enable the existence of dynamic configurations in the cloud. Utility and grid
computing technologies lay the necessary foundation for computing clouds.
3.4 NIST CLOUD COMPUTING REFERENCE ARCHITECTURE
NIST Cloud Computing reference architecture defines five major actors:
• Cloud Provider
• Cloud Carrier
• Cloud Broker
• Cloud Auditor
• Cloud Consumer
Each actor is an entity (a person or an organization) that participates in a transaction or
process and/or performs tasks in cloud computing. The five major actors defined in the
NIST cloud computing reference architecture are described below:

1. Cloud Service Providers: A group or object that delivers cloud services to cloud consumers
or end-users. It offers various components of cloud computing. Cloud computing consumers
purchase a growing variety of cloud services from cloud service providers. There are various
categories of cloud-based services mentioned below:

• IaaS Providers: In this model, the cloud service providers offer infrastructure components
that would exist in an on-premises data center. These components consist of servers,
networking, and storage as well as the virtualization layer.
• SaaS Providers: In Software as a Service (SaaS), vendors provide a wide sequence of
business technologies, such as Human resources management (HRM) software, customer
relationship management (CRM) software, all of which the SaaS vendor hosts and provides
services through the internet.
• PaaS Providers: In Platform as a Service (PaaS), vendors offer cloud infrastructure and
services that users can access to perform many functions. In PaaS, services and products are
mostly utilized in software development. PaaS providers offer more services than IaaS
providers. PaaS providers provide the operating system and middleware, along with the
application stack, on top of the underlying infrastructure.
2. Cloud Carrier: The intermediary that provides connectivity and transport of cloud
services between cloud service providers and cloud consumers. It allows access to the services of
the cloud through Internet networks, telecommunication, and other access devices. Network
and telecom carriers or a transport agent can provide distribution. A consistent level of services
is provided when cloud providers set up Service Level Agreements (SLAs) with a cloud carrier.
In general, a carrier may be required to offer dedicated and encrypted connections.

3. Cloud Broker: An organization or a unit that manages the performance, use, and delivery of
cloud services by enhancing specific capabilities and offering value-added services to cloud
consumers. It combines and integrates various services into one or more new services. Brokers
provide service arbitrage, which allows flexibility and opportunistic choices. There are three
major services offered by a cloud broker:
• Service Intermediation.
• Service Aggregation.
• Service Arbitrage.
4. Cloud Auditor: An entity that can conduct independent assessment of cloud services,
security, performance, and information system operations of the cloud implementations. The
services that are provided by Cloud Service Providers (CSP) can be evaluated by service
auditors in terms of privacy impact, security control, and performance, etc. Cloud Auditor can
make an assessment of the security controls in the information system to determine the extent
to which the controls are implemented correctly, operating as planned, and producing the
desired outcome with respect to meeting the security requirements of the system. There are three
major roles of the Cloud Auditor, which are mentioned below:
• Security Audit.
• Privacy Impact Audit.
• Performance Audit.
5. Cloud Consumer: A cloud consumer is the end-user who browses or utilizes the services
provided by Cloud Service Providers (CSP) and sets up service contracts with the cloud provider.
The cloud consumer pays per use of the provisioned service, based on the measured services
actually utilized. In addition, a set of organizations having mutual regulatory constraints performs a
security and risk assessment for each use case of cloud migrations and deployments.

3.5 PUBLIC, PRIVATE AND HYBRID CLOUDS


There are the following four types of cloud that you can deploy according to the organization's
needs -

o Public Cloud
o Private Cloud
o Hybrid Cloud
o Community Cloud

Public Cloud

Public cloud is open to all to store and access information via the Internet using the pay-per-usage
method.
In public cloud, computing resources are managed and operated by the Cloud Service Provider
(CSP).
Example: Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise, Google
App Engine, and Microsoft Windows Azure Services Platform.
Advantages of Public Cloud
There are the following advantages of Public Cloud -

o Public cloud has a lower cost of ownership than the private and hybrid clouds.
o Public cloud is maintained by the cloud service provider, so consumers do not need to worry about the
maintenance.
o Public cloud is easier to integrate. Hence it offers greater flexibility to
consumers.
o Public cloud is location independent because its services are delivered through the internet.
o Public cloud is highly scalable as per the requirement of computing resources.
o It is accessible by the general public, so there is no limit to the number of users.

Disadvantages of Public Cloud

o Public Cloud is less secure because resources are shared publicly.


o Performance depends upon the high-speed internet network link to the cloud provider.
o The client has no control over the data.

Private Cloud
Private cloud is also known as an internal cloud or corporate cloud. It is used by organizations
to build and manage their own data centers internally or through a third party. It can be deployed using
open-source tools such as OpenStack and Eucalyptus.

Based on the location and management, the National Institute of Standards and Technology (NIST)
divides private cloud into the following two parts-

o On-premise private cloud


o Outsourced private cloud
Advantages of Private Cloud

There are the following advantages of the Private Cloud -

o Private cloud provides a high level of security and privacy to the users.
o Private cloud offers better performance with improved speed and space capacity.
o It allows the IT team to quickly allocate and deliver on-demand IT resources.
o The organization has full control over the cloud because it is managed by the organization
itself. So, there is no need for the organization to depend on anybody else.
o It is suitable for organizations that require a separate cloud for their personal use and data
security is the first priority.

Disadvantages of Private Cloud

o Skilled people are required to manage and operate cloud services.


o Private cloud is accessible within the organization, so the area of operations is limited.
o Private cloud is not suitable for organizations that have a high user base, or for organizations
that do not have the prebuilt infrastructure or sufficient manpower to maintain and manage
the cloud.

Hybrid Cloud

Hybrid Cloud is a combination of the public cloud and the private cloud. In other words, we can say:

Hybrid Cloud = Public Cloud + Private Cloud

Hybrid cloud is partially secure because the services which are running on the public cloud can be
accessed by anyone, while the services which are running on a private cloud can be accessed only
by the organization's users.

Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS
Office on the Web and One Drive), Amazon Web Services.

Advantages of Hybrid Cloud

There are the following advantages of Hybrid Cloud -


o Hybrid cloud is suitable for organizations that require more security than the public cloud.
o Hybrid cloud helps you to deliver new products and services more quickly.
o Hybrid cloud provides an excellent way to reduce the risk.
o Hybrid cloud offers flexible resources because of the public cloud and secure resources
because of the private cloud.

Disadvantages of Hybrid Cloud

o In the hybrid cloud, the security features are not as good as in the private cloud.


o Managing a hybrid cloud is complex because it is difficult to manage more than one type
of deployment model.
o In the hybrid cloud, the reliability of the services depends on cloud service providers.

Difference between public cloud, private cloud & hybrid cloud

The below table shows the difference between public cloud, private cloud, and hybrid cloud -

Parameter | Public Cloud | Private Cloud | Hybrid Cloud
Host | Service provider | Enterprise (or third party) | Enterprise (or third party)
Users | General public | Selected users | Selected users
Access | Internet | Internet, VPN | Internet, VPN
Owner | Service provider | Enterprise | Enterprise

3.6 CLOUD MODELS: IAAS, PAAS & SAAS

There are the following three types of cloud service models -

1. Infrastructure as a Service (IaaS)


2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

Infrastructure as a Service (IaaS)


IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed
over the internet. The main advantage of using IaaS is that it helps users to avoid the cost and
complexity of purchasing and managing the physical servers.

Characteristics of IaaS

There are the following characteristics of IaaS -

o Resources are available as a service


o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google
Compute Engine (GCE), Rackspace, and Cisco Metacloud.

Platform as a Service (PaaS)

PaaS cloud computing platform is created for the programmer to develop, test, run, and manage
the applications.

Characteristics of PaaS

There are the following characteristics of PaaS -

o Accessible to various users via the same development application.


o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be scaled up or down as per
the organization's need.
o Support multiple languages and frameworks.
o Provides an ability to "Auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.

Software as a Service (SaaS)

SaaS is also known as "on-demand software". It is a software distribution model in which the applications are hosted
by a cloud service provider. Users can access these applications with the help of an internet
connection and a web browser.

Characteristics of SaaS

There are the following characteristics of SaaS -

o Managed from a central location


o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates. Updates are applied
automatically.
o The services are purchased on the pay-as-per-use basis

Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, ZenDesk,
Slack, and GoToMeeting.

Difference between IaaS, PaaS, and SaaS

The below table shows the difference between IaaS, PaaS, and SaaS -

IaaS | PaaS | SaaS
It provides a virtual data center to store information and create platforms for app development, testing, and deployment. | It provides virtual platforms and tools to create, test, and deploy apps. | It provides web software and apps to complete business tasks.
It provides access to resources such as virtual machines, virtual storage, etc. | It provides runtime environments and deployment tools for applications. | It provides software as a service to the end-users.
It is used by network architects. | It is used by developers. | It is used by end users.
IaaS provides only Infrastructure. | PaaS provides Infrastructure + Platform. | SaaS provides Infrastructure + Platform + Software.
Infrastructure as a Service | IaaS

IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud computing
platform. It allows customers to outsource their IT infrastructure, such as servers, networking,
processing, storage, virtual machines, and other resources. Customers access these resources on
the Internet using a pay-as-per use model.

In traditional hosting services, IT infrastructure was rented out for a specific period of time, with
pre-determined hardware configuration. The client paid for the configuration and time, regardless
of the actual use. With the help of the IaaS cloud computing platform layer, clients can dynamically
scale the configuration to meet changing requirements and are billed only for the services actually
used.
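
As one hedged illustration of this on-demand, pay-per-use provisioning, the sketch below launches and terminates a virtual server using the AWS SDK for Python (boto3); the region, AMI ID, and instance type are placeholders rather than recommended values.

# A hedged sketch of on-demand IaaS provisioning with boto3; the AMI ID and
# instance type are hypothetical placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)

# The instance is billed only while it runs; terminate it when no longer needed.
instances[0].terminate()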

IaaS cloud computing platform layer eliminates the need for every organization to maintain the IT
infrastructure.

IaaS is offered in three models: public, private, and hybrid cloud. The private cloud implies that
the infrastructure resides at the customer-premise. In the case of public cloud, it is located at the
cloud computing platform vendor's data center, and the hybrid cloud is a combination of the two
in which the customer selects the best of both public cloud or private cloud.

IaaS provider provides the following services -

1. Compute: Computing as a Service includes virtual central processing units and virtual
main memory for the VMs that are provisioned to the end-users.
2. Storage: The IaaS provider provides back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as routers,
switches, and bridges for the VMs.
4. Load balancers: It provides load balancing capability at the infrastructure layer.

Advantages of IaaS cloud computing layer


There are the following advantages of IaaS computing layer -
1. Shared infrastructure
IaaS allows multiple users to share the same physical infrastructure.
2. Web access to the resources
IaaS allows IT users to access resources over the internet.
3. Pay-as-per-use model
IaaS providers provide services based on the pay-as-per-use basis. The users are required to pay
for what they have used.
4. Focus on the core business
IaaS allows organizations to focus on their core business rather than on IT infrastructure.
5. On-demand scalability
On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users do not need to
worry about upgrading software or troubleshooting issues related to hardware components.
Disadvantages of IaaS cloud computing layer
1. Security
Security is one of the biggest issues in IaaS. Most of the IaaS providers are not able to provide
100% security.
2. Maintenance & Upgrade
Although IaaS service providers maintain the software, they do not upgrade the software for
some organizations.
3. Interoperability issues
It is difficult to migrate a VM from one IaaS provider to another, so customers might face
problems related to vendor lock-in.
IaaS Vendor | IaaS Solution | Details
Amazon Web Services | Elastic Compute Cloud (EC2), Elastic MapReduce, Route 53, Virtual Private Cloud, etc. | The cloud computing platform pioneer, Amazon offers auto scaling, cloud monitoring, and load balancing features as part of its portfolio.
Netmagic Solutions | Netmagic IaaS Cloud | Netmagic runs from data centers in Mumbai, Chennai, and Bangalore, and a virtual data center in the United States. Plans are underway to extend services to West Asia.
Rackspace | Cloud servers, cloud files, cloud sites, etc. | The cloud computing platform vendor focuses primarily on enterprise-level hosting services.
Reliance Communications | Reliance Internet Data Center | RIDC supports both traditional hosting and cloud services, with data centers in Mumbai, Bangalore, Hyderabad, and Chennai. The cloud services offered by RIDC include IaaS and SaaS.
Sify Technologies | Sify IaaS | Sify's cloud computing platform is powered by HP's converged infrastructure. The vendor offers all three types of cloud services: IaaS, PaaS, and SaaS.
Tata Communications | InstaCompute | InstaCompute is Tata Communications' IaaS offering. InstaCompute data centers are located in Hyderabad and Singapore, with operations in both countries.

Platform as a Service | PaaS

Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily


create, test, run, and deploy web applications. You can purchase these applications from a cloud
service provider on a pay-as-per use basis and access them using the Internet connection. In PaaS,
back end scalability is managed by the cloud service provider, so end- users do not need to worry
about managing the infrastructure.

PaaS includes infrastructure (servers, storage, and networking) and platform (middleware,
development tools, database management systems, business intelligence, and more) to support the
web application life cycle.

Example: Google App Engine, Force.com, Joyent, Azure.
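
As a minimal sketch of the kind of application a PaaS runs on the developer's behalf, the snippet below uses Flask; the choice of Flask and the port number are assumptions, and the platform (for example App Engine or Heroku) would supply the runtime, scaling, and deployment tooling.

# A minimal web application of the kind a PaaS would host; locally this runs a
# development server, while on a PaaS the platform's own application server
# and auto-scaling take over.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted application"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)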

PaaS providers provide programming languages, application frameworks, databases, and
other tools:
1. Programming languages
PaaS providers provide various programming languages for the developers to develop the
applications. Some popular programming languages provided by PaaS providers are Java, PHP,
Ruby, Perl, and Go.
2. Application frameworks
PaaS providers provide application frameworks to simplify application development.
Some popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla,
WordPress, Spring, Play, Rack, and Zend.
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis to
communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and deploy the
applications.
Advantages of PaaS
1) Simplified Development
PaaS allows developers to focus on development and innovation without worrying about
infrastructure management.
2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC and an
internet connection to start building applications.
3) Prebuilt business functionality
Some PaaS vendors also provide predefined business functionality so that users can avoid
building everything from scratch and can start their projects directly.
4) Instant community
PaaS vendors frequently provide online communities where developers can get ideas,
share experiences, and seek advice from others.
5) Scalability
Applications deployed can scale from one to thousands of users without any changes to the
applications.
Disadvantages of PaaS cloud computing layer
1) Vendor lock-in
One has to write the applications according to the platform provided by the PaaS vendor, so the
migration of an application to another PaaS vendor would be a problem.
2) Data Privacy
Corporate data, whether it can be critical or not, will be private, so if it is not located within the
walls of the company, there can be a risk in terms of privacy of data.
3) Integration with the rest of the systems applications
It may happen that some applications are local, and some are in the cloud.
Popular PaaS Providers
The below table shows some popular PaaS providers and services that are provided by them -

Providers | Services
Google App Engine (GAE) | App Identity, URL Fetch, Cloud Storage client library, Log Service
Salesforce.com | Faster implementation, rapid scalability, CRM services, Sales Cloud, mobile connectivity, Chatter
Windows Azure | Compute, security, IoT, data storage
AppFog | Justcloud.com, SkyDrive, GoogleDocs
OpenShift | RedHat, Microsoft Azure
Cloud Foundry from VMware | Data, messaging, and other services

Software as a Service | SaaS

SaaS is also known as "On-Demand Software". It is a software distribution model in which


services are hosted by a cloud service provider. These services are available to end-users over the
internet so, the end-users do not need to install any software on their devices to access these
services.
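
A hedged illustration of how such a SaaS application is typically consumed over the internet through its web API is given below; the endpoint, resource name, and access token are hypothetical, not a real provider's API.

# Consuming a hypothetical SaaS application through its web API.
import requests

API_URL = "https://api.example-saas.com/v1/tickets"   # hypothetical endpoint
headers = {"Authorization": "Bearer <access-token>"}   # placeholder credential

response = requests.get(API_URL, headers=headers, timeout=10)
response.raise_for_status()
for ticket in response.json():
    print(ticket.get("id"), ticket.get("subject"))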

There are the following services provided by SaaS providers -

Business Services - SaaS providers provide various business services to start up a business. The
SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer
Relationship Management), billing, and sales.
Document Management - SaaS document management is a software application offered by a
third party (SaaS providers) to create, manage, and track electronic documents.

Example: Slack, Samepage, Box, and Zoho Forms.

Social Networks - As we all know, social networking sites are used by the general public, so social
networking service providers use SaaS for their convenience and handle the general public's
information.

Mail Services - To handle the unpredictable number of users and the load on e-mail services, many
e-mail providers offer their services using SaaS.

Advantages of SaaS cloud computing layer


1) SaaS is easy to buy
SaaS pricing is based on a monthly fee or annual fee subscription, so it allows organizations to
access business functionality at a low cost, which is less than licensed applications.
Unlike traditional software, which is sold under a license with an up-front cost (and often an
optional ongoing support fee), SaaS providers generally price their applications using a
subscription fee, most commonly a monthly or annual fee.
2. One to Many
SaaS services are offered as a one-to-many model, meaning a single instance of the application is
shared by multiple users.
3. Less hardware required for SaaS
The software is hosted remotely, so organizations do not need to invest in additional hardware.
4. Low maintenance required for SaaS
Software as a service removes the need for installation, set-up, and daily maintenance for the
organizations. The initial set-up cost for SaaS is typically less than that of enterprise software. SaaS
vendors price their applications based on usage parameters, such as the number of users
using the application, which makes SaaS easy to monitor; updates are also applied automatically.
5. No special software or hardware versions required
All users will have the same version of the software and typically access it through the web
browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and
support to the cloud provider.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin
clients.
7. API Integration
SaaS services easily integrate with other software or services through standard APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider using an internet connection, so no
software installation is required.
Disadvantages of SaaS cloud computing layer
1) Security
Actually, data is stored in the cloud, so security may be an issue for some users. However, cloud
computing is not more secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there
is a possibility that there may be greater latency when interacting with the application compared
to local deployment. Therefore, the SaaS model is not suitable for applications whose demand
response time is in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files
over the internet and then converting and importing them into the other SaaS application.

Popular SaaS Providers

The below table shows some popular SaaS providers and services that are provided by them -

Provider | Services
Salesforce.com | On-demand CRM solutions
Microsoft Office 365 | Online office suite
Google Apps | Gmail, Google Calendar, Docs, and Sites
NetSuite | ERP, accounting, order management, CRM, Professional Services Automation (PSA), and e-commerce applications
GoToMeeting | Online meeting and video-conferencing software
Constant Contact | E-mail marketing, online survey, and event marketing
Oracle CRM | CRM applications
Workday, Inc | Human capital management, payroll, and financial management

3.7 CLOUD STORAGE PROVIDERS


Cloud storage is a flexible and convenient new-age solution to store data. In the past, data was stored
on hard drives and external storage devices such as floppy disks, thumb drives, and compact discs.

Over time, these were replaced by local on-premise storage devices. More recently, cloud storage has
become a popular medium of choice for enterprises as well as individuals. Today, individuals and
enterprises alike are embracing cloud storage to store data.

All the data is stored remotely without taking up physical space in your home or office or
exhausting the megabytes on your computer.

In other words, cloud storage is a service that lets you transfer data over the Internet and store it in
an offsite storage system maintained by a third party. The Internet connects the computer to the
offsite storage system. The Internet is all you need to access your files anywhere and at any time.

Data for Cloud Storage Providers

Data Limit for Free Version | Premium Option | App Store Rating | Google Play Store Rating
2 GB | $11.99/month for 2 TB | 4.5 | 4.3
15 GB | $1.99/month for 100 GB | 4.6 | 4.4
10 GB | $3.99/month for 500 GB | 3.5 | 4.4
5 GB | $1.99/month for 100 GB | 4.7 | 4.6
10 GB | $2.49/month for 1 TB | 3.9 | 4.1
5 GB | $5/month for 500 GB | 5 | 3.2
10 GB | $10/month for 100 GB | 4.8 | 4.7
5 GB | $4.34/month for 83 GB | 4.5 | 3.7
5 GB | $0.99/month for 50 GB | 4.9 | 3.5
What Can Cloud Storage Do for You?

Cloud storage offers many benefits which can potentially improve efficiency and productivity in
terms of backing up and securing the data. Here are a few of them:

• Accessibility: Data stored on the cloud can be accessed on-the-go, anytime, and
anywhere. All you need is an internet connection.
• Mobility: Cloud storage providers even offer applications that work with various
devices such as mobile phones and tablets.
• Synchronization: You have the option to sync all your files across all devices so that
you have the most current available to you all the time, creating a single source of
truth.
• Collaboration: Cloud storage services come with features that allow multiple people
to collaborate on a single file even if they are spread across various locations across
the world.
• Cost-Saving: Cloud storage providers generally require you to pay only for the
amount of storage you use, which prevents businesses from over-investing into their
storage needs. Moreover, purchasing servers can be very costly. Hiring specialized
resources to maintain these servers can be even more expensive.
• Scalable: Cloud storage providers offer various plans that can quickly scale your data
storage capacity to meet the growing needs of your business.
• Low Maintenance: The responsibility of upkeeping of the storage lies with the cloud
storage provider.
• Space-Saving: Servers and even other forms of physical storage devices such as hard
disks and USBs require space. This situation does not arise with cloud storage. You
also do not have to worry about files eating up all the area on your hard drive and
lowering the speed of your device.
• Reduced Carbon Footprint: Even a small data center requires servers, networks,
power, cooling, space, and ventilation, all of which can contribute significantly to
energy consumption and CO2 emissions. Switching to cloud computing, therefore,
can drastically reduce your energy consumption levels.
• Security: Cloud storage solutions are designed to be very resilient. They serve as a
resistant standby, as most cloud storage providers have about two to three backup
servers located in different places globally.
How to Use Cloud Storage?

While individuals use it for personal storage to store email backups, pictures, videos, and other
such personal files, enterprises use cloud storage as a commercially-supported remote backup solution,
where they can securely transfer and store data files and even share them among various locations.

Cloud storage solutions are relatively simple and easy to use. You are generally required to register
and set up your account. Once you do that, you use your unique username and password to save
your files via the Internet.
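
As a small illustration of transferring a file to an offsite storage system over the internet, the sketch below uses Amazon S3 through boto3 as one possible provider; the bucket name and file names are placeholders.

# Store and retrieve a file in an offsite cloud storage service (Amazon S3 via
# boto3 is used here as one example; the bucket name is a placeholder).
import boto3

s3 = boto3.client("s3")
s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")
s3.download_file("my-example-bucket", "backups/report.pdf", "report_copy.pdf")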

What Are the Different Types of Cloud Storage?

There are primarily three types of cloud storage solutions:

1. Public Cloud Storage

Suitable for unstructured data, public cloud storage is offered by third-party cloud storage
providers over the open Internet. They may be available for free or on a paid basis. Users are
usually required to pay for only what they use.

2. Private Cloud Storage

A private cloud allows organizations to store data in their own environment. The infrastructure is hosted
on-premises. It offers many of the benefits that come with a public cloud service, such as self-service and
scalability, while the dedicated in-house resources increase the scope for customization and control.
Internal hosting and company firewalls also make this the more secure option.

3. Hybrid Cloud Storage

As the name suggests, a hybrid cloud allows data and applications to be shared between a public and
a private cloud. Businesses that have a sensitive on-premise solution can seamlessly scale up to
the public cloud to handle any short-term spikes or overflow.

3.8 ENABLING TECHNOLOGIES FOR THE INTERNET OF THINGS

As mentioned in part one — What is the Internet of Things — this series is not intended to be an
engineering textbook on the IoT and its enabling technologies. However, the IoT’s technology
road map is shown in Figure 2 [19] and we will present some general background on the IoT’s
technical landscape to inform our IP strategy discussion (forthcoming).
Figure 2: Technology Roadmap: The Internet of Things

A. BIG DATA

As more things (or “smart objects”) are connected to the IoT, more data is collected from them in
order to perform analytics to determine trends and associations that lead to insights. For example,
an oil well equipped with 20-30 sensors can generate 500,000 data points every 15 seconds [20], a
jetliner with 6,000 sensors generates 2.5 terabytes of data per day [21], and the more than 46
million smart utility meters installed in the U.S. generate more than 1 billion data points each
day. [22] Thus, the term “big data” refers to these large data sets that need to be collected, stored,
queried, analyzed and generally managed in order to deliver on the promise of the IoT —
insight!
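
A back-of-the-envelope calculation makes these volumes concrete; the only added assumption is a 24-hour window for the oil-well figure.

# Rough daily data volumes implied by the figures quoted above.
seconds_per_day = 24 * 60 * 60
oil_well_points_per_day = 500_000 * (seconds_per_day // 15)   # 500,000 points every 15 s
print(f"Oil well: {oil_well_points_per_day:,} data points per day")   # 2,880,000,000
print("Jetliner: about 2.5 terabytes of raw sensor data per day")
print("US smart meters: more than 1,000,000,000 data points per day")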

Further compounding the technical challenges of big data is the fact that IoT systems must deal
with not only the data collected from smart objects, but also the ancillary data that is needed to
manage and operate the IoT system itself.

B. DIGITAL TWIN

Another consequence of the growing and evolving IoT is the concept of a “digital twin,”
introduced in 2003 by John Vickers, manager of NASA’s National Center for Advanced
Manufacturing. [23] The concept refers to a digital copy of a physical asset (i.e., a smart object
within the IoT), that lives and evolves in a virtual environment over the physical asset’s lifetime.
That is, as the sensors within the object collect real-time data, a set of models forming the digital
twin is updated with all of the same information. Thus, an inspection of the digital twin would
reveal the same information as a physical inspection of the smart object itself – albeit remotely.
The digital twin of the smart object can then be studied to not only optimize operations of the
smart object through reduced maintenance costs and downtime, but to improve the next
generation of its design.
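
A toy sketch of this idea is given below: a virtual copy of a smart object is updated with every sensor reading, so inspecting the twin shows the same state as inspecting the asset itself. The class and field names are purely illustrative.

# A toy digital twin: every sensor reading is applied to the virtual model, so
# the twin can be inspected remotely in place of the physical asset.
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    asset_id: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Apply a real-time sensor reading to the virtual model."""
        self.state.update(reading)
        self.history.append(reading)


twin = DigitalTwin("pump-17")
twin.ingest({"pressure_bar": 4.2, "temperature_c": 61.5})
print(twin.state)   # remote 'inspection' of the physical asset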
C. CLOUD COMPUTING

As the word “cloud” is often used as a metaphor for the Internet, “cloud computing” refers to
being able to access computing resources via the Internet rather than traditional systems where
computing hardware is physically located on the premises of the user and any software
applications are installed on such local hardware. More formally, “cloud computing” is defined
as:

“[A] model for enabling ubiquitous, convenient, on-demand network access to a shared pool
of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or service
provider interaction.” [24]

Cloud computing – and its three service models of Software as a Service (SaaS), Platform as a
Service (PaaS) and Infrastructure as a Service (IaaS) – is important to the IoT because it allows
any user with a browser and an Internet connection to transform smart object data

into actionable intelligence. That is, cloud computing provides “the virtual infrastructure for
utility computing integrating applications, monitoring devices, storage devices, analytics tools,
visualization platforms, and client delivery… [to] enable businesses and users to access [IoT-
enabled] applications on demand anytime, anyplace and anywhere.” [25]

D. SENSORS

Central to the functionality and utility of the IoT are sensors embedded in smart objects. Such
sensors are capable of detecting events or changes in a specific quantity (e.g., pressure),
communicating the event or change data to the cloud (directly or via a gateway) and, in some
circumstances, receiving data back from the cloud (e.g., a control command) or communicating
with other smart objects. Since 2012, sensors have generally shrunk in physical size and thus
have caused the IoT market to mature rapidly. More specifically: “Technological improvements
created microscopic scale sensors, leading to the use of technologies like
Microelectromechanical systems (MEMS). This meant that sensors were now small enough to be
embedded into unique places like clothing or other [smart objects].” [26]

E. COMMUNICATIONS

With respect to sending and receiving data, wired and wireless communication technologies have
also improved such that nearly every type of electronic equipment can provide data connectivity.
This has allowed the ever-shrinking sensors embedded in smart objects to send and receive data
over the cloud for collection, storage and eventual analysis.

The protocols for allowing IoT sensors to relay data include wireless technologies such as RFID,
NFC, Wi-Fi, Bluetooth, Bluetooth Low Energy (BLE), XBee, ZigBee, Z-Wave, Wireless M-
Bus, SIGFOX and NuelNET, as well as satellite connections and mobile networks using GSM,
GPRS, 3G, LTE, or WiMAX. [27] Wired protocols, useable by stationary smart objects, include
Ethernet, HomePlug, HomePNA, HomeGrid/G.hn and LonWorks, as well as conventional
telephone lines. [28]

F. ANALYTICS SOFTWARE

Within the IoT ecosystem, Application Service Providers (ASPs) – which may or may not differ
from the companies who sell and service the smart objects – provide software to companies that
can transform “raw” machine (big) data collected from smart objects into actionable intelligence
(or insight). Generally speaking, such software performs data mining and employs mathematical
models and statistical techniques to provide insight to users. That is, events, trends and patterns
are extracted from big data sets in order to present the software’s end-users with insight

in the form of portfolio analysis, predictions, risk analysis, automations and corrective,
maintenance and optimization recommendations. In many cases, the ASPs may provide general
analytical software or software targeting specific industries or types of smart objects.

G. EDGE DEVICES

Not shown in our simplistic IoT ecosystem of Figure 1 is exactly how the smart objects
embedded with sensors connect via the Internet to the various service provider systems. The
answer is via “edge devices” – any device such
as a router, routing switch, integrated access device (IAD), multiplexer, or metropolitan area
network (MAN) and wide area network (WAN) access device which provides an entry point
from the global, public Internet into an ASP’s or other enterprise’s private network. [29] In
Industry 4.0, these edge devices are becoming smarter at processing data before such data even
reaches an enterprise network’s backbone (i.e., its core devices and cloud data centers). For
example, edge devices may translate between different network protocols, and provide first-hop
security, initial quality of service (QoS) and access/ distribution policy functionality. [30]
3.9 INNOVATIVE APPLICATIONS OF THE INTERNET OF THINGS
The applications of IoT technologies are multiple, because it is adjustable to almost any
technology that is capable of providing relevant information about its own operation, about the
performance of an activity and even about the environmental conditions that we need to monitor
and control at a distance.
Nowadays, many companies from different sectors and industries are adopting this technology to
simplify, improve, automate, and control different processes. Next, we show some of the
surprising practical applications of the IoT.
1. Wearables.
Virtual glasses, fitness bands that monitor, for example, calorie expenditure and heartbeats,
or GPS tracking belts are just some examples of wearable devices that we have been using
for some time now. Companies such as Google, Apple, Samsung and others have developed and
introduced the Internet of Things and the application thereof into our daily lives.
These are small and energy efficient devices, which are equipped with sensors, with the
necessary hardware for measurements and readings, and with software to collect and organize
data and information about users.
2. Health.
The use of wearables or sensors connected to patients, allows doctors to monitor a patient's
condition outside the hospital and in real-time. Through continuously monitoring certain
metrics and automatic alerts on their vital signs, the Internet of Things helps to improve the
care for patients and the prevention of lethal events in high-risk patients.
Another use is the integration of IoT technology into hospital beds, giving way to smart beds,
equipped with special sensors to observe vital signs, blood pressure, oximeter and body
temperature, among others.
3. Traffic monitoring.
The Internet of things can be very useful in the management of vehicular traffic in large cities,
contributing to the concept of smart cities.
When we use our mobile phones as sensors, which collect and share data from our vehicles
through applications such as Waze or Google Maps, we are using the Internet of Things to
inform us and at the same time contribute to traffic monitoring, showing the conditions of
the different routes, and feeding and improving the information on the different routes to the
same destination, distance, estimated time of arrival.
4. Fleet management.
The installation of sensors in fleet vehicles helps to establish an effective interconnectivity
between the vehicles and their managers as well as between the vehicles and their drivers. Both
driver and manager/ owner can know all kinds of details about the status, operation and needs of
the vehicle, just by accessing the software in charge of collecting, processing and organizing the
data. They can even receive real-time alarms for maintenance incidents that have not been detected by
the driver.
The application of the Internet of Things to fleet management assists with geolocation (and
with it the monitoring of routes and identification of the most efficient routes),
performance analysis, telemetry control and fuel savings, the reduction of polluting
emissions to the environment and can even provide valuable information to improve the
driving of vehicles.
5. Agriculture.
Smart farms are a fact. The quality of soil is crucial to produce good crops, and the Internet of
Things offers farmers the possibility to access detailed knowledge and valuable information of
their soil condition.
Through the implementation of IoT sensors, a significant amount of data can be obtained
on the state and stages of the soil. Information such as soil moisture, level of acidity, the
presence of certain nutrients, temperature and many other chemical characteristics, helps farmers
control irrigation, make water use more efficient, specify the best times to start sowing, and even
discover the presence of diseases in plants and soil.
6. Hospitality.

The application of the IoT to the hotel industry brings with it interesting improvements in the
quality of the service. With the implementation of electronic keys, which are sent directly to
the mobile devices of each guest, it is possible to automate various interactions.
7. Smart grid and energy saving.
The progressive use of intelligent energy meters, or meters equipped with sensors, and the
installation of sensors in different strategic points that go from the production plants to the
different distribution points, allows better monitoring and control of the electrical network.
By establishing a bidirectional communication between the service provider company and the
end user, information of enormous value can be obtained for the detection of faults, decision
making and repair thereof.
It also allows offering valuable information to the end user about their consumption patterns and
about the best ways to reduce or adjust their energy expenditure.
8. Water supply.
A sensor, either incorporated or adjusted externally to water meters, connected to the Internet
and accompanied by the necessary software, helps to collect, process and analyze data, which
allows understanding the behavior of consumers, detecting faults in the supply service, reporting
results and offering courses of action to the company that provides the service.
9. Maintenance management.
One of the areas where the application of IoT technology is most extensive is precisely
maintenance management. Through the combination of sensors and software specialized
in CMMS/ EAM maintenance management, a multifunctional tool is obtained whose use can be
applied to a multiplicity of disciplines and practices, with the purpose of extending the useful life
of physical assets, while guaranteeing asset reliability and availability.
When the characteristics of the software in charge of processing and arranging the data collected
by the sensors are designed to specifically address the maintenance management needs of
physical assets, their application is almost unlimited.
The real-time monitoring of physical assets allows determining when a measurement is out of
range and it is necessary to perform condition-based maintenance (CBM), or even applying
Artificial Intelligence (AI) algorithms such as Machine Learning or Deep Learning to predict the
failure before it happens.
Fracttal for maintenance management and the Internet of Things (IoT)
FRACTTAL is the CMMS/ EAM software specially designed to help organizations and
institutions from any industry, sector or services to better manage the maintenance of their
physical assets. It has unique features that facilitate maintenance management, and at the same
time, has a great versatility that allows it to adapt to the specific maintenance needs of fleet
management, equipment, medical equipment and facilities, hotels, smart grids or smart cities,
and water supply, among others.
While other uses of IoT technology or the Internet of Things, are concerned with offering
an innovative approach to quality of life, urban challenges, food production, agriculture,
manufacturing, medicine, energy supply, water distribution and how to offer a wide variety
of products and services, an application oriented to maintenance management such as
Fracttal CMMS, is responsible for helping all these organizations take care of the assets on
which their fundamental activity is based. Fracttal CMMS does so by applying disruptive
Artificial Intelligence technologies such as Machine Learning or Deep Learning to predict a
failure, even before it happens.
UNIT 4 CLOUD ENABLING TECHNOLOGIES

Service Oriented Architecture – Web Services – Basics of Virtualization – Emulation – Types of


Virtualization – Implementation levels of Virtualization – Virtualization structures – Tools &
Mechanisms – Virtualization of CPU, Memory & I/O Devices – Desktop Virtualization – Server
Virtualization – Google App Engine – Amazon AWS – Federation in the cloud.

Cloud-enabling technology

Cloud-enabling technology is the use of computing resources that are delivered to customers
with the help of the internet. Cloud-computing technologies are proliferating across various
sectors, such as energy and power, oil and gas, buildings and construction, transport,
communication, etc.

4.1 SERVICE ORIENTED ARCHITECTURE

Service-Oriented Architecture (SOA) allows organizations to access on-demand cloud-based


computing solutions according to changing business needs. It can work with or without
cloud computing. The advantages of using SOA are that it is easy to maintain, platform
independent, and highly scalable.

Service Provider and Service consumer are the two major roles within SOA.

Applications of Service-Oriented Architecture

There are the following applications of Service-Oriented Architecture -

o It is used in the healthcare industry.


o It is used to create many mobile applications and games.
o In the air force, SOA infrastructure is used to deploy situational awareness systems.

The service-oriented architecture is shown below:

SOA (Service Oriented Architecture) is built on computer engineering approaches that offer an
architectural advancement towards enterprise systems. It describes a standard method for
requesting services from distributed components, after which the results or outcome are
managed. The primary focus of this service-oriented approach is on the characteristics of the service
interface and predictable service behavior. Web Services means a set or combination of industry
standards collectively labeled as one. SOA provides a translation and management layer within
the cloud architecture that removes the barrier for cloud clients obtaining desired services.
Multiple networking and messaging protocols can be written using SOA's client and components
and can be used to communicate with each other. SOA provides access to reusable Web services
over a TCP/IP network, which makes this an important topic to cloud computing going forward.

Benefits of SOA

• Language Neutral Integration: Regardless of the development language used, the system offers
and invokes services through a common mechanism. Programming language neutrality is
one of the key benefits of SOA's integration approach.
• Component Reuse: Once an organization has built an application component and offered it as a
service, the rest of the organization can utilize that service.
• Organizational Agility: SOA defines building blocks of capabilities provided by software that
offer some service(s) meeting organizational requirements, which can be recombined
and integrated rapidly.
• Leveraging Existing Systems: One of the major uses of SOA is to classify elements
or functions of existing applications and make them available to the organization or enterprise.

Key Benefits Along With Risks of SOA

• Dependence on the network


• Provider cost
• Enterprise standards
• Agility

SOA Architecture

SOA architecture is viewed as five horizontal layers. These are described below:

• Consumer Interface Layer: These are GUI based apps for end users accessing the applications.
• Business Process Layer: These are business-use cases in terms of application.
• Services Layer: These are whole-of-enterprise services maintained in the service inventory.
• Service Component Layer: These are the components used to build the services, such as
functional and technical libraries.
• Operational Systems Layer: It contains the data model.

SOA Governance
It is a notable point to differentiate between IT governance and SOA governance. IT governance
focuses on managing IT resources and infrastructure, whereas SOA governance focuses on managing
business services. Furthermore, in a service-oriented organization, everything should be characterized
as a service. The cost of governance becomes clear when we consider the amount of risk that it
eliminates through a good understanding of services, organizational data, and processes, which helps
in choosing approaches and policies for monitoring and in assessing performance impact.

SOA Architecture and Protocols

Here lies the protocol stack of SOA showing each protocol along with their relationship among
each protocol. These components are often programmed to comply with SCA (Service
Component Architecture), a language that has broader but not universal industry support. These
components are written in BPEL (Business Process Execution Language), Java, C#, XML, etc.,
and can also apply to C++, FORTRAN, or other modern multi-purpose languages such as Python,
PHP, or Ruby. With this, SOA has extended the life of many all-time famous applications.

Security in SOA

With the vast use of cloud technology and its on-demand applications, there is a need for well-
defined security policies and access control. As these issues are addressed, the success of the
SOA architecture will increase. Actions can be taken to ensure security and lessen the risks when
dealing with an SOE (Service Oriented Environment). We can make policies that will influence the
patterns of development and the way services are used. Moreover, the system must be set up in
order to exploit the advantages of the public cloud with resilience. Users must include safety
practices and carefully evaluate the clauses in these respects.

Elements of SOA

Here's the diagrammatic figure showing the different elements of SOA and its subparts:
Figure - Elements Of SOA:

Though SOA achieved much success in the past, the introduction of cloud technology
alongside SOA has renewed the value of SOA.

4.2 WEB SERVICES

Cloud computing is a style of computing in which virtualised and standard resources, software
and data are provided as a service over the Internet.

Consumers and businesses can use the cloud to store data and applications and can interact with
the Cloud using mobiles, desktop computers, laptops etc. via the Internet from anywhere and at
any time.

The technology of Cloud computing entails the convergence of Grid and cluster computing,
virtualisation, Web services and Service Oriented Architecture (SOA) - it offers the potential to
set IT free from the costs and complexity of its typical physical infrastructure, allowing concepts
such as Utility Computing to become meaningful.

Key players include: IBM, HP, Google, Microsoft, Amazon Web Services, Salesforce.com,
NetSuite, VMware.
Benefits of Cloud Computing:

• predictable any time, anywhere access to IT resources


• flexible scaling of resources (resource optimisation)
• rapid, request-driven provisioning
• lower total cost of operations
Risks and Challenges of Cloud computing include:
• security of data and data protection
• data privacy
• legal issues
• disaster recovery
• failure management and fault tolerance
• IT integration management issues
• business regulatory requirements
• SLA (service level agreement) management
Web services refers to software that provides a standardized way of integrating Web-based
applications using the XML, SOAP, WSDL and UDDI open standards over the Internet.
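
As a hedged example of consuming such a web service from Python, the sketch below uses the third-party zeep library to call a SOAP operation described by a WSDL document; the WSDL URL and operation name are hypothetical placeholders, not a real service.

# Invoke a SOAP web service described by a WSDL document using zeep;
# the WSDL URL and the GetCityWeather operation are hypothetical.
from zeep import Client

client = Client("http://example.com/weather?wsdl")       # hypothetical WSDL
result = client.service.GetCityWeather(City="Chennai")   # hypothetical operation
print(result)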
4.3 BASICS OF VIRTUALIZATION

Virtualization is a technique that allows sharing a single physical instance of an application or
resource among multiple organizations or tenants (customers). It does so by assigning a logical
name to a physical resource and providing a pointer to that physical resource on demand.

Virtualization Concept

Creating a virtual machine over an existing operating system and hardware is referred to as hardware
virtualization. Virtual machines provide an environment that is logically separated from the
underlying hardware.
The machine on which the virtual machine is created is known as host machine and virtual
machine is referred as a guest machine. This virtual machine is managed by a software or
firmware, which is known as hypervisor.

Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
Type 1 hypervisor executes on bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun
xVM Server, VirtualLogic VLX are examples of Type 1 hypervisor. The following diagram
shows the Type 1 hypervisor.

Type 1 hypervisors do not have any host operating system because they are installed on a
bare system.
Type 2 hypervisor is a software interface that emulates the devices with which a system normally
interacts. Containers, KVM, Microsoft Hyper V, VMWare Fusion, Virtual Server 2005 R2,
Windows Virtual PC and VMWare workstation 6.0 are examples of Type 2 hypervisor. The
following diagram shows the Type 2 hypervisor.

Types of Hardware Virtualization


Here are the three types of hardware virtualization:

• Full Virtualization
• Emulation Virtualization
• Paravirtualization

Full Virtualization

In full virtualization, the underlying hardware is completely simulated. Guest software does not
require any modification to run.

Emulation Virtualization

In Emulation, the virtual machine simulates the hardware and hence becomes independent of it.
In this, the guest operating system does not require modification.

Paravirtualization
In Paravirtualization, the hardware is not simulated. The guest software run their own isolated
domains.

VMware vSphere is highly developed infrastructure that offers a management infrastructure


framework for virtualization. It virtualizes the system, storage and networking hardware.
4.4 EMULATION
Emulation Virtualization. In emulation virtualization, the virtual machine simulates the hardware
and hence becomes independent of it. Here, the guest operating system does not require any other modification.
In this type of virtualization, the computer hardware provides architectural support to build and manage a fully
virtualized VM.

Emulation Cloud is an open application development environment that helps customers and
third-party developers create, test, and fine-tune customized applications in a completely
virtual environment.

With the web-scale traffic demands of fast-growing cloud-based services, content


distribution, and new IT services emerging from virtualization, requirements for flexibility
and programmability are growing like never before.

There are four key benefits of Emulation Cloud:

• It can accelerate DevOps and web-scale IT integration by enabling customers and


partners to create, test, and fine-tune applications and scripts using a cloud-based
solution rather than expensive and labor-intensive IT resources.
• It provides access to full API definitions and descriptions and enables users to tap
into the expertise of Ciena’s team for questions regarding APIs and code.
• Users can schedule and access virtual lab time to develop unique operational tools,
without IT infrastructure investment.
• It encourages innovation through experimentation and testing—all from the safety of
a virtual cloud environment.
Ciena offers a rich toolset in the Emulation Cloud so developers and IT teams can
simplify integration activities.

4.5 TYPES OF VIRTUALIZATION

Virtualization plays a very important role in cloud computing technology. Normally in cloud
computing, users share the data present in the cloud, such as applications, but with the help of
virtualization, users share the infrastructure.

Types of Virtualization:

1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the
hardware system, it is known as hardware virtualization.

The main job of the hypervisor is to control and monitor the processor, memory and other hardware
resources.
After virtualization of the hardware system, we can install different operating systems on it and run
different applications on those operating systems.

Usage:

Hardware virtualization is mainly done for server platforms, because controlling virtual
machines is much easier than controlling a physical server.

2) Operating System Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed on a host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.

Usage:

Operating System Virtualization is mainly used for testing applications on different operating
system platforms.

3) Server Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the
server system, it is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided into multiple servers
on demand and for balancing the load.

4) Storage Virtualization:

Storage virtualization is the process of grouping the physical storage from multiple network
storage devices so that it looks like a single storage device.

Storage virtualization is also implemented by using software applications.

Usage:

Storage virtualization is mainly done for back-up and recovery purposes.

4.6 IMPLEMENTATION LEVELS OF VIRTUALIZATION

The Five Levels of Implementing Virtualization

1. Instruction Set Architecture Level (ISA)


2. Hardware Abstraction Level (HAL)
3. Operating System Level
4. Library Level
5. Application Level
Virtualization has been present since the 1960s, when it was introduced by IBM. Yet it has only
recently gained real traction, owing to the influx of cloud-based systems.

Virtualization, to explain in brief, is the capability to run multiple instances of computer systems
on the same hardware. The way hardware is being used can vary based on the configuration of
the virtual machine.

The best example of this is your own desktop PC or laptop. You might be running Windows on
your system, but with virtualization you can also run macOS or Ubuntu Linux on it.

Now, there are various levels of virtualizations that we are going to be seeing. Let’s have a look
at them.

The Five Levels of Implementing Virtualization

Virtualization is not that easy to implement. A computer runs an OS that is configured for that
particular hardware, so running a different OS on the same hardware is not directly feasible.

To tackle this, there exists the hypervisor. The hypervisor acts as a bridge between the guest OS
and the hardware, enabling the smooth functioning of each virtual instance.

There are five levels of virtualizations available that are most commonly used in the industry.
These are as follows:

Instruction Set Architecture Level (ISA)

In ISA, virtualization works through an ISA emulation. This is helpful to run heaps of legacy
code which was originally written for different hardware configurations.

This code can be run on the virtual machine through the ISA emulator.

A binary code that might need additional layers to run can now run on an x86 machine or with
some tweaking, even on x64 machines. ISA helps make this a hardware-agnostic virtual
machine.

The basic emulation, though, requires an interpreter. This interpreter reads the guest code
and converts it to a hardware-readable format for processing.
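
To make this concrete, the toy sketch below (Python, purely illustrative — the three-instruction ISA, register names and program are invented, not any real instruction set) shows the fetch-decode-execute loop that an ISA emulator runs over guest instructions.

# Toy fetch-decode-execute loop illustrating ISA-level emulation.
# The mini-ISA (LOAD/ADD/HALT), registers and program are hypothetical.

def emulate(program):
    regs = {"R0": 0, "R1": 0}       # guest registers held in host memory
    pc = 0                          # guest program counter
    while pc < len(program):
        op, *args = program[pc]     # fetch and decode one guest instruction
        if op == "LOAD":            # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":           # ADD dst, src  (dst = dst + src)
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError("unknown opcode: " + op)
        pc += 1                     # execute, then advance
    return regs

# Guest "binary" written for the toy ISA, run unchanged on any host.
print(emulate([("LOAD", "R0", 2), ("LOAD", "R1", 3), ("ADD", "R0", "R1"), ("HALT",)]))
# prints {'R0': 5, 'R1': 3}

A real emulator such as QEMU does the same thing over actual machine instructions, usually adding dynamic binary translation for speed.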

Hardware Abstraction Level (HAL)

As the name suggests, this level helps perform virtualization at the hardware level. It uses a bare
hypervisor for its functioning.

This level helps form the virtual machine and manages the hardware through virtualization.

It enables virtualization of each hardware component such as I/O devices, processors, memory,
etc.

This way multiple users can use the same hardware with numerous instances of virtualization at
the same time.
IBM first implemented this with the IBM VM/370 back in the 1960s. It is well suited to cloud-based
infrastructure.

Thus, it is no surprise that Xen hypervisors currently use HAL-level virtualization to run Linux and
other OSes on x86-based machines.

Operating System Level

At the operating system level, the virtualization model creates an abstract layer between the
applications and the OS.

It is like an isolated container on the physical server and operating system that utilizes hardware
and software. Each of these containers functions like servers.

When the number of users is high, and no one is willing to share hardware, this level of
virtualization comes in handy.

Here, every user gets their own virtual environment with dedicated virtual hardware resources.
This way, no conflicts arise.

Library Level

OS system calls are lengthy and cumbersome, which is why applications opt for APIs from user-
level libraries.

Most of the APIs provided by systems are rather well documented. Hence, library level
virtualization is preferred in such scenarios.

Library interfacing virtualization is made possible by API hooks. These API hooks control the
communication link from the system to the applications.

Some tools available today, such as vCUDA and WINE, have successfully demonstrated this
technique.
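
As a rough analogy for how such API hooks interpose on library calls (vCUDA and WINE do this at the native-library level, which is considerably more involved), the Python sketch below wraps a standard library routine so that every call is intercepted and then forwarded to the real implementation; the names here are only for illustration.

import math

_real_sqrt = math.sqrt            # keep a reference to the real library routine

def hooked_sqrt(x):
    # The "hook": intercept the call, add our own behaviour, then forward it.
    print(f"intercepted sqrt({x})")
    return _real_sqrt(x)

math.sqrt = hooked_sqrt           # install the hook in place of the library API

print(math.sqrt(16.0))            # application code is unchanged, yet the call
                                  # now flows through the interposition layer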

Application Level

Application-level virtualization comes in handy when you wish to virtualize only an application. It
does not virtualize an entire platform or environment.

On an operating system, applications work as one process. Hence it is also known as process-
level virtualization.

It is generally useful when running virtual machines with high-level languages. Here, the
application sits on top of the virtualization layer, which is above the application program.

The application program is, in turn, residing in the operating system.

Programs written in high-level languages and compiled for an application-level virtual machine
can run fluently here.
4.7 VIRTUALIZATION STRUCTURES TOOLS& MECHANISM

In general, there are three typical classes of VM architecture. Figure 3.1 showed the architectures
of a machine before and after virtualization. Before virtualization, the operating system manages
the hardware. After virtualization, a virtualization layer is inserted between the hardware and the
operating system. In such a case, the virtualization layer is responsible for converting portions of
the real hardware into virtual hardware. Therefore, different operating systems such as Linux and
Windows can run on the same physical machine, simultaneously. Depending on the position of the
virtualization layer, there are several classes of VM architectures, namely
the hypervisor architecture, para-virtualization, and host-based virtualization. The hypervisor is
also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization
operations.
1. Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices
like CPU, memory, disk and network interfaces. The hypervisor software sits directly between the
physical hardware and its OS. This virtualization layer is referred to as either the VMM or the
hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending on
the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V,
or it can assume a monolithic hypervisor architecture like the VMware ESX for server
virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable
components are outside the hypervisor. A monolithic hypervisor implements all the
aforementioned functions, including those of the device drivers. Therefore, the size of the
hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated
for the deployed VM to use.
1.1 The Xen Architecture
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-
kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor
implements all the mechanisms, leaving the policy to be handled by Domain 0, as shown in Figure
3.5. Xen does not include any device drivers natively [7]. It just provides a mechanism by which
a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor
is kept rather small. Xen provides a virtual environment located between the hardware and the OS.
A number of vendors are in the process of developing commercial Xen hypervisors, among them
are Citrix XenServer [62] and Oracle VM [42].

The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many guest
OSes can run on top of the hypervisor. However, not all guest OSes are created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain 0, and the
others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen
boots without any file system drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and
map hardware resources for the guest domains (the Domain U domains).
2. Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two
categories: full virtualization and host-based virtualization. Full virtualization does not need to
modify the host OS. It relies on binary translation to trap and to virtualize the execution of certain
sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of noncritical
and critical instructions. In a host-based system, both a host OS and a guest OS are used. A
virtualization software layer is built between the host OS and guest OS. These two classes of VM
architecture are introduced next.
2.1 Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only critical
instructions trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security of
the system, but critical instructions do. Therefore, running noncritical instructions on hardware not
only can promote efficiency, but also can ensure system security.
2.2 Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies. As shown in
Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive instructions. When
these instructions are identified, they are trapped into the VMM, which emulates the behavior of
these instructions.
The performance of full virtualization may not be ideal, because it involves binary translation
which is rather time-consuming. At the time of this writing, the performance of full virtualization
on the x86 architecture is typically 80 percent to 97 percent that of the host machine.
2.3 Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS. This host
OS is still responsible for managing the hardware. The guest OSes are installed and run on top of
the virtualization layer. Dedicated applications may run on the VMs. Certainly, some other
applications
can also run with the host OS directly. This host-based architecture has some distinct advantages,
as enumerated next. First, the user can install this VM architecture without modifying the host
OS. The virtualizing software can rely on the host OS to provide device drivers and other low-
level services. This will simplify the VM design and ease its deployment.
Second, the host-based approach appeals to many host machine configurations. Compared to the
hypervisor/VMM architecture, the performance of the host-based architecture may also be low.
3. Para-Virtualization with Compiler Support
Para-virtualization needs to modify the guest operating systems. A para-virtualized VM
provides special APIs requiring substantial OS modifications in user applications. Performance
degradation is a critical issue of a virtualized system. No one wants to use a VM if it is much
slower than using a physical machine. The virtualization layer can be inserted at different positions
in a machine software stack. However, para-virtualization attempts to reduce the virtualization
overhead, and thus improve performance by modifying only the guest OS kernel.
Figure 3.7 illustrates the concept of a paravirtualized VM architecture. The guest operating
systems are para-virtualized. The OS is responsible for managing the hardware and the privileged
instructions to execute at Ring 0, while user-level applications run at Ring 3. The best example of
para-virtualization is the KVM to be described below.
3.1 Para-Virtualization Architecture
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and
the OS. According to the x86 ring definition, the virtualization layer should also be installed at
Ring 0. Different instructions at Ring 0 may cause some problems. In Figure 3.8, we show that
para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate
directly with the hypervisor or VMM. However, when the guest OS kernel is modified for
virtualization, it can no longer run on the hardware directly.
Although para-virtualization reduces the overhead, it has incurred other problems. First, its
compatibility and portability may be in doubt, because it must support the unmodified OS as well.
Second, the cost of maintaining para-virtualized OSes is high, because they may require deep OS
kernel modifications. Finally, the performance advantage of para-virtualization varies greatly due
to workload variations. Compared with full virtualization, para-virtualization is relatively easy and
more practical. The main problem in full virtualization is its low performance in binary translation.
To speed up binary translation is difficult. Therefore, many virtualization products employ the
para-virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.
3.2 KVM (Kernel-Based VM)
This is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel. Memory
management and scheduling activities are carried out by the existing Linux kernel. The KVM does
the rest, which makes it simpler than the hypervisor that controls the entire machine. KVM is a
hardware-assisted para-virtualization tool, which improves performance and supports unmodified
guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
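As an illustration, a KVM host is commonly managed through the libvirt API. The short sketch below assumes the libvirt-python bindings are installed and a local qemu:///system hypervisor is available; it simply connects read-only and lists the defined guests.

import libvirt   # libvirt-python bindings

# Open a read-only connection to the local KVM/QEMU hypervisor.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("failed to connect to qemu:///system")

# List every defined guest (VM) and whether it is currently running.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():20s} {state}")

conn.close()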
3.3 Para-Virtualization with Compiler Support
Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive
instructions at runtime, para-virtualization handles these instructions at compile time. The guest
OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the
hypervisor or VMM. Xen assumes such a para-virtualization architecture.
The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies
that the guest OS may not be able to execute some privileged and sensitive instructions. The
privileged instructions are implemented by hypercalls to the hypervisor. After replacing the
instructions with hypercalls, the modified guest OS emulates the behavior of the original guest OS.
On a UNIX system, a system call involves an interrupt or service routine. The hypercalls apply a
dedicated service routine in Xen.
Example 3.3 VMware ESX Server for Para-Virtualization
VMware pioneered the software market for virtualization. The company has developed
virtualization tools for desktop systems and servers as well as virtual infrastructure for large data
centers. ESX is a VMM or a hypervisor for bare-metal x86 symmetric multiprocessing (SMP)
servers. It accesses hardware resources such as I/O directly and has complete resource
management control. An ESX-enabled server consists of four components: a virtualization layer,
a resource manager, hardware interface components, and a service console, as shown in Figure
3.9. To improve performance, the ESX server employs a para-virtualization architecture in which
the VM kernel interacts directly with the hardware without involving the host OS.
The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and
disk controllers, and human interface devices. Every VM has its own set of virtual hardware
resources. The resource manager allocates CPU, memory disk, and network bandwidth and maps
them to the virtual hardware resource set of each VM created. Hardware interface components are
the device drivers and the
VMware ESX Server File System. The service console is responsible for booting the system,
initiating the execution of the VMM and resource manager, and relinquishing control to those
layers. It also facilitates the process for system administrators.

4.8 VIRTUALIZATION OF CPU, MEMORY & I/O DEVICES

To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run
in different modes and all sensitive instructions of the guest OS and its applications are trapped in
the VMM. To save processor states, mode switching is completed by hardware. For the x86
architecture, Intel and AMD have proprietary technologies for hardware-assisted virtualization.

1. Hardware Support for Virtualization


Modern operating systems and processors permit multiple processes to run simultaneously. If there
is no protection mechanism in a processor, all instructions from different processes will access the
hardware directly and cause a system crash. Therefore, all processors have at least two modes, user
mode and supervisor mode, to ensure controlled access of critical hardware. Instructions running
in supervisor mode are called privileged instructions. Other instructions are unprivileged
instructions. In a virtualized environment, it is more difficult to make OSes and applications run
correctly because there are more layers in the machine stack. Example 3.4 discusses Intel’s
hardware support approach.
At the time of this writing, many hardware virtualization products were available. The VMware
Workstation is a VM software suite for x86 and x86-64 computers. This software suite allows
users to set up multiple x86 and x86-64 virtual computers and to use one or more of these VMs
simultaneously with the host operating system. The VMware Workstation assumes the host-based
virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
Actually, Xen modifies Linux as the lowest and most privileged layer, or a hypervisor.
One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual Machine)
is a Linux kernel virtualization infrastructure. KVM can support hardware-assisted virtualization
and paravirtualization by using the Intel VT-x or AMD-v and VirtIO framework, respectively. The
VirtIO framework includes a paravirtual Ethernet card, a disk I/O controller, a balloon device for
adjusting guest memory usage, and a VGA graphics interface using VMware drivers.
2. CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM instructions
are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run
directly on the host machine for higher efficiency. Other critical instructions should be handled
carefully for correctness and stability. The critical instructions are divided into three
categories: privileged instructions, control-sensitive instructions, and behavior-sensitive
instructions. Privileged instructions execute in a privileged mode and will be trapped if executed
outside this mode. Control-sensitive instructions attempt to change the configuration of resources
used. Behavior-sensitive instructions have different behaviors depending on the configuration of
resources, including the load and store operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode. When
the privileged instructions including control- and behavior-sensitive instructions of a VM are
executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for hardware
access from different VMs to guarantee the correctness and stability of the whole system.
However, not all CPU architectures are virtualizable. RISC CPU architectures can be naturally
virtualized because all control- and behavior-sensitive instructions are privileged instructions. On
the contrary, x86 CPU architectures are not primarily designed to support virtualization. This is
because about 10 sensitive instructions, such as SGDT and SMSW, are not privileged instructions.
When these instructions execute in virtualization, they cannot be trapped in the VMM.
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to the
OS kernel. The interrupt handler in the kernel is then invoked to process the system call. On a
para-virtualization system such as Xen, a system call in the guest OS first triggers the 80h interrupt
normally. Almost at the same time, the 82h interrupt in the hypervisor is triggered. Incidentally,
control is passed on to the hypervisor as well. When the hypervisor completes its task for the guest
OS system call, it passes control back to the guest OS kernel. Certainly, the guest OS kernel may
also invoke the hypercall while it’s running. Although paravirtualization of a CPU lets unmodified
applications run in the VM, it causes a small performance penalty.
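The trap-and-emulate idea behind this discussion can be sketched conceptually as follows. The instruction names and VM state in this toy Python loop are hypothetical and do not correspond to real x86 opcodes; the point is only that unprivileged instructions run straight through, while privileged or sensitive ones divert into a VMM routine that operates on the VM's virtual resources.

# Conceptual trap-and-emulate loop; instruction names and VM state are hypothetical.

PRIVILEGED = {"WRITE_CR3", "HLT", "OUT"}           # instructions that must trap

def vmm_emulate(vm_state, instr, operand):
    # The VMM mediates privileged access against the VM's *virtual* resources.
    if instr == "WRITE_CR3":
        vm_state["virtual_cr3"] = operand          # update the shadow/virtual register
    elif instr == "HLT":
        vm_state["halted"] = True
    elif instr == "OUT":
        vm_state["io_log"].append(operand)         # emulate the I/O port write

def run_guest(instruction_stream):
    vm_state = {"virtual_cr3": None, "halted": False, "io_log": []}
    for instr, operand in instruction_stream:
        if instr in PRIVILEGED:
            vmm_emulate(vm_state, instr, operand)  # trap into the VMM
        else:
            pass                                   # unprivileged: run natively on the CPU
        if vm_state["halted"]:
            break
    return vm_state

print(run_guest([("ADD", None), ("OUT", 0x42), ("WRITE_CR3", 0x1000), ("HLT", None)]))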
2.1 Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization because full or paravirtualization is complicated.
Intel and AMD add an additional mode called privilege mode level (some people call it Ring -1) to
x86 processors. Therefore, operating systems can still run at Ring 0 and the hypervisor can run at
Ring -1. All the privileged and sensitive instructions are trapped in the hypervisor automatically.
This technique removes the difficulty of implementing binary translation of full virtualization. It
also lets the operating system run in VMs without modification.
3. Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains mappings
of virtual memory to machine memory using page tables, which is a one-stage mapping from
virtual memory to machine memory. All modern x86 CPUs include a memory management unit
(MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance.
However, in a virtual execution environment, virtual memory virtualization involves sharing the
physical system memory in RAM and dynamically allocating it to the physical memory of the
VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses
of VMs. But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12
shows the two-level memory mapping procedure.
Since each page table of the guest OSes has a separate page table in the VMM corresponding
to it, the VMM page table is called the shadow page table. Nested page tables add another layer of
indirection to virtual memory. The MMU already handles virtual-to-physical translations as
defined by the OS. Then the physical memory addresses are translated to machine addresses using
another set of page tables defined by the hypervisor. Since modern operating systems maintain a
set of page tables for every process, the shadow page tables will get flooded. Consequently, the
performance overhead and cost of memory will be very high.
VMware uses shadow page tables to perform virtual-memory-to-machine-memory address
translation. Processors use TLB hardware to map the virtual memory directly to the machine
memory to avoid the two levels of translation on every access. When the guest OS changes the
virtual memory to a physical memory mapping, the VMM updates the shadow page tables to
enable a direct lookup. The AMD Barcelona processor has featured hardware-assisted memory
virtualization since 2007. It provides hardware assistance to the two-stage address translation in a
virtual execution environment by using a technology called nested paging.
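A minimal way to picture the two-stage mapping and the shadow page table is as two lookup tables composed into one, as in the Python sketch below. The page numbers are arbitrary examples; real MMUs of course do this in hardware with page tables and TLBs.

# Stage 1: guest OS page table   (guest virtual page  -> guest physical page)
guest_page_table = {0: 7, 1: 3}

# Stage 2: VMM mapping           (guest physical page -> machine page)
vmm_page_table = {7: 42, 3: 19}

# Shadow page table kept by the VMM: the composition of the two stages,
# so the hardware can translate guest-virtual -> machine in a single lookup.
shadow_page_table = {gv: vmm_page_table[gp] for gv, gp in guest_page_table.items()}

def translate(guest_virtual_page):
    return shadow_page_table[guest_virtual_page]

print(translate(0))   # guest virtual page 0 -> machine page 42
print(translate(1))   # guest virtual page 1 -> machine page 19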
4. I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is
the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world
devices.
All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as
a virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts
with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However,
software emulation runs much slower than the hardware it emulates [10,15]. The para-
virtualization method of I/O virtualization is typically used in Xen. It is also known as the split
driver model consisting of a frontend driver and a backend driver. The frontend driver is running
in Domain U and the backend driver is running in Domain 0. They interact with each other via a
block of shared memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of
different VMs. Although para-I/O-virtualization achieves better device performance than full
device emulation, it comes with a higher CPU overhead.
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs. However, current direct I/O virtualization implementations
focus on networking for mainframes. There are a lot of challenges for commodity hardware
devices. For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory
locations) that can function incorrectly or even crash the whole system. Since software-based I/O
virtualization requires a very high overhead of device emulation, hardware-assisted I/O
virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-
generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage
models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea of
SV-IO is to harness the rich resources of a multicore processor. All tasks associated with
virtualizing an I/O device are encapsulated in SV-IO. It provides virtual devices and an associated
access API to VMs and a management API to the VMM. SV-IO defines one virtual interface (VIF)
for every kind of virtualized I/O device, such as virtual network interfaces, virtual block devices
(disk), virtual camera devices, and others. The guest OS interacts with the VIFs via VIF device
drivers. Each VIF consists of two message queues. One is for outgoing messages to the devices
and the other is for incoming messages from the devices. In addition, each VIF has a unique ID
for identifying it in SV-IO.
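The split-driver/VIF idea can be sketched in a few lines of Python: a frontend puts I/O requests on an outgoing queue, and a backend services them and posts results on an incoming queue. The class and function names below are invented for illustration; real Xen drivers use shared-memory rings and event channels rather than in-process queues.

import queue

class VirtualInterface:
    """Toy VIF: one queue for outgoing requests, one for incoming responses."""
    def __init__(self, vif_id):
        self.vif_id = vif_id
        self.outgoing = queue.Queue()   # frontend (guest) -> backend (Domain 0)
        self.incoming = queue.Queue()   # backend -> frontend

# Frontend driver in the guest: queue an I/O request instead of touching hardware.
def frontend_write(vif, data):
    vif.outgoing.put(("write", data))

# Backend driver in Domain 0: multiplex requests from many VIFs onto the real device.
def backend_service(vif):
    op, data = vif.outgoing.get()
    result = f"device completed {op} of {data!r} for VIF {vif.vif_id}"
    vif.incoming.put(result)

vif = VirtualInterface(vif_id=1)
frontend_write(vif, b"block 17")
backend_service(vif)
print(vif.incoming.get())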
5. Virtualization in Multi-Core Processors
Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core
processor. Though multicore processors are claimed to have higher performance by integrating
multiple processor cores in a single chip, multi-core virtualization has raised some new challenges
to computer architects, compiler constructors, system designers, and application programmers.
There are mainly two difficulties: Application programs must be parallelized to use all cores fully,
and software must explicitly assign tasks to the cores, which is a very complex problem.
Concerning the first challenge, new programming models, languages, and libraries are needed
to make parallel programming easier. The second challenge has spawned research involving
scheduling algorithms and resource management policies. Yet these efforts cannot balance well
among performance, complexity, and other issues. What is worse, as technology scales, a new
challenge called dynamic heterogeneity is emerging to mix the fat CPU core and thin GPU cores
on the same chip, which further complicates the multi-core or many-core resource management.
The dynamic heterogeneity of hardware infrastructure mainly comes from less reliable transistors
and increased complexity in using the transistors [33,66].
5.1 Physical versus Virtual Processor Cores
Wells, et al. [74] proposed a multicore virtualization method to allow hardware designers to get an
abstraction of the low-level details of the processor cores. This technique alleviates the burden and
inefficiency of managing hardware resources by software. It is located under the ISA and remains
unmodified by the operating system or VMM (hypervisor). Figure 3.16 illustrates the technique of
a software-visible VCPU moving from one core to another and temporarily suspending execution
of a VCPU when there are no appropriate cores on which it can run.
5.2 Virtual Hierarchy
The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape.
Instead of supporting time-sharing jobs on one or a few cores, we can use the abundant cores in a
space-sharing manner, where single-threaded or multithreaded jobs are simultaneously assigned to
separate groups of cores for long time intervals. This idea was originally suggested by Marty and
Hill [39]. To optimize for space-shared workloads, they propose using virtual hierarchies to
overlay a coherence and caching hierarchy onto a physical processor. Unlike a fixed physical
hierarchy, a virtual hierarchy can adapt to fit how the work is space shared for improved
performance and performance isolation.
Today’s many-core CMPs use a physical hierarchy of two or more cache levels that statically
determine the cache allocation and mapping. A virtual hierarchy is a cache hierarchy that can adapt
to fit the workload or mix of workloads [39]. The hierarchy’s first level locates data blocks close
to the cores needing them for faster access, establishes a shared-cache domain, and establishes a
point of coherence for faster communication. When a miss leaves a tile, it first attempts to locate
the block (or sharers) within the first level. The first level can also provide isolation between
independent workloads. A miss at the L1 cache can invoke the L2 access.
The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to
three clusters of virtual cores: namely VM0 and VM3 for database workload, VM1 and VM2 for
web server workload, and VM4–VM7 for middleware workload. The basic assumption is that each
workload runs in its own VM. However, space sharing applies equally within a single operating
system. Statically distributing the directory among tiles can do much better, provided operating
systems or hypervisors carefully map virtual pages to physical frames. Marty and Hill suggested
a two-level virtual coherence and caching hierarchy that harmonizes with the assignment of tiles
to the virtual clusters of VMs.

Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each VM
operates in an isolated fashion at the first level. This will minimize both miss access time and
performance interference with other workloads or VMs. Moreover, the shared resources of cache
capacity, interconnect links, and miss handling are mostly isolated between VMs. The second
level maintains a globally shared memory. This facilitates dynamically repartitioning resources
without costly cache flushes. Furthermore, maintaining globally shared memory minimizes
changes to existing system software and allows virtualization features such as content-based page
sharing. A virtual hierarchy adapts to space-shared workloads like multiprogramming and server
consolidation. Figure 3.17 shows a case study focused on consolidated server workloads in a tiled
architecture. This many-core mapping scheme can also optimize for space-shared
multiprogrammed workloads in a single-OS environment.

4.9 DESKTOP VIRTUALIZATION

Desktop virtualization is technology that lets users simulate a workstation load to access a
desktop from a connected device remotely or locally. This separates the desktop environment
and its applications from the physical client device used to access it. Desktop virtualization is a
key element of digital workspaces and depends on application virtualization.
How does desktop virtualization work?
Desktop virtualization can be achieved in a variety of ways, but the most important two types of
desktop virtualization are based on whether the operating system instance is local or remote.
Local Desktop Virtualization
Local desktop virtualization means the operating system runs on a client device using
hardware virtualization, and all processing and workloads occur on local hardware. This
type of desktop virtualization works well when users do not need a continuous network
connection and can meet application computing requirements with local system resources.
However, because this requires processing to be done locally you cannot use local desktop
virtualization to share VMs or resources across a network to thin clients or mobile devices.
Remote Desktop Virtualization
Remote desktop virtualization is a common use of virtualization that operates in a
client/server computing environment. This allows users to run operating systems and
applications from a server inside a data center while all user interactions take place on a
client device. This client device could be a laptop, thin client device, or a smartphone. The
result is IT departments have more centralized control over applications and desktops, and
can maximize the organization’s investment in IT hardware through remote access to
shared computing resources.
What is virtual desktop infrastructure?
A popular type of desktop virtualization is virtual desktop infrastructure (VDI). VDI is a variant
of the client-server model of desktop virtualization which uses host-based VMs to deliver
persistent and nonpersistent virtual desktops to all kinds of connected devices. With a persistent
virtual desktop, each user has a unique desktop image that they can customize with apps and
data, knowing it will be saved for future use. A nonpersistent virtual desktop infrastructure
allows users to access a virtual desktop from an identical pool when they need it; once the user
logs out of a nonpersistent VDI, it reverts to its unaltered state. Some of the advantages of virtual
desktop infrastructure are improved security and centralized desktop management across an
organization.
What are the benefits of desktop virtualization?
1. Resource Management:
Desktop virtualization helps IT departments get the most out of their hardware investments by
consolidating most of their computing in a data center. Desktop virtualization then allows
organizations to issue lower-cost computers and devices to end users because most of the
intensive computing work takes place in the data center. By minimizing how much computing is
needed at the endpoint devices for end users, IT departments can save money by buying less
costly machines.
2. Remote work:
Desktop virtualization helps IT admins support remote workers by giving IT central control over
how desktops are virtually deployed across an organization’s devices. Rather than manually
setting up a new desktop for each user, desktop virtualization allows IT to simply deploy a
ready-to-go virtual desktop to that user’s device. Now the user can interact with the operating
system and applications on that desktop from any location and the employee experience will be
the same as if they were working locally. Once the user is finished using this virtual desktop,
they can log off and return that desktop image to the shared pool.
3. Security:
Desktop virtualization software provides IT admins centralized security control over which users
can access which data and which applications. If a user’s permissions change because they leave
the company, desktop virtualization makes it easy for IT to quickly remove that user’s access to
their persistent virtual desktop and all its data—instead of having to manually uninstall
everything from that user’s devices. And because all company data lives inside the data center
rather than on each machine, a lost or stolen device does not pose the same data risk. If someone
steals a laptop that uses desktop virtualization, there is no company data on the actual machine and
hence less risk of a breach.

4.10 SERVER VIRTUALIZATION

It is the division of a physical server into several virtual servers, and this division is mainly done to
improve the utilization of server resources. In other words, it is the masking of server resources,
which includes the number and identity of processors, physical servers and operating systems. This
division of one physical server into multiple isolated virtual servers is done by the server
administrator using software. The virtual environment is sometimes called a virtual private server.

In this process, the server resources are kept hidden from the user. This partitioning of the physical
server into several virtual environments results in the dedication of one server to performing a
single application or task.

Usage of Server Virtualization

This technique is mainly used in web servers, and it reduces the cost of web-hosting services.
Instead of having a separate system for each web server, multiple virtual servers can run on the
same system/computer.

• To centralize the server administration


• Improve the availability of server
• Helps in disaster recovery
• Ease in development & testing
• Make efficient use of server resources.

Approaches To Virtualization:

For Server Virtualization, there are three popular approaches.


These are:

• Virtual Machine model


• Para-virtual Machine model
• Operating System (OS) layer Virtualization

1. Virtual Machine model: This model is based on the host-guest paradigm, where each guest runs
on a virtual replica of the hardware layer. This technique of virtualization allows the guest OS to
run without modification. However, it requires real computing resources from the host, and for this
a hypervisor or VMM is required to coordinate instructions to the CPU.
2. Para-Virtual Machine model: This model is also based on the host-guest paradigm and uses a
virtual machine monitor too. In this model the VMM modifies the guest operating system's code,
which is called 'porting'. Like the virtual machine model, the para-virtual machine model is also
capable of executing multiple operating systems. The para-virtual model is used by both Xen and
UML.
3. Operating System Layer Virtualization: Virtualization at the OS level functions in a different
way and is not based on the host-guest paradigm. In this model the host runs a single operating
system kernel as its core and transfers its functionality to each of the guests. The guests must use
the same operating system as the host. This distributed nature of the architecture eliminates system
calls between layers and hence reduces the overhead of CPU usage. It is also required that each
partition remains strictly isolated from its neighbors, so that any failure or security breach in one
partition cannot affect the other partitions.

Advantages of Server Virtualization

• Cost Reduction: Server virtualization reduces cost because less hardware is required.
• Independent Restart: Each server can be rebooted independently and that reboot won't affect
the working of other virtual servers.

4.11 GOOGLE APP ENGINE

Google App Engine (GAE) is a service for developing and hosting Web applications in Google's
data centers, belonging to the platform as a service (PaaS) category of cloud computing. Web
applications hosted on GAE are sandboxed and run across multiple servers for redundancy,
allowing resources to scale according to the traffic requirements of the moment. App
Engine automatically allocates additional resources to the servers to accommodate increased
load.
Google App Engine is Google's platform as a service offering that allows developers and
businesses to build and run applications using Google's advanced infrastructure. These
applications are required to be written in one of a few supported languages, namely: Java,
Python, PHP and Go. It also requires the use of Google query language and that the database
used is Google Big Table. Applications must abide by these standards, so applications either
must be developed with GAE in mind or else modified to meet the requirements.

GAE is a platform, so it provides all of the required elements to run and host Web applications,
be it on mobile or Web. Without this all-in feature, developers would have to source their own
servers, database software and the APIs that would make all of them work properly together, not
to mention the entire configuration that must be done. GAE takes this burden off the developers
so they can concentrate on the app front end and functionality, driving better user experience.

Advantages of GAE include:

• Readily available servers with no configuration requirement


• Power scaling function all the way down to "free" when resource usage is minimal
• Automated cloud computing tools
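
A minimal App Engine deployment in the Python standard environment usually consists of little more than a small web application plus an app.yaml file. The sketch below is a generic example along those lines; the runtime version and project configuration are assumptions rather than requirements of GAE itself.

# main.py -- a minimal Flask app that App Engine can serve.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Google App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine runs the app for you.
    app.run(host="127.0.0.1", port=8080)

app.yaml can then be as small as a single line naming the runtime (for example, runtime: python39), with Flask listed in requirements.txt; deploying with gcloud app deploy lets App Engine serve and scale the application automatically.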

4.12 AMAZON AWS

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses in
the form of web services -- now commonly known as cloud computing. One of the key benefits
of cloud computing is the opportunity to replace up-front capital infrastructure expenses with
low variable costs that scale with your business. With the Cloud, businesses no longer need to
plan for and procure servers and other IT infrastructure weeks or months in advance. Instead,
they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Today, Amazon Web Services provides a highly reliable, scalable, low-cost infrastructure
platform in the cloud that powers hundreds of thousands of businesses in 190 countries around
the world. With data center locations in the U.S., Europe, Brazil, Singapore, Japan, and
Australia, customers across all industries are taking advantage of the following benefits:

Low Cost
AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term commitments.
We are able to build and manage a global infrastructure at scale, and pass the cost saving benefits
onto you in the form of lower prices. With the efficiencies of our scale and expertise, we have
been able to lower our prices on 15 different occasions over the past four years. Visit the
Economics Center to learn more.

Agility and Instant Elasticity


AWS provides a massive global cloud infrastructure that allows you to quickly innovate,
experiment and iterate. Instead of waiting weeks or months for hardware, you can instantly
deploy new applications, instantly scale up as your workload grows, and instantly scale down
based on demand. Whether you need one virtual server or thousands, whether you need them for
a few hours or 24/7, you still only pay for what you use.
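
As an illustration of this on-demand model, the sketch below uses the boto3 SDK to launch and then release a single small virtual server programmatically. The AMI ID and region are placeholders, and AWS credentials are assumed to be configured already.

import boto3

# Assumes AWS credentials are already configured (e.g. via environment variables).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID -- substitute your own
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# When the workload is done, release the capacity -- you pay only for what you use.
ec2.terminate_instances(InstanceIds=[instance_id])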

Open and Flexible

AWS is a language and operating system agnostic platform. You choose the development
platform or programming model that makes the most sense for your business. You can choose
which services you use, one or several, and choose how you use them. This flexibility allows you
to focus on innovation, not infrastructure

Secure
AWS is a secure, durable technology platform with industry-recognized certifications and audits:
PCI DSS Level 1, ISO 27001, FISMA Moderate, FedRAMP, HIPAA, and SOC 1 (formerly
referred to as SAS 70 and/or SSAE 16) and SOC 2 audit reports. Our services and data centers
have multiple layers of operational and physical security to ensure the integrity and safety of
your data.

Solutions
The AWS cloud computing platform provides the flexibility to launch your application
regardless of your use case or industry. Learn more about popular solutions customers are
running on AWS:

Application Hosting
Use reliable, on-demand infrastructure to power your applications, from hosted internal
applications to SaaS offerings.

Websites
Satisfy your dynamic web hosting needs with AWS’s scalable infrastructure platform.

Backup and Storage


Store data and build dependable backup solutions using AWS’s inexpensive data storage
services.

Enterprise IT
Host internal- or external-facing IT applications in AWS's secure environment.

Content Delivery
Quickly and easily distribute content to end users worldwide, with low costs and high data
transfer speeds.

Databases
Take advantage of a variety of scalable database solutions, from hosted enterprise database
software to non-relational database solutions.

4.13 FEDERATION IN THE CLOUD

Cloud Federation, also known as Federated Cloud, is the deployment and management of several
external and internal cloud computing services to match business needs. It is a multi-cloud
system that integrates private, community, and public clouds into scalable computing
platforms. A federated cloud is created by connecting the cloud environments of different cloud
providers using a common standard.
Federated Cloud

The architecture of Federated Cloud:

The architecture of Federated Cloud consists of three basic components:


1. Cloud Exchange
The Cloud Exchange acts as a mediator between cloud coordinator and cloud broker. The
demands of the cloud broker are mapped by the cloud exchange to the available services
provided by the cloud coordinator. The cloud exchange keeps track of the present cost, demand
patterns, and available cloud providers, and this information is periodically refreshed by the cloud
coordinator.
2. Cloud Coordinator
The cloud coordinator assigns the resources of the cloud to the remote users based on the quality
of service they demand and the credits they have in the cloud bank. The cloud enterprises and
their membership are managed by the cloud controller.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator, analyzes the Service-level agreement and
the resources offered by several cloud providers in cloud exchange. Cloud broker finalizes the
most suitable deal for their client.

Properties of Federated Cloud:

1. In the federated cloud, the users can interact with the architecture either centrally or in a
decentralized manner. In centralized interaction, the user interacts with a broker to mediate
between them and the organization. Decentralized interaction permits the user to interact
directly with the clouds in the federation.
2. Federated cloud can be practiced with various niches like commercial and non-commercial.
3. The visibility of a federated cloud assists the user to interpret the organization of several
clouds in the federated environment.
4. Federated cloud can be monitored in two ways. MaaS (Monitoring as a Service) provides
information that aids in tracking contracted services to the user. Global monitoring aids in
maintaining the federated cloud.
5. The providers who participate in the federation publish their offers to a central entity. The
user interacts with this central entity to verify the prices and propose an offer.
6. The marketing objects like infrastructure, software, and platform have to pass through
federation when consumed in the federated cloud.
Federated Cloud Architecture

Benefits of Federated Cloud:

1. It minimizes the consumption of energy.


2. It increases reliability.
3. It minimizes the time and cost of providers due to dynamic scalability.
4. It connects various cloud service providers globally. The providers may buy and sell services
on demand.
5. It provides easy scaling up of resources.

Challenges in Federated Cloud:

1. In cloud federation, it is common to have more than one provider for processing the incoming
demands. In such cases, there must be a scheme to distribute the incoming demands
equally among the cloud service providers.
2. The increasing requests in cloud federation have resulted in more heterogeneous
infrastructure, making interoperability an area of concern. It becomes a challenge for cloud
users to select relevant cloud service providers and therefore, it ties them to a particular cloud
service provider.
3. A federated cloud means constructing a seamless cloud environment that can interact with
people, different devices, several application interfaces, and other entities.

Federated Cloud technologies:


The technologies that aid the cloud federation and cloud services are:
1. OpenNebula
It is a cloud computing platform for managing heterogeneous distributed data center
infrastructures. It can use the resources of its interoperability, leveraging existing information
technology assets, protecting the deals, and adding the application programming interface (API).
2. Aneka coordinator
The Aneka coordinator is a proposition of the Aneka services and Aneka peer components
(network architectures) which give the cloud ability and performance to interact with other cloud
services.
3. Eucalyptus
Eucalyptus pools computational, storage, and network resources that can be scaled up or down as
application workloads change. It is an open-source framework that provides the storage, network,
and many other computational resources needed to access the cloud environment.
UNIT V MICROSERVICES AND DEVOPS

Defining Micro services - Emergence of Micro service Architecture – Design patterns of
Micro services – The Mini web service architecture – Micro service dependency tree –
Challenges with Micro services - SOA vs. Micro service – Micro service and API – Deploying
And maintaining Micro services – Reason for having DevOps – Overview of DevOps –
History of DevOps – Concepts and terminology in DevOps – Core elements of DevOps –
Life cycle of DevOps – Adoption of DevOps - DevOps Tools – Build, Promotion and
Deployment in DevOps - DevOps in Business Enterprises

DevOps teams encapsulate individual pieces of functionality in micro services and build larger
systems by composing the micro services like building blocks. ... Using micro services can
increase team velocity. DevOps practices, such as Continuous Integration and Continuous
Delivery, are used to drive micro service deployments
5.1 DEFINING MICRO SERVICES
Micro services are an architectural and organizational approach to software development where
software is composed of small independent services that communicate over well-defined APIs.
These services are owned by small, self-contained teams.
Micro services architectures make applications easier to scale and faster to develop, enabling
innovation and accelerating time-to-market for new features.
Monolithic vs. Micro services Architecture
With monolithic architectures, all processes are tightly coupled and run as a single service. This
means that if one process of the application experiences a spike in demand, the entire
architecture must be scaled.
Adding or improving a monolithic application’s features becomes more complex as the code
base grows. This complexity limits experimentation and makes it difficult to implement new
ideas.
Monolithic architectures add risk for application availability because many dependent and
tightly coupled processes increase the impact of a single process failure.
With a microservices architecture, an application is built as independent components that run
each application process as a service.
These services communicate via a well-defined interface using lightweight APIs.
Services are built for business capabilities and each service performs a single function. Because
they are independently run, each service can be updated, deployed, and scaled to meet demand
for specific functions of an application.
Breaking a monolithic application into micro services
Characteristics of Micro services

Autonomous
Each component service in micro services architecture can be developed, deployed, operated,
and scaled without affecting the functioning of other services. Services do not need to share any
of their code or implementation with other services. Any communication between individual
components happens via well-defined APIs.

Specialized
Each service is designed for a set of capabilities and focuses on solving a specific problem. If
developers contribute more code to a service over time and the service becomes complex, it can
be broken into smaller services.
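As a deliberately tiny example of such a specialized service, the Flask sketch below exposes one business capability (order lookup) over a well-defined HTTP API; the endpoint, port and data are hypothetical.

# order-service: a single-purpose microservice exposing one capability over HTTP.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this data would live in the service's own database.
ORDERS = {1: {"item": "book", "quantity": 2}}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "order not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    # Other services call this API over the network; they never share this code.
    app.run(port=5001)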
Benefits of Micro services

Agility
Microservices foster an organization of small, independent teams that take ownership of their
services. Teams act within a small and well understood context, and are empowered to work
more independently and more quickly. This shortens development cycle times. You benefit
significantly from the aggregate throughput of the organization.

Flexible Scaling
Microservices allow each service to be independently scaled to meet demand for the application
feature it supports. This enables teams to right-size infrastructure needs, accurately measure the
cost of a feature, and maintain availability if a service experiences a spike in demand.
Easy Deployment
Microservices enable continuous integration and continuous delivery, making it easy to try out
new ideas and to roll back if something doesn’t work. The low cost of failure enables
experimentation, makes it easier to update code, and accelerates time-to-market for new features.

Technological Freedom
Microservices architectures don’t follow a “one size fits all” approach. Teams have the freedom
to choose the best tool to solve their specific problems. As a consequence, teams building
microservices can choose the best tool for each job.

Reusable Code
Dividing software into small, well-defined modules enables teams to use functions for multiple
purposes. A service written for a certain function can be used as a building block for another
feature. This allows an application to bootstrap off itself, as developers can create new
capabilities without writing code from scratch.

Resilience
Service independence increases an application’s resistance to failure. In a monolithic
architecture, if a single component fails, it can cause the entire application to fail. With micro
services, applications handle total service failure by degrading functionality and not crashing the
entire application.
5.2 EMERGENCE OF MICRO SERVICE ARCHITECTURE
Several years back, the ‘cloud’ was considered the future of application development. While this
is still the case today, savvy experts say that the key to modernizing application development lies
in one particular cloud computing component: cloud-native technology.
This is an all-new approach that involves building applications using cloud-based technologies
only from bottom up. A cloud native application is distinct from cloud-based and cloud-enabled
apps in that the former is built and hosted on a cloud service from birth. Conversely, cloud-based
and cloud-enabled applications are originally developed on traditional architecture before being
reconfigured for execution in the cloud.
Combined with cloud native security, this new architecture continues to be a key component of
digital transformation for its excellent scalability, adaptability, and portability. Additionally,
cloud native technology is all about speed and agility. These 2 are among the most crucial
requirements in this fast-paced era where end-users want rapidly responsive applications with
lots of features but zero downtime.
Cloud Native Technology Foundational Pillars
The cloud native architecture is based on several critical components: the most critical of which
are the cloud infrastructure itself, microservices, modern design, containerization,
automation, and backing services.
For this post, we’re going to expound on microservices as they are considered the core of cloud
native architecture.
What are Microservices?
Microservices in cloud native architecture is building an application by separating each of its
functionalities into multiple independent services. These services are engineered to handle an
individual task in the application ranging from implementing capabilities to running processes.
Although these small software services operate largely independently, they are part of a more
extensive architecture and are made to ensure excellent communication and data exchange
between themselves.
A Peep at the Era before Microservices
The concept of microservices was conceived from the dire need to simplify the monolithic
application architecture.
Before developers embraced the idea of creating applications in separate software services, each
application comprised several layers built in a single stack and coded in one monolithic server.
These layers can be:
1. Authorization layer
2. Presentation layer- handles HTTP requests
3. Application layer- business logic
4. Database layer- responsible for retrieving data from the database
5. Application integration- creates links with other services or data sources
6. Notification layer- for sending notifications
The main challenges of this traditional application architecture set in when the business expands,
making it necessary to handle and process hundreds, thousands, or millions of users at once. The
obvious solution for this is vertical scaling, which involves installing powerful hardware to
sustain heavier applications. You may also make the server more capable, for instance, by
boosting its processing power and adding memory and storage. However, there’s only so much
that you can do to scale the infrastructure this way.
Secondly, the monolithic architecture creates a lot of areas where things can go wrong. What’s
worse is that a bug in one of the modules can bring the entire system down. Furthermore,
since this architecture does not have self-healing properties, damage in one of its components
requires human intervention.
The other major drawback of monolithic applications is that you can’t make changes to a single
module without affecting the entire application. This makes it necessary to redeploy the whole
system, even when making a simple upgrade in one of the components.
How are Micro services Different?
The significant advantage of micro services is that each service can be deployed in its server with
dedicated resources. Even when deployed in a single server, each service can be made to reside
in a dedicated container. A container is technically a server by itself in that it features all the
executables, libraries, configuration files, and the binary code that each service requires to
function.
Microservices are expected to be among the cloud computing trends for 2021 because they
allow continuous delivery (CD), which enables producing software at any time and in short
cycles. One of the reasons why microservices enhance CD is the ability to upgrade one
microservice independently without affecting the other services.
Secondly, containers are self-healing, which means that they can auto-scale, auto-replicate, and
auto-restart without the need for human intervention. The real advantage here is that problems in
the application and infrastructure don’t cause downtimes and are fixed quickly.
Advantages of Microservices Architecture
• Faster deployments– microservices promote continuous delivery, which is releasing complex
applications within a short time.
• Continuous integration/continuous delivery (CI/CD)– this allows quick updates on code.
• Improved testability– because each service is small.
• Fault isolation– deploying each service in an independent server or container isolates faults.
This means that an issue with one module won’t affect the performance of the entire application.
• Improved scalability– separating the services allows horizontal scalability whenever necessary
as opposed to scaling the entire system. This helps a lot in cost-saving.
• Easy to understand– credit to the added simplicity, the IT team can easily understand
everything about each microservice and how it works.
• Enhanced security– because each service is a small independent component, the attack surface
is greatly reduced. An attack on one of the services is less likely to spread to other services as
they are isolated from each other.
Challenges of Microservices
While there’s no doubt that microservice architecture is ushering us into the future of application
development, it has a good share of its challenges, too:
• Testing takes time compared to monolith applications.
• Microservices are a distributed system, which developers find complex to build.
• Because services are hosted in independent servers or containers, there is a need to create extra
code to facilitate effective communication between the modules.
• Most IDEs (developer tools) are meant for creating monolithic applications and lack enough
support for distributed micro services.
• Deploying a distributed system of a cluster of services is generally complex.
• It’s not ideal for legacy applications as the process of re-architecting the codebase can be both
complex and costly.

5.3 DESIGN PATTERNS OF MICRO SERVICES


Microservice architecture has become the de facto choice for modern application development.
Though it solves certain problems, it is not a silver bullet. It has several drawbacks and when
using this architecture, there are numerous issues that must be addressed. This brings about the
need to learn common patterns in these problems and solve them with reusable solutions.
Thus, design patterns for microservices need to be discussed. Before we dive into the design
patterns, we need to understand on what principles microservice architecture has been built:

1. Scalability
2. Availability
3. Resiliency
4. Independent, autonomous
5. Decentralized governance
6. Failure isolation
7. Auto-Provisioning
8. Continuous delivery through DevOps

Applying all these principles brings several challenges and issues. Let's discuss those problems
and their solutions.

1. Decomposition Patterns

a. Decompose by Business Capability

Problem
Microservices is all about making services loosely coupled, applying the single responsibility
principle. However, breaking an application into smaller pieces has to be done logically. How do
we decompose an application into small services?
Solution
One strategy is to decompose by business capability. A business capability is something that a
business does in order to generate value. The set of capabilities for a given business depend on
the type of business. For example, the capabilities of an insurance company typically include
sales, marketing, underwriting, claims processing, billing, compliance, etc. Each business
capability can be thought of as a service, except it’s business-oriented rather than technical.

b. Decompose by Subdomain

Problem
Decomposing an application using business capabilities might be a good start, but you will come
across so-called "God Classes" which will not be easy to decompose. These classes will be
common among multiple services. For example, the Order class will be used in Order
Management, Order Taking, Order Delivery, etc. How do we decompose them?

Solution
For the "God Classes" issue, DDD (Domain-Driven Design) comes to the rescue. It uses
subdomains and bounded context concepts to solve this problem. DDD breaks the whole domain
model created for the enterprise into subdomains. Each subdomain will have a model, and the
scope of that model will be called the bounded context. Each microservice will be developed
around the bounded context.

Note: Identifying subdomains is not an easy task. It requires an understanding of the business.
Like business capabilities, subdomains are identified by analyzing the business and its
organizational structure and identifying the different areas of expertise.

c. Strangler Pattern

Problem
So far, the design patterns we talked about were decomposing applications for greenfield, but
80% of the work we do is with brownfield applications, which are big, monolithic applications.
Applying all the above design patterns to them will be difficult because breaking them into
smaller pieces at the same time it's being used live is a big task.

Solution
The Strangler pattern comes to the rescue. The Strangler pattern is based on an analogy to a vine
that strangles a tree that it’s wrapped around. This solution works well with web applications,
where a call goes back and forth, and for each URI call, a service can be broken into different
domains and hosted as separate services. The idea is to do it one domain at a time. This creates
two separate applications that live side by side in the same URI space. Eventually, the newly
refactored application “strangles” or replaces the original application until finally you can shut
off the monolithic application.

2. Integration Patterns

a. API Gateway Pattern

Problem
When an application is broken down to smaller microservices, there are a few concerns that need
to be addressed:

1. How to call multiple microservices while abstracting away producer information.


2. On different channels (like desktop, mobile, and tablets), apps need different data to
respond for the same backend service, as the UI might be different.

3. Different consumers might need a different format of the responses from reusable
microservices. Who will do the data transformation or field manipulation?

4. How to handle different types of protocols, some of which might not be supported by the
producer microservice.

Solution
An API Gateway helps to address many concerns raised by microservice implementation, not
limited to the ones above.

1. An API Gateway is the single point of entry for any microservice call.

2. It can work as a proxy service to route a request to the concerned microservice, abstracting
the producer details.

3. It can fan out a request to multiple services and aggregate the results to send back to the
consumer.

4. One-size-fits-all APIs cannot solve all the consumer's requirements; this solution can
create a fine-grained API for each specific type of client.

5. It can also convert the protocol request (e.g. AMQP) to another protocol (e.g. HTTP) and
vice versa so that the producer and consumer can handle it.

6. It can also offload the authentication/authorization responsibility of the microservice.
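
A minimal sketch of the single-entry-point idea, assuming Spring Cloud Gateway as one possible implementation; the route names and backing service names (order-service, payment-service) are illustrative:

// A sketch of single-entry-point routing using Spring Cloud Gateway
// (one possible API Gateway implementation; service names are illustrative).
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // All order traffic enters here and is proxied to the order microservice.
                .route("orders", r -> r.path("/api/orders/**")
                        .filters(f -> f.stripPrefix(1))      // hide the internal URI layout from clients
                        .uri("lb://order-service"))          // "lb://" resolves via service discovery
                // Payment traffic is routed to a different producer, invisible to the consumer.
                .route("payments", r -> r.path("/api/payments/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("lb://payment-service"))
                .build();
    }
}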

b. Aggregator Pattern

Problem
We have talked about resolving the aggregating data problem in the API Gateway Pattern.
However, we will talk about it here holistically. When breaking the business functionality into
several smaller logical pieces of code, it becomes necessary to think about how to collaborate the
data returned by each service. This responsibility cannot be left with the consumer, as then it
might need to understand the internal implementation of the producer application.

Solution
The Aggregator pattern helps to address this. It talks about how we can aggregate the data from
different services and then send the final response to the consumer. This can be done in two
ways:

1. A composite microservice will make calls to all the required microservices, consolidate the
data, and transform the data before sending back.

2. An API Gateway can also partition the request to multiple microservices and aggregate the
data before sending it to the consumer.

It is recommended to choose a composite microservice if any business logic needs to be applied;
otherwise, the API Gateway is the established solution.
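
A minimal sketch of the composite-microservice variant, assuming Spring's RestTemplate and hypothetical order and shipping services; the URLs and the merged response shape are illustrative only:

// A sketch of the composite-microservice flavour of the Aggregator pattern:
// it fans out to two downstream services and merges their replies.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class OrderSummaryController {

    private final RestTemplate rest = new RestTemplate();

    @GetMapping("/order-summary/{orderId}")
    public String orderSummary(@PathVariable String orderId) {
        // Call the producer services; each owns its own data.
        String order    = rest.getForObject("http://order-service/orders/" + orderId, String.class);
        String shipping = rest.getForObject("http://shipping-service/shipments/" + orderId, String.class);

        // Consolidate and transform before replying to the consumer,
        // so the consumer never needs to know the producers' internals.
        return "{ \"order\": " + order + ", \"shipping\": " + shipping + " }";
    }
}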

c. Client-Side UI Composition Pattern


Problem
When services are developed by decomposing business capabilities/subdomains, the services
responsible for user experience have to pull data from several microservices. In the monolithic
world, there used to be only one call from the UI to a backend service to retrieve all data and
refresh/submit the UI page. However, now it won't be the same. We need to understand how to
do it.

Solution
With microservices, the UI has to be designed as a skeleton with multiple sections/regions of the
screen/page. Each section will make a call to an individual backend microservice to pull the data.
That is called composing UI components specific to service. Frameworks like AngularJS and
ReactJS help to do that easily. These screens are known as Single Page Applications (SPA). This
enables the app to refresh a particular region of the screen instead of the whole page.

3. Database Patterns

a. Database per Service

Problem
There is a problem of how to define database architecture for microservices. Following are the
concerns to be addressed:

1. Services must be loosely coupled. They can be developed, deployed, and scaled
independently.

2. Business transactions may enforce invariants that span multiple services.

3. Some business transactions need to query data that is owned by multiple services.

4. Databases must sometimes be replicated and sharded in order to scale.

5. Different services have different data storage requirements.

Solution
To solve the above concerns, one database per microservice must be designed; it must be private
to that service only. It should be accessed by the microservice API only. It cannot be accessed by
other services directly. For example, for relational databases, we can use private-tables-per-
service, schema-per-service, or database-server-per-service. Each microservice should have a
separate database id so that separate access can be given to put up a barrier and prevent it from
using other service tables.

b. Shared Database per Service

Problem
We have talked about one database per service being ideal for microservices, but that is possible
when the application is greenfield and developed with DDD from the start. If the application is a
monolith that is being broken into micro services, denormalization is not that easy. What is the
suitable architecture in that case?

Solution
A shared database per service is not ideal, but that is the working solution for the above scenario.
Most people consider this an anti-pattern for microservices, but for brownfield applications, this
is a good start to break the application into smaller logical pieces. This should not be applied for
greenfield applications. In this pattern, one database can be aligned with more than one
microservice, but it has to be restricted to 2-3 maximum, otherwise scaling, autonomy, and
independence will be challenging to execute.

c. Command Query Responsibility Segregation (CQRS)

Problem
Once we implement database-per-service, there is a requirement to query, which requires joint
data from multiple services — it's not possible. Then, how do we implement queries in
microservice architecture?

Solution
CQRS suggests splitting the application into two parts — the command side and the query side.
The command side handles the Create, Update, and Delete requests. The query side handles the
query part by using the materialized views. The event sourcing pattern is generally used along
with it to create events for any data change. Materialized views are kept updated by subscribing
to the stream of events.
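
A minimal, framework-free sketch of the split, assuming an in-memory event bus as a stand-in for a real broker; class and field names are illustrative:

// A minimal CQRS sketch: the command side emits events on every change,
// and the query side keeps a materialized view up to date by subscribing to them.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CqrsSketch {

    record OrderCreated(String orderId, String customerId, double total) {}

    // Command side: handles Create/Update/Delete and publishes events.
    static class OrderCommandHandler {
        private final List<Consumer<OrderCreated>> subscribers = new ArrayList<>();

        void subscribe(Consumer<OrderCreated> s) { subscribers.add(s); }

        void createOrder(String orderId, String customerId, double total) {
            // ...persist to the command-side store here...
            subscribers.forEach(s -> s.accept(new OrderCreated(orderId, customerId, total)));
        }
    }

    // Query side: a materialized view optimized for reads.
    static class OrderSummaryView {
        private final Map<String, Double> totalPerCustomer = new HashMap<>();

        void on(OrderCreated e) {
            totalPerCustomer.merge(e.customerId(), e.total(), Double::sum);
        }

        double totalFor(String customerId) {
            return totalPerCustomer.getOrDefault(customerId, 0.0);
        }
    }

    public static void main(String[] args) {
        OrderCommandHandler commands = new OrderCommandHandler();
        OrderSummaryView view = new OrderSummaryView();
        commands.subscribe(view::on);

        commands.createOrder("o-1", "c-42", 99.50);
        System.out.println(view.totalFor("c-42"));  // 99.5
    }
}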

d. Saga Pattern

Problem
When each service has its own database and a business transaction spans multiple services, how
do we ensure data consistency across services? For example, for an e-commerce application
where customers have a credit limit, the application must ensure that a new order will not exceed
the customer’s credit limit. Since Orders and Customers are in different databases, the
application cannot simply use a local ACID transaction.

Solution
A Saga represents a high-level business process that consists of several sub requests, which each
update data within a single service. Each request has a compensating request that is executed
when the request fails. It can be implemented in two ways:

1. Choreography — When there is no central coordination, each service produces and listens to
another service’s events and decides if an action should be taken or not.

2. Orchestration — An orchestrator (object) takes responsibility for a saga’s decision making and
sequencing business logic.
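
A minimal sketch of an orchestrated saga for the order/credit example above; the two service interfaces are illustrative stand-ins for remote calls:

// A sketch of an orchestrated saga: each step has a compensating action
// that runs if a later step fails.
public class CreateOrderSaga {

    interface OrderService  { String createOrder(String customerId, double total); void cancelOrder(String orderId); }
    interface CreditService { void reserveCredit(String customerId, double total); }

    private final OrderService orders;
    private final CreditService credit;

    public CreateOrderSaga(OrderService orders, CreditService credit) {
        this.orders = orders;
        this.credit = credit;
    }

    public boolean execute(String customerId, double total) {
        // Step 1: create the order in a PENDING state (local transaction in the order service).
        String orderId = orders.createOrder(customerId, total);
        try {
            // Step 2: reserve credit in the customer service (its own local transaction).
            credit.reserveCredit(customerId, total);
            return true;                       // saga completed, order can be approved
        } catch (RuntimeException creditRejected) {
            // Compensating request: undo step 1 so the system stays consistent.
            orders.cancelOrder(orderId);
            return false;
        }
    }
}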

4. Observability Patterns

a. Log Aggregation

Problem
Consider a use case where an application consists of multiple service instances that are running
on multiple machines. Requests often span multiple service instances. Each service instance
generates a log file in a standardized format. How can we understand the application behavior
through logs for a particular request?

Solution
We need a centralized logging service that aggregates logs from each service instance. Users can
search and analyze the logs. They can configure alerts that are triggered when certain messages
appear in the logs. For example, PCF has Loggregator, which collects logs from each
component (router, controller, Diego, etc.) of the PCF platform along with applications. AWS
CloudWatch does the same.

b. Performance Metrics

Problem
When the service portfolio increases due to microservice architecture, it becomes critical to keep
a watch on the transactions so that patterns can be monitored and alerts sent when an issue
happens. How should we collect metrics to monitor application performance?

Solution
A metrics service is required to gather statistics about individual operations. It should aggregate
the metrics of an application service, which provides reporting and alerting. There are two
models for aggregating metrics:

• Push — the service pushes metrics to the metrics service, e.g. New Relic, AppDynamics

• Pull — the metrics service pulls metrics from the service, e.g. Prometheus
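
A minimal sketch of instrumenting an operation, assuming the Micrometer facade (which supports both pull backends such as Prometheus and push backends such as New Relic); metric and tag names are illustrative:

// A sketch of gathering statistics about an individual operation with Micrometer.
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

@Service
public class PaymentMetrics {

    private final MeterRegistry registry;

    public PaymentMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    public void recordPayment(Runnable paymentCall) {
        Timer timer = Timer.builder("payment.latency")        // latency distribution for alerting
                .tag("service", "payment-service")
                .register(registry);

        timer.record(paymentCall);                             // time the downstream operation
        registry.counter("payment.requests").increment();      // count every attempt
    }
}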

c. Distributed Tracing

Problem
In microservice architecture, requests often span multiple services. Each service handles a
request by performing one or more operations across multiple services. Then, how do we trace a
request end-to-end to troubleshoot the problem?

Solution
We need a service which

• Assigns each external request a unique external request id.

• Passes the external request id to all services.

• Includes the external request id in all log messages.

• Records information (e.g. start time, end time) about the requests and operations
performed when handling an external request in a centralized service.

Spring Cloud Sleuth, along with a Zipkin server, is a common implementation.
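
To illustrate the underlying idea that such tools automate, here is a minimal sketch that generates or reuses a request id, records it in the logging context, and propagates it downstream in a header; the header name and classes are illustrative assumptions, not Sleuth's internals:

// A sketch of the core tracing idea: assign a unique id per external request,
// include it in every log line via MDC, and pass it downstream in a header.
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;

public class TracedClient {

    private static final Logger log = LoggerFactory.getLogger(TracedClient.class);
    private final RestTemplate rest = new RestTemplate();

    public String callDownstream(String url) {
        // Reuse the id of the current external request, or start a new trace.
        String requestId = MDC.get("requestId");
        if (requestId == null) {
            requestId = UUID.randomUUID().toString();
            MDC.put("requestId", requestId);          // now every log line can carry the id
        }

        log.info("calling {}", url);                   // logged with requestId via the log pattern

        HttpHeaders headers = new HttpHeaders();
        headers.add("X-Request-Id", requestId);        // propagate the id to the next service
        return rest.exchange(url, HttpMethod.GET, new HttpEntity<Void>(headers), String.class)
                   .getBody();
    }
}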

d. Health Check

Problem
When microservice architecture has been implemented, there is a chance that a service might be
up but not able to handle transactions. In that case, with a load balancing pattern in place, how do
you ensure a request doesn't go to those failed instances?

Solution
Each service needs to have an endpoint which can be used to check the health of the application,
such as /health. This API should check the status of the host, the connection to other
services/infrastructure, and any specific logic.

Spring Boot Actuator does implement a /health endpoint and the implementation can be
customized, as well.
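
A minimal sketch of such a customization, assuming Spring Boot Actuator's HealthIndicator extension point; the downstream check is a placeholder:

// A sketch of a custom health indicator that extends the /health endpoint
// with a check on a downstream dependency (illustrative).
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class PaymentBackendHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingPaymentBackend();      // placeholder for a real connectivity check
        if (reachable) {
            return Health.up().withDetail("payment-backend", "reachable").build();
        }
        // Load balancers and service registries can now stop routing to this instance.
        return Health.down().withDetail("payment-backend", "unreachable").build();
    }

    private boolean pingPaymentBackend() {
        // ...open a connection or call a lightweight status endpoint here...
        return true;
    }
}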
5. Cross-Cutting Concern Patterns

a. External Configuration

Problem
A service typically calls other services and databases as well. For each environment like dev,
QA, UAT, prod, the endpoint URL or some configuration properties might be different. A
change in any of those properties might require a re-build and re-deploy of the service. How do
we avoid code modification for configuration changes?

Solution
Externalize all the configuration, including endpoint URLs and credentials. The application
should load them either at startup or on the fly.
Spring Cloud config server provides the option to externalize the properties to GitHub and load
them as environment properties. These can be accessed by the application on startup or can be
refreshed without a server restart.
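
A minimal sketch of the consuming side, assuming Spring Cloud's @RefreshScope; the property name is illustrative:

// A sketch of consuming externalized configuration: the endpoint URL is injected
// from the environment (e.g. a config server) instead of being hard-coded,
// and @RefreshScope lets it be reloaded without a restart.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@Component
@RefreshScope
public class InventoryClientConfig {

    // Different values per environment (dev/QA/UAT/prod) without rebuilding the service.
    @Value("${inventory.service.url}")
    private String inventoryServiceUrl;

    public String inventoryServiceUrl() {
        return inventoryServiceUrl;
    }
}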

b. Service Discovery Pattern

Problem
When microservices come into the picture, we need to address a few issues in terms of calling
services:

1. With container technology, IP addresses are dynamically allocated to the service instances.
Every time the address changes, a consumer service can break and need manual changes.

2. Each service URL has to be remembered by the consumer, which makes the consumer tightly coupled to the producer.

So how does the consumer or router know all the available service instances and locations?

Solution
A service registry needs to be created which will keep the metadata of each producer service. A
service instance should register to the registry when starting and should de-register
when shutting down. The consumer or router should query the registry and find out the location
of the service. An example of client-side discovery is Netflix Eureka and an example of server-
side discovery is AWS ALB.
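
A minimal sketch of client-side lookup, assuming Spring Cloud's DiscoveryClient abstraction (backed by a registry such as Eureka); the service name is illustrative:

// A sketch of client-side discovery: the consumer asks the registry for the
// current instances of a producer instead of remembering a fixed URL.
import java.util.List;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class InventoryLocator {

    private final DiscoveryClient discoveryClient;

    public InventoryLocator(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public String resolveInventoryUri() {
        // The registry returns whatever instances are registered right now,
        // so dynamically allocated container IPs no longer break the consumer.
        List<ServiceInstance> instances = discoveryClient.getInstances("inventory-service");
        if (instances.isEmpty()) {
            throw new IllegalStateException("no inventory-service instance registered");
        }
        return instances.get(0).getUri().toString();
    }
}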

c. Circuit Breaker Pattern

Problem
A service generally calls other services to retrieve data, and there is the chance that the
downstream service may be down. There are two problems with this: first, the request will keep
going to the down service, exhausting network resources and slowing performance. Second, the
user experience will be bad and unpredictable. How do we avoid cascading service failures and
handle failures gracefully?

Solution
The consumer should invoke a remote service via a proxy that behaves in a similar fashion to an
electrical circuit breaker. When the number of consecutive failures crosses a threshold, the
circuit breaker trips, and for the duration of a timeout period, all attempts to invoke the remote
service will fail immediately. After the timeout expires the circuit breaker allows a limited
number of test requests to pass through. If those requests succeed, the circuit breaker resumes
normal operation. Otherwise, if there is a failure, the timeout period begins again.

Netflix Hystrix is a good implementation of the circuit breaker pattern. It also helps you to define
a fallback mechanism which can be used when the circuit breaker trips. That provides a better
user experience.
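
A minimal sketch of the pattern with a fallback, assuming the Hystrix annotation style mentioned above; method names and the downstream URL are illustrative:

// A sketch of a circuit breaker with a fallback around a remote call.
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class RatingClient {

    private final RestTemplate rest = new RestTemplate();

    // If calls keep failing, the circuit trips and defaultRating() is returned immediately,
    // so the consumer neither hangs on a dead service nor exhausts network resources.
    @HystrixCommand(fallbackMethod = "defaultRating")
    public String ratingFor(String productId) {
        return rest.getForObject("http://rating-service/ratings/" + productId, String.class);
    }

    @SuppressWarnings("unused")
    private String defaultRating(String productId) {
        return "{ \"productId\": \"" + productId + "\", \"rating\": \"unavailable\" }";
    }
}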

d. Blue-Green Deployment Pattern

Problem
With micro service architecture, one application can have many micro services. If we stop all the
services then deploy an enhanced version, the downtime will be huge and can impact the
business. Also, the rollback will be a nightmare. How do we avoid or reduce downtime of the
services during deployment?

Solution
The Blue-Green deployment strategy can be used to reduce or remove downtime. Two identical
production environments, Blue and Green, are maintained; only one (say Blue) serves live traffic
at a time. The new version is deployed to the idle Green environment and verified there, after
which the router switches traffic from Blue to Green. If something goes wrong, traffic is simply
switched back, so both downtime and rollback effort are minimal. Most cloud platforms and
container orchestrators provide options to implement this pattern.

5.4 THE MINI WEB SERVICE ARCHITECTURE

The homogeneous nature of a monolithic architecture has both strengths and challenges:

• Because monolithic applications have all business services and functions, including
their supporting databases, deployed as a single platform, software
development and deployment are relatively faster and easier.
• Debugging is also more straightforward because you can open up the entire project within a
single IDE instance.

However, with those benefits, monoliths are complex to maintain, refactor, and scale.
Over the years, several architectural patterns evolved out of monoliths, aiming to address these
challenges by separating business functions into individually deployable services. Two such
evolved patterns are the microservices and miniservices architectures.

This article compares these two architectures and the benefits they offer as alternatives to a
monolith.

Microservices architecture
A microservices architecture follows a development approach that designs software applications
as a set of loosely coupled, independently deployable services with distinct interfaces. These
individual functional modules perform specifically defined, separate tasks autonomously.
Characteristics of a microservice architecture

A microservice-based framework is a collection of autonomous microservices designed around
specific business capabilities. Essentially these services are miniature applications that function
collectively to support the main application.

The essential attributes of a microservices design require that:

• Each microservice contains one, and only one, responsibility, which is built around a
particular business function such as sending emails, raising alerts, assigning tickets, etc.
• Every microservice has its own database that does not share data storage with other
services.
• All services are developed, deployed, maintained, and run independently of other
microservices. Thus, each microservice has its own codebase and deployment
environments.
• Microservices are loosely coupled, i.e., you can change one microservice without
updating/impacting others.
• All microservices communicate with each other via an event-driven communication that
runs a Publisher-Subscriber pattern.

Benefits of using microservices

Adopting a microservice architecture brings a range of added benefits that aid efficiency to the
software development lifecycle (SDLC). These benefits include:

• Improved scalability. Each microservice can be scaled independently of others, instead of
scaling up the entire application framework. For instance, a specific service can be allocated
additional resources to scale its efficiency. Such an ability to scale resources for specific
services makes microservices more operationally efficient than monoliths.
• Improved system resiliency. An application consists of multiple, independent services. By
design, when one service fails, the entire system remains fairly unimpacted. This allows the
application to remain functional, while the right team works on the affected service.
• Better fault isolation. The loosely coupled nature of microservices makes it easier to find
and isolate faults of a particular service and fix them, reducing resolution times
• Enhanced maintainability. It is easier to maintain a microservices-based application
because each service can be maintained, optimized, or enhanced for better performance
without impacting other services.
• Bite-sized, quicker deployments. Each service has its own codebase running in individual
containers. This enables quick development and deployment cycles that follow an efficient
DevOps model.
• Flexibility in choosing a technology stack. Developers are not boxed in by particular
programming languages or libraries. This means teams have the freedom to choose whatever
language and libraries are most appropriate for implementing a service. Moreover, every service
can run its own technologies, which may differ from those used by other services.

Limitations of microservices
Although a microservice architecture is gaining popularity due to its benefits of enhanced
efficiency and improved resiliency, it also comes with its limitations and challenges.
• Increased complexity. Having a collection of polyglot services introduces a higher level of
complexity into the development process. There are more components to manage, and these
components have different deployment processes. The introduction of event-driven
communication is another challenge: such design is comparatively complex and requires
new skills to manage.
• Complex testing. It is challenging to test microservice-based applications because of the
various testing dependencies required for each microservice. It is even more tasking to
implement automated testing in a microservice architecture because the services are running
in different runtime environments. Besides, the need to test each microservice before
running a global test adds more complexity to maintain the framework.
• Higher maintenance overhead. With more services to handle, you have additional
resources to manage, monitor, and maintain. There is also the need for full-time security
support: due to their distributed nature, microservices are more vulnerable to attack vectors.

Now we’ll turn to miniservices.

Miniservices architecture
As monoliths are challenging to scale because of size, and microservices are a lot more complex
to orchestrate and maintain, there was a need for a framework that addressed these challenges.

To solve this, a miniservices architecture fits the middle ground between monolith and
microservices architectures, a design that assumes a more realistic approach to implementing the
microservices concept.

The miniservices architecture is an architectural framework that has a collection of
domain-bounded services with multiple responsibilities and shared data stores. Unlike microservices with
a complete de-coupling of services and their implementation details, miniservices can share
libraries and databases.
Characteristics of a miniservice architecture

• Related services can share the same database. This allows modules that are related to
each other in the functions they perform to share a database. For instance, a miniservice may
perform multiple functions including image processing, rendering of images, or any other
related functions for an application.
• Communication between services is through REST APIs.
• Related services can share codebase and infrastructure used for deployment.

Benefits of using miniservices


Due to its derived design, miniservices inherit all benefits of a microservice architecture
including scalability, fault tolerance, and robustness.
Additionally, other benefits of adopting miniservices include:

• Improved performance. By reducing the number of services, interconnections, and
network traffic between domains, miniservices enhance application performance.
• Shared maintenance overhead. With services handling various related functions, the
maintenance overhead associated with microservices is reduced.
• Developer friendly. Miniservices are often more suitable for companies that cannot afford
to create smaller development teams dedicated to working on each individual service.

Limitations of miniservices
• End-to-end testing can be a challenge with a miniservice framework due to the number of
dependencies associated with a single service. This also raises complexities with respect
to efficient error handling and bug discovery.

Microservices vs miniservices
Fine-grained alternatives to a monolithic framework, both microservice and miniservice
architectures divide applications into smaller pieces within specific bounded contexts.

At its elemental level, miniservices differ from microservices by allowing shared data storage
and infrastructure. Miniservices are steadily gaining momentum as a more pragmatic approach
over microservices.

As each of these architectures has its benefits and limitations, it’s vital that your organization
perform thorough due diligence before choosing the right one. It is equally important to factor in
the technologies, skills, and effort each of these frameworks require to maintain in the long run
to avoid budget overruns and operational hiccups.

5.5 MICRO SERVICE DEPENDENCY TREE

Dependency cycles will be familiar to you if you have ever locked your keys inside your house
or car. You can't open the lock without the key, but you can't get the key without opening the
lock. Some cycles are obvious, but more complex dependency cycles can be challenging to find
before they lead to outages. Strategies for tracking and controlling dependencies are necessary
for maintaining reliable systems.

Reasons to Manage Dependencies

A lockout, as in the story of the cyclic coffee shop, is just one way that dependency management
has critical implications for reliability. You can't reason about the behavior of any system, or
guarantee its performance characteristics, without knowing what other systems it depends on.
Without knowing how services are interlinked, you can't understand the effects of extra latency
in one part of the system, or how outages will propagate. How else does dependency
management affect reliability?

SLO

No service can be more reliable than its critical dependencies.8 If dependencies are not managed,
a service with a strict SLO1 (service-level objective) might depend on a back end that is
considered best-effort. This might go unnoticed if the back end has coincidentally high
availability or low latency. When that back end starts performing exactly to its SLO, however, it
will degrade the availability of services that rely on it.

High-fidelity testing

Distributed systems should be tested in environments that replicate the production environment
as closely as possible.7 If noncritical dependencies are omitted in the test environment, the tests
cannot identify problems that arise from their interaction with the system. This can cause
regressions when the code runs in production.
Data integrity

Poorly configured production servers may accidentally depend on their development or QA
(quality assurance) environments. The reverse may also be true: a poorly configured QA server
may accidentally leak fake data into the production environment. Experiments might
inadvertently send requests to production servers and degrade production data. Dependency
management can expose these problems before they become outages.

Disaster recovery / isolated bootstrap

After a disaster, it may be necessary to start up all of a company's infrastructure without having
anything already running. Cyclic dependencies can make this impossible: a front-end service
may depend on a back end, but the back-end service could have been modified over time to
depend on the front end. As systems grow more complex over time, the risk of this happening
increases. Isolated bootstrap environments can also provide a robust QA environment.

Security

In networks with a perimeter-security model, access to one system may imply unfettered access
to others.9 If an attacker compromises one system, the other systems that depend on it may also
be at risk. Understanding how systems are interconnected is crucial for detecting and limiting the
scope of damage. You may also think about dependencies when deploying DoS (denial of
service) protection: one system that is resilient to extra load may send requests downstream to
others that are less prepared.

Dependency Cycles

Dependency cycles are most dangerous when they involve the mechanisms used to access and
modify a service. The operator knows what steps to take to repair the broken service, but it's
impossible to take those steps without the service. These control cycles commonly arise in
accessing remote systems. An error that disables sshd or networking on a remote server may
prevent connecting to it and repairing it. This can be seen on a wider scale when the broken
device is responsible for routing packets: the whole network might be offline as a result of the
error, but the network outage makes it impossible to connect to the device and repair it. The
network device depends on the very network it provides.

Dependency cycles can also disrupt recovery from two simultaneous outages. As in the isolated
bootstrap scenario, two systems that have evolved to depend upon each other cannot be restarted
while neither is available. A job-scheduling system may depend on writing to a data-storage
system, but that data-storage system may depend on the job-scheduling system to assign
resources to it.

Cycles may even affect human processes, such as oncall and debugging. In one example, a
source-control system outage left both the source-code repository and documentation server
unavailable. The only way to get to the documentation or source code of the source-control
system was to recover the same system. Without this key information about the system's
internals, the oncall engineer's response was significantly obstructed.

Microservices and External Services

In the era of monolithic software development, dependency management was relatively clear-
cut. While a monolithic binary may perform many functions, it generally provides a single
failure domain containing all of the binary's functionality. Keeping track of a small number of
large binaries and storage systems is not difficult, so an owner of a monolithic architecture can
easily draw a dependency diagram, perhaps like that in figure 1.

Microservices offer many advantages. They allow independent component releases, smoother
rollbacks, and polyglot development, as well as allowing teams to specialize in one area of the
codebase. They are not easy to keep track of, however. In a company with more than a hundred
microservices, it is unlikely that employees could draw a diagram and get it right, or guarantee
that they're making dependency decisions that won't result in a cycle.

Both monolithic services and microservices can experience bootstrapping issues caused by
hidden dependencies. They rely on access to decryption keys, network, and power. They may
also depend on external systems such as DNS (Domain Name System). If individual endpoints of
a monolith are reached via DNS, the process of keeping those DNS records up to date may create
a cycle.

The adoption of SaaS (software as a service) creates new dependencies whose implementation
details are hidden. These dependencies are subject to the latency, SLO, testing, and security
concerns mentioned previously. Failure to track external dependencies may also introduce
bootstrapping risks. As SaaS becomes more popular and as more companies outsource
infrastructure and functionality, cyclic dependencies may start to cross companies. For example,
if two storage companies were to use each other's systems to store boot images, a disaster that
affected both companies would make recovery difficult or impossible.

Directed Acyclic Graphs

At its essence, a service dependency is the need for a piece of data that is remote to the service. It
could be a configuration file stored in a file system, or a row for user data in a database, or a
computation performed by the back end. The way this remote data is accessed by the service
may vary. For the sake of simplicity, let's assume all remote data or computation is provided by a
serving back end via RPCs (remote procedure calls).

As just seen, dependency cycles among systems can make it virtually impossible to recover after
an outage. The outage of a critical dependency propagates to its dependents, so the natural place
to begin restoring the flow of data is the top of the dependency chain. With a dependency cycle,
however, there is no clear place to begin recovery efforts since every system is dependent on
another in the chain.

One way to identify cycles is to build a dependency graph representing all services in the system
and all RPCs exchanged among them. Begin building the graph by putting each service on a
node of the graph and drawing directed edges to represent the outgoing RPCs. Once all services
are placed in the graph, the existing dependency cycles can be identified using common
algorithms such as finding a topological sorting via a depth-first search. If no cycles are found,
that means the services' dependencies can be represented by a DAG (directed acyclic graph).
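
A minimal sketch of such a cycle check using a depth-first search over the service graph; service names are illustrative:

// A sketch of detecting dependency cycles with a depth-first search
// over a directed graph of service-to-service RPCs.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DependencyCycleChecker {

    // Directed edges: service -> services it sends RPCs to.
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    public boolean hasCycle() {
        Set<String> visiting = new HashSet<>();   // nodes on the current DFS path
        Set<String> done = new HashSet<>();       // nodes fully explored
        for (String node : edges.keySet()) {
            if (dfs(node, visiting, done)) {
                return true;                      // not a DAG: a cycle exists
            }
        }
        return false;                             // DAG: a topological order exists
    }

    private boolean dfs(String node, Set<String> visiting, Set<String> done) {
        if (visiting.contains(node)) return true;      // back edge found, hence a cycle
        if (done.contains(node)) return false;
        visiting.add(node);
        for (String next : edges.getOrDefault(node, List.of())) {
            if (dfs(next, visiting, done)) return true;
        }
        visiting.remove(node);
        done.add(node);
        return false;
    }

    public static void main(String[] args) {
        DependencyCycleChecker checker = new DependencyCycleChecker();
        checker.addDependency("frontend", "backend");
        checker.addDependency("backend", "frontend");   // the cyclic evolution described above
        System.out.println(checker.hasCycle());         // true
    }
}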

What happens when a cycle is found? Sometimes, it's possible to remove a cycle by inverting the
dependency, as shown in figure 2. One example is a notification system where the senders notify
the controllers about new data, and the controller then pulls data from the senders. The cycle here
can be easily removed by allowing the senders only to push data into the controller. Cycle
removal could also be accomplished by splitting the functionality across two nodes—for
example, by moving the new data notification to a third system.

Some dependencies are intrinsically cyclic and may not be removed. Replicated services may
periodically query their replicas in order to reinforce data synchronization and integrity.3 Since
all replicas represent a single service, this would be represented as a self-dependency cycle in the
graph. It's usually okay to allow self-dependencies as long as they do not prevent the isolated
bootstrapping of the system and can properly recover from a global outage.

Another intrinsically cyclic dependency occurs in data-processing pipelines implemented as a
workers-controller system.2 Workers keep the controller informed about their status, and the
controller assigns tasks to the workers when they become idle. This cyclic dependency between
workers and controllers may not be removed without completely changing the processing model.
What can be done in this case is to group workers and controllers into a supernode representing a
single service. By repeating this edge contraction for all strongly connected components of the
graph, taking into account their purpose and practical viability, you may achieve a DAG
representation of the original graph.

Tracking vs. Controlling

In some environments, you can derive great benefit from just understanding the existing
dependency graph. In others, determining the existing state is not sufficient; mechanisms are
needed for preventing new undesirable dependencies. The two approaches examined here,
dependency tracking and dependency control, have different characteristics:

• Tracking dependencies is a passive approach. You use logging and monitoring to record
which services contact each other, then look back at that data in the future. You can understand
the dependencies by creating data structures that can be queried efficiently or by representing the
relationships visually.

• Controlling dependencies is an active approach. There are several points during design and
implementation where you can identify and avoid an undesirable dependency. Additionally, you
can prevent connections from being made while the code is running in production. If you wait
until the dependency has already been used and monitored, it will be too late to prevent the
issues it may cause.

These approaches overlap (e.g., data collected during dependency control can certainly be used
for tracking), but let's look at them separately.

Dependency tracking

Initially, dependency tracking often takes the form of information stored in engineers' heads and
visualized in whiteboard drawings. This is sufficient for smaller environments, but as the system
becomes more complex, the map of services becomes too complicated for any one person to
memorize. Engineers may be surprised by an outage caused by an unexpected dependency, or
they may not be able to reason about how to move a service and its associated back ends from
one data center to another. At this stage, organizations begin to consider programmatically
generated views of the system.

Different environments may use different ways of collecting information about how services are
interconnected. In some, a firewall or network device might record logs of which services are
contacting each other, and these logs can be mined for dependency data. Alternatively, a set of
services built on a common framework might export standard monitoring metrics about every
connection; or distributed tracing might be used to expose the paths a request takes through the
system, highlighting the connections.

You can aggregate whatever sources of information are available to you and create a dependency
graph, processing the data into a common structure and optimizing it for running queries over it.
From there, you can use algorithms on the graph to check whether it is a DAG, visualize it using
software such as Graphviz and Vizceral, or expose information for each service, perhaps using a
standard dashboard with a page for each service.

By continually monitoring traffic between systems and immediately integrating it into the graph,
new dependencies may be seen shortly after they reach production. Even so, the information is
available only after the new dependency has been created and is already in use. This is sufficient
for dependency tracking, where you want to describe the interconnections of an existing system
and become aware of new ones. Preventing the dependency, however, requires
dependency control.
Dependency control

Just like dependency tracking, dependency control typically starts as a manual process using
information stored in engineers' heads. Developers might include a list of proposed back ends in
all design documentation and depend on their colleagues' knowledge of the existing systems to
flag dangers. Again, this may be enough for a smaller environment. As services are born, grow,
change, and are deprecated, the data can quickly become stale or unwieldy. Dependency control
is most effective if enforced programmatically, and there are several points to consider in adding
it.

When working on dependency management at Google, we found it best to think about
controlling dependencies from the client side of a client-server connection (i.e., the service that is
about to depend on another service). By owning the code that initiates the connections, the owner
of the client has the most control and visibility over which dependencies exist and can therefore
detect potential problems earlier. The client is also most affected by ill-considered dependencies.

Although the owner of a server may want to control who its clients are for reasons such as
capacity planning or security, bad dependencies are much more likely to affect the client's SLO.
Because the client requires some functionality or data from the server for its own functionality or
performance, it needs to be prepared for server-side outages. The server, on the other hand, is
unlikely to notice an outage of one of its clients.

One approach to dependency control is to analyze the client's code and restrict dependencies at
build time. The behavior of the binary, however, will be influenced by the configuration and
environment it receives. Identical binaries might have very different dependencies in different
situations, and the existence of code inside a binary is not a reasonable predictor for whether that
binary has a dependency on another service. If a standard mechanism is used for specifying back
ends or connection types—for example, if all back ends are provided in configuration and not in
code—this might be an area worth exploring.

There are several options for runtime enforcement. Just as with dependency tracking, existing
infrastructure could be repurposed for dependency control. If all interservice connections pass
through a firewall, network device, load balancer, or service mesh, those infrastructure services
could be instrumented to maintain a list of acceptable dependencies and drop or deny any
requests that don't match the list. Silently dropping requests at a point between the client and
server may complicate debugging, though. A request that is dropped for being an unapproved
dependency may be indistinguishable from a failure of the server or the intermediate device: the
connections may seem to just disappear.

Another option is to use a dedicated external dependency-control service that the client can query
before allowing each new back-end connection. This kind of external system has the
disadvantage of adding latency since it requires extra requests to allow or deny each back end.
And, of course, the dependency-control service itself becomes a dependency of the service.

Authorizing RPCs

The dependency-control policy consists of an ACL (access control list) of the RPC names
expected to be initiated by a service. For performance reasons, the policy is serialized and loaded
by the service during startup. If the policy is invalid (because of syntax errors or data corruption),
it's not used and the dependency control isn't activated. If the policy is correct, it becomes active,
and all outgoing RPCs are matched against it. If an RPC is fired but isn't present in the policy, it
will be flagged as rejected. Rejected RPCs are reported via monitoring so that service owners can
audit them and decide on the correct course of action: remove the RPC from the binary if it's not
a desired dependency or add it to the ACL if it's indeed a necessary new dependency.
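
A minimal sketch of the client-side check described above; the policy format and the reporting hook are illustrative assumptions:

// A sketch of client-side dependency control: an ACL of allowed RPC names is loaded
// at startup, every outgoing RPC is matched against it, and unexpected RPCs are
// reported for auditing rather than silently dropped.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DependencyPolicy {

    private final Set<String> allowedRpcs;
    private final Set<String> rejected = ConcurrentHashMap.newKeySet();
    private final boolean active;

    public DependencyPolicy(Set<String> allowedRpcs, boolean policyValid) {
        this.allowedRpcs = allowedRpcs;
        this.active = policyValid;           // an invalid policy deactivates control entirely
    }

    /** Returns true if the outgoing RPC is an approved dependency. */
    public boolean checkOutgoing(String rpcName) {
        if (!active) {
            return true;                     // fail open: never break traffic on a bad policy
        }
        if (allowedRpcs.contains(rpcName)) {
            return true;
        }
        rejected.add(rpcName);               // surfaced via monitoring for service owners to audit
        return false;
    }

    public Set<String> rejectedRpcs() {
        return rejected;
    }
}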
5.6 CHALLENGES WITH MICRO SERVICES

Design
Compared to monolithic apps, organizations face increased complexity when designing
microservices. Using microservices for the first time, you might struggle to determine:

• Each microservice’s size


• Optimal boundaries and connection points between each microservice
• The framework to integrate services

Designing microservices requires creating them within a bounded context. Therefore, each
microservice should clarify, encapsulate, and define a specific responsibility.

To do this for each responsibility/function, developers usually use a data-centric view when
modeling a domain. This approach raises its own challenge—without logic, the data is
nonsensical.

Security
Microservices are often deployed across multi-cloud environments, resulting in increased risk
and loss of control and visibility of application components—resulting in additional vulnerable
points. Compounding the challenge, each microservice communicates with others via
various infrastructure layers, making it even harder to test for these vulnerabilities.

Data security within a microservices-based framework is also a prominent concern. As such, data
within such a framework remains distributed, making it a tricky exercise to maintain
the confidentiality, integrity, and privacy of user data.
Due to its distributed framework, setting up access controls and administering secure
authentication for individual services is not only a technical challenge but also substantially
increases the attack surface.

Testing
The testing phase of any software development lifecycle (SDLC) is increasingly complex for
microservices-based applications. Given the standalone nature of each microservice, you have to
test individual services independently.
Exacerbating this complexity, development teams also have to factor in integrating services and
their interdependencies in test plans.
Increased operational complexity
Each microservice’s team is usually tasked with deciding the technology to use and manage it.
As each service should be deployed and operated independently, maintaining operations may
open a can of worms for those who are not prepared.

Here are some challenges:

1. Traditional forms of monitoring may not work for a microservices-based
application. Consider a scenario where a request from the user interface traverses multiple
services before getting to the one that can fulfill its request. The result of this traversal is a
convoluted path of services, and without the appropriate monitoring tools, identifying the
underlying cause of an issue is not only tricky—it’s often impossible.
2. Scalability is another operational challenge associated with microservices
architecture. Although the scalability of microservices is often touted as an advantage,
successfully scaling your microservice-based applications is challenging.
3. Optimizing and scaling require more complex coordination. In a typical microservices
framework, an application is broken down to smaller-independent services that are hosted
and deployed across separate servers. This architecture requires coordinating individual
components, which is another challenge particularly when you experience a sudden spike in
application usage.
4. Fault tolerance needed for every service. Businesses need their microservices to be
resilient enough to withstand internal and external failures. In a microservices-based
application, one component failing can affect the entire system. Therefore, the framework
you use should consider fault tolerance for every service to ensure a design that prevents
failure of an entire application in the event of an individual service downtime.

Communication
Independently deployed microservices act as miniature standalone applications that
communicate with each other. To achieve this, you have to configure infrastructure layers that
enable resource sharing across services.

A poor configuration may lead to:

• Increased latency
• Reduced speed of calls across different services

In this situation, you’ve got a non-optimized application with a slow response time.
When not to use microservices
Every organization shifting to a new framework should perform thorough due diligence to ensure
it’s the right fit.

When exploring microservices, these three situations will always help you decide when to skip
the microservice architecture for your chosen application.
Your defined domain is unclear/uncertain
Recall that you create microservices within a bounded context. Therefore, if it is logically
complex to break down your business requirements into specific domains, it will be equally
difficult for you to create adequately sized microservices.

Add to that the challenge of designing a proper means of communication among different
services—this complexity is likely too much for you to realize maximum benefits in
microservices.
Also, consider the future. If you’re not certain that your application’s domain will remain the
same over the coming years, it’s wise not to use a microservices-oriented approach.

Improved efficiency isn’t guaranteed


To reiterate, the idea of adopting Microservices is to embrace a DevOps culture that in turn:

• Employs automation
• Reduces cost and effort
• Brings operational efficiency

Carry out your due diligence to verify if transitioning to a microservices framework actually
helps achieve these goals. No organization wants to add complexity and effort just to
adopt a culture without gaining improved efficiency.

Application size is small or uncomplicated

When your application size does not justify the need to split it into many smaller components,
using a microservices framework may not be ideal. There’s no need to further break down
applications that are already small enough as-is.
5.7 SOA VS MICROSERVICE

Many of the chief characteristics of SOA and micro services are similar. Both involve a cloud or
hybrid cloud environment for developing and running applications, are designed to combine
multiple services necessary for creating and using applications, and each effectively breaks up
large, complicated applications into smaller pieces that are more flexible to arrange and deploy.
Because both micro services and SOA function in cloud settings, each can scale to meet the
modern demands of big data size and speeds.
Nevertheless, there are many differences between SOA and micro services that determine the use
case each is suitable for:
• Architecture: Micro services are designed to host services which can function independently; SOA is designed to share resources across services.
• Component sharing: Micro services typically do not involve component sharing; SOA frequently involves component sharing.
• Granularity: Micro services are fine-grained; SOA services are larger and more modular.
• Data storage: In micro services, each service can have an independent data storage; SOA involves sharing data storage between services.
• Governance: Micro services require collaboration between teams; SOA applies common governance protocols across teams.
• Size and scope: Micro services are better for smaller and web-based applications; SOA is better for large scale integrations.
• Communication: Micro services communicate through an API layer; SOA communicates through an ESB.
• Coupling and cohesion: Micro services rely on bounded context for coupling; SOA relies on sharing resources.
• Remote services: Micro services use REST and JMS; SOA uses protocols like SOAP and AMQP.
• Deployment: Micro services offer quick and easy deployment; SOA offers less flexibility in deployment.

Architecture
Microservices architecture is based on smaller, fine-grained services that are focused on a single
purpose and can function independently of one another — but interact to support the same
application. Consequently, microservices is architected to share as few service resources as
possible. Since SOA has larger, more modular services that are not independent of one another,
it’s architected to share resources as much as possible.

Component sharing
The independence of microservices minimizes the need to share components and makes the
services more resistant to failure. Additionally, the relative lack of component sharing enables
developers to easily deploy newer versions, and scale individual services much faster than with
SOA.
On the other hand, component sharing is much more common in SOA. In particular, services
share access to an ESB. Thus, if there are issues with one service in relation to the ESB, it can
compromise the effectiveness of the other connected services
Granularity
A SOA’s services are large, with some of the modular services resembling monolithic
applications. Due to each service’s capability to scale, SOAs typically have a wider range of
focus.
The more granular nature of microservices means that individual services excel in performing a
single specific task. Combining those tasks results in a single application.

Data storage
With microservices, the individual services generally have their own data storage. With SOA,
almost all of the services share the same data storage units.
Sharing the same data storage enables SOA services to reuse shared data. This capability is
useful for maximizing data’s value by deploying the same data or applications between business
units. However, this capability also results in tight coupling and an interdependence between
services.

Governance
Because SOA is based on the notion of sharing resources, it employs common data governance
mechanisms and standards across all services.
The independence of the services in microservices does not enable uniform data governance
mechanisms. Governance is much more relaxed with this approach, as individuals deploying
microservices have the freedom to choose what governance measures each service follows —
resulting in greater collaboration between teams.

Size and scope


Size and scope is one of the more pronounced differences between microservices and SOA. The
fine-grained nature of microservices significantly reduces the size and scope of projects for
which it’s deployed. Its relatively smaller scope of services is well-suited for developers.

Communication
SOA communication is traditionally handled by an ESB, which provides the medium by which
services “talk” to each other. However, using an ESB can slow the communication of services in
SOA. Microservices relies on simpler messaging systems, like APIs which are language agnostic
and enable quicker communication.

Coupling and cohesion


While SOA is based on sharing components, microservices is based on the concept of ‘bounded
context’. Bounded context is the coupling of a component and its data without many other
dependencies — decreasing the need to share components. The coupling in microservices can
also involve its operating system and messaging, all of which is usually included in a container.

Remote services
SOA and microservices use different protocols for accessing remote services. The main remote access protocols for SOA include Simple Object Access Protocol (SOAP) and messaging like Advanced Message Queuing Protocol (AMQP) and Microsoft Message Queuing (MSMQ). The most common protocols for microservices are Representational State Transfer (REST) and simple messaging such as Java Messaging Service (JMS). REST protocols are frequently used with APIs. The protocols for microservices are more homogeneous than those for SOA, which are typically used for heterogeneous interoperability.

Deployment
Ease of deployment is another major difference between microservices and SOA. Since the services in microservices are smaller and largely independent of one another, they are deployed much more quickly and easily than those in SOA. These factors also make the services in microservices easier to build.
5.8 MICROSERVICE AND API

What are Microservices?

Microservices, more commonly known as Microservice Architecture, is an architectural style for building applications. Microservices basically structure an application as a collection of small autonomous services, modelled around a business domain. Now, when you have a monolithic application, you will basically have all the functionalities stored in one place.

For example, if you consider an e-commerce application, then it will have mainly 3 functionalities.
The functionalities could be:

• The customers’ information


• The products stored by the customer in cart
• The products available in the e-commerce application

Now, before microservices came into the picture, monolithic architecture was used.

Monolithic Architecture

Monolithic architecture is an architectural style in which all the functionalities or the required
components would be inside one big block. So, if you build the above application, using the
monolithic style, then all the components of the application would reside in one big block (the original notes include a diagram of this layout).

Because everything sits in a single area, the monolithic architecture poses a few challenges, and it is because of these challenges that microservices have become so popular in the market. So, if we refactor this application to microservices, there would be three services: Customer Service, Cart Service, and Product Service.
Now, before I tell you how we can refactor this application into microservices, let me give you an insight into APIs.

What are APIs?

Application Program Interface, most commonly known as an API, is a way through which you can make sure two or more applications communicate with each other to process the client request. So, you can understand APIs as a point of contact, through which all the services communicate with each other to process the client's request and send the response.

Now, while building and using applications, we generally do CRUD operations. When I say CRUD operations, I mean that we create a resource, read a resource, update a resource and delete a resource. So, APIs are generally developed by using the RESTful style, and these operations are carried out with the methods of HTTP.

HTTP Methods

The methods associated with the HTTP actions map directly onto CRUD: POST creates a resource, GET reads it, PUT updates it, and DELETE removes it.

These methods help us standardize the way actions are performed on applications that have different interfaces. Also, with the help of these methods, you as a developer can easily understand the intent of the actions taken across the different interfaces.
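As a minimal illustration of this mapping, here is a sketch of a small service written with Python's Flask framework that exposes one resource through the four HTTP methods. The "products" resource and the in-memory dictionary are assumptions made only for the example, not details from the e-commerce application above.

# Sketch: mapping HTTP methods to CRUD actions with Flask.
# The "products" resource and in-memory store are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
products = {}                                         # in-memory store, for illustration only

@app.route("/products", methods=["POST"])             # Create
def create_product():
    data = request.get_json()
    products[data["id"]] = data
    return jsonify(data), 201

@app.route("/products/<pid>", methods=["GET"])        # Read
def read_product(pid):
    return jsonify(products.get(pid, {}))

@app.route("/products/<pid>", methods=["PUT"])        # Update
def update_product(pid):
    products[pid] = request.get_json()
    return jsonify(products[pid])

@app.route("/products/<pid>", methods=["DELETE"])     # Delete
def delete_product(pid):
    products.pop(pid, None)
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)

Running this service and calling it with POST, GET, PUT, and DELETE requests exercises all four CRUD operations against the same resource URL.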

So, now that you know what APIs are, let us next understand where APIs are used in microservices.

Where are APIs used in Microservices?

Consider a scenario where you have built the above e-commerce application using microservices. Then you will basically see three services, i.e. the customer service, the cart service, and the product service. Now, how do you think these services communicate with each other to process the client's request?

Well, that is through APIs. Each of these microservices has its own APIs to communicate with the other services. Even if one microservice stops working, the application will not go down; only that particular feature will be unavailable, and once the service is back up, its APIs can process the request again and send the required response back to the client.
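As a hedged sketch of this inter-service communication, the snippet below shows the cart service calling the product service's API over HTTP and degrading gracefully when that service is unavailable. The service host name, port, and the /products/<id> endpoint are assumptions for illustration, not details from the original example.

# Sketch: the cart service calling the product service's REST API.
# The service URL and endpoint path are hypothetical assumptions.
import requests

PRODUCT_SERVICE_URL = "http://product-service:5000"   # assumed address of the product microservice

def get_product_details(product_id):
    """Fetch product data from the product microservice; degrade gracefully if it is down."""
    try:
        resp = requests.get(f"{PRODUCT_SERVICE_URL}/products/{product_id}", timeout=2)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # Only this feature degrades; the rest of the cart service keeps serving requests.
        return {"id": product_id, "details": "temporarily unavailable"}

Because the failure is caught at the API boundary, an outage of the product service disables only the product-details feature rather than the whole application, which is the behaviour described above.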

Alright, so now that you know about microservices and APIs, let us next look into the differences between them.

Microservices vs API

The difference between microservices and APIs is as follows:

Microservices | API
An architectural style through which you can build applications in the form of small autonomous services. | A set of procedures and functions which allow the consumer to use the underlying service of an application.
Also, from the above example, it must be clear that APIs are a part of microservices and help these services communicate with each other. However, while communicating with the other services, each service can have its own CRUD operations to store the relevant data in its database. Not only this, but while performing CRUD operations, APIs generally accept and return parameters based on the request sent by the user.

5.9 DEPLOYING AND MAINTAINING MICROSERVICES


In a microservices architecture, each microservice performs a simple task and communicates
with clients or other microservices by using lightweight mechanisms such as REST API requests.
You can code each microservice using a programming language that's best suited for the task that
it performs. Microservices-based applications are easier to deploy and maintain.

Architecture
This reference architecture shows Python Flask and Redis microservices deployed as Docker
containers in a Kubernetes cluster in Oracle Cloud Infrastructure. The containers pull Docker
images from Oracle Cloud Infrastructure Registry.
The following diagram illustrates this reference architecture.

[Diagram omitted: microservices-oci.png, reference architecture]


The architecture has the following components:
• Region
An Oracle Cloud Infrastructure region is a localized geographic area that contains one or
more data centers, called availability domains. Regions are independent of other regions,
and vast distances can separate them (across countries or even continents).
• Availability domains
Availability domains are standalone, independent data centers within a region. The
physical resources in each availability domain are isolated from the resources in the other
availability domains, which provides fault tolerance. Availability domains don’t share
infrastructure such as power or cooling, or the internal availability domain network. So, a
failure at one availability domain is unlikely to affect the other availability domains in the
region.
• Fault domains
A fault domain is a grouping of hardware and infrastructure within an availability
domain. Each availability domain has three fault domains with independent power and
hardware. When you distribute resources across multiple fault domains, your applications
can tolerate physical server failure, system maintenance, and power failures inside a fault
domain.
• Virtual cloud network (VCN) and subnet
A VCN is a customizable, software-defined network that you set up in an Oracle Cloud
Infrastructure region. Like traditional data center networks, VCNs give you complete
control over your network environment. A VCN can have multiple non-overlapping
CIDR blocks that you can change after you create the VCN. You can segment a VCN
into subnets, which can be scoped to a region or to an availability domain. Each subnet
consists of a contiguous range of addresses that don't overlap with the other subnets in the
VCN. You can change the size of a subnet after creation. A subnet can be public or
private.
• Container Engine for Kubernetes
Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed,
scalable, and highly available service that you can use to deploy your containerized
applications to the cloud. You specify the compute resources that your applications
require, and Container Engine for Kubernetes provisions them on Oracle Cloud
Infrastructure in an existing tenancy. Container Engine for Kubernetes uses Kubernetes to
automate the deployment, scaling, and management of containerized applications across
clusters of hosts.
• Registry
Oracle Cloud Infrastructure Registry is an Oracle-managed registry that enables you to
simplify your development-to-production workflow. Registry makes it easy for you to
store, share, and manage development artifacts, like Docker images. The highly available
and scalable architecture of Oracle Cloud Infrastructure ensures that you can deploy and
manage your applications reliably.

Recommendations
Your requirements might differ from the architecture described here. Use the following
recommendations as a starting point.
• VCN
When you create a VCN, determine the number of CIDR blocks required and the size of
each block based on the number of resources that you plan to attach to subnets in the
VCN. Use CIDR blocks that are within the standard private IP address space.
Select CIDR blocks that don't overlap with any other network (in Oracle Cloud
Infrastructure, your on-premises data center, or another cloud provider) to which you
intend to set up private connections.
After you create a VCN, you can change, add, and remove its CIDR blocks.
When you design the subnets, consider your traffic flow and security requirements.
Attach all the resources within a specific tier or role to the same subnet, which can serve
as a security boundary.
Use regional subnets.
For simplicity, this architecture uses a public subnet to host Container Engine for
Kubernetes. You can also use a private subnet. In that case, use a NAT gateway to allow
access to the public internet from the cluster.
• Container Engine for Kubernetes
In this architecture, the worker nodes use the VM.Standard2.1 shape and they run on
Oracle Linux. Two worker nodes are used to host two different microservices, but you
can create up to 1000 nodes on each cluster.
• Registry
We use Oracle Cloud Infrastructure Registry as a private Docker registry for internal use,
pushing Docker images to and pulling them from the Registry. You can also use it as a
public Docker registry, enabling any user with internet access and the appropriate URL to
pull images from public repositories in the registry.
Considerations
• Scalability
You can scale out your application by updating the number of worker nodes in the
Kubernetes cluster, depending on the load. Similarly, you can scale in by reducing the
number of worker nodes in the cluster.
• Application availability
Fault domains provide the best resilience within a single availability domain. You can
also deploy instances or nodes that perform the same tasks in multiple availability
domains. This design removes a single point of failure by introducing redundancy.
• Manageability
This architecture uses two microservices. One is a Python Flask microservice, a simple
web application that performs CRUD operations. The other is a Redis in-memory
database. The Flask microservice communicates with the Redis microservice to retrieve
the data; a minimal sketch of this pairing appears after this list.
• Security
Use policies that restrict who can access which Oracle Cloud Infrastructure resources that
your company has and how.
Container Engine for Kubernetes is integrated with Oracle Cloud Infrastructure Identity
and Access Management (IAM). IAM provides easy authentication with native Oracle
Cloud Infrastructure identity functionality.
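As noted under Manageability, the application consists of a Flask web service backed by a Redis data store. The following is a minimal sketch of that pairing, assuming the Redis service is reachable under the host name "redis" on its default port; the key names and routes are illustrative only, not taken from the reference implementation.

# Sketch: a Flask microservice performing simple reads/writes against a Redis microservice.
# Host name "redis", port 6379, and the /items routes are assumptions for the example.
import redis
from flask import Flask, request, jsonify

app = Flask(__name__)
store = redis.Redis(host="redis", port=6379, decode_responses=True)

@app.route("/items/<key>", methods=["GET"])
def read_item(key):
    value = store.get(key)                       # read from the Redis service
    return jsonify({"key": key, "value": value})

@app.route("/items/<key>", methods=["PUT"])
def write_item(key):
    store.set(key, request.get_json()["value"])  # write to the Redis service
    return jsonify({"key": key})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

In the reference architecture, each of these two services would be packaged as a Docker image, pushed to Oracle Cloud Infrastructure Registry, and deployed to the Container Engine for Kubernetes cluster.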

Deploy
The code required to deploy this reference architecture is available in GitHub. You can pull the
code into Oracle Cloud Infrastructure Resource Manager with a single click, create the stack, and
deploy it. Alternatively, download the code from GitHub to your computer, customize the code,
and deploy the architecture by using the Terraform CLI.
• Deploy by using Oracle Cloud Infrastructure Resource Manager:

1. Click
If you aren't already signed in, enter the tenancy and user credentials.
2. Review and accept the terms and conditions.
3. Select the region where you want to deploy the stack.
4. Follow the on-screen prompts and instructions to create the stack.
5. After creating the stack, click Terraform Actions, and select Plan.
6. Wait for the job to be completed, and review the plan.
To make any changes, return to the Stack Details page, click Edit Stack, and
make the required changes. Then, run the Plan action again.
7. If no further changes are necessary, return to the Stack Details page,
click Terraform Actions, and select Apply.
• Deploy by using the Terraform CLI:
1. Go to GitHub.
2. Download or clone the code to your local computer.
3. Follow the instructions in README.md.
5.10 REASON FOR HAVING DEVOPS

DevOps is no more than a set of processes that coordinate to unify development teams and
processes to complement software development. The main reason behind DevOps' popularity is
that it allows enterprises to create and improve products at a faster pace than traditional software
development methods.

DevOps Popularity
According to previous surveys, the DevOps implementation rate has increased significantly.

In 2016, 66 percent of global organizations had adopted DevOps, 19 percent had not, and 15 percent had not yet decided.
As of 2017, 74 percent of global organizations had adopted DevOps, 16 percent had not, and 10 percent were undecided.

1. Shorter Development Cycles, Faster Innovation


When the development and operations teams work separately, it is often difficult to tell whether the application is ready to operate. And when development teams simply hand over a build and wait, cycle times are unnecessarily extended.
With joint development and operations efforts, the team's applications are ready to use more
quickly. This is important because companies succeed on the basis of their ability to innovate
faster than their competitors.

2. Reduce Implementation Failure, Reflections and Recovery Time


The main reason teams experience implementation failures is programming defects. With shorter development cycles, DevOps promotes frequent code releases. This, in turn, makes it easier to detect code defects early.

3. Better Communication and Cooperation


DevOps improves the software development culture. Combined teams are happier and more productive. The culture focuses on performance rather than individual goals. When teams trust each other, they can experiment and innovate more effectively. Teams can focus on bringing the product to market or into production, and their key performance indicators should be organized accordingly.

4. Greater Efficiency
High efficiency helps accelerate development and makes it less prone to errors. There are many ways to automate DevOps tasks. Continuous integration servers automate the code testing process, reducing the amount of manual work required. This means that software engineers can focus on completing tasks that cannot be automated.

Acceleration tools are another opportunity to increase efficiency; for example, they can be used to compile code more quickly.

Parallel workflows can be integrated into the continuous delivery chain to avoid delays while one team waits for another to complete its work.

5. Reduce Costs and IT Staff


All the benefits of DevOps translate into reduced overall costs and IT staffing requirements. DevOps development teams require 35 percent less IT staff and incur 30 percent lower IT costs.

5.11 OVERVIEW OF DEVOPS

DevOps is a set of practices that works to automate and integrate the processes between
software development and IT teams, so they can build, test, and release software faster and
more reliably.

The term DevOps was formed by combining the words “development” and “operations” and
signifies a cultural shift that bridges the gap between development and operation teams,
which historically functioned in siloes.

The DevOps lifecycle consists of six phases, representing the processes, capabilities, and tools
needed for development on the left side of the loop, and the processes, capabilities, and tools
needed for operations on the right side of the loop. Throughout each phase, teams collaborate
and communicate to maintain alignment, velocity, and quality. The DevOps lifecycle includes
phases to plan, build, continuously integrate and deploy (CI/CD), monitor, operate, and respond
to continuous feedback.

Atlassian helps teams at each stage of the DevOps lifecycle by providing an open toolchain and
guidance on best practices. With Jira as the backbone, teams can choose to use Atlassian
products, or bring their favorite products to the open toolchain. The Atlassian ecosystem offers a
robust array of integrations and add-ons, allowing teams to customize their toolchain to meet
their needs.

5.12 HISTORY OF DEVOPS

The concept of DevOps emerged out of a discussion between Andrew Clay Shafer and Patrick Debois in 2008. They were concerned about the drawbacks of Agile and wanted to come up with something better. The idea slowly began to spread, and after the DevOpsDays event held in Belgium in 2009, it became quite a buzzword.
5.13 CONCEPTS AND TERMINOLOGY IN DEVOPS

DevOps principles
• Culture
• Automation of processes
• Measurement of KPIs (Key Performance Indicators)
• Sharing
• Agile planning
• Continuous development
• Continuous automated testing
• Continuous integration and continuous delivery (CI/CD)

The most important DevOps terms to know today include:
• Agile: used in the DevOps world to describe infrastructure, processes, or tools that are adaptable and scalable
• Continuous delivery
• Continuous integration
• Immutable infrastructure
• Infrastructure-as-Code
• Microservices
• Serverless computing

5.14 CORE ELEMENTS OF DEVOPS

DevOps is a term that reflects what happens when seemingly disparate concepts are turned into a portmanteau. Combining developers and operations into one concept creates a new dynamic that changes the way people think about IT and the role of the developer in the organization.

Our research shows there are really six core elements that are most important to consider:

• Collaboration
• Automation
• Continuous integration
• Continuous testing
• Continuous delivery
• Continuous monitoring
Developers and operations have historically been separate organizations in IT. They have been
separated due in many respects to the constraints of enterprise systems practices and the shift
from waterfall to agile development processes. Before any app can be put into production it
needs to be configured so it performs as expected by the user. In production, the app and the
systems that support it need monitoring. Developers may create the app, but it is up to the
systems team to often manage hotspots, unusual spikes and transactions across a distributed
infrastructure.

But more than ever, developers and operations need to have shared practices. Efficiency matters, but the greater need is for app development. Apps need to be designed faster, then tested and deployed across mobile and web platforms.

DevOps has lots to offer in this complex new world. It offers a process that can be put into place
and repeated without distracting from other work and a person’s personal life.
But it also offers impediments. “Navigating DevOps” is an ebook New Relic published that
covers the six capabilities of DevOps to help companies get a grip on what they want to
accomplish. These points adapted from the book can help offer some perspective on ways to
approach the DevOps topic:
1. Collaboration
Instead of pointing fingers at each other, development and IT operations work together (no,
really). While the disconnect between these two groups created the impetus for its creation,
DevOps extends far beyond the IT organization, because the need for collaboration extends to
everyone with a stake in the delivery of software. As Laurie Wurster et al. explained in a recent
Gartner report: “Successful DevOps requires business, development, QA, and operations
organizations to coordinate and play significant roles at different phases of the application
lifecycle."

2. Automation
DevOps relies heavily on automation—and that means you need tools. Tools you build. Tools
you buy. Open source tools. Proprietary tools. And those tools are not just scattered around the
lab willy-nilly: DevOps relies on toolchains to automate large parts of the end-to-end software
development and deployment process.

3. Continuous Integration
The continuous integration principle of agile development has a cultural implication for the
development group. Forcing developers to integrate their work with other developers
frequently—at least daily—exposes integration issues and conflicts much earlier than is the case
with waterfall development. However, to achieve this benefit, developers have to communicate
with each other much more frequently—something that runs counter to the image of solitary
genius coders working for weeks or months on a module before they are “ready” to send it out
into the world. That seed of open, frequent communication blooms in DevOps.
4. Continuous Testing
The testing piece of DevOps is easy to overlook—until you get burned. As one industry
expert puts it, “The cost of quality is the cost of failure.” While continuous integration and
delivery get the lion’s share of the coverage, continuous testing is quietly finding its place as an
equally critical piece of DevOps.
5. Continuous Delivery
In the words of one commentator, “continuous delivery is nothing but taking this concept of
continuous integration to the next step.” Instead of ending at the door of the development lab,
continuous integration in DevOps extends to the entire release chain, including QA and
operations. The result is that individual releases are far less complex and come out much more
frequently.
6. Continuous Monitoring
Given the sheer number of releases, there’s no way to implement the kind of rigorous pre-release
testing that characterizes waterfall development. Therefore, in a DevOps environment, failures
must be found and fixed in real time. How do you do that? A big part is continuous monitoring.

According to one pundit, the goals of continuous monitoring are to quickly determine when a
service is unavailable, understand the underlying causes, and most importantly, apply these
learnings to anticipate problems before they occur. In fact, some monitoring experts advocate
that the definition of a service must include monitoring—they see it as integral to service
delivery.
5.15 LIFE CYCLE OF DEVOPS

DevOps defines an agile relationship between operations and Development. It is a process that is
practiced by the development team and operational engineers together from beginning to the final
stage of the product.

Learning DevOps is not complete without understanding the DevOps lifecycle phases. The
DevOps lifecycle includes seven phases as given below:

1) Continuous Development

This phase involves the planning and coding of the software. The vision of the project is decided
during the planning phase. And the developers begin developing the code for the application. There
are no DevOps tools that are required for planning, but there are several tools for maintaining the
code.

2) Continuous Integration

This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which developers are required to commit changes to the source code more frequently, perhaps on a daily or weekly basis. Every commit is then built, which allows early detection of problems if they are present. Building the code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code.
Therefore, there is continuous development of software. The updated code needs to be integrated
continuously and smoothly with the systems to reflect changes to the end-users.

Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of that code as an executable artifact, such as a WAR or JAR file. This build is then forwarded to the test server or the production server.

3) Continuous Testing

In this phase, the developed software is continuously tested for bugs. For constant testing, automation testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QA engineers to test multiple code bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used for simulating the test environment.

Selenium does the automation testing, and TestNG generates the reports. This entire testing phase
can automate with the help of a Continuous Integration tool called Jenkins.

Automation testing saves a lot of time and effort for executing the tests instead of doing this
manually. Apart from that, report generation is a big plus. The task of evaluating the test cases that
failed in a test suite gets simpler. Also, we can schedule the execution of the test cases at predefined
times. After testing, the code is continuously integrated with the existing code.
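The notes above name Selenium, TestNG, and JUnit; as a simpler, language-neutral illustration, the pytest-style tests below exercise a REST endpoint of the deployed application in a test environment. The base URL and the /products routes are assumptions for the sketch; a CI tool such as Jenkins could run these tests automatically after every build.

# Sketch: automated API tests that a CI tool could run after each build.
# BASE_URL and the /products endpoints are hypothetical test-environment details.
import requests

BASE_URL = "http://test-server:5000"    # assumed address of the test deployment

def test_product_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/products/42", timeout=5)
    assert resp.status_code == 200

def test_create_then_read_product():
    requests.post(f"{BASE_URL}/products", json={"id": "42", "name": "book"}, timeout=5)
    resp = requests.get(f"{BASE_URL}/products/42", timeout=5)
    assert resp.json().get("name") == "book"

Because the tests are plain code, they can be scheduled, run in parallel against multiple code bases, and reported on automatically, which is exactly the benefit described above.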

4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where
important information about the use of the software is recorded and carefully processed to find out
trends and identify problem areas. Usually, the monitoring is integrated within the operational
capabilities of the software application.

5) Continuous Feedback

The application development is consistently improved by analyzing the results from the operations
of the software. This is carried out by placing the critical phase of constant feedback between the
operations and the development of the next version of the current software application.

6) Continuous Deployment

In this phase, the code is deployed to the production servers. It is also essential to ensure that the code is deployed correctly on all the servers.

The new code is deployed continuously, and configuration management tools play an essential
role in executing tasks frequently and quickly. Here are some popular tools which are used in this
phase, such as Chef, Puppet, Ansible, and SaltStack.

Containerization tools also play an essential role in the deployment phase. Vagrant and Docker are popular tools used for this purpose. These tools help to produce consistency across the development, staging, testing, and production environments. They also help in scaling instances up and down smoothly.

Containerization tools help to maintain consistency across the environments where the application
is tested, developed, and deployed. There is no chance of errors or failure in the production
environment as they package and replicate the same dependencies and packages used in the testing,
development, and staging environment. It makes the application easy to run on different
computers.

7) Continuous Operations

All DevOps operations are based on continuity, with complete automation of the release process, which allows the organization to accelerate its overall time to market continually.

It is clear from the discussion that continuity is the critical factor in DevOps: it removes the steps that distract development, lengthen the time it takes to detect issues, and delay a better version of the product by several months. With DevOps, we can make any software product more efficient and increase the overall number of customers interested in the product.

5.16 ADOPTION OF DEVOPS

Adopting DevOps allows you to streamline your software delivery lifecycle and deliver better software faster. That is the main reason organizations are interested in adopting it.

If you’re considering a move to a DevOps delivery model, here are six approaches I’ve found to
be critical for ensuring a successful DevOps adoption within an organization.

1. Embrace a DevOps Mindset


DevOps doesn’t begin by just stating, “Let’s do DevOps,” and jumping in with tools. Your entire
organization needs to have a clear understanding of what DevOps is and what specific business
needs it can address, and, most importantly, everyone needs to be willing to change the way
things have always been done.
One way to begin this process is to identify your current application value streams—the series of
activities necessary for moving your products from development all the way into production.
Understanding where in this delivery process there are constraints, bottlenecks, and wait queues will allow you to identify which activities you should concentrate on improving.
Identifying areas where your current delivery process is inefficient and must be improved is your
opportunity to truly make change in your organization. But in order to do so, you must be willing
to experiment. Short-term failure is acceptable, as long as you’re learning from it and improving.
Instead of accepting inefficient ways of doing business because that’s the way it’s always been
done, you will need to encourage your organization to ask questions like: “Why do we do this
[process]? What’s its business value? How can we make it more efficient?”
Organizations often equate DevOps with automation. While automation can help accelerate
manual processes, DevOps is fundamentally about collaboration and communication. If you
don’t embrace strong communication and collaborative practices among everyone in the
software development, testing, delivery, and operational process, automating your processes will
not yield the business benefits you desire.

2. Make the Most of Metrics


One of the most overlooked initiatives in DevOps adoption is selecting the right metrics to
record and track progress. Establishing the right baseline DevOps metrics early on and not being
afraid to measure things that might initially not make you look very good is the key to being able
to show demonstrative progress over time and real business benefits to senior leadership.
From my experience, these are some of most useful DevOps metrics:

• Production failure rate: how often the software fails in production during a fixed period of time
• Mean time to recover: how long it takes an application in production to recover from a failure
• Average lead time: how long it takes for a new requirement to be built, tested, delivered, and
deployed into production
• Deployment speed: how fast you can deploy a new version of an application to a particular
environment (integration, test, staging, preproduction, or production environments)
• Deployment frequency: how often you deploy new release candidates to test, staging,
preproduction, and production environments
• Mean time to production: how long it takes when new code is committed into a code repository
for it to be deployed to production
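As a rough illustration of how two of these metrics might be computed, the sketch below derives mean time to production and deployment frequency from a list of (commit time, production deployment time) records; the record format and the sample data are assumptions made only for the example.

# Sketch: computing mean time to production and deployment frequency
# from hypothetical deployment records.
from datetime import datetime

deployments = [  # (code committed, deployed to production) -- sample data
    (datetime(2022, 3, 1, 9, 0),  datetime(2022, 3, 1, 15, 0)),
    (datetime(2022, 3, 3, 10, 0), datetime(2022, 3, 3, 12, 0)),
    (datetime(2022, 3, 7, 8, 0),  datetime(2022, 3, 7, 20, 0)),
]

# Mean time to production: average hours from commit to production deployment.
hours = [(deployed - committed).total_seconds() / 3600 for committed, deployed in deployments]
mean_time_to_production = sum(hours) / len(hours)

# Deployment frequency: deployments per week over the observed period.
period_days = (deployments[-1][1] - deployments[0][0]).days or 1
deployments_per_week = len(deployments) / (period_days / 7)

print(f"Mean time to production: {mean_time_to_production:.1f} hours")
print(f"Deployment frequency: {deployments_per_week:.1f} per week")

In practice these figures would be pulled automatically from the CI/CD tooling and displayed on the metrics dashboards described below.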

There are many other metrics you can collect too, but avoid collecting vanity metrics that look
impressive but don’t tie to business benefit, as well as metrics that are easily gamed—meaning
they make your team look good but don’t actually contribute to business improvements, such as
the number of commits your team makes.
Once you’ve determined the metrics you wish to collect and have a baseline of where you
currently stand, set goals for each metric so the team knows what to strive for.
Most importantly, constantly share your DevOps goals, metrics, and progress with everyone
involved. Set up metrics dashboards that display current metrics and progress toward your goals.
Providing complete transparency is sometimes a difficult thing for teams to do, but it will foster
more effective communication and collaboration, breaking down barriers between the dev and
ops teams in the process.

3. Understand and Address Your Unique Needs


Contrary to what those selling DevOps products will tell you, there is no “one size fits all”
solution for DevOps. You cannot just drop in an automated tool or hire a self-proclaimed
“DevOps engineer” and expect success. Every organization will have a unique DevOps journey
tied to its specific business and culture, and that journey will be far more about changing
people’s habits and communication patterns than what tools help you enable automation.
DevOps is a way of accelerating the creation and delivery of quality software, but it only
succeeds if you focus on what makes business sense for your organization. For instance, if your
customers can’t consume ten to twenty updates to your product a day, don’t make doing so your
goal! Instead, focus on improving the usability, security, or some other key attribute your
customer cares more about.

4. Adopt Iteratively
When getting started, don’t try to boil the ocean with a complete, enterprisewide DevOps
initiative. Identify a pilot application, form a cross-functional DevOps team that includes dev,
test, and operations, examine your value stream to determine constraints and bottlenecks, and
create an initial deployment pipeline that addresses some of your process constraints. Measure
progress and success, wash, rinse, and repeat.
Generally, you should look at tackling your biggest value stream constraints first, as doing so
will have the largest business impact. Some of these constraints will be easy to resolve, while
others will take a significant amount of time to correct—and often a whole lot of convincing
others to change.
You’ll want to go through a few iterations to build confidence in the framework and the pilot
before you start expanding to other projects. Ensure you’re making progress against your metrics
before moving on and applying those lessons to other teams. Most importantly, make sure those
involved are influencers who can take the principles back to their respective teams; keeping all
your expertise locked up on your pilot won't help you expand to the enterprise effectively.
If you’re beginning your DevOps journey from inside a software development organization,
consider starting from the beginning of your delivery process and moving toward production.
Properly implementing branch management strategies and build automation is key to fast
feedback that will enable efficient downstream processes in the future.
Once sound build automation is in place, a more comprehensive continuous integration
capability that includes continuous testing is in order. Getting your continuous integration
process working effectively will allow you to begin shifting left your assurance activities over
time and speeding up delivery.

5. Emphasize Quality Assurance Early


Based on my observations in most organizations, testers get the least amount of time to do
quality assurance, and eventually, the quality of the product suffers. Organizations that struggle
with DevOps often focus their efforts on automating deployments, overlooking the needs of QA.
While it is impossible to automate all your testing in DevOps, it is critical to automate all tests
run as part of your continuous integration process (unit tests, static code analysis, etc.), as well as
regression testing and smoke testing performed on each environment within your delivery
process. Automating at least some functional testing and nonfunctional tests associated with
security, performance, and other quality characteristics can often be achieved to speed up these
activities.
Run lengthy, long-running tests that may require large, production-like environments late in your
DevOps process so as to not slow down earlier feedback cycles that must be quick.

6. Take a Smart Approach to Automation


Automation is the cornerstone of accelerating your delivery processes, and everything—
infrastructure, environment, configuration, platform, build, test, process, etc.—should be defined
and written in code. If something is time-intensive, broken, or prone to error, start automating
there first. This will quickly benefit your team by reducing delivery times, increasing
repeatability, and eliminating configuration drift.
Standardize your approach to automation in order to ensure dev, ops, QA, and everyone in
between has a common frame of reference and a common language. It’s important that you use
software engineering best practices when building DevOps automation. Infrastructure as code
should be designed and implemented with coding standards, effectively tested, under
configuration control, and well documented. The quality of your automation should be just as
important as the quality of your application
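As a small illustration of treating automation code with the same rigour as application code, the unit test below validates a (hypothetical) deployment configuration file before it is ever applied; the file name and the expected keys are assumptions for the sketch, not part of any particular IaC tool.

# Sketch: unit-testing infrastructure/automation configuration like application code.
# "deploy-config.json" and the expected keys are hypothetical.
import json
import unittest

REQUIRED_KEYS = {"environment", "region", "instance_count"}   # assumed schema

def load_config(path="deploy-config.json"):
    with open(path) as f:
        return json.load(f)

class DeploymentConfigTest(unittest.TestCase):
    def test_config_has_required_keys(self):
        self.assertTrue(REQUIRED_KEYS.issubset(load_config().keys()))

    def test_instance_count_is_positive(self):
        self.assertGreater(int(load_config()["instance_count"]), 0)

if __name__ == "__main__":
    unittest.main()

Keeping such checks under version control alongside the automation code helps catch broken infrastructure definitions before deployment and guards against configuration drift.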
5.17 DEVOPS TOOLS

Adoption of new techniques, better tools, and improved collaboration methods continues to rise in the DevOps universe. Enterprises today are looking for automation in the areas of continuous integration, continuous testing, and continuous delivery to facilitate their DevOps adoption.


A typical DevOps process consists of 8 stages: plan, code, build, test, release, deploy, operate
and monitor. There is a plethora of tools available — open source and OEM solutions — that can
be used for each of these stages and some across various stages. There’s no single magical tool to
implement or facilitate DevOps. As organizations embark on their DevOps journey, they will
have to research, evaluate and try various tools for various functionalities. To make it simple for
DevOps teams, we have put together a list of 10 DevOps tools that can be used to achieve a
successful DevOps transformation.
[Diagram omitted: the eight DevOps stages]
Below is a pick of 10 tools, with the DevOps stage(s) each supports.

# DevOps Tools DevOps Stage

1. Git Code, Build

2. Gradle Build

3. Selenium Test

4. Jenkins Build, Test, Deploy

5. Puppet Deploy, Operate

6. Chef Deploy, Operate

7. Docker Build, Deploy, Operate

8. Kubernetes Build, Deploy, Operate

9. Ansible Deploy, Operate

10. eG Enterprise Monitor

5.18 BUILD, PROMOTION AND DEPLOYMENT IN DEVOPS

Build tools are programs that automate the creation of executable applications from source
code. Building incorporates compiling, linking and packaging the code into a usable or
executable form. In small projects, developers will often manually invoke the build process

Build automation is the process of automating the retrieval of source code, compiling it into binary code, executing automated tests, and publishing the result to a shared, centralized repository. Because every later stage of the pipeline depends on a reliable, repeatable build, build automation is critical to successful DevOps processes.
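As a minimal sketch of those steps, the script below fetches source code, runs the automated tests, and packages the result. The repository URL, directory names, and artifact name are illustrative assumptions, and a real pipeline would normally delegate these steps to a CI server such as Jenkins.

# Sketch: a simple build-automation script -- fetch source, test, package.
# Repository URL and artifact names are hypothetical.
import shutil
import subprocess
import sys

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop the build on the first failing step

def build():
    run(["git", "clone", "https://example.com/acme/shop.git", "build-workspace"])  # retrieve source
    run(["python", "-m", "pytest", "build-workspace/tests"])                       # automated tests
    shutil.make_archive("shop-artifact", "zip", "build-workspace")                 # package the build
    # A real pipeline would now publish shop-artifact.zip to a shared repository.

if __name__ == "__main__":
    try:
        build()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"Build failed: {exc}")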
The Build Promotion capability lets you automatically promote successful builds across the pipeline in Automic Continuous Delivery Director. You can use this capability to promote successful builds across release phases. Application build numbers are included in, and can be automatically tracked across, the lifecycle of a release.

Automic Continuous Delivery Director recognizes the build field of a Release Automation Run Deployment task as the build number. To promote a successful build to the next phase, use the built-in last_successful_build token. This token is used for all applications and across all phases in a release where a build field is required; it correlates the correct build number and associated applications.

To receive new build numbers inside the relevant releases, integrate releases with your build system. Use the last_successful_build token in the Build field of the first phase. If the build system is configured correctly, the new build number is promoted.

DevOps deployment tools


1 – Jenkins
One of the leading CI/CD tools for DevOps, Jenkins is an open-source continuous integration
server that automates the build cycle of a software development project.
2 – ElectricFlow
ElectricFlow is a tool used to automate releases, and offers a free community version that can be
run on VirtualBox.
3 – Microsoft Visual Studio
Visual Studio allows users to define release definitions, track releases, run automation, and more.
4 – Octopus Deploy
Octopus Deploy automates the deployment of .NET applications and can be installed on a server,
or an Azure host instance.
5 – IBM UrbanCode
Part of the IBM suite of products since 2013, UrbanCode automates deployment to both cloud
and on-premise environments.
6 – AWS CodeDeploy
CodeDeploy is Amazon’s automated deployment tool, with an impressive platform and language
agnosticism.
7 – DeployBot
DeployBot allows for automatic or manual deployments across multiple environments, when you
connect it to your Git repository. You can even deploy via Slack, or via many other integration
options.
8 – Shippable
Shippable has a CI platform that runs builds on minions – Docker-based containers.
9 – TeamCity
TeamCity is a CI server with smart config features, and official Docker images for agents and
servers.
10 – Bamboo
Atlassian – makers of Confluence and Jira – have a CI offering called Bamboo Server. Bamboo
touts “integrations that matter” and has a Small Teams package whose proceeds are donated to
charity.
11 – Codar
HP’s continuous deployment solution is called Codar, which uses Jenkins to trigger
deployments.
12 – CircleCI
CircleCI is a CI solution, with emphasis on reliability, flexibility, and speed. Source to build to
deploy solutions, with support for a range of applications and languages.
13 – Gradle
Some of the biggest names in the tech industry use build tool Gradle – Netflix, Adobe, and
LinkedIn, for example. This is a general purpose build tool similar to Apache’s Ant.
14 – Automic
Automic applies DevOps principles to backend apps, so they can benefit from the practices many
frontend web-based apps do.
15 – Distelli
Distelli is a specialist in deploying Kubernetes clusters, and can be used with any physical or
cloud-based server.
16 – XL Deploy
XL Deploy is a release automation tool supporting a variety of environments and plugins, using
agentless architecture.
17 – Codeship
A hosted CI solution, Codeship supports customization with native Docker support.
18 – GoCD
GoCD is an open-source CD server focusing on visualizing workflows.
19 – Capistrano
An open-source deployment tool, Capistrano is scriptable and is a self-described “sane,
expressive API”.
20 – Travis CI
A free tool for open-source projects, you can sync Travis CI to your GitHub account and use it to
automate testing and deployment.
21 – BuildBot
Self-described as a “framework with batteries included”, BuildBot is an open-source Python-
based CI framework that is highly flexible.
5.19 DEVOPS IN BUSINESS ENTERPRISES
DevOps is about much more than automation and pipelines. It is about developing a culture
where organizational agility leads to Continuous Value Creation.

As a business manager you spend the working day focusing on everything from sales figures and
customer satisfaction to employee experience and product range. You’re probably also keeping a
close eye on the online sales conversion rate, delivery process, and accompanying marketing
costs. But, are you taking enough interest in the value, quality and development of digital
services?

There are many different types of quality: speed, reliability, flawlessness and scalability for
different devices. Potential customers go through their decision making process as they get to
know the service providers. For this reason the development of digital services becomes
particularly important when the product itself is a digital service.

DevOps is intended to automate IT service functions related to software development, testing, and maintenance.

Including test automation in the software delivery process is crucial.

With DevOps practices the process linking the development environment to the production
environment can be automated.

If everything is automated doesn't that reduce costs? Yes, in the long run. Developing
automation takes time and money, but as manual work is automated the re-loops of various
activities become cheaper.

The customer starts to benefit from automation as the project becomes more transparent,
changes are deployed to the production, and it is known which features are being used by
the end users. The development team can focus on their work, challenges can be fixed quickly,
and releasing a fixed version does not require additional work.

Improving predictability and transparency


Wouldn't it be great to know in advance when a desired feature is creating value for your
customers and, consequently, for your business? DevOps improves predictability from both a
technical and non-technical point of view. I'll now focus on the decision-making part of the
process. One commonly used method is to divide the things to be developed into four levels. For
example:

Theme: Business-oriented goal for the product. The business owner owns and prioritizes the
theme with the product owner

Epic: Large chunk of work that enables the feasibility (S-XL) and viability (business value) of a
clearly defined value generating functionality to be evaluated. The product owner owns and
prioritizes the epic with the business owner

Story: Something a user can accomplish with clearly defined “Definition of done” (DoD) and
effort (story points). The product owner owns and prioritizes the story with the development
team

Task: A single milestone in getting the story done with time as an estimate. The task is owned
and prioritized by the development team according to agreed methodology within the team.
These categorizations relate to the decision-making process because they give the business
owner oversight of what the development team is doing. This means they can prioritize the
development theme and the epics beneath it while retaining the freedom to quickly change
direction if required. If development starts heading in the wrong direction wasteful tasks and
stories can be easily abandoned and work reprioritized.

DevOps also allows a certain degree of transparency in costs. The best work focuses on
maximizing customer value and automating unnecessary manual tasks, which relate to improving
business results. In this case the business owner and the product owner spend money wisely and
the results matter. Measuring customer value is difficult, but it is easy, for example, to look at
inbound money and conversions in an e-commerce store and mirror them in the amount of
money spent on development.
Changing culture
The change will not be managed. The change will be made. One of the barriers to successful
transformation comes when top management entrusts a single person with the task of changing
the way the organization does things. The management itself then follows the change through
steering group meetings, management group meetings, and powerpoint presentations. This
approach will probably not lead to success.

Instead, management should use sprints in their own work. This would give them first-hand
experience of working in the newly adapted ways. It would clarify in their own minds what is
being asked of the organization as a whole and provide them with direct experience of the
benefits of a systematic, goal-oriented, and structured way of working. Of course, it's not realistic for all senior management work to be planned in one- or two-week increments, but some of it can be, and that work can be treated as sprints.

One form of cultural change is early failure. It's the best way to save a lot of money on
development. When concepts are validated often and at an early stage only valued features are
brought up to the development backlog. However, people often rely too much on their own ideas
and insights and are unwilling to show unfinished solutions to end users, even though it would be
beneficial and effective. At this early stage, changing direction costs the least, and validation ensures that the service being developed is truly relevant to the customer's needs.

DevOps and Agile development


Often, DevOps is thought of as a variety of tools and practices, such as test automation, release
pipe optimization, and making the deployment status visible to developers through monitoring.
That’s all true, but these are not one-time development activities. Instead they should be
constantly under development and optimized according to the changing needs.

DevOps enables Agile development. Full time, continuous value creation can only be achieved
when time and energy is not wasted on manual work steps or on problems with tooling. And to
fully realize that idea the management needs to lead from the front, and practice what they
preach.
