This document provides an overview of parallel and distributed computing concepts that form the foundation for cloud computing systems. It discusses the evolution of computing eras from sequential to parallel processing. Parallel computing uses multiple tightly coupled processors to solve problems simultaneously, while distributed computing uses loosely coupled independent computers communicating over a network. The document also outlines different hardware architectures for parallel processing including SISD, SIMD, MISD and MIMD systems. It provides examples of distributed systems and applications.


Unit 1

Chapter 2
Unit Structure
2.0 Objective
2.1 Eras of computing
2.2 Parallel vs. distributed computing
2.3 Elements of parallel computing
2.3.1 What is parallel processing?
2.3.2 Hardware architectures for parallel processing
2.3.2.1 Single-instruction, single-data (SISD) systems
2.3.2.2 Single-instruction, multiple-data (SIMD) systems
2.3.2.3 Multiple-instruction, single-data (MISD) systems
2.3.2.4 Multiple-instruction, multiple-data (MIMD) systems
2.3.3 Approaches to parallel programming
2.3.4 Levels of parallelism
2.3.5 Laws of caution
2.4 Elements of distributed computing
2.4.1 General concepts and definitions
2.4.2 Components of a distributed system
2.4.3 Architectural styles for distributed computing
2.4.3.1 Component and connectors
2.4.3.2 Software architectural styles
2.4.3.3 System architectural styles
2.4.4 Models for interprocess communication
2.4.4.1 Message-based communication
2.4.4.2 Models for message-based communication
2.5 Technologies for distributed computing
2.5.1 Remote procedure call
2.5.2 Distributed object frameworks
2.5.2.1 Examples of distributed object frameworks
2.5.3 Service-oriented computing
2.5.3.1 What is a service?
2.5.3.2 Service-oriented architecture (SOA)
2.5.3.3 Web services
2.5.3.4 Service orientation and cloud computing
2.6 Summary
2.7 Review questions
2.8 Reference for further reading

Cloud Computing: Unedited Version pg. 1


2.0 Objective
A cloud system, or cloud computing technology, refers to the computing components (hardware, software, and infrastructure) that enable the delivery of cloud computing services. Through the public cloud, consumers can acquire new capabilities without investing in new hardware or software; instead, they pay their cloud provider a subscription fee or pay only for the resources they use. These IT assets are owned and managed by the service providers and accessed over the Internet.
This chapter presents the basic principles and models of parallel and distributed computing, which provide the foundation for building cloud computing systems and frameworks.

2.1 Eras of computing

The two most prominent computing eras are the sequential and the parallel era. In recent decades, high-performance parallel machines have become important competitors of vector machines in the quest for high-performance computing. Figure 2.1 provides a hundred-year overview of the development of the computing eras. During these periods, four key computing elements were developed: architectures, compilers, applications, and problem-solving environments. Each computing era begins with the development of hardware, followed by software systems (especially in the area of compilers and operating systems) and applications, and reaches its saturation level with the growth of problem-solving environments. Each computing element is subject to three stages: R&D, commercialization, and commoditization.

FIGURE 2.1 Eras of computing

2.2 Parallel vs. distributed computing


The terms parallel computing and distributed computing, although they denote somewhat different things, are often used interchangeably. The term parallel implies a tightly coupled system, whereas distributed refers to a wider class of systems, including those that are tightly coupled.
Parallel computing is the concurrent use of several compute resources to solve a computational problem:
• The problem is divided into discrete parts that can be solved simultaneously
• Each part is further broken down into a series of instructions



• Instructions from each part execute simultaneously on different processors
• An overall control/coordination mechanism is employed

For example:

FIGURE 2.2 Sequential and Parallel Processing


To be amenable to parallel computing, the problem should:
• Be divisible into discrete parts of work that can be solved at the same time;
• Allow multiple program instructions to be executed at any given moment;
• Be solvable with multiple compute resources in less time than with a single compute resource.
Typically, the compute resources are:
• A single computer with several processors/cores
• An arbitrary number of such computers connected through a network

Initially, only certain architectures were considered parallel systems: those featuring multiple processors sharing the same physical memory within a single computer. Over time, these restrictions have been relaxed, and parallel systems now include all architectures based on the concept of shared memory, whether that memory is physically present or created through library support, specific hardware, and a highly efficient network infrastructure. For example, a cluster of nodes linked by InfiniBand and configured with a distributed shared memory system can be considered a parallel system.



Distributed computing is performed by independent computers communicating only over a network (Figure 2.3). Distributed computing systems are typically treated differently from parallel computing systems or shared memory systems, in which multiple processors communicate through a common pool of memory. Distributed memory systems use multiple computers to solve a common problem, with the computation distributed among the connected computers (nodes) and communication between nodes accomplished through message passing.

FIGURE 2.3. A distributed computing system.

In its narrow sense, distributed computing is limited to programs whose components are shared among computers within a geographically limited area. Broader definitions include shared tasks as well as program components. In the broadest sense, distributed computing means that something is shared among many systems, which may also be located in different places.
Examples of distributed systems / applications of distributed computing:

• Intranets, Internet, WWW, email.


• Telecommunication networks: Telephone networks and Cellular networks.
• Network of branch office computers: information systems to handle automatic processing of orders,
• Real-time process control: Aircraft control systems,
• Electronic banking,
• Airline reservation systems,
• Sensor networks,
• Mobile and Pervasive Computing systems.
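The defining trait shared by all of the examples above is communication by message passing rather than shared memory. A minimal sketch of this interaction pattern is shown below; two threads and in-memory queues stand in for what, in a real distributed system, would be separate machines exchanging messages over a network (the message format is an illustrative assumption, not a standard):

```python
# Two "nodes" communicating only by messages. Threads and queues simulate
# the network links between independent machines.
import threading
import queue

def server_node(inbox, outbox):
    """Receive one request message, compute, and reply with a message."""
    request = inbox.get()            # e.g. {"op": "sum", "data": [...]}
    if request["op"] == "sum":
        outbox.put({"result": sum(request["data"])})

to_server = queue.Queue()
to_client = queue.Queue()
t = threading.Thread(target=server_node, args=(to_server, to_client))
t.start()
to_server.put({"op": "sum", "data": [1, 2, 3, 4]})   # client sends a request
reply = to_client.get()                              # client awaits the reply
t.join()
print(reply["result"])                               # prints 10
```

Note that the client never reads the server's memory directly; all coordination happens through the two message channels, which is exactly the constraint in Coulouris's definition quoted later in this chapter.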

2.3 Elements of parallel computing

The driving force behind computing has been the exponential increase in computing power. In 1965, Intel co-founder Gordon Moore observed that the number of transistors on a chip doubles roughly every year, while the cost falls by about half; the doubling period has since lengthened to about 18 months. Silicon is reaching its performance limits just as a growing number of applications require increased speed and reduced latency. To address this constraint, a feasible solution is to connect several processors that coordinate with one another to solve "Grand Challenge" problems. The initial steps in this direction led to the growth of parallel computing, which encompasses techniques, architectures, and systems for performing multiple activities in parallel. This section provides its proper characterization, which involves multiple processors operating in coordination within a single computer.



2.3.1 What is parallel processing?

Parallel processing is a way of dividing the different parts of an overall task among two or more processors (CPUs). Breaking up the various parts of a task among several processors can reduce the time a program needs to run. Either machines with more than one CPU or the multi-core processors commonly found in today's computers can perform parallel processing. Parallel computing relies on the concept known as divide and conquer, an elegant way of solving a problem: the problem is split into smaller problems of the same type that can be solved individually, and the partial results are combined into an overall solution. The approach repeatedly breaks the problem into smaller and smaller subproblems until each can be solved easily. Programming a multiprocessor system using the divide-and-conquer technique is called parallel programming.
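The divide-and-conquer structure described above can be sketched on a simple problem, summing a list. The version below runs sequentially, but the split/recurse/combine shape is exactly what a parallel machine exploits by assigning independent subproblems to different processors:

```python
# Divide and conquer illustrated on summing a list of numbers.
def dc_sum(values):
    if len(values) <= 1:                 # base case: trivially solvable
        return values[0] if values else 0
    mid = len(values) // 2
    left = dc_sum(values[:mid])          # subproblem 1 (could run on CPU 1)
    right = dc_sum(values[mid:])         # subproblem 2 (could run on CPU 2)
    return left + right                  # combine the partial results

print(dc_sum(list(range(10))))           # prints 45
```

Because the two recursive calls share no data, a parallel implementation could evaluate them on separate processors and only synchronize at the combine step.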

Computationally intensive problems and applications require more processing power than ever before. Although processor speeds keep increasing, traditional sequential computers cannot deliver the power needed to solve these problems. Many of them can potentially be addressed by parallel computers, in which many processors work on a problem simultaneously.

Several factors influence the development of parallel processing. Prominent among them are the following:

1. In many fields of science and engineering, parallel computing has been considered the "high end" of computing, used to model problems that are difficult to solve, in fields such as:
• Atmosphere, Earth, Environment
• Physics - applied, nuclear, particle, condensed matter, high pressure, fusion,
photonics
• Bioscience, Biotechnology, Genetics
• Chemistry, Molecular Sciences
• Geology, Seismology
• Mechanical Engineering - from prosthetics to spacecraft
• Electrical Engineering, Circuit Design, Microelectronics
• Computer Science, Mathematics
• Defense, Weapons

2. Sequential architectures are physically constrained by the speed of light and the laws of thermodynamics. The speed at which sequential CPUs can operate is reaching a saturation point (no further vertical growth). Therefore, an alternative way to achieve high computational speed is to connect several CPUs (the possibility of horizontal growth).
3. Hardware improvements such as pipelining and superscalar execution are not indefinitely scalable and require sophisticated compiler technology, which is difficult to develop.
4. Another attempt to improve performance was vector processing, which performs more than one operation at a time. Here, devices gained the capability to add (or subtract, multiply, or otherwise manipulate) two arrays of numbers. This was useful in certain engineering applications where data naturally appear as vectors or matrices; vector processing was less valuable in applications with less well-formed data.
5. Extensive R&D work on development tools and environments has made parallel processing technology mature and commercially exploitable.
6. Significant advances in networking technology are paving the way for heterogeneous computing.

2.3.2 Hardware architectures for parallel processing


Parallel computers emphasize the parallel processing of operations in some way. The basic concepts of parallel processing and parallel computing were introduced in the previous unit. Parallel computers can be distinguished by the data and instruction streams of their organization. They can also be classified by computer structure, for example multiple processors each having separate memory versus a single global shared memory. Levels of parallelism can also be defined based on the size of the instructions processed in a program, called grain size. Thus, parallel computers can be classified according to several different criteria.

The following classifications of parallel computers have been identified:


1) Classification based on the instruction and data streams
2) Classification based on the structure of computers
3) Classification based on how the memory is accessed
4) Classification based on grain size

Flynn's Classical Taxonomy

• Parallel computers are classified in different ways.
• Flynn's taxonomy, in use since 1966, has been one of the most widely adopted classifications.
• Flynn's taxonomy classifies multiprocessor computer architectures along the two independent dimensions of instruction stream and data stream. Each dimension can have only one of two states: single or multiple.
• The following matrix describes the four possible Flynn classifications:

FIGURE 2.4 Flynn's taxonomy

2.3.2.1 Single-instruction, single-data (SISD) systems


A SISD computing system is a uniprocessor machine that executes a single instruction operating on a single data stream. A SISD machine processes instructions sequentially, and computers adopting this model are commonly referred to as sequential computers. Most conventional computers use the SISD architecture, with all instructions and data stored in primary memory. The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally. The IBM PC and traditional workstations are prevalent representative SISD systems.



FIGURE 2.5: Single-instruction, single-data (SISD) architecture

2.3.2.2 Single-instruction, multiple-data (SIMD) systems

A SIMD computing system is a multiprocessor machine that executes the same instruction on all its CPUs while operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves many vector and matrix operations. The data can be divided into multiple sets (N sets for N PE systems) so that each processing element (PE) applies the same instruction to its own data set. This model is ideally suited to problems with a high degree of regularity, such as graphics and image processing. Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and execution units. A dominant representative SIMD system is Cray's vector processing machine.
Examples:
Processor arrays: Thinking Machines CM-2, MasPar MP-1 & MP-2, ILLIAC IV
Vector pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi
S820, ETA10
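The SIMD idea — one instruction stream applied uniformly to many data elements — can be modeled in miniature. The toy helper below only illustrates the pattern; real SIMD work is done by vector units, GPU kernels, or array libraries, not by a Python loop:

```python
# SIMD in miniature: one instruction, N data elements (one per PE).
def simd_apply(instruction, data):
    # Every "processing element" executes the identical instruction
    # on its own element of the data vector.
    return [instruction(x) for x in data]

data = [1.0, 2.0, 3.0, 4.0]
result = simd_apply(lambda x: 2.0 * x + 1.0, data)
print(result)                       # [3.0, 5.0, 7.0, 9.0]
```

The key constraint is that `instruction` is the same for every element; only the data differs, which is what distinguishes SIMD from the MISD and MIMD models below.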

FIGURE 2.6 : Single-instruction, multiple-data (SIMD) architecture.

2.3.2.3 Multiple-instruction, single-data (MISD) systems

An MISD computing system is a multiprocessor machine that executes different instructions on different PEs, all operating on the same data set.
Multiple instruction: every processing unit operates on the data independently via separate instruction streams.
Single data: a single data stream is fed into the multiple processing units.

FIGURE 2.7 Multiple-instruction, single-data (MISD) architecture.



Example: Z = sin(x) + cos(x) + tan(x)
The system performs different operations on the same data set. Machines built using the MISD model are not useful for most applications; a few have been designed, but none are commercially available.
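The example above can be written out in MISD style: three "processing units" (plain functions here, for illustration only) each apply a different instruction to the same single data item, and the partial results are then combined.

```python
# MISD in miniature: different instructions, one shared data stream.
import math

units = [math.sin, math.cos, math.tan]   # a different instruction per unit
x = 0.5                                  # the single shared data item
partials = [f(x) for f in units]         # every unit processes the same x
Z = sum(partials)                        # combine: Z = sin(x)+cos(x)+tan(x)
print(Z)
```

Compare this with the SIMD sketch earlier: there the instruction was fixed and the data varied; here the data is fixed and the instructions vary.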

2.3.2.4 Multiple-instruction, multiple-data (MIMD) systems

A MIMD computing system is a multiprocessor machine that can execute multiple instruction streams on multiple data sets. Each PE in the MIMD model has a separate instruction stream and data stream, so machines built using this model can run any kind of application. Unlike in SIMD and MISD machines, the PEs in MIMD computers operate synchronously or asynchronously, deterministically or non-deterministically. MIMD is currently the most common type of parallel computer; most modern supercomputers fall into this category. Examples: most current supercomputers, networked parallel computer clusters and "grids", multiprocessor SMP computers, and multi-core PCs.
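The MIMD pattern — each processing element running its own instruction stream on its own data, asynchronously — can be sketched with threads. This is only an illustration of the model; on real MIMD hardware, the "PEs" are separate cores or machines:

```python
# MIMD in miniature: each "PE" (a thread) runs a different program
# on different data, and they complete in no guaranteed order.
import threading

results = {}

def pe(name, program, data):
    results[name] = program(data)    # independent instructions + data

threads = [
    threading.Thread(target=pe, args=("sum", sum, [1, 2, 3])),
    threading.Thread(target=pe, args=("max", max, [4, 9, 2])),
    threading.Thread(target=pe, args=("len", len, "hello")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # synchronize before reading results
print(results)
```

Note that nothing forces the three PEs to run in lockstep, which is precisely the asynchrony the text attributes to MIMD machines.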

FIGURE 2.8 Multiple-instructions, multiple-data (MIMD) architecture.

MIMD machines are divided broadly into shared-memory MIMD and distributed-memory
MIMD on the manner in which PEs are connected to the main memory.

Shared memory MIMD machines

In the shared-memory MIMD model (tightly coupled multiprocessor systems), all PEs are connected to a single global memory and all have access to it. Communication between PEs in this model takes place through the shared memory: modifications to the data stored in global memory by one PE are visible to all other PEs. Silicon Graphics machines and Sun/IBM SMP (Symmetric Multi-Processing) machines are dominant representatives of shared-memory systems.

Distributed memory MIMD machines

In distributed-memory MIMD machines (loosely coupled multiprocessor systems), all PEs have their own local memory. Communication among PEs in this model is carried out through the interconnection network (the interprocess communication channel, or IPC). The network connecting the PEs can be configured as a tree, mesh, or other topology as needed.

The shared-memory MIMD architecture is easier to design but less tolerant of failures and harder to extend than the distributed-memory MIMD model. Failures in a shared-memory MIMD system affect the entire system, whereas in the distributed model each faulty PE can be easily isolated. Moreover, shared-memory MIMD architectures are less likely to scale, because adding more PEs leads to memory contention; this is not the case with distributed memory, where each PE has its own memory. As a result of these practical considerations and user requirements, the distributed-memory MIMD architecture has prevailed over the others.



FIGURE 2.9 Shared (left) and distributed (right) memory MIMD architectures.

2.3.3 Approaches to parallel programming

In general, a sequential program always runs the same sequence of instructions on the same input data and always generates the same results, whereas a parallel program must be expressed by splitting the work into several parts that run on different processors. The resulting decomposed program is a parallel program.
Various methods are available for parallel programming. The most significant of these
are:
• Data parallelism
• Process parallelism
• Farmer-and-worker model

All three of these models can be used for task-level parallelism. Data parallelism rests on divide and conquer, a design technique based on multi-branched recursion: a divide-and-conquer algorithm works by repeatedly breaking a data set into two or more similar or related subsets, and the same instructions are used to process each data set on different PEs. This is a very useful approach on machines based on the SIMD model. With process parallelism, a single activity contains many distinct (but independent) operations that can be carried out on several processors. In the farmer-and-worker model, a main (master) computation generates many subproblems that are fired off to be executed by slave (worker) computations. The only communication between the master and the slaves consists of the master starting the slave computations and the slaves returning their results to the master.
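The farmer-and-worker model can be sketched as follows. A thread pool stands in for the worker machines, and the split into four subproblems is an arbitrary choice for illustration:

```python
# Farmer-and-worker sketch: the master splits the job, hands subproblems
# to workers, and combines the returned partial results.
from concurrent.futures import ThreadPoolExecutor

def worker(subproblem):
    # Each worker solves one subproblem independently and returns the result.
    return sum(subproblem)

job = list(range(100))
subproblems = [job[i:i + 25] for i in range(0, 100, 25)]  # master splits
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(worker, subproblems))        # workers compute
total = sum(partials)                                     # master combines
print(total)                                              # prints 4950
```

As the text notes, the only communication is the dispatch of subproblems and the return of results; the workers never talk to each other.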

2.3.4 Levels of parallelism

Bit-level parallelism: This form of parallelism is based on increasing the processor word size. Increased bit-level parallelism means that arithmetic operations on large numbers are executed more quickly. For example, an 8-bit processor takes two cycles to perform a 16-bit addition, whereas a 16-bit processor completes it in a single cycle. With the advent of 64-bit processors, this form of parallelism appears to have reached its limit.

Instruction-level parallelism (ILP): This form of parallelism aims to exploit the potential overlap between instructions in a computer program. Most forms of ILP are implemented and exploited in the processor hardware itself:

Instruction pipelining: Different stages of multiple independent instructions are executed in the same cycle, so that otherwise-idle resources are used.

Task parallelism: A task is broken down into subtasks, each of which is assigned for execution; the subtasks are carried out concurrently by the processors.

Out-of-order execution: Instructions may be executed whenever an execution unit is available, even though earlier instructions are still executing, as long as data dependencies are not violated.



2.3.5 Laws of caution

Now that we have introduced the basic elements of parallel computing, in terms of both architectures and models, we can consider some of the knowledge gained from designing and implementing such systems. A few guiding principles help us understand how much parallelism can actually help an application or software system. Parallelism is used to perform many activities together so that a system can maximize its performance or speed. It should be kept in mind, however, that the relationships governing this growth are not linear. For instance, a user might expect an n-processor system to deliver an n-fold speed-up. This is the ideal case, but it seldom occurs, because of communication overhead.
Two important guidelines should be considered here:

• Computation speed is proportional to the square root of the system's cost; it never increases linearly. The faster a system becomes, the more expensive each increment of speed is (Figure 2.10).
• The speed of a parallel computer increases with the logarithm of the number of processors (i.e., y = k * log(N)). Figure 2.11 illustrates this concept.

FIGURE 2.10 Cost versus speed

FIGURE 2.11 Number of processors versus speed.
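These two guidelines can be made concrete with a short calculation. The proportionality constant k = 10 below is an arbitrary assumption for illustration; only the shape of the curves matters:

```python
# Illustrating the two "laws of caution": speed grows with the square root
# of cost and with the logarithm of the number of processors, so both
# curves flatten quickly.
import math

k = 10.0                                         # assumed constant
for cost in (1, 4, 16, 64):
    print(cost, k * math.sqrt(cost))             # speed ~ k * sqrt(cost)

for n_proc in (2, 8, 64):
    # speed ~ k * log(N): 64 processors give only 6x the speed of 2
    print(n_proc, k * math.log2(n_proc))
```

Sixteen-fold more cost buys only a four-fold speed-up, and going from 2 to 64 processors multiplies the logarithmic speed by 6, not 32 — the non-linear behavior the text warns about.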

2.4 Elements of distributed computing

In this section, we extend these concepts and explore how multiple activities can be performed by leveraging systems composed of many heterogeneous computers. We discuss what is generally referred to as distributed computing and, more precisely, present the most relevant guidelines and principles, from the software designer's perspective, for implementing distributed computing systems.



2.4.1 General concepts and definitions

Distributed computing studies the models, architectures, and algorithms used in building and managing distributed systems. As a general definition of the term distributed system, we use the one proposed by Tanenbaum:

A distributed system is a collection of independent computers that appears to its users as a single coherent system.

This definition clearly expresses the ideal of a distributed system: completely hiding from the user the "implementation details" of building a powerful system out of many simpler ones. In this section, we focus on the architectural models used to harness independent computers and present them as a single coherent system. A fundamental element of all distributed computing architectures is the concept of communication between computers. A distributed system is realized by a collection of protocols coordinating the actions of several processes over a communication network, so that all components cooperate to perform one task or a set of related tasks. The collaborating computers can access both remote and local resources in the distributed system over the communication network. The existence of multiple individual computers in the distributed network is transparent to the user, who does not know that the work is being performed on different machines in remote locations. Coulouris offers another definition of a distributed system:

A distributed system is one in which components located at networked computers


communicate and coordinate their actions only by passing messages.

As this definition specifies, the components of a distributed system communicate through some form of message passing. This term encompasses several models of communication.

2.4.2 Components of a distributed system

Nearly all large computing systems are now distributed. A distributed system is "a collection of independent computers that appears to its users as a single coherent system": information processing is distributed over several computers rather than confined to a single machine. Figure 2.12 presents an overview of the various layers involved in providing the services of a distributed system.

FIGURE 2.12 A layered view of a distributed system


(Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)



At the lowest level, the physical infrastructure consists of computer and network hardware, which is directly managed by the operating system. The operating system provides the basic interprocess communication (IPC) services, process scheduling and management, and resource management for the file system and local devices. Taken together, these two layers become the platform on top of which specialized software turns a set of networked computers into a distributed system.

The use of well-known standards at the hardware and network level, and in operating system services, allows heterogeneous components and their organization to be integrated easily into a coherent, uniform system. For example, network connectivity among different devices is governed by standards, which allows them to interact seamlessly. At the operating system level, IPC services are implemented on top of standardized communication protocols such as TCP/IP, the User Datagram Protocol (UDP), and others.

The middleware layer leverages these services to build a uniform environment for developing and deploying distributed applications. Using the services offered by the operating system, the middleware develops its own protocols, data formats, and programming languages or frameworks for building distributed applications. This layer provides support for the programming paradigms used in distributed systems. Altogether, it constitutes an interface that is fully independent of the underlying operating system and masks all the heterogeneity of the lower layers.

The applications and services designed and developed on top of the middleware form the top of the distributed system stack. They can serve many purposes, and they often expose their features through graphical user interfaces (GUIs) accessible via a Web browser. For example, in a cloud computing system, Web technologies are strongly preferred not only as interfaces between distributed applications and their users but also as platform services for building distributed systems. A good example is an IaaS provider such as Amazon Web Services (AWS), which offers facilities for creating virtual machines, organizing them into a cluster, and deploying applications and systems on top of them. Figure 2.13 shows how the general reference architecture of a distributed system is contextualized in a cloud computing system.

FIGURE 2.13 A cloud computing distributed system.


(Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)



2.4.3 Architectural styles for distributed computing

Distributed systems are complex pieces of software whose components are, by definition, dispersed across many machines. To master this complexity, these systems must be properly organized. There are different ways to view the organization of a distributed system, but the most obvious is to distinguish between the logical organization of its software components and their actual physical realization.
The organization of a distributed system is mostly about the software components that constitute the system. These software architectures tell us how the various components of the program are organized and how they interact. In this chapter we first concentrate on some commonly applied approaches to organizing computer systems.
To build an actual distributed system, the software components must be instantiated and placed on real machines, and there are a number of choices to be made in doing so. The final instantiation of a software architecture is sometimes called the system architecture. We examine traditional centralized architectures, in which a single server implements most of the software components (and thus the functionality) while remote clients access that server using simple communication means. We also consider decentralized architectures, in which the machines play more or less equal roles, as well as hybrid organizations.

Architectural Styles

We begin with the logical organization of distributed systems into software components, also referred to as the software architecture. Research on software architectures has matured considerably, and it is now commonly accepted that designing or adopting an architecture is crucial to the successful development of large systems.

For our discussion, the notion of an architectural style is important. Such a style is formulated in terms of components, the way components are connected to each other, the data exchanged between components, and the way these elements are jointly configured into a system. A component is a modular unit with well-defined interfaces that is replaceable within its environment. As discussed below, the key point about a component of a distributed system is that it can be replaced, provided its interfaces are respected. A somewhat more difficult concept to grasp is that of a connector, generally described as a mechanism that mediates communication, coordination, or cooperation among components. For example, a connector can be formed by (remote) procedure calls, message passing, or streaming data.

The architectural styles are organized into two main classes:


• Software architectural styles
• System architectural styles

The first class covers the logical organization of the software; the second class includes all styles that describe the physical organization of software systems in terms of their major components.

2.4.3.1 Component and connectors

Component-and-connector views describe models consisting of elements that have some presence at run time, such as processes, objects, clients, servers, and data stores. In addition, component-and-connector models include, as components, the pathways of interaction, such as communication links and protocols, information flows, and access to shared storage. Such interactions are often carried out through complex infrastructure, such as middleware systems, communication channels, and process schedulers. A component is a unit of behavior: its description defines what the component can do and needs to do. A connector is a mechanism that links components, typically through relationships such as data flow or control flow.

2.4.3.2 Software architectural styles

Software architectural styles and patterns define how to organize the components of a system in order to build a complete system that satisfies the customer's requirements. A number of architectural styles and patterns are in use in the software industry, so it is important to understand which design best suits a particular project.

These models form the basis on which distributed systems are built logically and discussed
in the following sections.

Data centered architectures

At the center of this architecture is a data store, which is accessed by the other components, which update, add, delete, or otherwise modify the data present in the store. Figure 2.14 shows a typical data-centered style, in which client software accesses a central repository. A variation of this approach turns the repository into a blackboard: clients are notified whenever data of interest to them changes. This data-centered architecture facilitates integrability: existing components can be changed and new client components can be added to the architecture without the permission or concern of the other clients. Clients can also use the blackboard mechanism to exchange data.

FIGURE : 2.14 Typical Data-Centric Style

A repository architecture consists of a central data structure (often a database) and an independent collection of components that operate on that central data structure. Blackboard architectures, in which a blackboard serves as the communication center and repository for multiple knowledge sources, are one example. Repositories are important for data integration and are used in a variety of applications, including software development environments and CAD systems.
The principal components of the blackboard style are shown in Figure 2.15. The problem is attacked by several knowledge sources: each source works on the problem and writes its solution, partial solution, or suggestion on the blackboard. At the same time, any other knowledge source either modifies or extends the solution left by a previous source, or writes its own attempt at the problem. A control shell organizes and monitors the activities of the knowledge sources to prevent them from diverging from the current course of the work; it manages, monitors, and controls all activities conducted during the problem-solving session.

Scalability is one of the benefits of this architectural design: knowledge sources can easily be added to or removed from the program as needed. Knowledge sources are independent and can therefore work concurrently, subject only to the constraints imposed by the control element. One drawback of this architecture is that it is not known in advance when to stop the solution-finding process, because further refinement is always feasible; another is that synchronizing multiple knowledge sources is difficult to achieve.
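The blackboard loop described above can be sketched in a few lines. This is a minimal illustration, not a real blackboard system: the knowledge sources, the toy "problem," and all names are invented for the example, and the control shell is reduced to a simple loop that fires whichever source can contribute.

```python
# Minimal blackboard sketch: knowledge sources inspect the shared blackboard
# and contribute partial solutions; a control shell decides which source runs.

class Blackboard:
    def __init__(self):
        # Shared data structure all knowledge sources read from and write to.
        self.data = {"input": "hello", "partial": None, "solution": None}

class UppercaseSource:
    """Knowledge source 1: produces a partial solution."""
    def can_contribute(self, bb):
        return bb.data["partial"] is None
    def contribute(self, bb):
        bb.data["partial"] = bb.data["input"].upper()

class ReverseSource:
    """Knowledge source 2: extends the partial solution left by another source."""
    def can_contribute(self, bb):
        return bb.data["partial"] is not None and bb.data["solution"] is None
    def contribute(self, bb):
        bb.data["solution"] = bb.data["partial"][::-1]

def control_shell(bb, sources):
    # Keep firing sources until no source can refine the blackboard further;
    # this is the "when to stop?" decision the text says is hard in general.
    progressed = True
    while progressed:
        progressed = False
        for src in sources:
            if src.can_contribute(bb):
                src.contribute(bb)
                progressed = True

bb = Blackboard()
control_shell(bb, [UppercaseSource(), ReverseSource()])
print(bb.data["solution"])  # OLLEH
```

Note how neither source calls the other: they cooperate only through the shared blackboard, which is exactly the decoupling the style provides.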



FIGURE 2.15 . Blackboard architecture

Data-flow architectures

Data-flow architecture converts input data into output data through a collection of computational or manipulative components. As a computational model it has no program counter, so the order of execution is not fixed in advance. In this respect, data flow contrasts with the von Neumann model, which consists of a single program counter, sequential execution, and a control flow that defines fetch, execute, and commit order.

This architecture has been applied successfully:

• Data-flow architecture reduces development time and makes it quick to move from design to implementation.
• It aims primarily at the qualities of reuse and modifiability.
• In a data-flow architecture, data can flow through an acyclic graph topology or through a linear structure.

The modules are implemented in two different types:

1. Batch Sequential
2. Pipe and Filter

Batch Sequential

• Batch sequential processing, common in the 1970s, treated compilation as a sequence of steps.

• In the batch sequential style, separate programs are run one after another, and the data is transferred from one program to the next as an aggregate.
• This is a typical paradigm for data-processing systems.

FIGURE 2.16 Batch Sequential

• The diagram above shows the flow of the batch sequential architecture. It offers a simple division into subsystems, and each subsystem can be an independent program that works on input data and produces output data.
• The biggest downside of the batch sequential architecture is that it provides neither concurrency nor an interactive interface, resulting in high latency and low throughput.



Pipe and Filter

• A pipe is a connector that transfers data from one filter to the next.
• A pipe is a directional data stream, implemented with a data buffer that stores all data until the next filter has time to process it.
• It moves data from a data source toward a data sink.
• Pipes are stateless data streams.

FIGURE 2.17 Pipe and Filter


• The figure above shows a pipe-and-filter sequence. All filters are processes that run concurrently: they can run as separate threads or coroutines, or be located on different machines entirely.

• Each pipe is connected to a filter and has its own role in the filter's operation. The filters are robust, allowing pipes to be added and removed at runtime.

• A filter reads data from its input pipes, performs its function on that data, and places the result on all of its output pipes. If there is not enough data on the input pipes, the filter simply waits.
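The read-transform-emit behavior above maps naturally onto Python generators, where each filter pulls from its input pipe and yields to the next. This is only an in-process sketch of the style (the filters and data are invented for illustration); a real deployment could place each filter in its own thread or machine.

```python
# Pipe-and-filter sketch using generators: each filter reads from its input
# pipe (an iterator), transforms the items, and yields them downstream.

def source():
    # Data source feeding the first pipe.
    yield from [3, 1, 4, 1, 5]

def square(pipe):
    # Filter 1: transforms each item; knows nothing about its neighbors.
    for x in pipe:
        yield x * x

def keep_even(pipe):
    # Filter 2: drops items that do not satisfy the predicate.
    for x in pipe:
        if x % 2 == 0:
            yield x

# Composing filters is just nesting them; pipes are the iterators in between.
pipeline = keep_even(square(source()))
print(list(pipeline))  # [16]
```

Because each filter only sees an iterator, filters can be recombined freely, which is the low-coupling property listed under the style's advantages.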

Filter

• A filter is a component.
• Its interfaces allow a variety of inputs to flow in and a variety of outputs to flow out.
• It processes and refines the input data.
• Filters are independent entities.
• There are two ways to create a filter:

1. Active Filter
2. Passive Filter

• An active filter drives the data flow on the pipes.

• A passive filter has the data flow on its pipes driven from outside.

• A filter does not share state with other filters.

• A filter does not know the identity of its upstream and downstream filters.
• Filters run in separate threads, which may be hardware or software threads or coroutines.

Advantages of Pipes and Filters

• Pipe-and-filter provides high throughput and efficient bulk data processing.
• It makes system maintenance simpler and promotes reusability.
• It has low coupling and offers flexibility through both sequential and parallel execution.



Disadvantages of Pipe and Filter

• Dynamic interactions cannot be accomplished with pipe and filter.

• It requires a lowest common denominator for data transmission, such as ASCII format.
• A pipe-filter architecture can be difficult to configure dynamically.

Virtual machine architectures

Virtual machine architecture refers to a structured specification of a system interface, including the logical behavior of the resources handled through that interface; an implementation defines the architecture's actual realization. The levels of abstraction correspond to design layers, whether hardware or software, each associated with its own interface or architecture. In systems using this design, the general pattern is as follows: the software (or application) expresses its functions and state in an abstract format that is interpreted by a virtual machine engine.

The implementation is based on interpreting the program: the engine maintains an internal representation of the program's state. Rule-based systems, interpreters, and command-language processors are very common examples of this class. Rule-based systems (also known as production systems or expert systems) are among the simplest forms of artificial intelligence. A rule-based program represents knowledge as rules coded into the system. Rule-based systems derive almost entirely from expert systems, which mimic the reasoning of a human expert in solving a knowledge-intensive problem. Instead of representing knowledge in a static, declarative way as a collection of true facts, a rule-based system represents knowledge as a set of rules that say what to do or what not to do. The networking domain provides another fascinating use of rule-based systems: network intrusion detection systems (NIDS) are also based on a set of rules for classifying suspicious behaviors associated with possible intrusions into computer systems.
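A rule-based system of the kind described above can be sketched as a set of (condition, action) pairs fired against a working memory of facts. The rules, thresholds, and fact names below are invented for illustration and are not taken from any real NIDS.

```python
# Tiny rule-based system sketch in the spirit of the production systems above:
# knowledge is a list of rules, each a condition tested against the facts
# plus an action fired when the condition holds.

facts = {"packets_per_sec": 12000, "failed_logins": 3}  # working memory
alerts = []

rules = [
    # (condition over the facts, action to fire when it matches)
    (lambda f: f["packets_per_sec"] > 10000,
     lambda: alerts.append("possible flood")),
    (lambda f: f["failed_logins"] > 10,
     lambda: alerts.append("possible brute force")),
]

# The "inference engine": test every rule and fire the ones that match.
for condition, action in rules:
    if condition(facts):
        action()

print(alerts)  # ['possible flood']
```

Adding knowledge means appending a rule; the engine itself never changes, which is the separation between knowledge and control that the style emphasizes.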

Interpreter Style

The interpreter is an architectural style suited to applications for which the most adequate language or machine to execute the solution cannot be used directly. The style comprises a few parts: the program we are trying to run, the interpreter that interprets it, the program's current state, and the memory that holds the program, its actual state, and the interpreter's current state. Procedure calls between elements, together with direct memory access, are the connectors of the interpreter architectural style.

The interpreter has four parts:

• Interpretation engine: carries out the interpreter's work
• Data store area: contains the pseudo-code to be interpreted
• Internal state: records the current state of the interpretation engine
• External state: tracks the progress of the source code being interpreted

Input: the input to the interpreted program is forwarded to the program state, from which the interpreter reads it.
Output: the output of the program is placed in the program state, from which it is passed on by the interpreting system.
This model is quite useful in designing virtual machines for high-level programming languages (Java, C#) and scripting languages (Awk, Perl, and so on).
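The four parts above can be made concrete with a toy interpreter: the program is pseudo-code held in a data store, the engine is a loop over instructions, and the internal state is an accumulator. The three-instruction language is invented purely for illustration.

```python
# Sketch of the interpreter style: a pseudo-code program (the data store),
# an engine that interprets it, and the engine's internal state.

def interpret(program):
    state = {"acc": 0}            # internal state of the interpretation engine
    for op, arg in program:       # program = pseudo-code in the data store
        if op == "LOAD":
            state["acc"] = arg
        elif op == "ADD":
            state["acc"] += arg
        elif op == "MUL":
            state["acc"] *= arg
        else:
            # The engine can refuse instructions it does not allow,
            # as a sandboxed interpreter refuses unsafe code.
            raise ValueError(f"illegal instruction: {op}")
    return state["acc"]

result = interpret([("LOAD", 2), ("ADD", 3), ("MUL", 4)])
print(result)  # 20
```

Changing the program means editing data, not recompiling the engine, which is why the style supports dynamic change so cheaply.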

• Application portability and flexibility across different platforms.

• Virtualization: machine code for one hardware architecture can be executed on another via the virtual machine.
• System behavior defined by a custom language or data structure, which eases the development and comprehension of software.
• Support for dynamic change: the interpreter usually only has to translate the code to an intermediate representation (or not translate it at all), so testing a change takes considerably less time.

An interpreter or virtual machine does not have to execute every instruction of the source code it processes. In particular, it can refuse to execute code that violates any security constraints under which it operates. For example, JS-Interpreter is a JavaScript interpreter that is itself sandboxed in JavaScript: it can execute arbitrary JavaScript code line by line, it is completely isolated from the main JavaScript environment, and multiple JS-Interpreter instances allow multi-threaded concurrent JavaScript without the use of web workers.

Call & return architectures

Call and return is the most frequently used architectural style in computer systems. The call-and-return family includes main programs with subroutines, remote procedure calls, object-oriented systems, and layered systems; all of these fall under the call-and-return style of architecture.

Top-Down Style.

The top-down approach (also known as stepwise design or stepwise refinement, and in some cases applied as decomposition) is essentially the breakdown of a system to gain insight into its compositional sub-structures. In a top-down approach, an overview of the system is formulated first and all first-level subsystems are identified, but not detailed. Each subsystem is then refined further, often through several additional subsystem levels, until the full specification has been reduced to small elements. Treating subsystems as "black boxes" makes a top-down design easier to manipulate; however, black boxes may fail to explain enough, or be precise enough, to validate the model effectively. The top-down approach starts with the big picture and breaks it down into smaller pieces.

A top-down approach involves dividing the problem into tasks and breaking tasks into smaller subtasks. In this approach, we first develop the main module and then develop the next-level modules. This process is repeated until all modules have been created.

Object-Oriented Style.
Object-oriented programming is a programming paradigm structured around objects and data rather than actions and logic. A traditional procedural program is organized to take data, process it, and produce results; such a program is centered on logic rather than data. Object-oriented programming instead focuses on objects and their manipulation rather than on the logic that manipulates them.

The first phase of object orientation is data modeling, which includes identifying all the objects involved, how they are manipulated, and the relationships among them. Data modeling is a planning phase that requires great attention. Once every object involved in the program has been identified, a mechanism is needed to produce those objects: the class. A class includes data (properties) and the logical sequences of methods that manipulate that data. Each method should be distinct, and logic that has already been established in other methods should not be repeated.

Architectural styles based on independent components

An independent-component architecture consists of a number of independent processes or objects that communicate via messages. Messages can be transmitted to named or unnamed participants through publish/subscribe paradigms.
Components typically do not control each other; they only exchange data, and because the components are isolated they can be changed independently.
Examples: event systems and communicating processes are subsystems of this type.

Event systems



This paradigm separates the implementation of a component from knowledge of other components' names and locations. It follows the publisher/subscriber pattern, where:

Publisher(s): advertise the data they would like to share with others.
Subscriber(s): register their interest in receiving published data.

A message manager is used for communication between components: publishers send messages to the manager, which redistributes them to subscribers.

Communicating processes
This architectural style is also known as client-server architecture.
Client: initiates a call to the server, requesting some service.
Server: provides data to the client; when the server works synchronously, it returns the data in response to each request.

2.4.3.3 System architectural styles


Client-server and peer-to-peer (P2P) are the two key system-level architectures in use today. We use both kinds of services in our everyday lives, but the difference between them is often misunderstood.

Client Server Architecture

The client-server architecture has two major components: the server and the client. The server is where all processing, storage, and transmission of data happens, while the client accesses the services and resources of the remote server. The server allows clients to make requests and replies to them. In general, only one machine manages the remote side, but to be on the safe side, load-balancing techniques spread the work across several servers.

FIGURE 2.18 Client/server architectural styles

The client-server architecture is a standard design built around a centralized security database. This database holds security information such as credentials and access details; without valid security keys, users cannot sign in to the server. This makes the architecture somewhat more stable and secure than peer-to-peer: the stability comes because the centralized security database allows more controlled use of resources. On the other hand, the system can fail entirely, because a single server can only do a limited amount of work at a time.

Advantages:

• Easier to Build and Maintain



• Better Security
• Stable

Disadvantages:

• Single point of failure


• Less scalable
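The request-reply exchange between a client and a server can be sketched with a local TCP socket. This is a minimal single-request illustration, not a production server: the echo protocol is invented, the server handles one connection in a background thread, and binding to port 0 lets the OS pick a free port.

```python
# Minimal client/server exchange over a local TCP socket: the client sends a
# request, the server replies, matching the request/reply pattern above.
import socket
import threading

def serve(sock):
    # Server side: accept one connection, read the request, send a reply.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

server = socket.socket()
server.bind(("127.0.0.1", 0))     # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client side: connect to the server socket, send a request, read the reply.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # echo: hello
```

Note the asymmetry the text describes: the server passively listens on a known address, while the client initiates every exchange.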

Peer to Peer (P2P)

In a peer-to-peer distributed system there is no central control. The fundamental idea is that each node can act as a client or a server at any given time: if a node asks for something, it acts as a client, and if a node provides something, it acts as a server. Each node is usually called a peer.

FIGURE 2.19 Peer to Peer (P2P)

Any new node first joins the network. After joining, it may either request or provide services. The initiation phase of a node (joining the network) can vary based on the network's implementation. There are two ways a new node can learn what the other nodes provide.
Centralized lookup server: the new node must register with the centralized lookup server and list the services it offers on the network. Whenever a node needs a service, it contacts the centralized lookup server, which directs it to the appropriate service provider.

Decentralized system: a node seeking a particular service broadcasts its request to every other node in the network, so that whichever node provides the service can respond.

A Comparison between Client Server and Peer to Peer Architectures

| Basis for comparison | Client-Server | Peer-to-Peer |
|---|---|---|
| Basic | There is a specific server and specific clients connected to the server. | Clients and servers are not distinguished; each node acts as both client and server. |
| Service | The client requests a service and the server responds with the service. | Each node can request services and can also provide services. |
| Focus | Sharing the information. | Connectivity. |
| Data | The data is stored on a centralized server. | Each peer has its own data. |
| Server | When several clients request services simultaneously, the server can become a bottleneck. | As services are provided by several servers distributed through the peer-to-peer system, no single server becomes a bottleneck. |
| Expense | Client-server systems are expensive to implement. | Peer-to-peer systems are less expensive to implement. |
| Stability | Client-server is more stable and scalable. | Peer-to-peer suffers as the number of peers in the system increases. |

2.4.4 Models for interprocess communication

A distributed system is a set of computers that appears to its users as a single coherent system. One important aspect is that the differences between the individual computers, and the ways in which they communicate, are largely hidden from users, giving the user a single image of the system. The OS hides all the details of communication among the user's processes; the user is unaware that many systems are involved. Inter-process communication (IPC) is accomplished by various mechanisms in distributed systems, and these mechanisms may vary from system to system. Another significant aspect is that users and applications can interact with a distributed system in a consistent and uniform way.

Communication between processes is the essence of all distributed systems, so it is important to understand how processes on different machines can exchange information. Inter-process communication, or IPC as its name implies, exchanges data between two applications or processes, which may reside on the same machine or in different locations. Communication in distributed systems often relies on the low-level message passing offered by the underlying network; communication through message passing is harder than communication based on the shared-memory primitives available on non-distributed platforms.

Inter-process communication (IPC) is a mechanism for the communication and synchronization of processes, and the communication between such processes can be regarded as a method of cooperation among them. Three methods allow processes to communicate with each other: shared memory, remote procedure call (RPC), and message passing. In distributed systems, IPC using sockets is very popular. In short, a socket is a pair consisting of an IP address and a port number, and each of two communicating processes requires a socket.
If a server daemon runs on a host, it listens on its port and handles all client requests sent to that port (the server socket). To send a message, a client must know the IP address and port of the server (the server socket). The OS kernel assigns the client its own port when the client starts communicating with the server, and frees it once communication is over.
Although communication by sockets is popular and effective, it is considered low level, because sockets allow only unstructured streams of bytes to be exchanged between processes; it is up to the client and server applications to impose a structure on the data transmitted as a byte stream.

2.4.4.1 Message-based communication

The abstraction of a message is essential in the development of the models and technologies that enable distributed computing. A distributed system is a system in which components reside on networked machines and coordinate their actions only by passing messages. In this context, a message identifies any discrete amount of information passed from one entity to another: it includes any form of data representation with size and time constraints, whether it is the invocation of a remote procedure, a serialized object instance, or a generic message. This is why the message-based communication model can serve as a reference for several inter-process communication models, which are based on the abstraction of data streaming.

Several distributed programming models use this form of communication, despite the different abstractions they present to developers for programming the coordination of components. The major distributed programming models that use message-based communication are described below.




Message Passing

In this model the message is the main abstraction. Entities exchange data and information explicitly encoded in the form of messages; the structure and content of a message vary according to the model. Significant examples of this model are the Message Passing Interface (MPI) and OpenMP.

Remote Procedure Call

This model extends the well-known procedure-call abstraction beyond the boundaries of a single process, allowing procedures to be executed in remote processes. It implies a client-server model: a remote process hosts a server component, which allows client processes to invoke its procedures and returns the output of their execution. The messages generated by a Remote Procedure Call (RPC) implementation carry the information about the procedure to be executed and the arguments it requires, and also carry back the return values. This use of messages is referred to as the marshalling of arguments and return values.
Distributed Objects

This is the object-oriented extension of the Remote Procedure Call (RPC) model: in this context, remote invocation applies to the methods exposed by objects. Each process registers a set of interfaces that are accessible remotely, and client processes can request and invoke the methods available through those interfaces. The underlying runtime infrastructure transforms the local method invocation into a request to the remote process and collects the results of execution; the interaction between the caller and the remote process takes place via messages. Whereas RPC is stateless by design, distributed object models introduce the complexity of managing object state and lifetime. Common Object Request Broker Architecture (CORBA), Component Object Model (COM, DCOM, and COM+), Java Remote Method Invocation (RMI), and .NET Remoting are some of the most important examples of distributed object infrastructures.
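The stub/skeleton mechanics behind remote method invocation can be sketched in-process: a client-side proxy turns method calls into messages, and a server-side dispatcher decodes them and invokes the real object. Everything here (the `Calculator` object, the JSON wire format, the direct call standing in for the network) is an invented illustration, not the protocol of CORBA, RMI, or any real framework.

```python
# Sketch of remote method invocation: a proxy (client stub) marshalls the
# call into a message; a dispatcher (server skeleton) invokes the real
# object and marshalls the result back. Transport is simulated in-process.
import json

class Calculator:
    """The 'remote' object whose methods are exposed through an interface."""
    def add(self, a, b):
        return a + b

def dispatch(obj, message):
    # Server-side skeleton: demarshall the request, invoke, marshall the reply.
    req = json.loads(message)
    result = getattr(obj, req["method"])(*req["args"])
    return json.dumps({"result": result})

class Proxy:
    """Client-side stub: looks like the remote object, but sends messages."""
    def __init__(self, obj):
        self._obj = obj  # stands in for a network connection to the server

    def add(self, a, b):
        msg = json.dumps({"method": "add", "args": [a, b]})
        return json.loads(dispatch(self._obj, msg))["result"]

calc = Proxy(Calculator())
print(calc.add(2, 3))  # 5
```

From the caller's point of view `calc.add(2, 3)` is an ordinary method call; all the marshalling happens inside the stub, which is exactly the location transparency the model promises.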

Active objects

Programming models based on active objects assume, by definition, the existence of instances, whether objects or agents, that have a life of their own. This means that each active object has a dedicated control thread that allows it to carry out its activity. These models often still use messages underneath to trigger the execution of methods, but a more complex semantics is attached to the messages.

Web Services

Web service technology offers an implementation of the RPC framework on top of HTTP, allowing components built with different technologies to interact. A web service is exposed as a remote object hosted on a web server, and method invocations are converted into HTTP requests packaged with a specific protocol. It must be remembered that the concept of a message is a fundamental abstraction of inter-process communication and is used either implicitly or explicitly.

2.4.4.2 Models for message-based communication

Point-to-point message model


A point-to-point (PTP) product or application is built around the concept of message queues, senders, and receivers. Each message is sent to a specific queue, and receiving clients extract messages from the queue(s) established to hold their messages. Queues retain all messages sent to them until the messages are consumed or expire.
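The defining property of PTP, that each message is consumed by exactly one receiver, can be sketched with Python's thread-safe `queue.Queue`. The message names and the `None` sentinel used to stop the consumer are conveniences of this example, not part of any messaging standard.

```python
# Point-to-point sketch: a sender puts messages on a queue; a single receiver
# consumes them, so each message is delivered to exactly one consumer.
import queue
import threading

q = queue.Queue()
received = []

def receiver():
    while True:
        msg = q.get()        # blocks until a message is available
        if msg is None:      # sentinel: no more messages, stop consuming
            break
        received.append(msg)

t = threading.Thread(target=receiver)
t.start()

for m in ["order-1", "order-2"]:
    q.put(m)                 # sender: messages are held until consumed
q.put(None)
t.join()

print(received)  # ['order-1', 'order-2']
```

If several receiver threads shared the queue, each message would still be taken by only one of them, which is how PTP differs from publish-subscribe below.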

Publish-and-subscribe message model

Publish-subscribe is a messaging service that describes a particular form of communication between components or software modules. The name is chosen to reflect the most important features of this communication model.



In straightforward interactions, software modules communicate with each other directly, using mechanisms and media known to all parties. As communication needs become more complex or demanding, other schemes of communication have developed; publish-subscribe is one such scheme, and only one of many.

The core ideas for Publish-Subscribe

• Software components do not necessarily know with whom they interact.

• Data producers publish data to the network as a whole.

• Data consumers subscribe to the system and receive data from it as a whole.

• Published information is named, so that software modules can identify the information available. This label is often called the topic.

FIGURE 2.20 Publish-and-subscribe message model

A central software module ensures that all data, publications, and subscriptions are administered and matched; it is commonly referred to as the "broker." A broker is often a network of cooperating software modules, and the software modules that use broker services are called clients.

Clients that publish or subscribe "register" with the broker, which manages communication paths, authenticates clients, and performs other housekeeping activities.

In some systems, message delivery to subscribers is filtered by content rather than by topic; this can be used instead of, or together with, the topic. Only a few publish-subscribe systems have implemented this.

Data can be "persistent": subscribers that register with the network after data was last published on a specific topic still receive the last published data on that topic.
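The broker's matching of publications to subscriptions can be sketched with a topic-to-callbacks map. This is an in-process illustration with invented topics; a real broker would add networking, authentication, and persistence as described above.

```python
# Minimal publish-subscribe broker sketch: subscribers register interest in a
# topic; the broker delivers each publication to every matching subscriber.

class Broker:
    def __init__(self):
        self.subscriptions = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        # A subscriber registers interest in a named topic.
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        # The publisher does not know who, if anyone, receives the data.
        for callback in self.subscriptions.get(topic, []):
            callback(data)

broker = Broker()
seen = []
broker.subscribe("weather", seen.append)
broker.publish("weather", "sunny")   # delivered to the subscriber
broker.publish("sports", "4-1")      # no subscriber on this topic: dropped
print(seen)  # ['sunny']
```

Publisher and subscriber never reference each other, only the topic name, which is the decoupling the core ideas above describe.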

Request-reply message model

The request-reply messaging model differs from the traditional pub/sub or PTP models, which publish a message to a topic or queue and enable clients to receive the message without sending back a reply.

Request-reply messaging may be used when a client sends a request message to a remote client application, either to ask for information or to have some processing action carried out. When the remote application receives the request message, it obtains the necessary information or carries out the requested action. The information is then placed in a reply message, or a confirmation of completion of the task is sent in response to the request.

2.5 Technologies for distributed computing

In this section, we examine relevant technologies that provide practical implementations of interaction models based primarily on message-based communication: remote procedure call (RPC), distributed object frameworks, and service-oriented computing.

2.5.1 Remote procedure call

Remote Procedure Call (RPC) is a protocol that a program can use to request a service from a program located on another computer in the network, without needing to understand the network's details. RPC is used to call processes on remote systems as if they were local. A procedure call is also sometimes called a function call or a subroutine call.
RPC uses the client-server model: the requesting program is the client, and the service provider is the server. Like a regular or local procedure call, an RPC is a synchronous operation that suspends the requesting program until the results of the remote procedure are returned. Nevertheless, multiple RPCs can be performed concurrently by using lightweight processes or threads that share the same address space.

RPC software uses an interface definition language (IDL), a specification language for describing the application programming interface (API) of a software component. The IDL provides a bridge between the two ends of the connection, which may be running different operating systems (OSes) and using different programming languages.

RPC message procedure

When program statements that use the RPC framework are compiled into an executable program, the compiled code includes a stub that represents the remote procedure code. When the program runs and the procedure call is issued, the stub receives the request and forwards it to a client runtime program on the local computer. The first time the client stub is invoked, it contacts a name server to determine where the server resides.

The client runtime program knows how to address the remote computer and server application; it sends the message across the network that requests the remote procedure. The server likewise has a runtime program and a stub that interface with the remote procedure itself. Responses travel back along the same path, following a request-response protocol.

When making a Remote Procedure Call



FIGURE 2.21 Remote Procedure Call
1. The calling environment is suspended, the procedure's parameters are transferred across the network to the environment where the procedure is to execute, and the procedure is executed there.

2. When the procedure completes and produces its results, the results are returned to the calling environment, which resumes execution as if returning from a regular procedure call.

NOTE: RPC is particularly suitable for client-server interaction (e.g., query-response) in which the flow of control alternates between the caller and the callee. Conceptually, the client and server do not execute simultaneously; instead, the thread of execution jumps back and forth from the caller to the callee.

Working of RPC

FIGURE 2.22 Working of Remote Procedure Call

In an RPC there will be the following steps:



1. The client invokes a client stub procedure, passing parameters in the usual way. The client stub resides in the client's own address space.
2. The client stub marshalls (packs) the parameters into a message. Marshalling involves converting the representation of each parameter into a standard format and copying the parameters into the message.
3. The client stub passes the message to the transport layer, which transmits it to the remote server machine.
4. On the server, the transport layer passes the message to a server stub, which demarshalls (unpacks) the parameters and calls the desired server routine using the regular procedure-call mechanism.
5. When the server procedure finishes, it returns to the server stub (for example, via a normal procedure-call return), which marshalls the return values into a message. The server stub then hands the message to the transport layer.
6. The transport layer sends the resulting message back to the client transport layer, which hands the message back to the client stub.
7. The client stub demarshalls the return parameters, and execution returns to the caller.
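The seven steps above can be observed with Python's built-in XML-RPC machinery, where `ServerProxy` plays the client stub (marshalling the call into an HTTP request) and `SimpleXMLRPCServer` plays the server stub (demarshalling and dispatching). The registered `add` function and the use of port 0 to get a free local port are conveniences of this sketch.

```python
# RPC round trip with Python's stdlib XML-RPC stubs: the client stub
# marshalls the call, the server stub demarshalls it, invokes the routine,
# and marshalls the result back, exactly the flow of steps 1-7 above.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose a procedure through the server stub.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
# Handle exactly one request in a background thread.
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: the proxy looks like a local object; the call blocks until
# the remote result comes back (RPC is synchronous).
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
server.server_close()
print(result)  # 5
```

Note that `client.add(2, 3)` reads like a local call even though the parameters cross a (loopback) network, which is the transparency RPC aims for.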

2.5.2 Distributed object frameworks

Client-server systems are the most well-known way to build distributed systems, and a distributed object framework is an extension of this client-server model: a library with which distributed applications can be built using object-oriented programming. In distributed computing, distributed objects are objects that reside in different address spaces, whether in different processes on the same computer or on multiple computers connected by a network, yet work together by sharing data and invoking methods. This often involves location transparency, whereby remote objects appear the same as local objects. The main method of communication for distributed objects is remote method invocation, usually implemented by message passing: a message is sent to another object on a remote machine or in a remote process to perform some task, and the results are returned to the calling object.

The remote procedure call (RPC) method brings the familiar programming abstraction of
the procedure call to distributed environments, allowing a calling process to invoke a
procedure on a remote node as if it were local.

Remote method invocation (RMI) resembles RPC but applies to distributed objects. It has
the added advantage of bringing object-oriented programming concepts to distributed
systems: it extends the notion of an object reference to the distributed environment and
allows object references to be passed as parameters in remote invocations.

The main communication patterns can be summarized as follows:
▪ Remote procedure call (RPC) – the client calls procedures in a different server program
▪ Remote method invocation (RMI) – an object can invoke the methods of an object in a
different process
▪ Event notification – objects receive notifications of events in other objects for which
they have registered
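The stub's role in remote method invocation can be sketched as a plain proxy object. This is an illustrative Python sketch, not any real RMI API; the "remote" object here lives in the same process, with the `Stub` class standing in for the networked proxy:

```python
class RemoteCounter:
    """Plays the role of an object living in a remote process."""
    def __init__(self):
        self.value = 0

    def increment(self, amount):
        self.value += amount
        return self.value


class Stub:
    """Client-side proxy: turns attribute access into forwarded calls."""
    def __init__(self, remote):
        self._remote = remote  # stands in for a network connection

    def __getattr__(self, method_name):
        def invoke(*args):
            # In a real system the call would be marshalled and sent
            # over the network; here we dispatch to the object directly.
            return getattr(self._remote, method_name)(*args)
        return invoke


counter = Stub(RemoteCounter())
counter.increment(5)   # looks like a local call, runs on the "remote" object
```

The client code never touches `RemoteCounter` directly, which is the essence of location transparency described above.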

2.5.2.1 Examples of distributed object frameworks

A distributed programming environment (DPE) allows software to be developed and
managed by programmers distributed around the world, practically supporting distributed
object computing over networks such as the Internet. Research in this area aims at
programming environments that effectively support distributed programming. A system
based on distributed objects provides a flexible and scalable environment for distributed
and parallel programming. Many distributed object computing systems exist; CORBA,
DCOM, and Java RMI are widely supported examples.

Common object request broker architecture (CORBA)

The Common Object Request Broker Architecture (CORBA) is the best-known
middleware standard. It is maintained by the Object Management Group (OMG), a
consortium of more than 800 companies that includes most of the computing industry,
with the exception of Microsoft, which promotes its own object broker, the Distributed
Component Object Model (DCOM). CORBA's object bus sets and defines the
interoperability of object components; this object bus is the Object Request Broker
(ORB). Through the ORB, object components are discovered and made to interoperate.
CORBA supports transparent object references through object interfaces between
distributed objects.

CORBA is essentially a design specification for an Object Request Broker (ORB): a
mechanism that allows distributed objects, whether local or on remote devices, written in
different languages and running at various network locations, to communicate with each
other.

The CORBA Interface Definition Language (IDL) enables language-independent,
location-independent development of distributed object interfaces. Application
components can communicate with one another via CORBA regardless of where they are
located or who designed them; CORBA ensures location transparency when executing
these requests.

CORBA is usually described as a "software bus" because objects are located and
accessed via a software communication interface. The following illustration identifies the
main components in a CORBA implementation.

FIGURE 2.23 Common object request broker architecture (CORBA)

A well-defined object-oriented interface governs data transmission from client to server.
The Object Request Broker (ORB) determines the target object's location, forwards the
request to it and returns any response to the caller. With this object-oriented technology,
developers can use features such as inheritance, encapsulation, polymorphism and
dynamic binding at runtime. These features allow applications to be changed, modified
and reused with minimal changes to the parent interface. The following illustration shows
how a client transmits a request through the ORB to a server:



FIGURE 2.24 working of Common object request broker architecture (CORBA)

Interface Definition Language (IDL)


The Interface Definition Language (IDL) is a key pillar of the CORBA standard. IDL is
the OMG notation for defining language-neutral APIs, and it provides a platform-
independent description of distributed object interfaces. Standardizing the data and
operations of the client/server interface provides a consistent approach across CORBA
environments and clients in heterogeneous settings; IDL is the mechanism CORBA uses
to describe object interfaces.

IDL defines an application's modules, interfaces and operations without assuming any
particular programming language. The various programming languages, such as Ada,
C++, C# and Java, provide standardized IDL mappings for implementing the interfaces.

The IDL compiler generates stub and skeleton code that marshals and unmarshals
parameters between the network stream and in-memory instances in the implementation
language. The stub is the client-side proxy for an object reference: it stands in for the
servant on the client. A language-specific stub can communicate with a skeleton written
in a different language. The stub code is linked with the client code, the skeleton code is
linked with the object implementation, and both cooperate with the ORB runtime system
to carry out remote operations.

IIOP (Internet Inter-ORB Protocol) is a protocol that allows distributed programs written
in various programming languages to communicate over the Internet, and it is a key
element of the CORBA industry standard. Using CORBA IIOP and related protocols, a
company can develop programs able to communicate with its own or another company's
existing or future programs, wherever they are located, without needing to understand
anything about those programs beyond their names and the services they provide.

Distributed component object model (DCOM/COM)

The Distributed Component Object Model (DCOM) is a proprietary Microsoft
technology for communication between software components spread across networked
computers. DCOM extends Microsoft's Component Object Model (COM) across the
network, enabling network-wide interprocess communication. By handling low-level
network protocol details, DCOM supports communication among objects within the
network, allowing multiple distributed processes to work together on a single task.



Java remote method invocation (RMI)

RMI stands for Remote Method Invocation. It is a mechanism that allows an object in
one JVM to access and invoke an object running in another JVM. RMI enables remote
communication between Java programs and is used to create distributed applications.

In an RMI application we write two programs: the server program (residing on the
server) and the client program (residing on the client).

The server program creates a remote object and makes a reference to that object available
to the client (using the registry).
The client program requests the remote object and tries to invoke its methods on the
server.

The following diagram shows the architecture of an RMI application.

FIGURE 2.25 Java remote method invocation (RMI)

▪ Transport layer − this layer connects the client and the server. It maintains existing
connections and also creates new ones.
▪ Stub − the stub is the client-side proxy for the remote object. It resides in the client
system and serves as the client's gateway.
▪ Skeleton − the server-side counterpart of the stub. The stub interacts with the
skeleton to pass requests on to the remote object.
▪ RRL (Remote Reference Layer) − the layer that manages the client's references to
remote objects.

The following points sum up how an RMI program works.

▪ When the client invokes a method on the remote object, the stub forwards the
request to the RRL.
▪ The client-side RRL passes the request to the server-side RRL by calling the
invoke() method of the remoteRef object.
▪ The server-side RRL hands the request to the skeleton, which finally invokes the
required object on the server.
▪ The results are passed back to the client along the same path.

When a client invokes a method that takes parameters, the parameters are packed into a
message before being transmitted over the network. The parameters may be of primitive
types or objects. Primitive-type parameters are assembled and a header is attached; if a
parameter is an object, it is serialized. This process is referred to as marshalling.



On the server side, the packed parameters are unbundled and the required method is
invoked. This process is referred to as unmarshalling.
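In Python terms, this marshalling and unmarshalling of an object parameter can be sketched with the standard pickle module (Java RMI uses Java object serialization for the same purpose; the parameter values below are made up for illustration):

```python
import pickle

# Sample parameters a client might pass in a remote invocation.
params = {"tutorial_id": 42, "tags": ["rpc", "rmi"]}

# Marshalling: the object is serialized into a byte message that can be
# transmitted over the network.
message = pickle.dumps(params)
assert isinstance(message, bytes)

# Unmarshalling: the server side restores the original object from the
# received bytes before invoking the target method.
received = pickle.loads(message)
```

The round trip yields an equal object on the "server" side, which is what lets the remote method run exactly as if it had been called locally.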

The RMI registry is a namespace in which all server objects are placed. Each time it
creates an object, the server registers that object with the RMI registry (using the bind()
or rebind() methods). Objects are registered under a unique name known as the bind
name.

To invoke a remote object, the client needs a reference to it. The client fetches the object
from the registry using its bind name (via the lookup() method).
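The bind/lookup protocol can be sketched with a toy in-process registry. This is a dict-based stand-in for illustration only; a real RMI registry hands out network stubs rather than local references:

```python
class Registry:
    """Toy name registry mirroring the RMI registry's bind/rebind/lookup."""
    def __init__(self):
        self._bindings = {}

    def bind(self, name, obj):
        # bind() refuses to overwrite an existing binding.
        if name in self._bindings:
            raise KeyError(f"'{name}' is already bound")
        self._bindings[name] = obj

    def rebind(self, name, obj):
        # rebind() replaces any existing binding under the same name.
        self._bindings[name] = obj

    def lookup(self, name):
        return self._bindings[name]


registry = Registry()
remote_object = object()                      # stands in for a server object
registry.bind("TutorialService", remote_object)
service = registry.lookup("TutorialService")  # client-side lookup by bind name
```

The bind name ("TutorialService" here, a made-up example) is the only thing the client needs to know in advance.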

FIGURE 2.25 How an RMI program works

.NET remoting

The .NET Remoting system offers interprocess communication between application
domains through the Remoting Framework. The communicating programs may be
installed on the same computer or on different computers in the same network. .NET
Remoting supports distributed object communication over TCP and HTTP channels,
using binary or SOAP formatters for the data stream.

The three main components of the Remoting Framework are:

1. Remote object
2. Remote listener application (listens for remote object requests)
3. Remote client application (makes remote object requests)

FIGURE 2.26 .NET remoting Framework

The remote object is implemented as a class that derives from MarshalByRefObject.



The figure above shows the basic workflow of .NET Remoting. When a client calls a
remote method, it does not call the method directly: the client receives a proxy for the
remote object and uses it to invoke the remote object's method. When the proxy receives
the call, it encodes the message with the appropriate formatter (binary or SOAP, as
specified in the configuration file) and sends the call to the server over the selected
channel (TcpChannel or HttpChannel). The server-side channel receives the request from
the proxy and forwards it to the server's remoting system, which locates the remote
object and invokes the requested method on it. Once the remote method has executed,
any result of the call is returned to the client in the same way. Before an object instance
of a remotable type can be accessed, it must be created and initialized in a process known
as activation. Activation is classified into two types: client-activated objects and server-
activated objects.

2.5.3.2 Service-oriented architecture (SOA)

Service-Oriented Architecture (SOA) is a software style in which application components
provide services to other components over a communication protocol, typically across a
network. Its principles are independent of vendors and products. In a service-oriented
architecture, services communicate with one another either by passing data or by having
two or more services coordinate an activity.

Service-oriented architecture characteristics


▪ Business value
▪ Strategic goals
▪ Intrinsic inter-operability
▪ Shared services
▪ Flexibility
▪ Evolutionary refinement

These core principles carry over from older distributed application paradigms to service-
oriented, cloud-based architecture (which can be considered an offshoot of service-
oriented architecture).

Service-Oriented Architecture Patterns

FIGURE 2.27 Service-Oriented Architecture


Each building block of a service-oriented architecture involves three roles: the service
provider, the service broker (with its service registry and repository), and the service
requester/consumer.
The service provider works with the service registry, deciding whether and how its
services are offered: questions of security, availability, cost, and more. This role also
decides the category of a service and any required trade agreements.
The service broker makes information about services available to requesters; whoever
implements the broker determines its scope.
The service requester locates entries in the broker registry and then binds to the service
provider. Whether it can access multiple services depends on the requester's capability.
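The publish-find-bind interaction among these three roles can be sketched in Python. This is an illustrative sketch with a made-up "greeting" service; real brokers store richer service descriptions, such as WSDL documents:

```python
# Broker: keeps a registry of service descriptions published by providers.
broker = {}

# Provider role: publishes a service description and an endpoint.
def publish(name, description, endpoint):
    broker[name] = {"description": description, "endpoint": endpoint}

publish("greeting", "returns a greeting for a name",
        lambda who: f"Hello, {who}!")

# Requester role: finds the service in the broker registry, then binds
# to the provider's endpoint and invokes it.
entry = broker["greeting"]
reply = entry["endpoint"]("SOA")
```

The requester never hard-codes the provider; it discovers the endpoint through the broker, which is what keeps the two loosely coupled.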

Implementing Service-Oriented Architecture

There are a wide variety of technologies that can be used to implement service-oriented
architecture (SOA), depending on the ultimate objective and what you are trying to
achieve.
Service-oriented architecture is typically implemented with web services, which make
"functional building blocks accessible via standard Internet protocols".
SOAP, which stands for Simple Object Access Protocol, is an example of a web service
standard. Briefly, SOAP is a messaging protocol specification for the standardized
exchange of information in the implementation of web services over computer networks.
Although SOAP was initially not well received, it has grown in popularity since 2003
and is now widely used and accepted. Jini, CORBA, and REST are other options for
implementing service-oriented architecture.
It is important to remember that the architecture can be implemented "regardless of the
particular technologies", including with messaging systems such as ActiveMQ, Apache
Thrift and SORCER.

Why Service-Oriented Architecture Is Important

FIGURE 2.28 Before and After Service-Oriented Architecture

Service-oriented architecture has many benefits, particularly for web-based businesses.
Here are some of those advantages:

Reusable code: with SOA there is no need to reinvent the wheel, which is time-
consuming, whenever a new service or process is needed. SOA also allows different
coding languages to be used, since everything runs through a central interface.

Easier interaction: SOA establishes a common mode of communication, enabling
different systems and platforms to operate independently of each other. SOA can also
work across firewalls, allowing companies to share operationally important services.

Scalability: a business must be able to scale to meet customer demand, and tight
dependencies can stand in the way. SOA reduces the coupling between clients and
services, which makes scaling easier.

Reduced costs: with SOA it is possible to lower costs while maintaining the desired level
of performance, since businesses can limit the amount of analysis required to create
custom solutions.



2.5.3.3 Web services

A web service is a standardized way of distributing communication between client and
server applications on the World Wide Web. It is a software module designed to perform
a specific set of tasks.

Web services can be searched for across the network and invoked accordingly. When
invoked, the web service provides the invoking client with the functionality it offers.

FIGURE 2.29 Web Service Architecture Diagram

The diagram above gives a clear view of the internal working of a web service. The
client makes a series of calls, via requests, to the server hosting the web service. Those
requests are made through what are known as remote procedure calls: Remote Procedure
Calls (RPC) are calls to procedures hosted by the web service. For example, Amazon
provides a web service for the products sold online through amazon.com. The front end
or presentation layer can be in .Net or Java, but either programming language can
interact with the web service.

The primary component of a web service is the data transmitted between the client and
the server, namely XML. XML is a counterpart to HTML: an intermediate language that
many programming languages can understand. When applications talk to each other,
they speak in XML, which gives applications written in different programming
languages a common interface for interacting with one another. Web services use SOAP
(Simple Object Access Protocol) to transfer XML data between applications, and the
data is transmitted over standard HTTP. The data that the web service sends to the
application is called a SOAP message, and a SOAP message is just an XML document.
Because the document is written in XML, the client application calling the web service
can be written in any programming language.

Why do you need a Web Service?

Every day, software systems are built with a wide range of programming technologies:
some applications are in Java, others in .Net, still others in Angular JS, Node.js, and so
on. These heterogeneous applications most often require some kind of communication
between them, and because they are built in different programming languages, effective
communication between them is very difficult to ensure.
This is where web services come in: they provide a common platform that allows
applications built in different programming languages to communicate with each other.

Types of Web Services

There are mainly two kinds of web services:



1. SOAP web services.
2. RESTful web services.

Certain components must be in place for a web service to be fully functional, regardless
of the programming language used to implement it.

Let us take a closer look at these components.

SOAP is regarded as a transport-independent messaging protocol. SOAP is based on
transferring XML data as SOAP messages, and each message carries an XML document.
Only the structure of the XML document follows a defined pattern; its contents do not.
The best part of web services and SOAP is that everything is delivered via HTTP, the
standard web protocol.

A SOAP message is structured as follows:

Every SOAP document needs a root element known as the <Envelope> element; the root
element is the first element in an XML document.
The envelope is in turn divided into two parts: the header and the body.
The header contains the routing data, that is, the information about which client the
XML document should be sent to.
The body contains the actual message.
A simple example of communication through SOAP is given in the diagram below.

FIGURE 2.30 WSDL (Web services description language)
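The Envelope/Header/Body structure described above can be built with Python's standard-library XML tools. This is a sketch: the namespace URI is the SOAP 1.1 envelope namespace, and the body payload element is made up for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Root element: every SOAP document needs an <Envelope>.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")

# Header: carries routing data about where the document should go.
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")

# Body: carries the actual message payload.
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "TutorialRequest")  # illustrative payload
request.text = "42"

xml_text = ET.tostring(envelope, encoding="unicode")
```

Serializing the tree yields the XML document that would travel over HTTP as the SOAP message.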

A web service cannot be used if it cannot be found, so the client invoking the web
service must know its location.

Second, the client application needs to know what the web service actually does in order
to invoke the right operation. This is achieved using WSDL, the Web Services
Description Language. The WSDL file is another XML file that essentially tells the
client application what the web service does. Using the WSDL document, the client
application learns the location of the web service and how to use it.

Web Service Example


An example of a WSDL file is given below.

<definitions>
<message name="TutorialRequest">
<part name="TutorialID" type="xsd:string"/>
</message>

<message name="TutorialResponse">
<part name="TutorialName" type="xsd:string"/>
</message>

<portType name="Tutorial_PortType">
<operation name="Tutorial">
<input message="tns:TutorialRequest"/>
<output message="tns:TutorialResponse"/>
</operation>
</portType>

<binding name="Tutorial_Binding" type="tns:Tutorial_PortType">


<soap:binding style="rpc"
transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="Tutorial">
<soap:operation soapAction="Tutorial"/>
<input>
<soap:body
encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
namespace="urn:examples:Tutorialservice"
use="encoded"/>
</input>

<output>
<soap:body
encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
namespace="urn:examples:Tutorialservice"
use="encoded"/>
</output>
</operation>
</binding>
</definitions>

The main aspects of the above WSDL declaration are the following;

<message> – the message parameter in the WSDL definition is used to define the
different data elements for each operation of the web service. In the example above there
are two messages that can be exchanged between the web service and the client
application, "TutorialRequest" and "TutorialResponse". The TutorialRequest contains an
element called "TutorialID" of type string; similarly, TutorialResponse contains an
element called "TutorialName", also of type string.
<portType> – this defines the operation offered by the web service, which in our case is
called Tutorial. This operation takes two messages: one input and one output.
<binding> – this element specifies the protocol to be used. In our case we define it to use
HTTP (http://schemas.xmlsoap.org/soap/http). Additional details about the body of the
operation are also specified, such as the namespace and whether the message is encoded.
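To illustrate how a client toolkit reads such a file, the sketch below parses a trimmed version of the WSDL above with Python's standard-library XML parser (real WSDL processing also handles XML namespaces, which are omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# A trimmed, namespace-free version of the WSDL document shown above.
wsdl = """<definitions>
  <message name="TutorialRequest"/>
  <message name="TutorialResponse"/>
  <portType name="Tutorial_PortType">
    <operation name="Tutorial"/>
  </portType>
</definitions>"""

root = ET.fromstring(wsdl)

# Extract the message names and the operations exposed by the portType.
messages = [m.get("name") for m in root.findall("message")]
operations = [o.get("name") for o in root.findall("portType/operation")]
```

From these two lists a client application can discover which operations exist and which messages they exchange, before ever contacting the service.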

Universal Description, Discovery, and Integration (UDDI)

UDDI is the standard by which the web services offered by a particular provider are
described, published and discovered. It provides a specification for hosting information
about web services.

In the previous topic we discussed WSDL and how it provides details about what a web
service actually does. But how can a client application find the WSDL file in order to
learn about the various web service operations? UDDI provides the answer: a registry
that can host WSDL files. The client application thus has full access to the UDDI, a
database containing all the WSDL files.

Just as a phone directory holds a person's name, address and telephone number, the
UDDI registry holds the relevant information about web services, so that a developer
knows where to find them.



We now understand why web services first came about: to provide a platform that allows
different applications to talk to each other.

But let us discuss some other reasons why web services are relevant.

Exposing business functionality on the network − a web service is a unit of managed
code that offers some kind of functionality to client applications or end users. This
functionality can be invoked over the HTTP protocol, which means it can also be
invoked over the Internet. With so many applications already on the Internet, this makes
web services very useful: a web service can be available anywhere on the web and
provide the required functionality.

Interoperability between applications − web services allow different applications to talk
to each other and to share data and services among themselves. All kinds of applications
can talk to one another; instead of writing specific code that only particular applications
understand, you can write generic code that all applications understand.

A standardized protocol that everybody understands − web services communicate using
a standardized industry protocol. All four layers of the web services protocol stack
(transport, XML messaging, service description, and service discovery) use well-defined
protocols.

Reduction in the cost of communication − web services use SOAP over HTTP, so they
can be implemented over the existing low-cost Internet.

2.5.3.4 Service orientation and cloud computing

Service orientation is an architectural approach for building business processes out of
automated software resources packaged as services. These business services comprise a
collection of loosely coupled components, designed to reduce dependencies, that
together support a well-specified business function. Building systems from modular
business services leads to more versatile and effective IT systems.

Systems designed around service orientation allow businesses to use existing resources
and easily manage the unavoidable changes that a dynamic company experiences. There
are also circumstances where a combination of several services is needed; such
combined workloads will operate with less latency than loosely coupled parts.

Hybrid cloud environments have become important because organizations constantly
reinvent themselves and become more competitive in order to respond to change, and IT
must be at the forefront of an innovation- and transformation-based business strategy.
Organizations understand that no single IT computing approach is best for every kind of
workload; a hybrid cloud system is therefore the most realistic solution.

A high degree of flexibility and modularity is required to make a cloud infrastructure
work in the real world. A cloud must be designed to support a range of workloads and
business services, so that one can tell when a service should be scaled up and when it can
be scaled down.

This service-based architectural design approach specifically supports the key cloud
characteristics of elasticity, self-service, standards-based interfaces and pay-as-you-go
flexibility. Combining a service-oriented approach with cloud services enables
businesses to decrease costs and improve business flexibility, giving public and private
cloud systems interchangeable, loosely coupled scalability and elasticity.



2.6 Summary
In this chapter we introduced parallel and distributed computing as the framework on
which cloud computing can properly be described. Parallel and distributed computing
emerged from the need to solve large problems, first by using several processing
elements and then by using multiple networked computer nodes.
2.7 Review questions
1. Differentiate between parallel and distributed computing.
2. What is an SIMD architecture?
3. Explain the major categories of parallel computing systems.
4. Explain the different levels of parallelism that can be obtained in a computing
system
5. What is a distributed system? What are the components that characterize it?
6. What is an architectural style and how does it handle a distributed system?
7. List the most important software architectural styles.
8. What are the fundamental system architectural styles?
9. Describe the most important model for message-based communication.
10. Discuss RPC and how it enables interprocess communication.
11. What is CORBA?
12. What is service-oriented computing?
13. What is market-oriented cloud computing?
2.8 Reference for further reading
1. Mastering Cloud Computing Foundations and Applications Programming Rajkumar
Buyya ,Christian Vecchiola,S. Thamarai Selvi MK publications ISBN: 978-0-12-
411454-8
2. Cloud Computing Concepts, Technology & Architecture Thomas Erl, Zaigham
Mahmood, and Ricardo Puttini , The Prentice Hall Service Technology Series ISBN-
10 : 9780133387520 ISBN-13 : 978-0133387520
3. Distributed and Cloud Computing: From Parallel Processing to the Internet of Things
1st Edition by Kai Hwang Jack Dongarra Geoffrey Fox ISBN-10 : 9789381269237
ISBN-13 : 978-9381269237
