Chapter 2
Unit Structure
2.0 Objective
2.1 Eras of computing
2.2 Parallel vs. distributed computing
2.3 Elements of parallel computing
2.3.1 What is parallel processing?
2.3.2 Hardware architectures for parallel processing
2.3.2.1 Single-instruction, single-data (SISD) systems
2.3.2.2 Single-instruction, multiple-data (SIMD) systems
2.3.2.3 Multiple-instruction, single-data (MISD) systems
2.3.2.4 Multiple-instruction, multiple-data (MIMD) systems
2.3.3 Approaches to parallel programming
2.3.4 Levels of parallelism
2.3.5 Laws of caution
2.4 Elements of distributed computing
2.4.1 General concepts and definitions
2.4.2 Components of a distributed system
2.4.3 Architectural styles for distributed computing
2.4.3.1 Component and connectors
2.4.3.2 Software architectural styles
2.4.3.3 System architectural styles
2.4.4 Models for interprocess communication
2.4.4.1 Message-based communication
2.4.4.2 Models for message-based communication
2.5 Technologies for distributed computing
2.5.1 Remote procedure call
2.5.2 Distributed object frameworks
2.5.2.1 Examples of distributed object frameworks
2.5.3 Service-oriented computing
2.5.3.1 What is a service?
2.5.3.2 Service-oriented architecture (SOA)
2.5.3.3 Web services
2.5.3.4 Service orientation and cloud computing
2.6 Summary
2.7 Review questions
2.8 Reference for further reading
The two most prominent computing eras are the sequential and the parallel era. Over the past decades, in the quest for high-performance computing, parallel machines have become important competitors of vector machines. Figure 2.1 provides an overview of the development of the computing eras. During these periods, four key computing elements were developed: architectures, compilers, applications, and problem-solving environments. A computing era begins with the development of hardware, is followed by software systems (especially in the area of compilers and operating systems) and applications, and reaches its saturation level with the growth of problem-solving environments. Each computing element goes through three stages: R&D, commercialization, and commodity.
Consider, for example, how the notion of a parallel system has evolved. Initially, the term applied only to architectures that featured multiple processors sharing the same physical memory within a single computer. Over time, those restrictions have been relaxed, and parallel systems now include all architectures based on the concept of shared memory, whether the shared memory is physically present or created through library support, specific hardware, and a highly efficient network infrastructure. For example, a cluster of nodes linked by InfiniBand and configured with a distributed shared-memory system can be considered a parallel system.
Underlying this evolution is the exponential increase in computing power. In 1965, Intel's co-founder Gordon Moore observed that the number of transistors on a chip doubled roughly every year while the cost fell by about half; the doubling period has since been revised to about 18 months and continues to lengthen. Silicon is reaching its performance limits just as a growing number of applications demand increased speed and reduced latency. To address this constraint, the feasible solution is to connect several processors and have them coordinate with each other to solve "Grand Challenge" problems. The initial steps in this direction led to the growth of parallel computing, which includes technologies, architectures, and systems for carrying out multiple activities in parallel. This section gives its proper characterization, namely the parallelism of multiple processors operating in coordination within a single computer.
Parallel processing is a way of dividing the different parts of an overall task among two or more processors (CPUs). Breaking a task up among several processors can reduce the time a program needs to run. Either machines with more than one CPU or the multi-core processors commonly found in computers today may perform parallel processing. Parallel computing machines rely on the concept known as divide and conquer, an elegant way to solve a problem: the problem is split into smaller problems of the same type that can be solved individually, and the partial results are combined into a total solution. The approach is applied repeatedly, breaking the problem into smaller and smaller subproblems until each can be solved easily. Programming a multiprocessor system using the divide-and-conquer technique is called parallel programming.
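To make the divide-and-conquer idea concrete, here is a minimal sketch in Java using the standard fork/join framework; the array-summing task, the class names, and the threshold value are illustrative choices, not anything prescribed by the text.

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: solve directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;                  // divide into two halves
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                              // solve one half in parallel
        return right.compute() + left.join();     // combine the partial results
    }
}

public class DivideAndConquerSum {
    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total);                // prints 1000000
    }
}

The threshold controls when a subproblem is considered small enough to be solved sequentially instead of being split further.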
Several factors influence the development of parallel processing. Prominent among them are the following:
1. In many fields of science and engineering, parallel computing has been considered the "high-end computing" used to model problems that are otherwise difficult to solve, in fields such as:
• Atmosphere, Earth, Environment
• Physics - applied, nuclear, particle, condensed matter, high pressure, fusion,
photonics
• Bioscience, Biotechnology, Genetics
• Chemistry, Molecular Sciences
• Geology, Seismology
• Mechanical Engineering - from prosthetics to spacecraft
• Electrical Engineering, Circuit Design, Microelectronics
• Computer Science, Mathematics
• Defense, Weapons
2. Sequential architectures are physically constrained by the speed of light and the laws of thermodynamics. The speed at which sequential CPUs can operate is reaching a saturation point (no further vertical growth). Therefore, an alternative way to achieve high computational speed is to connect several CPUs (the possibility of horizontal growth).
3. Hardware improvements such as pipelining and superscalar execution are not scalable and require sophisticated compiler technology, which is difficult to develop.
4. Another attempt to improve performance was vector processing, which performs more than one operation at a time. In this case, the capability to add (or subtract, multiply, or otherwise manipulate) two arrays of numbers was introduced into devices. This was useful in certain engineering applications where data naturally appears in the form of vectors or matrices; vector processing was less valuable in applications with less well-formed data.
5. There has been extensive R&D work on development tools and environments, and parallel processing technology is now mature and commercially exploitable.
6. Essential networking technology advancement paves the way for heterogeneous
computing.
A SIMD system is a multiprocessor system that executes the same instruction on all CPUs while operating on multiple data streams. SIMD-based machines are well suited to scientific computing, which involves many vector and matrix operations. The data can be divided into multiple sets (N sets for a system with N processing elements) so that each processing element (PE) applies the same instructions to its own data set. This model is ideally suited to problems with a high degree of regularity, such as graphics and image processing. Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and execution units. A dominant representative of SIMD systems is Cray's vector processing machine.
Examples:
Processor Arrays: Thinking Machines CM-2, MasPar MP-1 & MP-2, ILLIAC IV
Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi
S820, ETA10
MIMD is a multiprocessor system that can execute multiple instructions on multiple data sets. Each PE in the MIMD model has its own instruction and data streams, so machines built on this model can run any type of application. Unlike SIMD and MISD machines, the PEs in MIMD computers can operate synchronously or asynchronously, deterministically or non-deterministically. MIMD is currently the most common type of parallel computer, and most modern supercomputers fall into this category. Examples: most current supercomputers, networked parallel computer clusters and "grids", multiprocessor SMP computers, and multi-core PCs.
MIMD machines are broadly divided into shared-memory MIMD and distributed-memory MIMD, based on the manner in which the PEs are connected to the main memory.
In the shared-memory MIMD model (tightly coupled multiprocessor systems), all PEs are connected to a single global memory and all have access to it. Communication between PEs in this model takes place through the shared memory: changes made by one PE to data stored in the global memory are visible to all other PEs. Silicon Graphics machines and Sun/IBM SMP (Symmetric Multi-Processing) systems are dominant examples of shared-memory MIMD.
In distributed-memory MIMD machines (loosely coupled multiprocessor systems), every PE has its own local memory. In this model, communication among PEs is carried out through the interconnection network (the inter-process communication channel, or IPC). The network connecting the PEs can be configured as a tree, a mesh, or any other topology as needed.
The shared-memory MIMD architecture is easier to program but is less tolerant of failures and harder to extend than the distributed-memory MIMD model. Failures in a shared-memory MIMD machine affect the entire system, whereas this is not so in the distributed model, where each PE can easily be isolated. Moreover, shared-memory MIMD architectures are less likely to scale, because adding more PEs leads to memory contention; this is not the case with distributed memory, where each PE has its own memory. Owing to these practical considerations and to user requirements, the distributed-memory MIMD architecture is superior to the other models.
In general, a sequential program always runs the same sequence of instructions on the same input data and always generates the same results, whereas a parallel program is expressed by splitting the work into several parts that run on different processors. The program so decomposed is a parallel program.
Various methods are available for parallel programming. The most significant of these
are:
• Data parallelism
• Process parallelism
• Farmer-and-worker model
All three of these models can be used to achieve task-level parallelism. In the case of data parallelism, the divide-and-conquer technique is used: the data is repeatedly broken into two or more similar or related sets, and the same instructions are used to process each data set on different PEs. This is a very useful approach on machines based on the SIMD model. With process parallelism, a single activity contains many distinct operations that can be executed on several processors. In the farmer-and-worker model, the main (master) computation generates many subproblems, which the master fires off to slaves for execution. The only communication between the master and slave computations consists of the master starting the slaves and the slaves returning their results to the master.
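A minimal sketch of the farmer-and-worker (master-worker) model in Java follows, with a thread pool standing in for the slaves; the sum-of-squares job, the pool size, and the class name are illustrative assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FarmerAndWorker {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);   // the slaves
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 8; i++) {                                // master creates subproblems
            final int n = i;
            results.add(workers.submit(() -> n * n));                // fire a subproblem off to a worker
        }
        int total = 0;
        for (Future<Integer> f : results) total += f.get();          // only communication: results returned
        System.out.println(total);                                   // prints 140
        workers.shutdown();
    }
}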
Bit-level Parallelism: This form of parallelism is based on increasing the processor word size. A larger word size means that arithmetic operations on large numbers execute more quickly. An 8-bit processor, for example, takes two cycles to perform a 16-bit addition, whereas a 16-bit processor completes it in a single cycle. With the advent of 64-bit processors, this degree of parallelism appears to have run its course. The sketch below illustrates the idea.
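The following Java sketch mimics how an 8-bit machine must add two 16-bit numbers in two steps (low bytes first, then high bytes plus the carry); the method name and sample values are invented for illustration.

public class BitLevelAdd {
    // Adds two 16-bit values the way an 8-bit CPU must: low bytes first,
    // then high bytes plus the carry. A 16-bit CPU does this in one step.
    static int add16On8BitCpu(int a, int b) {
        int lowSum = (a & 0xFF) + (b & 0xFF);                         // cycle 1: low bytes
        int carry = lowSum >> 8;
        int highSum = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry;  // cycle 2: high bytes + carry
        return ((highSum & 0xFF) << 8) | (lowSum & 0xFF);
    }

    public static void main(String[] args) {
        int result = add16On8BitCpu(0x1234, 0x0FCD);
        System.out.printf("0x%04X%n", result);                        // prints 0x2201
    }
}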
Instruction Pipelining: Different stages of multiple independent instructions are executed in the same cycle, so that otherwise idle resources are put to use.
Task Parallelism: Task parallelism involves breaking a task down into subtasks and assigning each subtask to a processor for execution. The subtasks are carried out concurrently by the processors.
In this section, we broaden these principles and discuss how different tasks can be accomplished by systems consisting of many heterogeneous computers. We address what is commonly called distributed computing and present, from the software designer's point of view, the most important guidelines and principles for implementing distributed computing systems.
Work on distributed computing explores the models, architectures, and algorithms used in the design and management of distributed systems. As a general definition of a distributed system, we use the one proposed by Tanenbaum: a distributed system is a collection of independent computers that appears to its users as a single coherent system.
This is certainly the ideal form of a distributed system, one that completely hides from the user the "implementation details" of building a powerful system out of many simpler ones. In this section, we concentrate on the architectural models used to present a set of independent computers as a coherent system. The fundamental element of every distributed computing architecture is the concept of communication between computers. A distributed system is an application that executes a collection of protocols to coordinate the actions of several processes over a communication network, such that all the components cooperate in order to perform one task or a set of related tasks. Over the communication network, the collaborating computers can access both remote and local resources in the distributed system. The existence of multiple individual computers in the distributed network is transparent to the user, who does not know that the work is being performed on different machines in remote locations. Coulouris defines a distributed system as one in which hardware or software components located at networked computers communicate and coordinate their actions only by passing messages.
As this definition states, the components of a distributed system communicate with some kind of message passing, a term that covers several models of communication. Nearly all large computing systems are now distributed: a distributed system is "a set of independent machines that present themselves to the user as one coherent system," and information processing is spread across several machines instead of being confined to a single computer. Figure 2.12 presents an overview of the various layers involved in delivering the services of a distributed system.
The middleware layer uses such services to develop and deploy distributed applications in a consistent environment. On top of the services provided by the operating system, middleware defines its own protocols, data formats, and programming languages or frameworks for creating distributed applications. This layer supports the programming paradigms for distributed systems. Altogether it constitutes an interface that is entirely independent of the underlying operating system and masks all the heterogeneity of the lower layers.
The applications and services designed and developed to use the middleware form the top of the distributed-system stack. They serve many purposes, and they often expose their functionality through graphical user interfaces (GUIs) accessible via a web browser. For example, in the context of a cloud computing system, web technology is strongly preferred not only as an interface between distributed applications and their users, but also as a platform service for building distributed systems. A good example is an IaaS provider such as Amazon Web Services (AWS), which provides facilities for creating virtual machines, organizing them into a cluster, and deploying applications and systems on top of them. The figure gives an example of how the general reference architecture of a distributed system is contextualized in a cloud computing system.
Distributed systems are, by design, complex software components spread across many devices. It is important that these pieces be structured appropriately to master their complexity. There are various ways to view how a distributed system is organized, but the most useful distinction is between the logical organization of the software components and their actual physical deployment.
Distributed systems are structured primarily in terms of the software components that constitute the system. These software architectures tell us how the different components of the program are organized and how they interact. In this chapter, we first concentrate on some common approaches to organizing computer systems.
To build an effective distributed system, software components must be placed on specific computers and made to cooperate across them, and there are many different choices to make. The final instantiation of a software architecture is sometimes called the system architecture. In this chapter, we examine the traditional centralized architectures, in which the majority of the software components (and therefore the functionality) is implemented by a single server, while remote clients access that server through simple communication means. In addition, we consider decentralized architectures, in which the computers play more or less equal roles, as well as hybrid organizations.
Architectural Styles
We begin with the logical arrangement of distributed systems into software components, also called the software architecture. Work on software architecture has evolved dramatically, and the design or adoption of an architecture is now widely recognized as essential to the development of large systems.
The idea of an architectural style is important for our discussion. Such a style is formulated in terms of components, the connections between components, the data exchanged between components, and the way these elements are jointly configured into a system. A component is a modular unit with well-defined interfaces that can be replaced within its environment. As discussed below, the key point about a component of a distributed system is that it can be substituted, provided its interfaces remain the same. A somewhat more difficult concept is that of a connector, usually described as a mechanism for communication, coordination, or cooperation among components. For example, a connector can be realized by (remote) procedure calls, message passing, or streaming data.
The first class concerns the logical organization of the software (software architectural styles); the second class includes all the styles that describe the physical organization of software systems in terms of their major components (system architectural styles).
Component-and-connector views describe models consisting of elements that have some presence over time, such as processes, objects, clients, servers, and data stores. In addition, component-and-connector models include, as connectors, the interaction mechanisms among components, such as communication links and protocols, information flows, and access to shared storage. These interactions are often carried out through complex infrastructure, such as middleware systems, communication channels, and process schedulers. A component is a unit of behavior: its description defines what it can do and what it needs in order to do it. A connector indicates a relationship between components, such as data flow or control flow; it is the mechanism through which they interact.
Styles and patterns in software architecture define how to organize the system's components so as to build a complete system that satisfies the customer's requirements. A number of such models form the basis on which distributed systems are logically built; they are discussed in the following sections.
At the center of this architecture is a data store that is accessed by the other components, which update, add, delete, or otherwise modify the data present in the store. Figure 2.14 shows a typical data-centered style, in which a central repository is accessed by client software. A variation of this method turns the repository into a blackboard, so that clients are notified whenever data of interest to them changes. This data-centered architecture facilitates integrability: existing components can be changed, and new client components can be added to the architecture, without the permission or concern of the other clients. Clients can also use the blackboard mechanism to exchange data.
Data-flow architectures
A data-flow architecture transforms input data into output data through a series of computational or manipulative components. It is a style with no program counter, so the order of execution is driven by the availability of data rather than fixed in advance. Data flow contrasts with the von Neumann model of computation, which consists of a single program counter, sequential execution, and control flow that determines fetch, execute, and commit order.
• A data-flow architecture reduces development time and makes it easy to move from design to implementation.
• It primarily aims at the qualities of reuse and modifiability.
• In a data-flow architecture, the data can flow in a graph topology with cycles or in a linear structure without cycles.
There are two main forms of data-flow architecture:
1. Batch Sequential
2. Pipe and Filter
Batch Sequential
• Batch sequential processing offers a simple division into sub-systems: each subsystem is an independent program that works on its input data and produces output data.
• The biggest downside of batch sequential architectures is the lack of concurrency and of an interactive interface; they exhibit high latency and low throughput.
Pipe and Filter
• A pipe is a connector that transfers data from one filter to the next.
• A pipe is a directional data stream, implemented by a data buffer that stores the data until the next filter has time to process it.
• It moves data from a data source to a data sink.
• Pipes are stateless data streams.
• Each pipe connects to a filter and has its own role in the filter's operation. The filters are robust, and pipes can be added and removed at runtime.
• A filter reads data from its input pipes, performs its function on the data, and places the results on all of its output pipes. If an input pipe does not have enough data, the filter simply waits.
Filter
• A filter is a component.
• Its interfaces allow a variety of inputs to flow in and a variety of outputs to flow out.
• It processes and refines the input data.
• Filters are independent entities.
• There are two ways to create a filter:
1. Active Filter
2. Passive Filter
A minimal sketch of the style follows.
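As a rough illustration, the Java sketch below wires two active filters together with a pipe implemented as a bounded buffer; the uppercasing and printing filters, queue sizes, and class name are invented for the example.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipeAndFilterDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> source = new ArrayBlockingQueue<>(10); // pipe: source -> filter 1
        BlockingQueue<String> pipe = new ArrayBlockingQueue<>(10);   // pipe: filter 1 -> filter 2

        // Filter 1: reads from its input pipe, transforms, writes to its output pipe.
        Thread upperFilter = new Thread(() -> {
            try {
                while (true) pipe.put(source.take().toUpperCase());
            } catch (InterruptedException e) { /* shutdown */ }
        });
        // Filter 2: reads from its input pipe and acts as the data sink.
        Thread printFilter = new Thread(() -> {
            try {
                while (true) System.out.println(pipe.take());
            } catch (InterruptedException e) { /* shutdown */ }
        });
        upperFilter.setDaemon(true);
        printFilter.setDaemon(true);
        upperFilter.start();
        printFilter.start();

        source.put("hello");
        source.put("pipes and filters");
        Thread.sleep(200); // let the pipeline drain before the JVM exits
    }
}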
Interpreter Style
The interpreter is an architectural style suited to applications for which the most appropriate language or machine for executing the solution is not directly available. The style comprises a few parts: the program we are trying to run, the interpretation engine that interprets it, the memory area that holds the program, and representations of the current state of the program and of the interpreter itself. Procedure calls between the elements, together with direct memory access, are the connectors of the interpreter architectural style.
Input: the input to the interpreted program is forwarded to the program state, from which it is read by the interpreter.
Output: the output of the program is placed in the program state, from which it is passed on to the interfacing part of the system.
This model is quite useful in designing virtual machines for high-level programming languages (Java, C#) and scripting languages (Awk, Perl, and so on).
An interpreter or virtual machine does not have to execute every instruction of the source code it processes. In particular, it can refuse to execute code that violates any security constraints under which it operates. For example, JS-Interpreter is a JavaScript interpreter, itself sandboxed in JavaScript, that can execute arbitrary JavaScript code line by line. Its execution is completely isolated from the main JavaScript environment, and multiple JS-Interpreter instances make multi-threaded concurrent JavaScript possible without the use of web workers.
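A toy sketch of the interpreter style in Java: the "program" is a list of pseudo-instructions, the loop plays the role of the interpretation engine, and a stack holds the program's current state. The instruction set is invented purely for illustration.

import java.util.ArrayDeque;
import java.util.Deque;

public class TinyInterpreter {
    static int run(String[] program) {
        Deque<Integer> stack = new ArrayDeque<>();          // the program's current state
        for (String instr : program) {                      // the interpretation engine
            switch (instr) {
                case "ADD": stack.push(stack.pop() + stack.pop()); break;
                case "MUL": stack.push(stack.pop() * stack.pop()); break;
                default:    stack.push(Integer.parseInt(instr));   // a literal operand
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // Evaluates (2 + 3) * 4 without compiling it to machine code.
        System.out.println(run(new String[] {"2", "3", "ADD", "4", "MUL"}));   // prints 20
    }
}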
The call-and-return architectural style has been the most frequently used pattern in software systems. The call, or function-call, mechanism encompasses main programs with subroutines, remote procedure calls, object-oriented systems, and layered systems; they all fall under the call-and-return style of architecture.
Top-Down Style.
The top-down approach is essentially the breakdown of a system in order to gain insight into its compositional sub-structures, in a reverse-engineering manner (also known as stepwise design or stepwise refinement, and in some cases applied as decomposition). In a top-down approach, an overview of the system is formulated first, specifying all first-level subsystems without going into detail. Each subsystem is then refined further, often across several additional subsystem levels, until the entire specification has been reduced to basic elements. "Black boxes" make a top-down design easier to manipulate; however, black boxes may fail to clarify elementary mechanisms or be detailed enough to validate the model effectively. The top-down approach starts with the big picture and divides it into smaller pieces.
A top-down approach thus involves dividing the problem into tasks and splitting tasks into smaller subtasks. In this approach, we first develop the main module and then the modules of the next level. This process continues until all modules have been developed.
Object-Oriented Style.
Object-oriented programming is a programming-language paradigm organized around objects and data rather than actions and logic. A traditional procedural program is organized to take input data, process it, and produce a result: it is centered on the logic rather than the data. Object-oriented programming focuses instead on objects and their manipulation, not on the logic that operates on them.
The first phase of OOP is data modeling, which involves identifying all the objects, the data they contain, and the relationships among them. Data modeling is a planning phase that requires great care. Once every object involved in the program has been identified, we need a mechanism for producing those objects; this is known as the class mechanism. A class bundles data (properties) together with the logical sequence of methods for manipulating that data. Each method should be self-contained, and logic already defined in other methods should not be repeated.
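As a small illustration of the class mechanism, the following Java sketch bundles a data property with the methods that manipulate it; the Account example is hypothetical.

public class Account {
    private double balance;                       // data (property) bundled in the class

    public Account(double openingBalance) { this.balance = openingBalance; }

    public void deposit(double amount) {          // method that manipulates the data
        if (amount > 0) balance += amount;
    }

    public double getBalance() { return balance; }

    public static void main(String[] args) {
        Account account = new Account(100.0);     // object produced by the class mechanism
        account.deposit(25.0);
        System.out.println(account.getBalance()); // prints 125.0
    }
}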
The independent-components architecture consists of a number of independent processes or objects that communicate through messages. The messages may be transmitted to named or unnamed participants via publish/subscribe paradigms. The components typically do not control each other by sending data; because they are isolated, each component can be changed independently.
Examples: event systems and communicating processes are subtypes of this style.
Event systems
Publisher(s): advertise and publish the data they wish to share with others.
Subscriber(s): register their interest in receiving the published data.
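A minimal in-process sketch of this publish/subscribe interaction in Java; the EventBus class and string-typed events are illustrative simplifications (a real event system would add topics, a broker, and network transport).

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class EventBus {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<String> subscriber) {   // register interest
        subscribers.add(subscriber);
    }

    public void publish(String data) {                     // deliver the data to all subscribers
        for (Consumer<String> s : subscribers) s.accept(data);
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.subscribe(msg -> System.out.println("subscriber 1 got: " + msg));
        bus.subscribe(msg -> System.out.println("subscriber 2 got: " + msg));
        bus.publish("price updated");                      // the publisher does not know who listens
    }
}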
Communication process
The communicating-processes architectural style is also known as the client-server architecture.
Client: initiates a call to the server, requesting some service.
Server: provides the requested data to the client.
When the server operates synchronously, it returns the result of the data access directly to the client.
There are two major components in the client-server architecture: the server and the client. The server is where all the storage, transmission, and processing of data takes place, while the client accesses the services and resources of the remote server. The server lets clients make requests, to which it replies. In general, the server side is handled by a single computer, but to be on the safe side, load-balancing techniques spanning several servers are used. A socket-level sketch of the interaction is given after the list below.
The client-server architecture is a standard design with a centralized security database. This database holds security information, such as credentials and access details; without valid security keys, users cannot sign in to the server. This makes the architecture somewhat more stable and secure than peer-to-peer. The stability comes from the centralized security database allowing more efficient use of resources. On the other hand, the system can bog down, because a single server can only do a limited amount of work at a given time.
Advantages: centralized control makes security, data management, and maintenance easier, and new clients can be added without disturbing the others.
Disadvantages: the server is a single point of failure and can become a bottleneck when too many clients send requests at the same time.
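The request-reply interaction can be sketched in Java with plain sockets, the server answering each client request with data; the port number, message format, and class names are arbitrary choices for the example.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// File: EchoServer.java -- the server waits for client requests and replies with data.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();              // the client's request
                    out.println("server reply: " + request);     // the server provides data
                }
            }
        }
    }
}

// File: EchoClient.java -- the client initiates the call and reads the synchronous reply.
public class EchoClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("get account data");
            System.out.println(in.readLine());
        }
    }
}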
In a peer-to-peer distributed system, there is no central control. The fundamental idea is that at any given moment each node can act as either a client or a server: if a node requests something, it acts as a client, and if a node serves something, it acts as a server. Each node is usually called a peer.
A new node must first join the network. Upon joining, it may either request or provide services. The initiation phase of a node (joining the network) can vary depending on the network's implementation. There are two ways a new node can learn what the other nodes provide.
Centralized lookup server: the new node must register with the centralized lookup server and list the services it offers on the network. Whenever a service is needed, the centralized lookup server is contacted, and it directs the requester to the appropriate service provider.
Decentralized system: a node that seeks a particular service broadcasts its request to every other node in the network, and whichever node provides that service responds.
A distributed system is a set of computers that behaves as a single coherent system toward its users. An important point is that the differences between the individual computers, and the ways they communicate, are largely hidden from the users, which presents the user with a single image of the system. The OS hides all the details of communication among the users' processes, and the user does not know that many systems are involved. Inter-process communication, known as IPC, is accomplished by various mechanisms in distributed systems, and these mechanisms may differ from system to system. Another significant aspect is that users and applications can interact with a distributed system in a consistent and uniform way.
Communication between processes is the essence of all distributed systems, so it is important to understand how processes on different machines can exchange information. Inter-process communication, or IPC, as its name implies, serves to exchange data between two applications or processes, which may reside on the same machine or in different locations. Communication in distributed systems always rests on the low-level message passing offered by the underlying network; communicating through message passing is harder than communicating through the shared-memory primitives available on non-distributed platforms.
In this model, the message is the main abstraction: entities exchange explicitly encoded data and information in the form of messages. The structure and content of a message vary according to the model. Message Passing Interface (MPI) and OpenMP are significant examples of this type of model.
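A minimal sketch of message-based communication in Java, with two threads standing in for processes and a synchronous channel standing in for the network; in a real system (e.g., MPI) the endpoints would be separate processes on separate machines, and the message format here is invented.

import java.util.concurrent.SynchronousQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> channel = new SynchronousQueue<>();  // the communication channel

        Thread sender = new Thread(() -> {
            try {
                channel.put("temperature=21.5");   // explicitly encoded message
            } catch (InterruptedException ignored) { }
        });
        sender.start();

        String message = channel.take();           // receiver blocks until a message arrives
        System.out.println("received: " + message);
    }
}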
This model extends the well-known procedure-call abstraction beyond the boundaries of a single process, allowing procedures to be executed in remote processes. It implies a client-server model: a remote process hosts a server component, allowing client processes to call its procedures and obtain the results of their execution. The messages, created automatically by the Remote Procedure Call (RPC) implementation, convey the information about the procedure to execute together with the arguments it requires, and likewise carry back the return values. The use of messages in this way is referred to as marshalling the arguments and return values.
Distributed Objects
Active objects
Web Services
Web service technology provides an implementation of the RPC concept over HTTP, allowing components built with different technologies to interact. A web service is exposed as a remote object hosted on a web server, and method invocations are transformed into HTTP requests packaged using a specific protocol. It should be noted that the concept of the message is a fundamental abstraction of inter-process communication and is used either implicitly or explicitly.
• Software components do not necessarily know with whom they interact.
• Data consumers subscribe to the system and receive data from it as a whole.
A central software module ensures that all data, publications, and subscriptions are administered and matched; it is commonly referred to as the "broker." Often the broker is a network of cooperating software modules, and the software modules that use the broker's services are called clients.
Clients that publish, as well as clients that subscribe, "register" with the broker, which manages the communication paths, authenticates the clients, and performs other housekeeping activities.
Message delivery to subscribers can be filtered according to content rather than topic; this can be used instead of, or together with, topic-based filtering, though only a few publish-subscribe systems have implemented it.
Data can also be "persistent": subscribers that register with the network after the data was last published still receive the most recently published data on the specific topic.
A request-reply messaging model differs from a traditional pub/sub or P2P model, in which a message is published to a topic or queue and clients receive it without sending back a reply. Request-reply messaging may be used when a client sends a request message to a remote application, asking for information or for a processing action to be carried out; once the remote application receives the request message, it performs the necessary processing and returns a reply.
Remote Procedure Call (RPC) is a protocol that a program can use to request a service from a program on another computer on the network without needing to know the network's details. RPC is used to call processes on remote systems as if they were on the local system. A procedure call is also sometimes called a function call or a subroutine call.
RPC uses the client-server model: the requesting program is the client, and the service provider is the server. Like a regular, local procedure call, an RPC is a synchronous operation that suspends the requesting program until the results of the remote procedure are returned. Nevertheless, multiple RPCs can be performed concurrently by using lightweight processes or threads that share the same address space.
When program statements that use the RPC framework are compiled into an executable program, the compiled code includes a stub that represents the remote procedure code. When the program runs and the call is issued, the stub receives the request and forwards it to a client runtime program on the local computer. The first time the client stub is invoked, it contacts a name server to determine where the server resides.
The client runtime program knows how to address the remote computer and the server application, and it sends the message across the network requesting the remote procedure. The server likewise has a runtime program and a stub that interface with the remote procedure itself; the response is returned to the client over the same request-reply path.
When the procedure completes and produces its results, they are returned to the calling environment, which resumes execution as if returning from a regular procedure call.
NOTE: RPC is particularly well suited to client-server interaction (e.g., query-response) between the caller and the callee. Conceptually, the client and server do not execute at the same time; instead, the thread of execution jumps back and forth from the caller to the callee.
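To illustrate what the stub and marshalling do, here is a hand-written and much-simplified client stub and server skeleton in Java; real RPC frameworks generate this code automatically and use a binary wire format rather than the plain-text protocol invented here, and the port and procedure name are arbitrary.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// File: AddClientStub.java -- the stub marshals the call into a message and
// unmarshals the reply, so the caller sees an ordinary local procedure call.
public class AddClientStub {
    public static int add(int a, int b) throws IOException {
        try (Socket socket = new Socket("localhost", 6000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("add " + a + " " + b);        // marshal procedure name and arguments
            return Integer.parseInt(in.readLine());   // unmarshal the return value
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(add(2, 3));                // looks like a local call; prints 5
    }
}

// File: AddServerSkeleton.java -- the skeleton unmarshals the request,
// executes the actual procedure, and marshals the result back.
public class AddServerSkeleton {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(6000)) {
            while (true) {
                try (Socket c = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(c.getInputStream()));
                     PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                    String[] parts = in.readLine().split(" ");    // unmarshal the request
                    if ("add".equals(parts[0])) {
                        int result = Integer.parseInt(parts[1]) + Integer.parseInt(parts[2]);
                        out.println(result);                      // marshal and return the result
                    }
                }
            }
        }
    }
}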
Client-server systems are the best-known way to develop distributed systems and frameworks. Distributed object frameworks are an extension of this client-server model: they are libraries with which distributed applications can be built using object-oriented programming. In distributed computing, distributed objects are objects that reside in different address spaces, whether in different processes on the same computer or in multiple computers connected by a network. They work with one another by sharing data and invoking methods. This often involves location transparency, whereby remote objects appear the same as local objects. The main method of distributed object communication is remote method invocation, usually by message passing: a message is sent to an object in a remote machine or process to perform some task, and the results are returned to the calling object.
The Remote Procedure Call (RPC) approach brings the common programming abstraction of the procedure call to distributed environments, allowing a calling process to invoke a procedure on a remote node as if it were local.
Remote method invocation (RMI) resembles RPC but for distributed objects; it has the added benefit of allowing object-oriented programming concepts to be used in distributed systems, extends the concept of the object reference to the distributed global environment, and allows object references to be used as parameters in remote invocations.
Remote procedure call – the client calls procedures in a server program running in a different process
Remote method invocation (RMI) – an object can invoke the methods of an object residing in a different process
Event notification – objects receive notifications of events in other objects for which they have registered
The Common Object Request Broker Architecture (CORBA) is the best-known middleware specification. It is supported by a consortium of more than 800 companies that includes most computing companies, with the exception of Microsoft, which has its own Distributed Component Object Model (DCOM). CORBA is essentially a design specification for an Object Request Broker (ORB): the ORB mechanism allows distributed objects, written in different languages and located locally, on remote devices, or at various places in the network, to communicate with each other.
CORBA is usually described as a "software bus" because the objects are located and accessed through a software-based communication interface. The following illustration identifies the main components found in a CORBA implementation.
IDL defines an application's modules, interfaces, and operations independently of any programming language. Various programming languages, such as Ada, C++, C#, and Java, provide standardized IDL mappings for implementing the interfaces.
The IDL compiler generates stub and skeleton code that marshals and unmarshals the parameters between the network stream and in-memory instances of the implementation language. The stub is a client-side proxy for an object reference whose implementation, the servant, resides on the server behind the skeleton. Because the mappings are standardized, a language-specific IDL stub can communicate with a skeleton generated for a different language. The stub code is linked with the client code, the skeleton code is linked with the object implementation, and both communicate with the ORB runtime system in order to carry out remote operations.
RMI stands for Remote Method Invocation, a mechanism that allows an object in one program (one JVM) to access or invoke an object running in another JVM. RMI enables remote communication between Java programs and is used to create distributed applications.
In an RMI application, we write two programs: the server program (residing on the server) and the client program (residing on the client). The server program creates a remote object and makes a reference to it available to the client (using the registry). The client program requests the remote object on the server and tries to invoke its methods.
▪ Transport Layer − this layer connects the client and the server. It maintains existing connections and also sets up new ones.
▪ Stub − the stub is the client-side proxy of the remote object. It resides in the client system and serves as the client's gateway.
▪ Skeleton − this is the server-side counterpart. The stub interacts with the skeleton to pass a request on to the remote object.
▪ RRL (Remote Reference Layer) − this layer manages the references that clients hold to remote objects.
▪ Whenever the client makes a request to the remote object, the stub forwards the request to the client-side RRL.
▪ When the client-side RRL receives the request, it calls a method invoke() of the object remoteRef, which passes the request to the RRL on the server side.
▪ The server-side RRL passes the request to the skeleton, which finally invokes the required object on the server.
▪ The results are passed all the way back to the client.
Whenever a client invokes a method of a remote object that accepts parameters, the parameters are bundled into a message before being sent over the network. These parameters may be of primitive types or objects. Primitive-type parameters are put together and a header is attached; object parameters are serialized. This process is known as marshalling.
The RMI registry is a namespace in which all server objects are placed. Each time the server creates an object, it registers it with the RMI registry (using the bind() or rebind() methods). Objects are registered under a unique name known as the bind name. To invoke a remote object, the client needs a reference to it; the client fetches the object from the registry using its bind name (via the lookup() method).
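A minimal, complete RMI example in Java (three source files) tying these pieces together; the Hello interface, the bind name "HelloService", and the registry port are illustrative choices.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// File: Hello.java -- the remote interface shared by client and server.
public interface Hello extends Remote {
    String sayHello(String name) throws RemoteException;
}

// File: HelloServer.java -- creates the remote object and registers it
// in the RMI registry under the bind name "HelloService".
public class HelloServer implements Hello {
    public String sayHello(String name) { return "Hello, " + name; }

    public static void main(String[] args) throws Exception {
        Hello stub = (Hello) UnicastRemoteObject.exportObject(new HelloServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("HelloService", stub);       // register under the bind name
        System.out.println("server ready");
    }
}

// File: HelloClient.java -- looks the object up by its bind name and invokes it.
public class HelloClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        Hello hello = (Hello) registry.lookup("HelloService");   // fetch the stub
        System.out.println(hello.sayHello("world"));             // remote invocation
    }
}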
.NET remoting involves three core elements:
1. Remote object
2. Remote listener application (listens for requests for the remote object)
3. Remote client application (makes requests for the remote object)
Both of these core technologies can be seen as part of a progression from older distributed-application paradigms toward service-oriented, cloud-related architecture (cloud computing is itself often considered an offshoot of service-oriented architecture).
There are a wide variety of technologies that can be used when it comes to implementing
service-oriented architecture (SOA), depending on the ultimate objective and what you're
trying to achieve.
Service-oriented architecture is typically implemented with web services, which make "functional building blocks accessible via standard Internet protocols."
SOAP, which stands for Simple Object Access Protocol, is an example of a web service standard. Briefly, SOAP "is a messaging protocol specification for exchanging structured information in the implementation of web services in computer networks." Although SOAP was not well received at first, it has grown in popularity since 2003 and is now used and accepted more widely. Jini, CORBA, and REST are other options for implementing a service-oriented architecture.
It is important to remember that the architecture can be realized "regardless of the particular technologies," including through messaging systems such as ActiveMQ, Apache Thrift, and SORCER.
Web services can be searched for across the network and invoked accordingly. When invoked, a web service provides the client with the functionality it exposes.
The diagram gives a very clear view of the internal working of a web service. The client makes a series of web service calls, in the form of requests, to the server that hosts the web service. These requests are made through what are known as remote procedure calls: Remote Procedure Calls (RPCs) are calls to the procedures hosted by the web service. Amazon, for example, provides a web service for the products sold online through amazon.com. The front end, or presentation layer, may be written in .NET or Java, but either programming language can interact with the web service.
The primary component of a web service is the data transmitted between the client and the server, namely XML. XML is a counterpart to HTML: an intermediate language that is easy for many programming languages to understand, so that applications talk to each other in XML. This gives applications written in different programming languages a common interface for interacting with one another. Web services use SOAP (Simple Object Access Protocol) to transfer the XML data between applications, and the data is transmitted over ordinary HTTP. The data that the web server transmits to the program is a SOAP message, and a SOAP message is just XML. Because this document is written in XML, the client application calling the web service can be written in any programming language.
Modern software systems use a wide range of web-based programming tools every day. Some applications are built in Java, others in .NET, others in AngularJS, Node.js, and so on. These heterogeneous applications most often require some kind of communication among themselves, and because they are built in different programming languages, effective communication between them is very difficult to ensure. This is where web services come in. Web services provide a common platform that enables multiple applications built in various programming languages to communicate with one another.
There are certain components that must be in place for a web service to be fully functional, regardless of which programming language is used to program the service.
SOAP is regarded as a transport-independent messaging protocol. It is based on transferring XML data in the form of SOAP messages. Each message carries an XML document, and only the structure of the XML document follows a defined pattern, not its content. The best part of web services and SOAP is that everything is delivered over HTTP, the standard web protocol.
The structure of a SOAP message is as follows:
Every SOAP document requires a root element known as the <Envelope> element; the root element is the first element in an XML document. The envelope is in turn divided into two parts: the header and the body. The header contains the routing data, that is, the information about the destination to which the XML document should be sent. The body contains the actual message.
A simple example of communication through SOAP is given in the diagram below.
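The original diagram is not reproduced here; as a stand-in, the following Java sketch builds and prints a SOAP envelope using the SAAJ API (javax.xml.soap, included in the JDK through Java 8 and available as a separate artifact afterwards). The payload element, prefix, and namespace are illustrative assumptions borrowed from the WSDL example later in this section.

import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPMessage;

public class SoapEnvelopeDemo {
    public static void main(String[] args) throws Exception {
        // Create an empty SOAP message: an <Envelope> with a Header and a Body.
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        SOAPEnvelope envelope = message.getSOAPPart().getEnvelope();

        SOAPHeader header = envelope.getHeader();    // routing information would go here
        SOAPBody body = envelope.getBody();          // the actual message goes in the body
        body.addChildElement(envelope.createName(
                "TutorialRequest", "tns", "urn:examples:Tutorialservice"))
            .addTextNode("Tutorial-1");              // illustrative payload

        message.saveChanges();
        message.writeTo(System.out);                 // prints the XML envelope
    }
}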
A web service cannot be used if it cannot be found: the client invoking the web service must know where the web service is located.
Second, the client application needs to know what the web service actually does, so that it can invoke the right operations. This is achieved with WSDL, the Web Services Description Language. The WSDL file is another XML document that essentially tells the client application what the web service does. By using the WSDL document, the client application understands where the web service is located and how to use it.
<definitions>
   <message name="TutorialRequest">
      <part name="TutorialID" type="xsd:string"/>
   </message>
   <message name="TutorialResponse">
      <part name="TutorialName" type="xsd:string"/>
   </message>
   <portType name="Tutorial_PortType">
      <operation name="Tutorial">
         <input message="tns:TutorialRequest"/>
         <output message="tns:TutorialResponse"/>
      </operation>
   </portType>
   <binding name="Tutorial_Binding" type="tns:Tutorial_PortType">
      <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
      <operation name="Tutorial">
         <input>
            <soap:body
               encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
               namespace="urn:examples:Tutorialservice"
               use="encoded"/>
         </input>
         <output>
            <soap:body
               encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
               namespace="urn:examples:Tutorialservice"
               use="encoded"/>
         </output>
      </operation>
   </binding>
</definitions>
The main aspects of the above WSDL declaration are the following:
<message> – The message parameter of the WSDL definition is used to describe the different data elements for each operation of the web service. In this example, there are two messages that can be exchanged between the web service and the client application: a "TutorialRequest" and a "TutorialResponse". The TutorialRequest contains an element called "TutorialID" of type string; similarly, the TutorialResponse contains an element called "TutorialName", also of type string.
<portType> – This actually defines the operation offered by the web service, which in our case is called Tutorial. This operation takes two messages, one as input and the other as output.
<binding> – This element contains the protocol that is used. In our case we define it to use HTTP (http://schemas.xmlsoap.org/soap/http). Additional details about the body of the operation are also specified, such as the namespace and whether the message is encoded.
UDDI is the standard by which web services offered by a particular provider are described, published, and discovered; it provides a specification for hosting information about web services.
In the previous topic, we discussed WSDL and how it provides details about what the web service actually does. But how can a client application locate a WSDL file in order to learn about the various operations a web service offers? UDDI provides the answer: a registry on which WSDL files can be hosted, giving the client application full access to the UDDI, a database containing all the WSDL files.
Just as a telephone directory contains a person's name, address, and telephone number, the UDDI registry contains the relevant information about a web service, so that a client application knows where to find it.
But let us discuss some other reasons why web services are relevant.
Exposing business functionality on the network – a web service is a unit of managed code that provides some kind of functionality to client applications or end users. This functionality can be invoked over the HTTP protocol, which means it can also be invoked over the Internet. Nowadays most applications are already on the Internet, which makes web services all the more useful: a web service can be available anywhere on the web and provide the required functionality.
A high degree of flexibility and modularity is required to make a cloud infrastructure work in the real world. A cloud must be designed to support a range of workloads and business services, so that a service can be scaled up when demand rises and scaled down when it falls.