Distributed System Assignment
Uploaded by Loveday Osiagor

Multicast RPC Implementation for Java

Abstract
We propose an extension of Remote Procedure Call (RPC) semantics to a multicast
environment and discuss a Java implementation thereof. The implementation
consists of a reliable, FIFO multicast provider, five interchangeable Total Order
Broadcast algorithms, and an RPC client and server that make use of these
technologies. We demonstrate that the implementation works correctly. Throughout
the paper, we discuss future work that could be done on this project.

INTRODUCTION

Remote Procedure Call (RPC) [1] is a convenient paradigm to use when
programming a distributed system. A well-designed RPC implementation like
Java's RMI is transparent to the programmer; that is, remote procedure calls are
made with syntax identical to local procedure calls [4]. This brings the ease of
programming of shared-memory multiprocessing systems to inexpensive
multicomputer clusters built with commercial off-the-shelf components.
Instead of explicitly passing messages between processes, the programmer simply
sets up the RPC mechanism and performs method calls on objects. Threads can
thus be moved to other computers without affecting method invocation semantics.
This allows the programmer to focus on the higher-level task of writing a correct
program rather than worry about the explicit memory management required on
distributed-memory multiprocessors. Point-to-point RPC between a client and a
server system is mostly a solved problem [1], but RPC in a multicast environment
is still an active field of research. RPC has been criticized for being inherently
point-to-point [3], but we believe that the ability of a client system to execute an
RPC call on many servers in parallel would be useful in many situations. For
example, a master processor could send software updates or remote administration
commands to a network of mobile devices with little effort on the part of the
programmer. Thinking bigger, a network of mobile devices could be treated as a
"grid", bringing massively parallel computing power to any client within wireless
range. Such a network could be scaled up or down without requiring any changes
to client programs of the grid. This paper demonstrates an implementation of
multicast RPC in Java. Because it is written entirely in software without any
modification to the language or compiler, our implementation does not have the
transparency of Java RMI. However, it would be straightforward to integrate this
code into the Java runtime system as an extension to RMI, providing more
transparency.
Literature Review

Multicast communication protocols

Multicast communication protocols have been an area of active research for some
time. Multicast typically extends message passing with some notion of distributed,
addressable process or object groups. Many protocols and implementations exist,
and the focus of most implementations is on providing reliability through
replication. Perhaps the most difficult aspect of providing multicast is choosing an
appropriate semantics, that is, how the ordering and delivery of messages is
coordinated by receiving objects. It is generally accepted that no single model is
appropriate for all applications. Multicast provides a level of indirection over
message passing, but most implementations require that participating objects be
explicitly aware of the communication model. This typically means that
application components are programmed with implicit knowledge of the multicast
semantics and can be difficult to reuse in a different communications environment.

Remote Procedure Call


Remote procedure call (RPC) is perhaps the most popular interaction protocol
because of its similarity to local procedure call. Use of remote procedure call
typically involves the definition of remote procedures using an interface definition
language (IDL) and the creation of program stubs that approximate local procedure
call semantics for the client (caller) and server (receiver). It is therefore relatively
easy to modify existing programs for distribution.

Many flavours of RPC exist, each providing different semantic guarantees and data
typing. CORBA remote method invocation, for example, has a strong object-
oriented flavour and supports multiple programming language bindings, whereas
DCE RPC is relatively unique in its support for data pointers. SunRPC is perhaps
the most widely used because reference implementations are freely available.
SunRPC is the basis for a number of common distributed services, including the
Network File System (NFS). While RPC is generally easy to use, it is not suitable
for all applications. In particular: RPCs are inherently synchronous, which limits
opportunities for parallelism and causes performance problems on networks of
high latency. RPCs imply procedural interaction, so they are unsuitable for
applications requiring streamed data, for example. RPCs are directed, in that they
require explicit identification of the server by the client. Anonymous or mediated
communication gives considerably more flexibility and opportunity for reuse of
application components.
Message Passing

Message passing is also quite common in distributed applications and
environments. Message passing is used instead of RPC where the interaction
model is not strictly synchronous or parallelism in communication is required.
PVM is a commonly used distributed environment based on message passing,
with a focus on high-performance parallel programming. Reliable, transactional
message passing is provided by environments such as IBM's commercial MQ
Series product. At an abstract level, electronic mail (email) is a form of
asynchronous message passing, and this has gained wide acceptance in both
research and commercial environments.
Message passing can also provide a basis for streamed communication, although
it is more common for stream-oriented communications to be provided explicitly
by an environment. Message passing has some drawbacks in distributed
applications, particularly: Message passing is relatively low-level and provides
minimal abstraction for higher-level application protocols. As with RPC, message
passing is directed, requiring explicit identification of the receiver and
constraining reuse of application components.

Proposed System Design

We propose an extension of Remote Procedure Call (RPC) semantics to a multicast
environment and discuss a Java implementation thereof. The implementation
consists of a reliable multicast provider, five interchangeable Total Order
Broadcast algorithms, and an RPC client and server that make use of these
technologies. We demonstrate that the implementation works correctly.
Throughout the paper, we discuss future work that could be done on this project.


Multicast Implementation
Correct RPC semantics require both the sender and receiver to agree on the order
in which procedures are executed. Multicast RPC is no different; in fact, all
receivers should agree on the same ordering for procedure calls. This property,
called Total Order Broadcast [2], is not provided in the Java standard library.
Java's network library supports the core transport-layer protocols of the Internet:
TCP, UDP, and multicast UDP. Since multicast UDP does not provide any
guarantee of reliable or in-order delivery, it is not a total order broadcast algorithm,
and is thus unsuitable for use with RPC. Furthermore, we were unable to correctly
route multicast packets on any of the networks we tested. Given the limited amount
of time available for this project, we chose to implement a total order broadcast
protocol in software atop TCP connections. This is not truly a multicast protocol,
and in fact it is inefficient for production-quality multicast RPC, but we believe
that the modular nature of the code lends itself to being replaced with a better
implementation in the future, perhaps taking advantage of advanced multicast
services in the operating system or network hardware. Our total order broadcast
implementation consists of the following modules:
The multicast simulator. This is a reliable multicast transport layer. It has a set
of message sender and receiver classes, along with a multicast manager that tracks
membership in the multicast group and relays the information to group members.




The total order broadcast modules. These use the multicast simulator for their
transport layer and add a total ordering guarantee to message transport. They are
based on the pseudocode given in [2].
Multicast Simulator
Our multicast code is located in the network package. Messages are sent using
instances of the multicast sender, found in MulticastMessageSender.java. A
message sender maintains a set of message receiver addresses. It sends a message
by opening a TCP connection to each receiver in the set, serializing the message,
and sending the serialized byte stream over the connection. The set of receiver
addresses is kept up to date with the help of a separate thread that listens on a
socket for broadcast updates. Unicast messages are received with instances of
MessageReceiver.java. This class runs a thread that listens on a socket for
incoming messages. The messages are passed to the receiver's message handler, a
type of extensible event handler that the user provides. A handler might print the
message to the console, queue it, or hand it off to a waiting processing thread, for
example.
Multicast messages arrive at instances of MulticastMessageReceiver.java. This
class extends MessageReceiver.java to maintain membership in the multicast
group, including sets of the sender and receiver unique identifiers. Updates to
these sets are accomplished in the same way as for multicast senders. A core
concept in our multicast simulator is that of the multicast manager, implemented
in MulticastManager.java. The multicast manager listens on a TCP socket for
Join and Leave messages from the senders and receivers. Upon receiving such a
message, it updates its internal membership sets and sends updates to members of
the group. Multicast senders need to know the addresses of the group's receivers,
so those are broadcast to the senders whenever a receiver joins or leaves the group.
Similarly, sender and receiver membership changes are broadcast to all receivers
in the group.
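The send path described above can be sketched as follows. For a self-contained illustration we serialize to an in-memory buffer rather than a live TCP socket, and all class and method names here are our own, not the project's:

```java
import java.io.*;
import java.util.*;

// Simplified sketch of the multicast send path: the sender keeps a set of
// receiver addresses and, for each one, serializes the message and writes
// the bytes to a per-receiver stream (a TCP connection in the paper's
// implementation; an in-memory buffer here).
class SimpleMulticastSender {
    private final Set<String> receiverAddresses = new LinkedHashSet<>();

    void addReceiver(String address) { receiverAddresses.add(address); }

    // Returns the serialized payload "delivered" to each receiver address.
    Map<String, byte[]> send(Serializable message) throws IOException {
        Map<String, byte[]> delivered = new LinkedHashMap<>();
        for (String address : receiverAddresses) {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
                out.writeObject(message); // Java serialization, as in the paper
            }
            delivered.put(address, buffer.toByteArray());
        }
        return delivered;
    }
}
```

In the real implementation, the byte stream would be written to a socket's output stream and the receiver's listening thread would deserialize it and pass it to the message handler.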

Fixed Sequencer
The simplest total ordering algorithm uses a single process through which all
broadcast messages must be routed. This process, called a sequencer, receives one
message at a time, so a total order is imposed automatically. The sequencer adds a
sequence number to each incoming message, then broadcasts it to all receivers in
the multicast group. Our fixed sequencer resides in the sequencer.fixed package,
in FixedSequencer.java. It uses a message receiver to accept incoming messages
and broadcasts them with a multicast message sender. We have a test program in
test.FixedTester.java.

Moving Sequencer
The Fixed Sequencer scheme relies on a single, fixed intermediary process to act
as a relay between the group of senders (S) and the group of destinations (D). The
main advantage of FS is that it is relatively simple, but it suffers from a single
point of failure. If the FS process ever crashes, then the entire system stops
processing messages: no message can be sent from S, and no member of D can
receive a message. Instead of a single intermediary process, we can use a group of
processes to relay messages. This is the Moving Sequencer scheme. It distributes
the responsibility of relaying messages between S and D and balances the load of
sending messages amongst its members. As outlined in [2], the MS
implementation relies on a token ring. A Token is an object that records a
sequence number and keeps a list of messages that have been relayed between S
and D. The algorithm proceeds thus:
1. Each sender in S broadcasts to every member of MS.
2. Each sequencer waits for both a message and the token. As soon as it holds
both of these objects, it attempts to add the message to the list of already relayed
messages. If this operation fails, then the sequencer knows that the message has
already been relayed, and it should be dropped. Otherwise, it stamps the message
with a sequence number and sends the message to D.
The token ring is implemented in the util.token package. The primary files are
Token.java and NetworkTokenManager.java. A token manager is an object
that is responsible for initializing the token, receiving it from the previous node,
and sending it to the subsequent node in the ring when either a timeout occurs or
it has been signaled to do so by another process. The Moving Sequencer algorithm
is implemented in the sequencer.moving package, in the file
MovingSequencer.java. This class is tested by the file MovingTest.java, found
in test.
The current implementation successfully constructs the token ring and relays only
one copy of a message between S and D. There are no provisions for dealing with
the possibility of a member of the ring failing. The current implementation could
be extended towards this functionality by maintaining an online registry of token
ring members. In the case of member failure, each member of the token ring is
notified by the registry, which then calculates a new token ring by randomly
assigning new previous and adjacent nodes to each member.
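The relay step described in the algorithm can be sketched as follows; the duplicate check and sequence stamping are the essential parts, and all names here are illustrative rather than the project's:

```java
import java.util.*;

// Sketch of the moving sequencer's relay step: a sequencer holding both
// the token and a message relays the message only if it has not already
// been relayed, stamping it with the token's next sequence number.
class MovingSequencerSketch {
    static final class Token {
        long nextSeq = 0;
        final Set<String> relayed = new HashSet<>(); // message ids already sent to D
    }

    // Messages "delivered" to D, as "<seq>:<payload>" strings.
    final List<String> deliveredToD = new ArrayList<>();

    // Returns true if this call actually relayed the message to D; false
    // means another sequencer already handled it, so it is dropped.
    boolean relay(Token token, String messageId, String payload) {
        if (!token.relayed.add(messageId)) return false; // duplicate: drop
        deliveredToD.add(token.nextSeq++ + ":" + payload);
        return true;
    }
}
```

Because the token carries both the sequence counter and the relayed-message list, whichever sequencer holds it next sees a consistent view of what has already been ordered.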

Privilege-Based
The Privilege-Based scheme, like Moving Sequencer, is based on creating a token
ring of processes. Instead of sequencers forming the ring, the members of S do so.
The algorithm given in [2] is:
1. For each member in S, wait for both a message from the upper layer and the
token.
2. When both of these objects are held, assign the message a sequence number
and multicast it to D.
PB is a relatively simple scheme that distributes the load of sending messages
from the upper level amongst the members of S. The token ring is implemented in
the same files mentioned for Moving Sequencer. The algorithm implementation is
in PrivilegedSender.java, package privilege. It is tested in PrivilegeTest.java,
found in the testing package. The purpose of the test is to create the token ring
amongst S, and to send messages from S to a destination.
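The privilege mechanism can be sketched as a ring in which only the current token holder may stamp and send; the names below are ours, and the ring is simulated in one process rather than across the network:

```java
import java.util.*;

// Sketch of the privilege-based scheme: the senders form a logical ring
// and a token carrying the next sequence number circulates among them.
// Only the current token holder may stamp and multicast a message.
class PrivilegeRing {
    static final class Token { long nextSeq = 0; }

    private final List<String> senders;
    private int holder = 0; // index of the current token holder in the ring
    private final Token token = new Token();

    // Messages "multicast" to D, as "<seq>:<payload>" strings.
    final List<String> delivered = new ArrayList<>();

    PrivilegeRing(List<String> senders) { this.senders = new ArrayList<>(senders); }

    // The named sender tries to broadcast. It succeeds only while holding
    // the token; it then passes the token to the next sender in the ring.
    boolean broadcast(String sender, String payload) {
        if (!senders.get(holder).equals(sender)) return false; // must wait for token
        delivered.add(token.nextSeq++ + ":" + payload);
        holder = (holder + 1) % senders.size(); // pass the token on
        return true;
    }
}
```

Since the sequence counter travels with the token, no two senders can ever stamp messages with the same number, which is exactly what yields the total order.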


Communications History
In contrast to the Fixed/Moving Sequencer and Privilege-Based schemes, the
Communications History class of algorithms does not differentiate between S and
D. Instead, all processes in the system belong to the same group P. Of the class of
Communications History algorithms, we implemented the causal history
algorithm, which proceeds as follows:
1. Every member p of P has an array of logical clocks that records the maximum
clock value of each process q of P, as known by p. Initially, all values are zero.
An update occurs when p receives a message from q with a new clock value.
2. To broadcast, p updates its own logical clock, time-stamps the message, and
sends it across some FIFO channel (a TCP stream in our implementation).
3. When a process p receives a message from q, p updates its array of logical
clocks to be the maximum of the current logical clock value for q and the value
embedded in the message. p then creates a set of deliverable messages, defined as
those that have been received but not yet delivered and whose timestamp is less
than the minimum clock value across the logical clock array.
4. Each deliverable message is delivered to the layer above. The algorithm also
checks whether a copy of the message has been received from every other
destination. If not, it records the sender id, keyed by the message. If yes, then it
finds the globally maximum timestamp of the message, stamps a copy of the
message with the global timestamp, and adds it to the stamped set.
5. For each message in the stamped set, the algorithm checks whether every
message in the received set has a logically later timestamp. If this is true, then the
stamped message is delivered to the layer above.
The algorithm enforces a total order on message delivery by finding the logically
latest copy of each message and ensuring that it cannot be delivered unless all
other received messages are logically later [2]. Our implementation is found in
the file AgreementDest.java, package destination.agreement. The algorithm
assumes that there is a way to discover only the set D of destinations. We were
not able to implement this functionality within the given time constraints. A
possible extension to this project would be to use a different Multicast Manager
to negotiate the message channels amongst only D.
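The delivery rule from steps 1 and 3 can be sketched as follows; this is a minimal single-process simulation of the clock array and the deliverable-set test, with names of our own invention:

```java
import java.util.*;

// Sketch of the communications-history delivery rule: a received message
// becomes deliverable once its timestamp is lower than the smallest clock
// value this process has observed from every member of the group.
class CausalHistorySketch {
    private final Map<String, Long> clocks = new HashMap<>(); // max clock seen per process
    private final List<Map.Entry<Long, String>> pending = new ArrayList<>();

    CausalHistorySketch(Collection<String> group) {
        for (String p : group) clocks.put(p, 0L); // initially all values are zero
    }

    // Record a message from 'sender' carrying logical timestamp 'ts'.
    void receive(String sender, long ts, String payload) {
        clocks.merge(sender, ts, Math::max); // keep the maximum clock value
        pending.add(Map.entry(ts, payload));
    }

    // Deliver, in timestamp order, every pending message whose timestamp is
    // below the minimum clock value observed from all processes.
    List<String> deliverable() {
        long min = Collections.min(clocks.values());
        pending.sort(Map.Entry.comparingByKey());
        List<String> out = new ArrayList<>();
        for (Iterator<Map.Entry<Long, String>> it = pending.iterator(); it.hasNext(); ) {
            Map.Entry<Long, String> e = it.next();
            if (e.getKey() < min) { out.add(e.getValue()); it.remove(); }
        }
        return out;
    }
}
```

Note how a message is held back until every process has been heard from with a later clock value; this waiting is the price the scheme pays for needing no sequencer.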


The RPC Layer
With a total order broadcast algorithm available, the actual RPC code becomes
very simple. A program that wishes to call a remote method constructs a new
RPC client instance, initializing it with a Sender implementation from one of the
five total order broadcast algorithms. The relevant code is in RPCClient.java. A
client then invokes the call method, which accepts an object or Class instance, a
method name, and a variable number of arguments. The call method packages
these arguments into an RPC Message and total order broadcasts it.
On the server side, an instance of RPCServer.java receives the message, unpacks
its contents, and performs a dynamic class and method lookup using the Java
reflection API. It then invokes the method using the provided arguments and
returns. Our implementation does not currently support return values. Any
method may be invoked, but return values are discarded on the server side. We
can think of two ways to support return values: either allow multicast messages to
aggregate responses from the receivers and pass them back to the sender, or have
the RPC client wait for responses from all the servers. An earlier version of the
project tried the former solution, but it proved unreliable and complicated much
of the total order broadcast code.
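The client-side packaging and server-side reflective dispatch can be sketched together; the class and method names below are illustrative (the paper's classes are RPCClient.java and RPCServer.java), and for simplicity the sketch dispatches only to static methods:

```java
import java.io.Serializable;
import java.lang.reflect.Method;

// Sketch of the RPC layer: the client packages a class name, method name,
// and arguments into a message; the server looks the method up with the
// reflection API and invokes it. As in the paper's implementation, the
// return value would normally be discarded on the server side.
class RpcSketch {
    // The serializable unit the client would total order broadcast.
    static final class RpcMessage implements Serializable {
        final String className, methodName;
        final Object[] args;
        RpcMessage(String className, String methodName, Object... args) {
            this.className = className; this.methodName = methodName; this.args = args;
        }
    }

    // Server-side dispatch: dynamic class and method lookup, then invoke.
    static Object dispatch(RpcMessage m) throws Exception {
        Class<?> cls = Class.forName(m.className);
        Class<?>[] paramTypes = new Class<?>[m.args.length];
        for (int i = 0; i < m.args.length; i++) paramTypes[i] = m.args[i].getClass();
        Method method = cls.getMethod(m.methodName, paramTypes);
        return method.invoke(null, m.args); // static methods only, for simplicity
    }

    // Example "remote" procedure.
    public static String greet(String name) { return "hello " + name; }
}
```

A full implementation would also need to resolve overloads whose declared parameter types are supertypes or primitives of the argument classes, which the exact-match lookup above does not handle.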



Conclusion
In this paper we discussed the problem of making remote procedure calls to
multiple destinations as a multicast operation. We outlined an architecture to
achieve this goal in Java. The two primary components of the architecture are the
multicast simulator and the total order algorithms. Together, these pieces create a
working implementation of RPC semantics in a multicast environment. As
discussed throughout the paper, there is future work to be done, starting with
implementing a true multicast routing layer through which the message senders,
receivers, and multicast managers communicate. Some of the total order broadcast
modules, such as Moving Sequencer, can be extended with further functionality,
and entirely new modules can be created. We envision performing experiments to
obtain performance evaluation data for the different algorithms. We are confident
that this work forms the basis for a modular, transparent testing and
implementation suite for multicast remote procedure calls.
