DSCC Questions and Answers: Unit 1 and 2

Unit 1.

Q.1. Compare RMI and RPC

Remote Procedure Call (RPC) is a programming-language feature devised for distributed computing and based on the semantics of local procedure calls. It is one of the most common forms of remote service and was designed to abstract the procedure call mechanism for use between systems connected through a network. It is similar to the IPC mechanism, where the operating system allows processes to manage shared data, but it deals with an environment in which the communicating processes execute on separate systems and therefore require message-based communication.

Let's understand how RPC is implemented through the following steps (a minimal client-stub sketch follows these steps):

 The client process calls the client stub with the parameters, and its execution is suspended until the call is completed.
 The parameters are translated into a machine-independent form by marshalling in the client stub. A message is then prepared that contains this representation of the parameters.
 To find the identity of the site at which the remote procedure exists, the client stub communicates with a name server.
 Using a blocking protocol, the client stub sends the message to the site where the remote procedure exists. This step halts the client stub until it gets a reply.
 The server site receives the message sent by the client and converts it into a machine-specific format.
 The server stub then executes a call on the server procedure with the parameters, and the server stub is suspended until the procedure completes.
 The server procedure returns the generated results to the server stub; the results are converted into a machine-independent format at the server stub, and a message containing the results is created.
 The result message is sent to the client stub, where it is converted back into a machine-specific format suitable for the client.
 Finally, the client stub returns the results to the client process.
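To make the marshalling and blocking behaviour in these steps concrete, here is a minimal, hypothetical Java sketch of a client stub for a remote procedure add(a, b). The host name, port number and wire format are assumptions chosen only for illustration; they are not part of any standard RPC library.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Minimal sketch of a client stub for a hypothetical remote procedure add(a, b).
// The server host, port and wire format are assumptions chosen for illustration.
public class AddClientStub {

    public static int add(int a, int b) throws IOException {
        // Open a connection to the site where the remote procedure lives.
        try (Socket socket = new Socket("server.example.com", 9000)) {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());

            // Marshal the procedure name and parameters into a machine-independent message.
            out.writeUTF("add");
            out.writeInt(a);
            out.writeInt(b);
            out.flush();

            // Blocking call: the stub is suspended here until the server's reply arrives,
            // then the result is unmarshalled and returned to the caller.
            return in.readInt();
        }
    }
}

The call blocks at readInt() exactly as described in the blocking-protocol step; a matching server-side skeleton would perform the reverse unmarshalling and invoke the real procedure.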

Definition of RMI

Remote Method Invocation (RMI) is similar to RPC but is language-specific: it is a feature of Java. A thread is permitted to call a method on a remote object. To maintain transparency on the client and server sides, RMI implements the remote object using stubs and skeletons. The stub resides with the client and acts as a proxy for the remote object.

When a client calls a remote method, the stub for that method is invoked. The client stub is responsible for creating and sending a parcel containing the name of the method and the marshalled parameters, and the skeleton is responsible for receiving that parcel.
The skeleton unmarshals the parameters and invokes the desired method on the server. It then marshals the return value (or any exception) into a parcel and sends it back to the client stub. The stub unmarshals the return parcel and hands the result to the client.

In Java, parameters are normally passed to methods and returned as references. This is problematic for RMI, since not every object is a remote object, so RMI must determine which parameters can be passed by reference and which cannot.

Java uses a process called serialisation to pass ordinary (local) objects by value: the object is copied to the remote site. A remote object, on the other hand, can be passed by reference, by transmitting a remote reference to the object along with the URL of its stub class. Pass by reference restricts the parameter to being a stub for the remote object.
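As a concrete illustration of the stub/skeleton arrangement, below is a minimal Java RMI sketch: a hypothetical Greeter remote interface, its server-side implementation, and the registry binding a client would look up. The interface name, binding name and port are illustrative choices, not part of the original text.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface: every remotely callable method declares RemoteException.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

// Server-side implementation; exporting it via UnicastRemoteObject sets up the
// stub/skeleton plumbing described above.
class GreeterImpl extends UnicastRemoteObject implements Greeter {
    GreeterImpl() throws RemoteException { super(); }

    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }
}

public class GreeterServer {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099); // default RMI registry port
        registry.rebind("greeter", new GreeterImpl());           // name the client will look up
        System.out.println("Greeter bound in the registry");
    }
}

A client would typically obtain the stub with LocateRegistry.getRegistry(host, 1099).lookup("greeter"), cast it to Greeter, and then call greet() as if it were a local method.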

Key Differences Between RPC and RMI


1. RPC supports procedural programming paradigms and is C-based, while RMI supports object-oriented programming paradigms and is Java-based.
2. The parameters passed to remote procedures in RPC are ordinary data structures. In contrast, RMI passes objects as parameters to the remote method.
3. RPC can be considered the older of the two. It is used with programming languages that support procedural programming and can only use pass by value. RMI, by contrast, is based on a modern object-oriented approach and can use pass by value or pass by reference, as illustrated in the sketch after this list. A further advantage of RMI is that parameters passed by reference can be modified by the remote method.
4. The RPC protocol generates more overhead than RMI.
5. The parameters passed in RPC must be "in-out", which means that the value passed to the procedure and the output value must have the same data types. In contrast, there is no compulsion to pass "in-out" parameters in RMI.
6. In RPC, passing references is not possible because the two processes have distinct address spaces, but it is possible with RMI.
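The following Java fragment sketches the two parameter-passing styles mentioned in point 3. The Bank, Account and Money types are hypothetical and exist only for illustration: a Serializable argument is marshalled and copied (pass by value), while a Remote argument is replaced by its stub, so the server operates on the caller's original object (pass by reference).

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical types illustrating the two parameter-passing styles in RMI.
// A Serializable argument (Money) is marshalled and copied to the server: pass by value.
// A Remote argument (Account) is replaced by its stub, so the server calls back into the
// caller's original object: pass by reference.
class Money implements Serializable {
    int amount;
}

interface Account extends Remote {
    void deposit(int amount) throws RemoteException;
}

interface Bank extends Remote {
    void transfer(Money byValue, Account byReference) throws RemoteException;
}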
Q.2. Explain Distributed Computing Models

1. Minicomputer Model

 The minicomputer model is a simple extension of the centralized time-sharing system.
 A distributed computing system based on this model consists of a few minicomputers interconnected by a communication network, where each minicomputer usually has multiple users simultaneously logged on to it.
 Several interactive terminals are connected to each minicomputer. Each user logged on to one specific minicomputer has remote access to other minicomputers.
 The network allows a user to access remote resources that are available on some machine other than the one onto which the user is currently logged. The minicomputer model may be used when resource sharing with remote users is desired.
 The early ARPAnet is an example of a distributed computing system based on the minicomputer model.

2. Workstation Model
 A distributed computing system based on the workstation model consists of several workstations interconnected by a communication network.
 An organization may have several workstations located throughout its infrastructure, where each workstation is equipped with its own disk & serves as a single-user computer.
 In such an environment, at any one time a significant proportion of the workstations are idle, which results in the waste of large amounts of CPU time.
 Therefore, the idea of the workstation model is to interconnect all these workstations by a high-speed LAN so that idle workstations may be used to process jobs of users who are logged on to other workstations & do not have sufficient processing power at their own workstations to get their jobs processed efficiently.

 Examples: the Sprite system & Xerox PARC.

3. Workstation–Server Model
 The workstation model is a network of personal workstations, each having its own disk & a local file system.
 A workstation with its own local disk is usually called a diskful workstation & a workstation without a local disk is called a diskless workstation. Diskless workstations have become more popular in network environments than diskful workstations, making the workstation-server model more popular than the workstation model for building distributed computing systems.
 A distributed computing system based on the workstation-server model consists of a few minicomputers & several workstations interconnected by a communication network.
 In this model, a user logs onto a workstation called his or her home workstation. Normal computation activities required by the user's processes are performed at the user's home workstation, but requests for services provided by special servers are sent to a server providing that type of service, which performs the user's requested activity & returns the result of the request processing to the user's workstation.
 Therefore, in this model, the user's processes need not be migrated to the server machines to get the work done by those machines.
 Example: the V-System.

4. Processor–Pool Model
 The processor-pool model is based on the observation that most of the time a user does not need any computing power, but once in a while the user may need a very large amount of computing power for a short time.
 Therefore, unlike the workstation-server model, in which a processor is allocated to each user, in the processor-pool model the processors are pooled together to be shared by the users as needed.
 The pool of processors consists of a large number of microcomputers & minicomputers attached to the network.
 Each processor in the pool has its own memory to load & run a system program or an application program of the distributed computing system.
 In this model no home machine is present & the user does not log onto any particular machine.
 This model offers better utilization of processing power & greater flexibility.
 Examples: Amoeba & the Cambridge Distributed Computing System.

5. Hybrid Model

 The workstation-server model suits environments in which a large number of computer users only perform simple interactive tasks & execute small programs.
 In a working environment that has groups of users who often perform jobs needing massive computation, the processor-pool model is more attractive & suitable.
 To combine the advantages of the workstation-server & processor-pool models, a hybrid model can be used to build a distributed system.
 The processors in the pool can be allocated dynamically for computations that are too large or require several computers for execution.
 The hybrid model gives guaranteed response to interactive jobs by allowing them to be processed on the local workstations of the users.

Q.3. Explain Message Passing for Interprocess Communication

What is Message Passing in Interprocess Communication?

Message passing refers to a means of communication between:

- Different threads within a process.

- Different processes running on the same node.

- Different processes running on different nodes.

In message passing, a sender (source) process sends a message to a receiver (destination) process. A message has a predefined structure, and message passing uses two system calls, send and receive:

send(name of destination process, message);


receive(name of source process, message);

In these calls, the sender and receiver processes address each other by name. Communication between two processes can take place through two methods:

1) Direct Addressing

2) Indirect Addressing
Direct Addressing:

In this method, the two processes need to name each other to communicate. This becomes easy if they have the same parent.

Example

If process A sends a message to process B, then:

send(B, message);

receive(A, message);

Through this message exchange a link is established between A and B. Here the receiver knows the identity of the sender of the message. This type of arrangement in direct communication is known as symmetric addressing.

Another type of addressing, known as asymmetric addressing, is one in which the receiver does not know the ID of the sending process in advance.

Indirect Addressing:

In this method, messages are sent to and received from a mailbox. A mailbox can be abstractly viewed as an object into which messages may be placed and from which messages may be removed by processes. The sender and receiver processes should share a mailbox in order to communicate.

The following types of communication link are possible through a mailbox (a minimal mailbox sketch follows this list):

- One-to-one link: one sender wants to communicate with one receiver; a single link is established between them.

- Many-to-one link: multiple senders want to communicate with a single receiver. For example, in a client-server system there are many client processes and one server process; the mailbox here is known as a port.

- One-to-many link: one sender wants to communicate with multiple receivers, that is, to broadcast a message.

- Many-to-many link: multiple senders want to communicate with multiple receivers.
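As a rough illustration of indirect addressing, the following Java sketch uses a bounded in-memory queue as the shared mailbox between one sender thread and one receiver thread. The queue type, capacity and message content are assumptions chosen only for illustration; a real mailbox would usually be provided by the kernel or a message-passing library.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Rough sketch of indirect addressing: a bounded queue plays the role of the shared
// mailbox. The capacity and the message content are assumptions for illustration.
public class MailboxDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

        Thread sender = new Thread(() -> {
            try {
                mailbox.put("hello");            // send(mailbox, message)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread receiver = new Thread(() -> {
            try {
                String msg = mailbox.take();     // receive(mailbox, message): blocks until a message arrives
                System.out.println("received: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        receiver.start();
        sender.start();
        sender.join();
        receiver.join();
    }
}

Because the sender and receiver only share the mailbox, neither needs to know the other's identity, which is exactly what distinguishes indirect from direct addressing.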


Unit 2

Q.1. Explain Mutual Exclusion using the Token Ring Algorithm

Token ring algorithm:

 In this algorithm it is assumed that all the processes in the system are organized in a logical ring.
 The ring positions may be allocated in numerical order of network addresses, and the ring is unidirectional in the sense that all messages are passed only in the clockwise or the anti-clockwise direction.
 When a process sends a request message to the current coordinator and does not receive a reply within a fixed timeout, it assumes the coordinator has crashed. It then initializes the ring, and some process Pi is given the token.
 The token circulates around the ring. It is passed from process k to process k+1 in point-to-point messages (a thread-based sketch of this circulation follows the list). When a process acquires the token from its neighbour, it checks whether it is attempting to enter a critical region. If so, the process enters the region, does all of its work there, and leaves the region. After it has exited, it passes the token along the ring. It is not permitted to enter a second critical region using the same token.
 If a process is handed the token by its neighbour and is not interested in entering a critical region, it just passes the token along. When no process wants to enter any critical region, the token simply circulates at high speed around the ring.
 Only one process holds the token at any instant, so only one process can actually be in a critical region at a time. Since the token circulates among the processes in a well-defined order, starvation cannot occur.
 Once a process decides it wants to enter a critical region, at worst it will have to wait for every other process to enter and leave one critical region.
 The disadvantage is that if the token is lost it must be regenerated, and detecting a lost token is difficult: if the token has not been seen for a long time, it may not be lost but simply still in use.
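The sketch below models the token circulation with three Java threads and per-process message queues. The number of processes, the number of rounds and the queue type are assumptions made only for illustration; failure handling and token regeneration are omitted.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Rough model of token-ring mutual exclusion with three processes simulated as threads.
// channels.get(i) is the incoming channel of process Pi: each process waits for the token,
// enters its critical section, then forwards the token to its clockwise neighbour.
public class TokenRing {
    public static void main(String[] args) throws InterruptedException {
        final int n = 3;
        final List<BlockingQueue<Object>> channels = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            channels.add(new LinkedBlockingQueue<>());
        }

        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int round = 0; round < 2; round++) {
                        Object token = channels.get(id).take();      // wait for the token
                        System.out.println("P" + id + " holds the token: critical section");
                        channels.get((id + 1) % n).put(token);       // pass it to the next process
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        channels.get(0).put(new Object());                           // inject the token at P0
    }
}

Because only one queue ever holds the token object, at most one thread can print the "critical section" line at a time, which mirrors the mutual-exclusion guarantee described above.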
Q.2. Explain Mutual Exclusion Centralized Approach, Distributed Approach and Token Ring Algorithm

Mutual Exclusion: Centralized Approach

Among concurrent processes, only one process at a time gets exclusive access to the shared resource; a central coordinator process Pc grants and releases that access.
No starvation: every process eventually gets the requested resource.
A typical message sequence (sketched in code after this list) is:
1. Request from P1 → Pc
2. Reply (access grant) to P1
3. Request from P2
4. Request from P3
5. Release by P1
6. Reply (access grant) to P2
7. Release by P2
8. Reply (access grant) to P3
9. Release by P3
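Below is a minimal sketch of the coordinator Pc implied by the sequence above: it grants the resource to one requester at a time and queues the others in FIFO order. The class name, method names and the in-memory queue are assumptions for illustration; real request, reply and release messages are omitted.

import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the centralized coordinator: one holder at a time, FIFO waiting queue.
public class CentralCoordinator {
    private final Queue<Integer> waiting = new ArrayDeque<>();
    private Integer holder = null;

    // Called when a REQUEST arrives from process pid; returns true if the grant
    // (reply) can be sent immediately.
    synchronized boolean onRequest(int pid) {
        if (holder == null) {
            holder = pid;              // resource free: grant access at once
            return true;
        }
        waiting.add(pid);              // resource busy: defer the reply, queue the request
        return false;
    }

    // Called when the current holder sends a RELEASE; returns the next process
    // to be granted access, or null if nobody is waiting.
    synchronized Integer onRelease() {
        holder = waiting.poll();
        return holder;
    }
}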

Distributed Approach
When a process wants to enter its critical section (CS), it sends a timestamped request message to all other processes.
Actions taken by a process Pr on receiving such a request (the decision is sketched in code below):
1. If Pr is in its CS, it queues the message and defers the reply.
2. If Pr is not in its CS but is waiting for its turn, it compares the timestamp (TS) of the incoming request with that of its own pending request. If the incoming request has the lower TS, it sends OK; otherwise it queues the message and defers the reply.
3. If Pr is not interested in the CS, it sends an OK reply immediately.
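The following Java sketch captures only the reply decision a process Pr makes on receiving a request in this distributed approach. The Request record, the three states and the field names are assumptions for illustration; message transport and the deferred-reply queue are omitted.

// Minimal sketch of the reply decision in the distributed approach above.
public class MutexDecision {
    enum State { RELEASED, WANTED, HELD }

    record Request(long timestamp, int pid) {}

    static boolean replyImmediately(State state, Request incoming,
                                    long myTimestamp, int myPid) {
        if (state == State.HELD) {
            return false;                           // in the CS: queue the request, defer the reply
        }
        if (state == State.WANTED) {
            // The lower (older) timestamp wins; the process id breaks ties.
            return incoming.timestamp() < myTimestamp
                || (incoming.timestamp() == myTimestamp && incoming.pid() < myPid);
        }
        return true;                                // not interested in the CS: reply OK at once
    }
}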

Mutual Exclusion: A Distributed Algorithm

a) Two processes want to enter the same critical region at the same moment.
b) Process 0 has the lowest timestamp, so it wins.
c) When process 0 is done, it sends an OK as well, so process 2 can now enter the critical region.
Token Passing Approach
All processes are arranged in a logical ring and a token is circulated among them.
A process that holds the token may enter its CS; after exiting the CS it sends the token to its neighbour.
If it does not want to enter the CS, it simply forwards the token.
The total number of messages needed to enter the CS may vary from 0 to n-1.

Drawbacks:
Process failure
Lost token

Q.3. Explain Election: Bully Algorithm and Ring Algorithm

Election is the selection of a central coordinator process from among the currently running processes in such a manner that only one coordinator is elected.
Assumptions:
1. Each process has a unique priority number.
2. The process having the highest priority number is elected.
3. On recovery, a failed process can take the proper action to rejoin the set of active processes.

Bully Algorithm
A process Pi sends a request message to the coordinator and does not receive a reply within a specified period.
Pi then sends an "election" message to every process with a higher priority.
If Pi does not receive any response, it assumes that its priority is the highest and sends a "coordinator" message to all other processes.
Else, if Pi receives a response to its election message, it means that some other process has a higher priority, and Pi waits for the result of that election. A rough code sketch of this decision logic follows the example below.

Bully Algorithm: Example
1. P5 is the coordinator and P5 crashes.

2. P3 sends a request to P5 and does not receive a reply within time T.

3. P3 initiates an election by sending an election message to P4 and P5.

4. P4 sends an 'alive' message to P3 and takes over the election activity. P5 cannot respond.

5. P4 sends an election message to P5.

6. P5 does not respond.

7. P4 wins the election and sends a coordinator message to P1, P2 and P3, but P2 is down.

8. P2 recovers and initiates an election by sending election messages to P3, P4 and P5. P4 wins the election again.

9. P5 recovers from failure. P5 has the highest priority, so it becomes the coordinator.
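A rough Java sketch of the election decision described above is given here. The helpers sendElection (returns true when the higher-priority process answers with an "alive" reply before the timeout) and broadcastCoordinator are placeholder assumptions standing in for real message transport; they are not part of any library.

import java.util.List;

// Rough sketch of the bully-election decision for a process with id myId.
public class BullyElection {

    static void startElection(int myId, List<Integer> allIds) {
        boolean someoneHigherAlive = false;
        for (int id : allIds) {
            if (id > myId && sendElection(id)) {    // election message to each higher-priority process
                someoneHigherAlive = true;           // that process takes over the election
            }
        }
        if (someoneHigherAlive) {
            // Wait for the coordinator message announcing the winner.
            return;
        }
        broadcastCoordinator(myId);                  // nobody higher responded: announce self
    }

    // Placeholder transport stubs (assumptions for illustration).
    static boolean sendElection(int id) { return false; }

    static void broadcastCoordinator(int id) {
        System.out.println("Coordinator: P" + id);
    }
}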


Unit 3: Distributed Shared Memory

Q.1. Discuss any two of the following consistency models

Strict Consistency

Sequential Consistency

PRAM Consistency

Eager Release Consistency

Lazy Release Consistency

Q.2. Explain the following implementations of the sequential consistency model

Nonreplicated, Nonmigrating Blocks (NRNMB)

Nonreplicated, Migrating Blocks (NRMB)

Replicated, Migrating Blocks (RMB)

Replicated, Nonmigrating Blocks (RNMB)


Q.3. List the different strategies used for locating data in NRMB and elaborate any one in detail

Q.4. Explain Write Invalidate and Write Update

Q.5. List and elaborate the replacement strategies
