Distributed Computing Lab Manual
CHHATRAPATI SHIVAJI MAHARAJ INSTITUTE OF TECHNOLOGY
(Affiliated to the University of Mumbai, Approved by AICTE-New Delhi)
Near Shedung Toll Plaza, Old Mumbai-Pune Highway, Post - Shedung, Taluka Panvel, Dist. Raigad, Navi Mumbai, Maharashtra 410206
Certificate
This is to certify that Mr./Ms. SAIF UMARSAHAB BODU Roll No: 06 Semester: VIII
Branch: COMPUTER ENGINEERING has conducted all practical work of the session
for Subject: DISTRIBUTED COMPUTING LAB (CSL801) as a part of the academic
requirement of the University of Mumbai and has completed all exercises satisfactorily during
the academic year 2022 - 2023.
Date: / /2023
Seal of
College
1. Inter-process Communication.
2. Client-Server Application using Java RMI.
3. Group Communication.
4. Lamport's Clock Synchronization Algorithm.
5. Election Algorithm.
6. Mutual Exclusion Algorithm.
7. Banker's Algorithm for Deadlock Management.
8. Load Balancing.
Signature of Student Signature of Staff
EXPERIMENT NO. 1
Aim: To implement Inter-process Communication using TCP Based on Socket
Programming.
Theory:
Inter-process communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process letting
another process know that some event has occurred or the transferring of data from one process
to another.
● Semaphore
A semaphore is a variable that controls the access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting semaphores.
● Mutual Exclusion
Mutual exclusion requires that only one process or thread can enter the critical section at a
time. This is useful for synchronization and also prevents race conditions.
● Barrier
A barrier does not allow individual processes to proceed until all the processes reach it.
Many parallel languages and collective routines impose barriers.
● Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the process
is not doing any useful operation even though it is active.
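Busy waiting is easy to see in code. The sketch below, in the manual's language (Java), implements a minimal spinlock with an AtomicBoolean; the class name and the thread counts are illustrative, not part of the lab programs:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    static final AtomicBoolean lock = new AtomicBoolean(false);
    static int counter = 0;

    static void increment() {
        // busy wait: loop until the lock flips from free to held
        while (!lock.compareAndSet(false, true)) {
            // the thread stays active but does no useful work here
        }
        try {
            counter++; // critical section
        } finally {
            lock.set(false); // release so the next spinner can proceed
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int n = 0; n < 1000; n++) increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println("Counter = " + counter); // 4000: no lost updates
    }
}
```

Without the lock, the unsynchronized `counter++` would lose updates under contention; the spinlock trades CPU cycles for very low acquisition latency.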
Approaches to Inter-process Communication
The different approaches to implement inter-process communication are given as follows −
● Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way
data channel between two processes. This uses standard input and output methods. Pipes
are used in all POSIX systems as well as Windows operating systems.
● Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data
sent between processes on the same computer or data sent between different computers on
the same network. Most of the operating systems use sockets for inter-process
communication.
● File
A file is a data record that may be stored on a disk or acquired on demand by a file server.
Multiple processes can access a file as required. All operating systems use files for data
storage.
● Signal
Signals are useful in inter-process communication in a limited way. They are system
messages that are sent from one process to another. Normally, signals are not used to
transfer data but are used for remote commands between processes.
● Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes.
This is done so that the processes can communicate with each other. All POSIX systems,
as well as Windows operating systems use shared memory.
● Message Queue
Multiple processes can read and write data to the message queue without being connected
to each other. Messages are stored in the queue until their recipient retrieves them.
Message queues are quite useful for interprocess communication and are used by most
operating systems.
A diagram that demonstrates message queue and shared memory methods of interprocess
communication is as follows –
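Within a single JVM, the unidirectional pipe idea above can be sketched with Java's piped streams; the class and method names here are illustrative, not from the lab programs:

```java
import java.io.*;

public class PipeDemo {
    // A pipe is one-way: a writer thread produces, the reader consumes.
    public static String transfer(String message) throws IOException, InterruptedException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the two ends
        Thread writer = new Thread(() -> {
            try (DataOutputStream dout = new DataOutputStream(out)) {
                dout.writeUTF(message);
            } catch (IOException ignored) {
            }
        });
        writer.start();
        String received;
        try (DataInputStream din = new DataInputStream(in)) {
            received = din.readUTF();
        }
        writer.join();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transfer("hello")); // prints hello
    }
}
```

A two-way channel would simply use a second pipe in the opposite direction, exactly as the theory section describes.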
Types of Sockets
The different types of sockets are given as follows –
● Sequential Packet Socket: This type of socket provides a reliable connection for
datagrams whose maximum length is fixed. This connection is two-way as well as
sequenced.
● Datagram Socket: A two-way flow of messages is supported by the datagram socket. The
receiver in a datagram socket may receive messages in a different order than that in
which they were sent. The operation of datagram sockets is similar to that of passing
letters from the source to the destination through the mail.
● Stream Socket: Stream sockets operate like a telephone conversation and provide a two-
way and reliable flow of data with no record boundaries. This data flow is also sequenced
and unduplicated.
● Raw Socket: The underlying communication protocols can be accessed using the raw
sockets.
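The contrast between stream and datagram sockets can be made concrete in the manual's language, Java. The sketch below (the class name and port number are illustrative, not from the manual) sends one UDP datagram over the loopback interface; note there is no connection and, in general, no ordering guarantee:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class DatagramDemo {
    // Send one UDP datagram over loopback and return what arrived.
    public static String exchange(String msg) throws Exception {
        DatagramSocket receiver = new DatagramSocket(9876); // port illustrative
        DatagramSocket sender = new DatagramSocket();
        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        sender.send(new DatagramPacket(data, data.length,
                InetAddress.getLoopbackAddress(), 9876));
        byte[] buf = new byte[1024];
        DatagramPacket p = new DatagramPacket(buf, buf.length);
        receiver.receive(p); // blocks until a datagram arrives
        sender.close();
        receiver.close();
        return new String(p.getData(), 0, p.getLength(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange("hello"));
    }
}
```

Experiment 1 below uses stream sockets (Socket/ServerSocket), which add the connection setup and reliable, ordered delivery that datagram sockets lack.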
Socket Creation
Sockets can be created in a specific domain and of a specific type using the following
declaration (the standard POSIX form is shown):
int socket(int domain, int type, int protocol);
If the protocol is not specified in the above system call, the system uses a default protocol that
supports the socket type. The socket handle is returned. It is a descriptor.
The bind function call is used to bind an internet address or path to a socket. This is shown as
follows:
int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
Connecting the stream sockets is not a symmetric process. One of the processes acts as a server
and the other acts as a client. The server specifies the number of connection requests that can be
queued using the following declaration:
int listen(int sockfd, int backlog);
The client initiates a connection to the server's socket by using the following declaration:
int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
A new socket descriptor which is valid for that particular connection is returned by the following
declaration:
int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
The send() and recv() functions are used to send and receive data using sockets. These are
similar to the read() and write() functions but contain some extra flags. The declarations for
send() and recv() are as follows:
ssize_t send(int sockfd, const void *buf, size_t len, int flags);
ssize_t recv(int sockfd, void *buf, size_t len, int flags);
Stream Closing
A stream socket connection is closed by calling close() on its descriptor, which releases the
socket and ends the connection.
Program:
// TCPServer.java
package com.saif.exp1;
import java.util.*;
import java.io.*;
import java.net.*;

public class TCPServer {
    public static void main(String[] args) throws IOException {
        // port 6666 is assumed; the original listing did not survive extraction
        ServerSocket server = new ServerSocket(6666);
        System.out.println("Waiting for client...");
        Socket socket = server.accept();
        System.out.println("Client connected");
        DataInputStream din = new DataInputStream(socket.getInputStream());
        DataOutputStream dout = new DataOutputStream(socket.getOutputStream());
        int sum = 0;
        String recv = din.readUTF();
        while (!recv.equals("stop")) {
            try {
                sum += Integer.parseInt(recv); // accumulate the integers sent by the client
            } catch (NumberFormatException e) {
                System.out.println("Ignoring non-integer: " + recv);
            }
            recv = din.readUTF();
        }
        dout.writeUTF(String.valueOf(sum)); // reply with the total
        dout.flush();
        dout.close();
        din.close();
        socket.close();
        server.close();
    }
}

// TCPClient.java
package com.saif.exp1;
import java.io.*;
import java.util.*;
import java.net.*;

public class TCPClient {
    public static void main(String[] args) throws IOException {
        Socket client = new Socket("localhost", 6666);
        System.out.println("Connected");
        DataInputStream din = new DataInputStream(client.getInputStream());
        DataOutputStream dout = new DataOutputStream(client.getOutputStream());
        Scanner sc = new Scanner(System.in);
        String send = "";
        while (!send.equals("stop")) {
            System.out.print("Send: ");
            send = sc.nextLine();
            dout.writeUTF(send);
        }
        dout.flush();
        String recv = din.readUTF();
        System.out.println("Sum of the integers is: " + recv);
        dout.close();
        din.close();
        client.close();
    }
}
Output:
EXPERIMENT NO. 2
Aim: To implement Client-Server Application using Java RMI.
Theory:
RMI (Remote Method Invocation) is an API that provides a mechanism to create distributed
applications in Java. RMI allows an object to invoke methods on an object running in another
JVM. RMI uses two objects, the stub and the skeleton, for communication with the remote
object.
A remote object is an object whose methods can be invoked from another JVM. Let's understand
the stub and skeleton objects:
Stub
The stub is an object that acts as a gateway for the client side. All outgoing requests are routed
through it. It resides at the client side and represents the remote object. When the caller invokes a
method on the stub object, it does the following tasks:
1. It initiates a connection with the remote Virtual Machine (JVM),
2. It writes and transmits (marshals) the parameters to the remote JVM,
3. It waits for the result,
4. It reads (unmarshals) the return value or exception, and
5. It finally returns the value to the caller.
Skeleton
The skeleton is an object that acts as a gateway for the server-side object. All incoming requests
are routed through it. When the skeleton receives an incoming request, it does the following
tasks:
1. It reads the parameter for the remote method,
2. It invokes the method on the actual remote object, and
3. It writes and transmits (marshals) the result to the caller.
In the Java 2 SDK, a stub protocol was introduced that eliminates the need for skeletons.
The RMI application has all these features, so it is called a distributed application.
The following 6 steps are used to write and run an RMI application:
1. Create the remote interface
2. Provide the implementation of the remote interface
3. Compile the implementation class and create the stub and skeleton objects using the rmic tool
4. Start the registry service by the rmiregistry tool
5. Create and start the remote application
6. Create and start the client application
RMI Example
In this example, we have followed all the 6 steps to create and run the RMI application. The
client application needs only two files: the remote interface and the client application. In the
RMI application, both client and server interact with the remote interface. The client application
invokes methods on the proxy object, and RMI sends the request to the remote JVM. The return
value is sent back to the proxy object and then to the client application.
1) Create the remote interface
For creating the remote interface, extend the Remote interface and declare the RemoteException
with all the methods of the remote interface. Here, we are creating a remote interface that
extends the Remote interface. There is only one method named sum() and it declares
RemoteException.
2) Provide the implementation of the remote interface
Now provide the implementation of the remote interface. For providing the implementation of
the remote interface, we need to either extend the UnicastRemoteObject class or use the
exportObject() method of the UnicastRemoteObject class.
In case you extend the UnicastRemoteObject class, you must define a constructor that declares
RemoteException.
3) Create the stub and skeleton objects using the rmic tool
The next step is to create stub and skeleton objects using the rmic tool, which invokes the RMI
compiler and creates the stub and skeleton objects.
4) Start the registry service by the rmiregistry tool
Now start the registry service by using the rmiregistry tool. If you don't specify the port number,
it uses a default port number (1099).
5) Create and start the remote application
Now the RMI services need to be hosted in a server process. The Naming class provides
methods to get and store the remote object. The Naming class provides 5 methods: lookup(),
bind(), unbind(), rebind(), and list().
6) Create and start the client application
At the client we are getting the stub object by the lookup() method of the Naming class and
invoking the method on this object. In this example, we are running the server and client
applications on the same machine, so we are using localhost. If you want to access the remote
object from another machine, change localhost to the host name (or IP address) where the
remote object is located.
Program:
//AddInterface.java
package com.saif.exp2;
import java.rmi.*;
public interface AddInterface extends Remote {
    public int sum(int n1, int n2) throws RemoteException;
}
//Add.java
package com.saif.exp2;
import java.rmi.*;
import java.rmi.server.*;
public class Add extends UnicastRemoteObject implements AddInterface {
int num1, num2;
public Add() throws RemoteException {
}
public int sum(int n1, int n2) throws RemoteException {
num1 = n1;
num2 = n2;
return num1 + num2;
}
}
//AddServer.java
package com.saif.exp2;
import java.rmi.Naming;
public class AddServer {
    public static void main(String[] args) {
        try {
            AddInterface a = new Add();
            // register the remote object in the rmiregistry under the name "Add"
            Naming.rebind("//localhost/Add", a);
            System.out.println("AddServer is ready.");
        } catch (Exception e) {
            System.out.println("Server Exception: " + e);
        }
    }
}
//AddClient.java
package com.saif.exp2;
import java.rmi.Naming;
public class AddClient {
public static void main(String[] args) {
try {
AddInterface ai = (AddInterface) Naming.lookup("//localhost/Add");
System.out.println("The sum of 2 numbers is: " + ai.sum(10, 2));
} catch (Exception e) {
System.out.println("Client Exception: " + e);
}
}
}
Output:
EXPERIMENT NO. 3
Aim: To implement a program to demonstrate group communication.
Theory:
Communication between two processes in a distributed system is required to exchange various
data, such as code or a file, between the processes. When one source process tries to
communicate with multiple processes at once, it is called Group Communication. A group is a
collection of interconnected processes with abstraction. This abstraction is to hide the message
passing so that the communication looks like a normal procedure call. Group communication
also helps the processes from different hosts to work together and perform operations in a
synchronized manner, therefore increasing the overall performance of the system.
● Broadcast Communication: When the host process tries to communicate with every process
in a distributed system at the same time. Broadcast communication comes in handy when a
common stream of information is to be delivered to each and every process in the most efficient
manner possible. Since it does not require any per-recipient processing whatsoever,
communication is very fast in comparison to other modes of communication. However, it does
not support a large number of processes and cannot treat a specific process individually.
Fig. A broadcast Communication: P1 process communicating with every process in the system
● Multicast Communication: When the host process tries to communicate with a designated
group of processes in a distributed system at the same time. This technique is mainly used to
address the problems of a high workload on the host system and redundant information from
processes in the system. Multicasting can significantly decrease the time taken for message
handling.
Fig. A multicast Communication: P1 process communicating with only a group of the process in
the system
● Unicast Communication: When the host process tries to communicate with a single process
in a distributed system at the same time, although the same information may be passed to
multiple processes. This works best for two communicating processes, as only a specific
process has to be treated. However, it leads to overheads, as the exact process has to be found
before the information/data exchange.
The ordering attribute of the messages is in charge of managing the order in which messages are
delivered. Message ordering types include:
● No ordering means messages are sent to the group without regard for order.
● FIFO ordering means messages from the same sender are delivered in the order they
were sent.
● Causal ordering means a message is delivered only after all messages that causally
precede it have been delivered.
● Total ordering means all messages are delivered to all group members in the same order.
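The FIFO ordering rule above can be enforced with per-sender sequence numbers and a hold-back queue. The sketch below (class and method names are illustrative, not from the lab programs) delivers a message only when it is the next one expected from its sender:

```java
import java.util.*;

public class FifoOrdering {
    // next sequence number expected from each sender
    private final Map<String, Integer> expected = new HashMap<>();
    // messages that arrived early, held back per sender, sorted by sequence number
    private final Map<String, TreeMap<Integer, String>> holdBack = new HashMap<>();
    private final List<String> delivered = new ArrayList<>();

    public void receive(String sender, int seq, String msg) {
        TreeMap<Integer, String> q =
                holdBack.computeIfAbsent(sender, s -> new TreeMap<>());
        q.put(seq, msg);
        int next = expected.getOrDefault(sender, 1);
        while (q.containsKey(next)) { // deliver any now-contiguous run
            delivered.add(sender + ":" + q.remove(next));
            next++;
        }
        expected.put(sender, next);
    }

    public List<String> delivered() {
        return delivered;
    }

    public static void main(String[] args) {
        FifoOrdering f = new FifoOrdering();
        f.receive("P1", 2, "b"); // arrives early: held back
        f.receive("P1", 1, "a"); // fills the gap: both delivered, in order
        System.out.println(f.delivered()); // prints [P1:a, P1:b]
    }
}
```

Causal and total ordering need more machinery (vector timestamps or a sequencer), but follow the same hold-back pattern.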
Group organization
Group communication systems can be classified as either closed or open. Only members of the
closed group can send messages to the group. Users who are not group members can send
messages to each member separately. Non-members in the open group can send messages to the
group. The program's objective determines the use of a closed or open group.
The group's internal structure can be determined based on its organization. In egalitarian
groups, all decisions are made collaboratively, and in the event of a failure the group simply
proceeds without the failed process. In hierarchical groups, a coordinator makes the decisions,
and the loss of the coordinator brings all processes to a standstill.
Program:
//GCServer.java
package com.saif.exp3;
import java.io.*;
import java.net.*;
import java.util.*;

class Message {
    String msg;
    public void setMsg(String msg) { this.msg = msg; }
    public String getMsg() { return msg; }
}

// Reconstructed sketch (the original listing was incomplete): the slaves
// connect first, the master connects last, and every master message is
// relayed to all slaves. The port number 8000 is assumed.
public class GCServer {
    public static void main(String[] args) throws IOException {
        ServerSocket ss = new ServerSocket(8000);
        List<DataOutputStream> slaves = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            slaves.add(new DataOutputStream(ss.accept().getOutputStream()));
        }
        DataInputStream din = new DataInputStream(ss.accept().getInputStream());
        Message m = new Message();
        boolean conn = true;
        while (conn) {
            try {
                m.setMsg(din.readUTF());
                // group communication: one message delivered to every member
                for (DataOutputStream dout : slaves) dout.writeUTF(m.getMsg());
                if (m.getMsg().equals("stop")) conn = false;
            } catch (IOException e) {
                conn = false;
                System.out.println(e);
            }
        }
        ss.close();
    }
}

//GCMaster.java
package com.saif.exp3;
import java.util.*;
import java.io.*;
import java.net.*;

public class GCMaster {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket("localhost", 8000);
        DataOutputStream dout = new DataOutputStream(s.getOutputStream());
        Scanner sc = new Scanner(System.in);
        String msg = "";
        while (!msg.equals("stop")) {
            System.out.print("Message to group: ");
            msg = sc.nextLine();
            dout.writeUTF(msg);
        }
        s.close();
    }
}

//GCSlave.java
package com.saif.exp3;
import java.io.DataInputStream;
import java.net.Socket;

public class GCSlave {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket("localhost", 8000);
        DataInputStream din = new DataInputStream(s.getInputStream());
        String msg = "";
        while (!msg.equals("stop")) {
            msg = din.readUTF();
            System.out.println("Received: " + msg);
        }
        s.close();
    }
}
Output:
EXPERIMENT NO. 4
Aim: To implement Lamport’s Clock Synchronization Algorithm.
Theory:
The algorithm of Lamport timestamps is a simple algorithm used to determine the order of
events in a distributed computer system. As different nodes or processes will typically not be
perfectly synchronized, this algorithm is used to provide a partial ordering of events with
minimal overhead, and conceptually provide a starting point for the more advanced vector clock
method. They are named after their creator, Leslie Lamport. Distributed algorithms such as
resource synchronization often depend on some method of ordering events to function. For
example, consider a system with two processes and a disk. The processes send messages to each
other, and also send messages to the disk requesting access. The disk grants access in the order
the messages were sent. For example process A sends a message to the disk requesting write
access, and then sends a read instruction message to process B. Process B receives the message,
and as a result sends its own read request message to the disk. If there is a timing delay causing
the disk to receive both messages at the same time, it can determine which message happened-before
the other. (A happens-before B if one can get from A to B by a sequence of
moves of two types: moving forward while remaining in the same process, and following a
message from its sending to its reception.) A logical clock algorithm provides a mechanism to
determine facts about the order of such events.
Lamport invented a simple mechanism by which the happened-before ordering can be captured
numerically. A Lamport logical clock is an incrementing software counter maintained in each
process. Conceptually, this logical clock can be thought of as a clock that only has meaning in
relation to messages moving between processes. When a process receives a message, it
resynchronizes its logical clock with that of the sender. The above-mentioned vector clock is a
generalization of the idea into the context of an arbitrary number of parallel, independent
processes. The algorithm follows some simple rules:
1. A process increments its counter before each local event;
2. When a process sends a message, it includes its counter value with the message;
3. On receiving a message, the receiving process sets its counter to the maximum of its own
counter and the received value, and then increments it by one.
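These three rules can be captured in a few lines before the full program; the sketch below is illustrative (the LamportClock class name is not part of the lab program):

```java
public class LamportClock {
    private int time = 0;

    // Rule 1: increment before each local event
    public synchronized int tick() { return ++time; }

    // Rule 2: a send is a local event whose new time is piggybacked on the message
    public synchronized int send() { return ++time; }

    // Rule 3: on receive, jump to max(local, message timestamp) + 1
    public synchronized int receive(int msgTime) {
        time = Math.max(time, msgTime) + 1;
        return time;
    }

    public static void main(String[] args) {
        LamportClock a = new LamportClock(), b = new LamportClock();
        int t = a.send(); // a's clock becomes 1, sent with the message
        b.tick();         // b's clock becomes 1 (concurrent local event)
        System.out.println(b.receive(t)); // prints 2 = max(1, 1) + 1
    }
}
```

Because the receive always jumps past the sender's timestamp, a send is guaranteed a smaller clock value than its matching receive, which is exactly the happened-before property.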
Program:
//Lamport.java
// Reconstructed console version: the original listing also drew the event
// timeline with Swing (drawArrow on a JPanel), but only the clock computation
// survived extraction, so the drawing code is omitted here.
import java.util.HashMap;
import java.util.Scanner;

public class Lamport {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        HashMap<Integer, Integer> hm = new HashMap<Integer, Integer>();
        int[] ev = new int[10];
        int[][] en = new int[10][10]; // en[i][j] = timestamp of event j of process i
        int i, j, k, p;
        System.out.println("Enter the number of process:");
        p = sc.nextInt();
        System.out.println("Enter the no of events per process:");
        for (i = 1; i <= p; i++) {
            ev[i] = sc.nextInt();
        }
        // For each event enter 0 for an internal event, or the code p*10+e of
        // the send event (process p, event e) that this event receives from.
        System.out.println("Enter the relationship:");
        for (i = 1; i <= p; i++) {
            System.out.println("For process:" + i);
            for (j = 1; j <= ev[i]; j++) {
                System.out.println("For event:" + j);
                int input = sc.nextInt();
                k = i * 10 + j;
                hm.put(k, input);
                if (j == 1) {
                    en[i][j] = 1; // the first event of every process gets timestamp 1
                }
            }
        }
        for (i = 1; i <= p; i++) {
            for (j = 2; j <= ev[i]; j++) {
                k = i * 10 + j;
                if (hm.get(k) == 0) {
                    en[i][j] = en[i][j - 1] + 1; // internal event: increment
                } else {
                    int a = hm.get(k);
                    int p1 = a / 10; // sending process
                    int e1 = a % 10; // sending event
                    // receive event: max(previous local, sender's timestamp) + 1
                    if (en[p1][e1] > en[i][j - 1]) {
                        en[i][j] = en[p1][e1] + 1;
                    } else {
                        en[i][j] = en[i][j - 1] + 1;
                    }
                }
            }
        }
        System.out.println("Lamport timestamps:");
        for (i = 1; i <= p; i++) {
            for (j = 1; j <= ev[i]; j++) {
                System.out.print(en[i][j] + " ");
            }
            System.out.println();
        }
        sc.close();
    }
}
Output:
EXPERIMENT NO. 5
Aim: To implement Bully Election Algorithm.
Theory:
Distributed Algorithm is an algorithm that runs on a distributed system. Distributed system is a
collection of independent computers that do not share their memory. Each processor has its own
memory and they communicate via communication networks. Communication in the network is
implemented by a process on one machine communicating with a process on another machine.
Many algorithms used in the distributed system require a coordinator that performs functions
needed by other processes in the system.
Election Algorithms
Election algorithms choose a process from a group of processes to act as a coordinator. If the
coordinator process crashes for some reason, then a new coordinator is elected on another
processor. The election algorithm basically determines where a new copy of the coordinator
should be restarted. It assumes that every active process in the system has a unique priority
number. The process with the highest priority will be chosen as the new coordinator. Hence,
when a coordinator fails, the algorithm elects the active process with the highest priority
number. This number is then sent to every active process in the distributed system.
The Bully Algorithm – This algorithm applies to systems where every process can send a
message to every other process in the system. Algorithm – Suppose process P sends a message
to the coordinator:
1. If the coordinator does not respond to it within a time interval T, then it is assumed that
the coordinator has failed.
2. Process P sends an election message to every process with a higher priority number.
3. It waits for responses from those processes.
4. If no one responds within time interval T, then process P elects itself as the new
coordinator.
5. Then it sends a message to all lower priority number processes that it is elected as their
new coordinator.
6. However, if an answer is received within time T from any other process Q,
(I) Process P again waits for time interval T’ to receive another message from Q that it
has been elected as coordinator.
(II) If Q doesn’t respond within time interval T’ then it is assumed to have failed and
algorithm is restarted.
• Disadvantages
● A large number of messages are sent, which can overload the system.
● In very large systems, there may be cases where multiple coordinators get elected.
Program:
//Bully.java
package com.saif.exp5;
import java.io.*;
import java.util.*;
public class Bully {
    static int n;
    static int pro[] = new int[100];
    static int sta[] = new int[100]; // 1 = process is up, 0 = crashed
    static int co;                   // current coordinator

    // process `ele` starts an election; any live process with a higher
    // priority number takes the election over (it "bullies" the starter)
    static void elect(int ele) {
        co = ele;
        for (int i = 1; i <= n; i++) {
            if (sta[i - 1] == 1 && pro[i - 1] > pro[ele - 1]) {
                elect(i);
                break;
            }
        }
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter the number of processes = ");
        n = sc.nextInt();
        for (int i = 0; i < n; i++) {
            pro[i] = i + 1; // priority number = process number (sketch assumption)
            sta[i] = 1;     // all processes start alive
        }
        boolean choice = true;
        do {
            System.out.print("1. Election 2. Exit = ");
            int cl = sc.nextInt();
            if (cl == 2) {
                choice = false;
                break;
            }
            if (cl == 1) {
                System.out.print("Which process will initiate election? = ");
                int ele = sc.nextInt();
                elect(ele);
            }
            System.out.println("Final coordinator is " + co);
        } while (choice);
    }
}
Output:
EXPERIMENT NO. 6
Aim: To implement program for Mutual Exclusion Algorithm.
Theory:
Mutual exclusion is a concurrency control property which is introduced to prevent race
conditions. It is the requirement that a process cannot enter its critical section while another
concurrent process is currently executing in its critical section, i.e., only one process is
allowed to execute the critical section at any given instant of time.
In a single computer system, memory and other resources are shared between different
processes. The status of shared resources and of users is easily available in shared memory, so
with the help of shared variables (for example, semaphores) the mutual exclusion problem can
easily be solved.
In distributed systems, we have neither shared memory nor a common physical clock, and
therefore we cannot solve the mutual exclusion problem using shared variables. To eliminate the
mutual exclusion problem in a distributed system, an approach based on message passing is
used.
A site in a distributed system does not have complete information about the state of the system,
due to the lack of shared memory and a common physical clock.
The requirements of a mutual exclusion algorithm are:
● No Deadlock: Two or more sites should not endlessly wait for any message that will
never arrive.
● No Starvation: Every site that wants to execute the critical section should get an
opportunity to execute it in finite time. No site should wait indefinitely to execute the
critical section while other sites repeatedly execute it.
● Fairness: Each site should get a fair chance to execute the critical section. Requests to
execute the critical section must be executed in the order they are made, i.e., in the order
of their arrival in the system.
● Fault Tolerance: In case of a failure, the algorithm should be able to recognize it by
itself and continue functioning without any disruption.
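The token-based idea used in the program below can be sketched inside one JVM: two threads share a single token through blocking queues, and only the current holder enters the critical section. All names here are illustrative, not part of the lab program:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TokenRing {
    // Two workers share one token; only the holder enters the critical section.
    static final StringBuilder log = new StringBuilder();

    static Thread worker(String name, BlockingQueue<String> in,
                         BlockingQueue<String> out, int rounds) {
        return new Thread(() -> {
            try {
                for (int i = 0; i < rounds; i++) {
                    in.take();          // wait for the token
                    log.append(name);   // critical section: only the holder runs this
                    out.put("Token");   // pass the token on
                }
            } catch (InterruptedException ignored) {
            }
        });
    }

    public static void run(int rounds) throws InterruptedException {
        log.setLength(0);
        BlockingQueue<String> toA = new LinkedBlockingQueue<>();
        BlockingQueue<String> toB = new LinkedBlockingQueue<>();
        toA.put("Token"); // A holds the token initially
        Thread a = worker("A", toA, toB, rounds);
        Thread b = worker("B", toB, toA, rounds);
        a.start(); b.start();
        a.join(); b.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(3);
        System.out.println(log); // prints ABABAB: strict alternation
    }
}
```

The distributed version in the program replaces the blocking queues with sockets, but the invariant is the same: there is exactly one token, so at most one process can be in the critical section.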
Program:
//MutualServer.java
package com.saif.exp6;
import java.io.*;
import java.net.*;
public class MutualServer implements Runnable {
    Socket socket = null;
    static ServerSocket ss;
    MutualServer(Socket newSocket) {
        this.socket = newSocket;
    }
    public void run() {
        try {
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String data;
            // only the current token holder sends data, so prints never interleave
            while ((data = in.readLine()) != null) {
                System.out.println("Data received: " + data);
            }
        } catch (IOException e) {
            System.out.println(e);
        }
    }
    public static void main(String[] args) throws IOException {
        ss = new ServerSocket(7000);
        System.out.println("Server started");
        while (true) {
            new Thread(new MutualServer(ss.accept())).start();
        }
    }
}
//ClientOne.java
package com.saif.exp6;
import java.io.*;
import java.net.*;
public class ClientOne {
    public static void main(String args[]) throws IOException {
        Socket s = new Socket("127.0.0.1", 7000);
        PrintStream out = new PrintStream(s.getOutputStream());
        // Reconstructed sketch: ClientOne holds the token first and exchanges
        // it with ClientTwo over port 7001.
        ServerSocket ss = new ServerSocket(7001);
        Socket s2 = ss.accept();
        BufferedReader in2 = new BufferedReader(new InputStreamReader(s2.getInputStream()));
        PrintStream out2 = new PrintStream(s2.getOutputStream());
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String str = "Token";
        while (true) {
            if (str.equalsIgnoreCase("Token")) { // holding the token = may enter the critical section
                System.out.println("Do you want to send some data");
                System.out.println("Enter Yes or No");
                str = br.readLine();
                if (str.equalsIgnoreCase("Yes")) {
                    System.out.println("Enter the data");
                    str = br.readLine();
                    out.println(str);
                }
                out2.println("Token"); // pass the token to ClientTwo
            }
            System.out.println("Waiting for Token");
            str = in2.readLine(); // block until the token comes back
        }
    }
}
//ClientTwo.java
package com.saif.exp6;
import java.io.*;
import java.net.*;
public class ClientTwo {
public static void main(String args[]) throws IOException {
Socket s = new Socket("127.0.0.1", 7000);
PrintStream out = new PrintStream(s.getOutputStream());
Socket s2 = new Socket("127.0.0.1", 7001);
BufferedReader in2 = new BufferedReader(new InputStreamReader(s2.getInputStream()));
PrintStream out2 = new PrintStream(s2.getOutputStream());
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String str = "Token";
while (true) {
System.out.println("Waiting for Token");
str = in2.readLine();
if (str.equalsIgnoreCase("Token")) {
System.out.println("Do you want to send some data");
System.out.println("Enter Yes or No");
}
str = br.readLine();
if (str.equalsIgnoreCase("Yes")) {
System.out.println("Enter the data");
str = br.readLine();
out.println(str);
}
out2.println("Token");
}
}
}
Output:
EXPERIMENT NO. 7
Aim: To implement Banker’s Algorithm for Deadlock Management.
Theory:
The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests for
safety by simulating the allocation for predetermined maximum possible amounts of all
resources, then makes an “s-state” check to test for possible activities, before deciding whether
allocation should be allowed to continue.
Banker's algorithm is named so because it is used in the banking system to check whether a loan
can be sanctioned to a person or not. Suppose there are n account holders in a bank and the
total sum of their money is S. If a person applies for a loan, then the bank first subtracts the
loan amount from the total money that the bank has, and only if the remaining amount is greater
than S is the loan sanctioned. This is done because if all the account holders come to withdraw
their money, the bank can easily pay all of them.
In other words, the bank would never allocate its money in such a way that it can no longer
satisfy the needs of all its customers. The bank would try to be in safe state always.
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources types.
Available :
● It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
● Available[ j ] = k means there are ‘k’ instances of resource type Rj
Max :
● It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
● Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation :
● It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
● Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource type
Rj.
Need :
● It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
● Need [ i, j ] = k means process Pi currently needs ‘k’ instances of resource type Rj.
● Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.
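The Need relation above is a simple element-wise subtraction; the sketch below (class name and sample matrices are illustrative) computes Need = Max − Allocation for a small 2-process, 2-resource system:

```java
public class NeedMatrix {
    // Need[i][j] = Max[i][j] - Allocation[i][j]
    public static int[][] need(int[][] max, int[][] alloc) {
        int n = max.length, m = max[0].length;
        int[][] need = new int[n][m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                need[i][j] = max[i][j] - alloc[i][j];
            }
        }
        return need;
    }

    public static void main(String[] args) {
        int[][] max = {{7, 5}, {3, 2}};
        int[][] alloc = {{0, 1}, {2, 0}};
        int[][] n = need(max, alloc);
        // P0 still needs 7 of R0 and 4 of R1; P1 needs 1 of R0 and 2 of R1
        System.out.println(n[0][0] + " " + n[0][1] + " " + n[1][0] + " " + n[1][1]); // prints 7 4 1 2
    }
}
```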
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively. Initialize
a) Work = Available
b) Finish[i] = false for i = 0, 1, ..., n - 1.
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
If no such i exists, go to step (4).
3) Work = Work + Allocationi
Finish[i] = true
Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti[j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1) If Requesti <= Needi, go to step (2); otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2) If Requesti <= Available, go to step (3); otherwise, Pi must wait, since the resources are not
available.
3) Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
Program:
//Bankers.java
package com.saif.exp7;
import java.util.Scanner;
public class Bankers{
private int need[][],allocate[][],max[][],avail[][],np,nr;
private void input(){
Scanner sc=new Scanner(System.in);
System.out.print("Enter no. of processes and resources: ");
np=sc.nextInt(); //no. of process
nr=sc.nextInt(); //no. of resources
need=new int[np][nr]; //initializing arrays
max=new int[np][nr];
allocate= new int[np][nr];
avail=new int[1][nr];
System.out.println("Enter allocation matrix -->");
for(int i=0;i<np;i++){
for(int j=0;j<nr;j++){
allocate[i][j]=sc.nextInt(); //allocation matrix
}
}
System.out.println("Enter max matrix -->");
for(int i=0;i<np;i++) {
for (int j = 0; j < nr; j++){
max[i][j] = sc.nextInt(); //max matrix
}
}
System.out.println("Enter available matrix -->");
for(int j=0;j<nr;j++){
avail[0][j]=sc.nextInt(); //available matrix
}
sc.close();
}
private int[][] calc_need(){
for(int i=0;i<np;i++) {
for (int j = 0; j < nr; j++) { //calculating need matrix
need[i][j] = max[i][j] - allocate[i][j];
}
}
return need;
}
private boolean check(int i){
//checking if all resources for ith process can be allocated
for(int j=0;j<nr;j++){
if(avail[0][j]<need[i][j]){
return false;
}
}
return true;
}
public void isSafe(){
input();
calc_need();
boolean done[]=new boolean[np];
int j=0;
while(j<np) { //until all processes are allocated
boolean allocated=false;
for(int i=0;i<np;i++) {
if (!done[i] && check(i)) { //trying to allocate
for (int k = 0; k < nr; k++){
//process i runs to completion and releases everything:
//avail = avail - need + max (equivalently, avail + allocation)
avail[0][k]=avail[0][k]-need[i][k]+max[i][k];
}
System.out.println("Allocated process : " + i);
allocated =done[i]=true;
j++;
}
}
if(!allocated) break; //no process could be allocated in a full pass
}
if(j==np) { //if all processes are allocated
System.out.println("\nSafely allocated");
}else {
System.out.println("All processes cannot be allocated safely");
}
}
public static void main(String[] args){
new Bankers().isSafe();
}
}
Output:
EXPERIMENT NO. 8
Aim: To implement the program for demonstrating a load-balancing approach in a
distributed environment.
Theory:
A load balancer is a device that acts as a reverse proxy and distributes network or application
traffic across a number of servers. Load balancing is the approach of distributing load units (i.e.,
jobs/tasks) across the nodes connected in the distributed system. Load balancing is performed
by the load balancer, a framework that can handle the load and is used to distribute the tasks
to the servers. For example, the load balancer allocates the first task to the first server and the
second task to the second server.
● Security: A load balancer adds security to your site with virtually no changes to
your application.
● Protect applications from emerging threats: The Web Application Firewall (WAF) in
the load balancer shields your site.
● Authenticate User Access: The load balancer can request a username and password
before granting access to your site, to protect against unauthorized access.
● Protect against DDoS attacks: The load balancer can detect and drop distributed
denial-of-service (DDoS) traffic before it reaches your site.
● Performance: Load balancers can reduce the load on your web servers and optimize
traffic for a better user experience.
● SSL Offload: Terminating SSL (Secure Sockets Layer) traffic at the load balancer
removes the overhead from web servers, leaving more resources available
for your web application.
● Traffic Compression: A load balancer can compress site traffic, giving your users a
much better experience with your site.
Common load-balancing algorithms include:
● Round Robin
● Least Connections
● Least Time
● Hash
● IP Hash
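Of the strategies listed above, Round Robin is the simplest to sketch: a counter cycles through the server list so each request goes to the next server in circular order. The class and server names here are illustrative assumptions:

```java
import java.util.List;

public class RoundRobin {
    private final List<String> servers;
    private int next = 0;

    RoundRobin(List<String> servers) { this.servers = servers; }

    // Each request goes to the next server in circular order.
    String pick() {
        String s = servers.get(next);
        next = (next + 1) % servers.size();
        return s;
    }

    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(List.of("S1", "S2", "S3"));
        for (int i = 0; i < 4; i++)
            System.out.println("Request " + i + " -> " + lb.pick());
        // Request 0 -> S1, 1 -> S2, 2 -> S3, 3 -> S1 (wraps around)
    }
}
```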
The following are some of the classes of load-balancing algorithms:
● Static: In this model, if any node is found with a heavy load, a task
can be picked at random and moved to some other arbitrary node.
● Dynamic: It uses the current state information for load balancing. These
algorithms are better than static ones.
● Deterministic: These algorithms use processor and process characteristics to allocate
processes to the nodes.
● Centralized: The system state information is collected by a single node.
Advantages of Load Balancing:
Migration:
Another important policy to be used by a distributed operating system that supports process
migration is to decide about the total number of times a process should be allowed to migrate.
Migration Models:
A migrating process consists of three segments:
● Code segment: It contains the actual code of the process.
● Resource segment: It contains references to the external resources required by the process.
● Execution segment: It stores the current execution state of the process, comprising
private data, the stack, and the program counter.
Two migration models follow from these segments:
● Weak migration: In weak migration, only the code segment is transferred.
● Strong migration: In this migration, both the code segment and the execution segment
are transferred. The migration can also be initiated by the source.
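The difference between the two models can be sketched with Java serialization: strong migration ships the execution segment (the process state) along with the code, while weak migration would ship only the class and restart the process from scratch. The field names here are illustrative assumptions, not part of any listing in this manual:

```java
import java.io.*;

public class Migration {
    // Execution segment: the process's current state (stack/PC abstracted as fields).
    static class ExecState implements Serializable {
        int programCounter = 42;
        int[] privateData = {1, 2, 3};
    }

    // Strong migration ships the execution segment as bytes; weak migration
    // would ship only the code (the class file) and restart from the beginning.
    static byte[] checkpoint(ExecState s) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(s);
        }
        return buf.toByteArray();
    }

    static ExecState resume(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (ExecState) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Checkpoint on the source node, resume on the destination node.
        ExecState migrated = resume(checkpoint(new ExecState()));
        System.out.println("Resumed at PC " + migrated.programCounter); // Resumed at PC 42
    }
}
```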
Program:
//LoadBalance.java
package com.saif.exp8;
import java.util.*;
public class LoadBalance {
static void printLoad(int servers, int processes){
int each = processes / servers;
int extra = processes % servers;
int total = 0;
int i = 0;
for (i = 0; i < extra; i++) {
System.out.println("Server "+(i+1)+" has "+(each+1)+" Processes");
}
for (;i<servers;i++){
System.out.println("Server "+(i+1)+" has "+each+" Processes");
}
}
public static void main(String[] args){
Scanner sc = new Scanner(System.in);
System.out.print("Enter the number of Servers: ");
int servers= sc.nextInt();
System.out.print("Enter the number of Processes: ");
int processes = sc.nextInt();
while (true){
printLoad(servers,processes);
System.out.println("\n1.Add Servers 2.Remove Servers 3.Add Processes 4.Remove Processes 5.Exit");
System.out.print("> ");
switch(sc.nextInt()){
case 1:
System.out.print("How many more servers to add? ");
servers+=sc.nextInt();
break;
case 2:
System.out.print("How many servers to remove? ");
servers-=sc.nextInt();
break;
case 3:
System.out.print("How many more Processes to add? ");
processes+=sc.nextInt();
break;
case 4:
System.out.print("How many processes to remove? ");
processes-=sc.nextInt();
break;
case 5:
return;
}
}
}
}
Output:
EXPERIMENT NO. 9
Aim: To implement Distributed Shared Memory.
Theory:
Distributed Shared Memory (DSM) implements the shared-memory model in a distributed
system that has no physically shared memory. The shared model provides a virtual
address space shared among all nodes. To overcome the high cost of communication in
distributed systems, DSM systems move data to the location of access. Data moves between
main memory and secondary memory (within a node) and between the main memories of
different nodes. Every shared-data object is owned by a node. The initial owner is the node
that created the object. Ownership can change as the object moves from node to node. When
a process accesses data within the shared address space, the mapping manager maps the
shared memory address to physical memory (local or remote).
DSM permits programs running on separate machines to share data without the programmer
having to deal with sending messages; instead, the underlying technology sends the messages
needed to keep the DSM consistent between computers. DSM permits programs that used to
run on the same computer to be easily adapted to operate on separate machines. Programs
access what appears to them to be normal memory.
Hence, programs that use DSM are usually shorter and easier to understand than programs
that use message passing. However, DSM is not suitable for all situations. Client-server
systems are generally less suited to DSM; however, a server may be used to assist in
providing DSM functionality for data shared between clients.
Every node consists of one or more CPUs and a memory unit. A high-speed communication
network is used for connecting the nodes. A simple message-passing system allows processes
on different nodes to exchange messages with one another.
The memory mapping manager routine in every node maps the local memory onto the shared
virtual memory. For the mapping operation, the shared memory space is divided into blocks.
Data caching is a well-known solution to reduce memory access latency. DSM uses data
caching to reduce network latency: the main memory of the individual nodes is used to cache
pieces of the shared memory space.
The memory mapping manager of every node views its local memory as a big cache of the
shared memory space for its associated processors. The basic unit of caching is a memory
block. In systems that support DSM, data moves between secondary memory and main
memory as well as between the main memories of different nodes.
Communication Network Unit:
When a process accesses data in the shared address space, the mapping manager maps the
shared memory address to the physical memory. The mapping layer of code is implemented
either within the operating system kernel or as a runtime library routine.
Physical memory on every node holds pages of the shared virtual address space. Local pages
are present in some node's memory; remote pages are in some other node's memory.
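A toy sketch of the mapping-manager idea described above: the shared address space is divided into fixed-size blocks, local blocks are served from a cache, and a miss triggers a fetch from the block's owner node (stubbed out here). The block size and class names are assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class MappingManager {
    static final int BLOCK_SIZE = 1024;  // shared space divided into fixed-size blocks
    private final Map<Integer, byte[]> localBlocks = new HashMap<>(); // local cache

    // Map a shared address to physical memory: hit the local cache or fetch remotely.
    byte read(int sharedAddress) {
        int block = sharedAddress / BLOCK_SIZE;
        byte[] data = localBlocks.computeIfAbsent(block, this::fetchFromOwner);
        return data[sharedAddress % BLOCK_SIZE];
    }

    // Stub: a real DSM would message the block's owner node over the network.
    private byte[] fetchFromOwner(int block) {
        System.out.println("Fetching block " + block + " from its owner node");
        return new byte[BLOCK_SIZE];
    }

    public static void main(String[] args) {
        MappingManager mm = new MappingManager();
        mm.read(100);    // miss: fetches block 0
        mm.read(200);    // hit: block 0 is already cached locally
        mm.read(2048);   // miss: fetches block 2
    }
}
```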
Program:
// SharedMemory.java
package com.saif.exp9;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.ServerSocket;
import java.net.Socket;
} else {
cout.println("Check syntax");
//break;
}
System.out.println("Client count: " + count);
}
}
}
// SharedMemoryClient.java
package com.saif.exp9;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.Socket;
import java.util.Scanner;
Output:
EXPERIMENT NO. 10
Aim: To implement a Distributed File System (AFS/CODA).
Theory:
A Distributed File System (DFS) as the name suggests, is a file system that is distributed on
multiple file servers or multiple locations. It allows programs to access or store isolated files as
they do with the local ones, allowing programmers to access files from any network or computer.
The main purpose of the Distributed File System (DFS) is to allow users of physically
distributed systems to share their data and resources by using a Common File System. A
collection of workstations and mainframes connected by a Local Area Network (LAN) is a
configuration on Distributed File System. A DFS is executed as a part of the operating system. In
DFS, a namespace is created and this process is transparent for the clients.
● Location Transparency –
Location transparency is achieved through the namespace component.
● Redundancy –
Redundancy is achieved through a file replication component.
In the case of failure and heavy load, these components together improve data availability
by allowing data shared in different locations to be logically grouped under one
folder, which is known as the “DFS root”.
It is not necessary to use both components of DFS together; it is possible to use the
namespace component without the file replication component, and it is perfectly possible to
use the file replication component without the namespace component between servers.
File system replication:
Early iterations of DFS made use of Microsoft’s File Replication Service (FRS), which
allowed for straightforward file replication between servers. FRS recognises new or updated
files and distributes the most recent versions of the whole file to all servers.
Windows Server 2003 R2 introduced “DFS Replication” (DFSR). It improves on FRS by
copying only the portions of files that have changed and by minimising network traffic with
data compression. Additionally, it provides flexible configuration options to manage network
traffic on a configurable schedule.
Features of DFS:
● Transparency:
o Structure transparency –
There is no need for the client to know about the number or locations of file
servers and the storage devices. Multiple file servers should be provided for
performance, adaptability, and dependability.
o Access transparency –
Both local and remote files should be accessible in the same manner. The file
system should automatically locate the accessed file and send it to the
client’s side.
o Naming transparency –
There should not be any hint of the file’s location in its name.
Once a name is given to the file, it should not be changed while the file is
transferred from one node to another.
o Replication transparency –
If a file is replicated on multiple nodes, the existence of the copies and their
locations should be hidden from the clients.
● User mobility :
It will automatically bring the user’s home directory to the node where the user logs in.
● Performance:
Performance is measured as the average amount of time needed to satisfy client
requests. This time covers the CPU time + the time taken to access secondary storage +
the network access time. It is desirable that the performance of a Distributed File
System be comparable to that of a centralized file system.
● Simplicity and ease of use:
The user interface of the file system should be simple and the number of commands
should be small.
● High availability:
A Distributed File System should be able to continue functioning in the face of partial
failures such as a link failure, a node failure, or a storage drive crash.
A highly available and adaptable distributed file system should have multiple
independent file servers controlling multiple independent storage devices.
● Scalability:
Since growing the network by adding new machines or joining two networks together is
routine, the distributed system will inevitably grow over time. As a result, a good
distributed file system should be built to scale quickly as the number of nodes and users
in the system grows. Service should not be substantially disrupted as the number of nodes
and users grows.
● High reliability:
The probability of loss of stored data should be minimized; the system should keep
backup copies of key files that can be used if the originals are lost. Many file systems
employ stable storage as a high-reliability strategy.
● Data integrity:
Multiple users frequently share a file system. The integrity of data saved in a shared file
must be guaranteed by the file system. That is, concurrent access requests from many
users who are competing for access to the same file must be correctly synchronized using
a concurrency control method. Atomic transactions are a high-level concurrency
management mechanism for data integrity that is frequently offered to users by a file
system.
● Security:
A distributed file system should be secure so that its users may trust that their data will be
kept private. To safeguard the information contained in the file system from unwanted &
unauthorized access, security mechanisms must be implemented.
● Heterogeneity:
History:
The server component of the Distributed File System was initially introduced as an add-on
feature. It was added to Windows NT 4.0 Server and was known as “DFS 4.1”. Later it was
included as a standard component in all editions of Windows 2000 Server. Client-side
support has been included in Windows NT 4.0 and in later versions of Windows.
Linux kernels 2.6.14 and later come with an SMB client VFS known as “cifs” which
supports DFS. Mac OS X 10.7 (Lion) and onwards also supports DFS.
Properties:
● File transparency: users can access files without knowing where they are physically
stored on the network.
● Load balancing: the file system can distribute file access requests across multiple
computers to improve performance and reliability.
● Data replication: the file system can store copies of files on multiple computers to
ensure that the files are available even if one of the computers fails.
● Security: the file system can enforce access control policies to ensure that only
authorized users can access files.
● Scalability: the file system can support a large number of users and a large number of
files.
● Concurrent access: multiple users can access and modify the same file at the same time.
● Fault tolerance: the file system can continue to operate even if one or more of its
components fail.
● Data integrity: the file system can ensure that the data stored in the files is accurate and
has not been corrupted.
● File migration: the file system can move files from one location to another without
interrupting access to the files.
● Data consistency: changes made to a file by one user are immediately visible to all other
users.
● Support for different file types: the file system can support a wide range of file types,
including text files, image files, and video files.
Applications:
● NFS –
NFS stands for Network File System. It is a client-server architecture that allows a
computer user to view, store, and update files remotely. The protocol of NFS is one of the
several distributed file system standards for Network-Attached Storage (NAS).
● CIFS –
CIFS stands for Common Internet File System. CIFS is a dialect of SMB. That is, CIFS
is an implementation of the SMB protocol, designed by Microsoft.
● SMB –
SMB stands for Server Message Block. It is a protocol for sharing files and was invented
by IBM. The SMB protocol was created to allow computers to perform read and write
operations on files on a remote host over a Local Area Network (LAN). The directories
on the remote host that can be accessed via SMB are called “shares”.
● Hadoop –
● NetWare –
● Working of DFS:
There are two ways in which DFS can be implemented:
o Standalone DFS namespace –
It allows only for those DFS roots that exist on the local computer and are not using
Active Directory. A standalone DFS can only be accessed on the computer on which
it is created. It does not provide any fault tolerance and cannot be linked to any other
DFS. Standalone DFS roots are rarely encountered because of their limited advantage.
o Domain-based DFS namespace –
It stores the configuration of DFS in Active Directory, making the DFS namespace root
accessible at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>.
Advantages:
● DFS allows multiple users to access or store data.
● It allows data to be shared remotely.
● It improves the availability of files, access time, and network efficiency.
● It improves the capacity to change the size of the data and the ability to
exchange data.
● Distributed File System provides transparency of data even if a server or disk fails.
Disadvantages:
● In a Distributed File System, nodes and connections need to be secured, so security
is a concern.
● Messages and data may be lost in the network while moving from one node to
another.
● Database connection in the case of a Distributed File System is complicated.
● Handling of the database is also harder in a Distributed File System than in a
single-user system.
● Overloading may occur if all nodes try to send data at once.
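The server listing below imports a `DistributedFileSystem` helper whose source is not shown in this manual. A minimal sketch of what such a registry might look like — `getFileAddresses` is inferred from the calls visible in the listing, while `addFile`/`removeFile` are assumptions matching the `add`/`remove` commands:

```java
import java.net.InetAddress;
import java.util.*;

// Registry mapping file names to the servers that hold a copy.
public class DistributedFileSystem {
    private final Map<String, List<InetAddress>> files = new HashMap<>();

    public synchronized boolean addFile(String name, InetAddress server) {
        return files.computeIfAbsent(name, k -> new ArrayList<>()).add(server);
    }

    public synchronized boolean removeFile(String name, InetAddress server) {
        List<InetAddress> addrs = files.get(name);
        return addrs != null && addrs.remove(server);
    }

    // Used by the server's "get" command to report every replica's location.
    public synchronized List<InetAddress> getFileAddresses(String name) {
        return files.getOrDefault(name, Collections.emptyList());
    }

    public static void main(String[] args) throws Exception {
        DistributedFileSystem fs = new DistributedFileSystem();
        fs.addFile("notes.txt", InetAddress.getByName("192.168.0.2"));
        System.out.println(fs.getFileAddresses("notes.txt").size()); // 1
    }
}
```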
Program:
// DistributedFileSystemServer.java
package com.saif.exp10;
import com.saif.exp11.DistributedFileSystem;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;
outputStream.writeBoolean(true);
} else if (command.equals("get")) {
String fileName = inputStream.readUTF();
List<InetAddress> addresses = fileSystem.getFileAddresses(fileName);
outputStream.writeInt(addresses.size());
for (InetAddress address : addresses) {
outputStream.writeUTF(address.getHostAddress());
}
} else if (command.equals("exit")) {
break;
} else {
System.err.println("Invalid command: " + command);
outputStream.writeBoolean(false);
}
}
} catch (IOException e) {
System.err.println("Client error: " + e.getMessage());
}
System.out.println("Client disconnected: " +
clientSocket.getInetAddress().getHostAddress());
}
}
// DistributedFileSystemClient.java
package com.saif.exp10;
import java.io.*;
import java.net.*;
import java.util.*;
System.out.print("Enter server address: ");
InetAddress address = InetAddress.getByName(scanner.nextLine());
outputStream.writeUTF(command);
outputStream.writeUTF(fileName);
outputStream.writeUTF(address.getHostAddress());
boolean success = inputStream.readBoolean();
if (success) {
System.out.println("File added successfully");
} else {
System.out.println("Failed to add file");
}
} else if (command.equals("remove")) {
System.out.print("Enter file name: ");
String fileName = scanner.nextLine();
System.out.print("Enter server address: ");
InetAddress address = InetAddress.getByName(scanner.nextLine());
outputStream.writeUTF(command);
outputStream.writeUTF(fileName);
outputStream.writeUTF(address.getHostAddress());
boolean success = inputStream.readBoolean();
if (success) {
System.out.println("File removed successfully");
} else {
System.out.println("Failed to remove file");
}
} else if (command.equals("get")) {
System.out.print("Enter file name: ");
String fileName = scanner.nextLine();
outputStream.writeUTF(command);
outputStream.writeUTF(fileName);
int numAddresses = inputStream.readInt();
if (numAddresses == 0) {
System.out.println("File not found");
} else {
System.out.println("File found on " + numAddresses + " servers:");
for (int i = 0; i < numAddresses; i++) {
String address = inputStream.readUTF();
System.out.println("- " + address);
}
}
} else if (command.equals("exit")) {
outputStream.writeUTF(command);
break;
} else {
System.err.println("Invalid command: " + command);
}
}
} catch (IOException e) {
System.err.println("Client error: " + e.getMessage());
}
}
}
Output: