Distributed Computing Lab Manual

CHHATRAPATI SHIVAJI MAHARAJ INSTITUTE OF TECHNOLOGY
(Affiliated to the University of Mumbai, Approved by AICTE, New Delhi)

Near Shedung Toll Plaza, Old Mumbai-Pune Highway, Post - Shedung, Taluka Panvel, Dist. Raigad, Navi Mumbai, Maharashtra 410206

Certificate
This is to certify that Mr. /Ms. SAIF UMARSAHAB BODU Roll No: 06 Semester: VIII
Branch: COMPUTER ENGINEERING has conducted all practical work of the session
for Subject: DISTRIBUTED COMPUTING LAB (CSL801) as a part of academic
requirement of the University of Mumbai and has completed all exercises satisfactorily during
the academic year 2022 - 2023.

Date: / /2023

SAIF UMARSAHAB BODU PROF. HARISHCHANDRA MAURYA

Signature of Student Lecturer In-Charge

Internal Examiner Head of Department

External Examiner Principal

Seal of
College

CHHATRAPATI SHIVAJI MAHARAJ INSTITUTE OF TECHNOLOGY
(Affiliated to the University of Mumbai, Approved by AICTE, New Delhi)

Academic Year: 2022-23 Semester: VIII Branch: Computer Engineering


Sr. No | Title of Experiments | Date of Performance | Date of Completion | Sign | Grade/Remark

1. Inter-process communication.

2. Client/Server using RPC/RMI.

3. Group Communication.

4. Clock Synchronization algorithms.

5. Election Algorithm.

6. Mutual Exclusion Algorithm.

7. Deadlock Management in Distributed System.

8. Load Balancing.

9. Distributed Shared Memory.

10. Distributed File System (AFS/CODA).

Sr. No | Title of Assignments | Date of Performance | Date of Completion | Sign | Grade/Remark
1.

2.

3.

4.

5.

6.
Signature of Student Signature of Staff
EXPERIMENT NO. 1
Aim: To implement Inter-process Communication using TCP Based on Socket
Programming.

Theory:

Inter-process communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process letting
another process know that some event has occurred or the transferring of data from one process
to another.

A diagram that illustrates inter-process communication is as follows –

Synchronization in Inter-process Communication


Synchronization is a necessary part of inter-process communication. It is either provided by the
inter-process control mechanism or handled by the communicating processes. Some of the
methods to provide synchronization are as follows −

● Semaphore
A semaphore is a variable that controls access to a common resource shared by multiple
processes. The two types of semaphores are binary semaphores and counting semaphores (a minimal Java sketch follows this list).
● Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a
time. This is useful for synchronization and also prevents race conditions.
● Barrier
A barrier does not allow individual processes to proceed until all the processes reach it.
Many parallel languages and collective routines impose barriers.
● Spinlock
This is a type of lock. A process trying to acquire a spinlock waits in a loop, repeatedly
checking whether the lock is available. This is known as busy waiting, because the process
does no useful work even though it remains active.
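
As a concrete illustration of the semaphore method mentioned above, here is a minimal Java sketch using java.util.concurrent.Semaphore; the class name, permit count, and shared counter are illustrative assumptions rather than part of the lab exercise.

// SemaphoreDemo.java -- minimal sketch of a binary semaphore guarding a shared counter
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    private static final Semaphore mutex = new Semaphore(1); // binary semaphore: one permit
    private static int sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                try {
                    mutex.acquire();              // wait until the permit is available
                    try {
                        sharedCounter++;          // critical section protected by the semaphore
                    } finally {
                        mutex.release();          // return the permit
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("Counter = " + sharedCounter); // 2000, since updates never interleave
    }
}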
Approaches to Inter-process Communication
The different approaches to implement inter-process communication are given as follows −

● Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way
data channel between two processes. This uses standard input and output methods. Pipes
are used in all POSIX systems as well as Windows operating systems.
● Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data
sent between processes on the same computer or data sent between different computers on
the same network. Most of the operating systems use sockets for inter-process
communication.
● File
A file is a data record that may be stored on a disk or acquired on demand by a file server.
Multiple processes can access a file as required. All operating systems use files for data
storage.
● Signal
Signals are useful in inter-process communication in a limited way. They are system
messages that are sent from one process to another. Normally, signals are not used to
transfer data but are used for remote commands between processes.
● Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes.
This is done so that the processes can communicate with each other. All POSIX systems,
as well as Windows operating systems use shared memory.

● Message Queue

Multiple processes can read and write data to the message queue without being connected
to each other. Messages are stored in the queue until their recipient retrieves them.
Message queues are quite useful for interprocess communication and are used by most
operating systems.
A diagram that demonstrates message queue and shared memory methods of interprocess
communication is as follows –

Inter-process Communication with Sockets:


One of the ways to manage inter-process communication is by using sockets. They provide
point-to-point, two-way communication between two processes. Sockets are an endpoint of
communication and a name can be bound to them. A socket can be associated with one or more
processes.

Types of Sockets

The different types of sockets are given as follows –

● Sequential Packet Socket: This type of socket provides a reliable connection for
datagrams whose maximum length is fixed. This connection is two-way as well as
sequenced.
● Datagram Socket: A two-way flow of messages is supported by the datagram socket. The
receiver in a datagram socket may receive messages in a different order than that in
which they were sent. The operation of datagram sockets is similar to passing letters
from source to destination through the mail.
● Stream Socket: Stream sockets operate like a telephone conversation and provide a two-
way and reliable flow of data with no record boundaries. This data flow is also sequenced
and unduplicated.
● Raw Socket: The underlying communication protocols can be accessed using the raw
sockets.

Socket Creation

Sockets can be created in a specific domain and the specific type using the following
declaration–

int socket(int domain, int type, int protocol)

If the protocol is not specified in the above system call, the system uses a default protocol that
supports the socket type. The socket handle is returned. It is a descriptor.

The bind function call is used to bind an internet address or path to a socket. This is shown as
follows −

int bind(int s, const struct sockaddr *name, int namelen)

Connecting Stream Sockets

Connecting the stream sockets is not a symmetric process. One of the processes acts as a server
and the other acts as a client. The server specifies the number of connection requests that can be
queued using the following declaration −

int listen(int s, int backlog)

The client initiates a connection to the server’s socket by using the following declaration −

int connect(int s, struct sockaddr *name, int namelen)

A new socket descriptor which is valid for that particular connection is returned by the following
declaration −

int accept(int s, struct sockaddr *addr, int *addrlen)

Stream Data Transfer

The send() and recv() functions are used to send and receive data using sockets. These are
similar to the read() and write() functions but contain some extra flags. The declaration for send()
and recv() are as follows −

int send(int s, const char *msg, int len, int flags)

int recv(int s, char *buf, int len, int flags)

Stream Closing

The socket is discarded or closed by calling close().
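
The declarations above are the underlying C system calls. The lab program below is written in Java, where these calls are wrapped by the ServerSocket and Socket classes; the following rough correspondence is only an illustrative sketch, and the port number 5000 is an assumption.

// SocketMapping.java -- sketch mapping the C calls above to the Java classes used below
import java.net.ServerSocket;
import java.net.Socket;

public class SocketMapping {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5000);      // socket() + bind() + listen() on port 5000
        Socket client = new Socket("127.0.0.1", 5000);     // socket() + connect() on the client side
        Socket conn = server.accept();                     // accept(): returns a per-connection socket
        conn.getOutputStream().write(1);                   // send() becomes a write on the output stream
        int b = client.getInputStream().read();            // recv() becomes a read on the input stream
        System.out.println("received byte: " + b);
        conn.close(); client.close(); server.close();      // close()
    }
}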

Program:

// TCPServer.java
package com.saif.exp1;
import java.util.*;
import java.io.*;
import java.net.*;

public class TCPServer {

public static void main(String[] args) throws Exception {


ServerSocket server = new ServerSocket(25); // port 25 usually needs elevated privileges; any free port works
System.out.println("Connecting...");
Socket ss = server.accept();
System.out.println("Connected");
DataInputStream din = new DataInputStream(ss.getInputStream());
DataOutputStream dout = new DataOutputStream(ss.getOutputStream());
String str = "";
int sum = 0;
System.out.println("Receiving integers from client...");
while (true) {
str = din.readUTF();
if (str.equals("stop")) {
break;
}
sum = sum + Integer.parseInt(str);
}
dout.writeUTF(Integer.toString(sum));
dout.flush();
din.close();
ss.close();
server.close();
}
}

// TCPClient.java

package com.saif.exp1;

import java.io.*;
import java.util.*;
import java.net.*;

public class TCPClient {

public static void main(String[] args) throws Exception {


System.out.println("Connecting...");
Socket client = new Socket("127.0.0.1", 25);

System.out.println("Connected");
DataInputStream din = new DataInputStream(client.getInputStream());
DataOutputStream dout = new DataOutputStream(client.getOutputStream());
Scanner sc = new Scanner(System.in);
String send = "";
while (!send.equals("stop")) {
System.out.print("Send: ");
send = sc.nextLine();
dout.writeUTF(send);
}
dout.flush();
String recv = din.readUTF();
System.out.println("Sum of the integers is: " + recv);
dout.close();
din.close();
client.close();
}
}

Output:

EXPERIMENT NO. 2
Aim: To implement Client-Server Application using Java RMI.
Theory:
The RMI (Remote Method Invocation) is an API that provides a mechanism to create distributed
application in java. The RMI allows an object to invoke methods on an object running in another
JVM.

The RMI provides remote communication between the applications using two objects stub and
skeleton.

Understanding stub and skeleton

RMI uses stub and skeleton object for communication with the remote object.

A remote object is an object whose method can be invoked from another JVM. Let's understand
the stub and skeleton objects:

Stub

The stub is an object that acts as a gateway for the client side. All outgoing requests are routed
through it. It resides on the client side and represents the remote object. When the caller invokes
a method on the stub object, it performs the following tasks:

1. It initiates a connection with remote Virtual Machine (JVM),


2. It writes and transmits (marshals) the parameters to the remote Virtual Machine (JVM),
3. It waits for the result
4. It reads (unmarshals) the return value or exception, and
5. It finally, returns the value to the caller.

Skeleton

The skeleton is an object that acts as a gateway for the server-side object. All incoming requests
are routed through it. When the skeleton receives an incoming request, it performs the following
tasks:

1. It reads the parameter for the remote method

2. It invokes the method on the actual remote object, and
3. It writes and transmits (marshals) the result to the caller.

In the Java 2 SDK, a stub protocol was introduced that eliminates the need for skeletons.

Understanding requirements for the distributed applications

If an application performs these tasks, it can be a distributed application.

● The application needs to locate the remote method


● It needs to provide the communication with the remote objects, and
● The application needs to load the class definitions for the objects.

An RMI application has all these features, so it is called a distributed application.

Java RMI Example

Given below are the 6 steps to write an RMI program.

1. Create the remote interface


2. Provide the implementation of the remote interface
3. Compile the implementation class and create the stub and skeleton objects using the rmic
tool

4. Start the registry service by rmiregistry tool
5. Create and start the remote application
6. Create and start the client application

RMI Example

In this example, we have followed all 6 steps to create and run the RMI application. The client
application needs only two files: the remote interface and the client application. In an RMI
application, both client and server interact with the remote interface. The client application
invokes methods on the proxy object, RMI sends the request to the remote JVM, and the return
value is sent back to the proxy object and then to the client application.

1) Create the remote interface

For creating the remote interface, extend the Remote interface and declare the RemoteException
with all the methods of the remote interface. Here, we are creating a remote interface that
extends the Remote interface. There is only one method named add() and it declares
RemoteException.

2) Provide the implementation of the remote interface

Now provide the implementation of the remote interface. For providing the implementation of
the Remote interface, we need to

● Either extend the UnicastRemoteObject class,


● or use the exportObject() method of the UnicastRemoteObject class

In case, you extend the UnicastRemoteObject class, you must define a constructor that declares
RemoteException.

3) Create the stub and skeleton objects using the rmic tool.

Next step is to create stub and skeleton objects using the rmi compiler. The rmic tool invokes the
RMI compiler and creates stub and skeleton objects.

4) Start the registry service by the rmiregistry tool

Now start the registry service by using the rmiregistry tool. If you don't specify a port number,
it uses the default port (1099).
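
Alternatively, the registry can be started from within the server process itself; the following is a minimal sketch, assuming the default port 1099 and the Add class from the program later in this experiment, rather than the exact approach used there.

// RegistryStarter.java -- illustrative sketch: starting the RMI registry programmatically
package com.saif.exp2;

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RegistryStarter {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099); // same effect as running rmiregistry
        registry.rebind("Add", new Add());                       // bind the remote object by name
        System.out.println("Registry started and Add bound");
        Thread.sleep(Long.MAX_VALUE);                            // keep the JVM (and registry) alive
    }
}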

5) Create and run the server application

Now the RMI services need to be hosted in a server process. The Naming class provides methods
to store and obtain references to remote objects; its five methods are bind(), rebind(), unbind(), lookup(), and list().

6) Create and run the client application

At the client we are getting the stub object by the lookup() method of the Naming class and
invoking the method on this object. In this example, we are running the server and client
applications, in the same machine so we are using localhost. If you want to access the remote
object from another machine, change the localhost to the host name (or IP address) where the
remote object is located.

Program:
//AddInterface.java
package com.saif.exp2;

import java.rmi.*;
public interface AddInterface extends Remote {
public int sum(int n1, int n2) throws RemoteException;
}

//Add.java
package com.saif.exp2;

import java.rmi.*;
import java.rmi.server.*;
public class Add extends UnicastRemoteObject implements AddInterface {
int num1, num2;
public Add() throws RemoteException {
}
public int sum(int n1, int n2) throws RemoteException {
num1 = n1;
num2 = n2;
return num1 + num2;
}
}

//AddServer.java
package com.saif.exp2;

import java.rmi.Naming;
public class AddServer {

public static void main(String[] args) {


try {
Naming.rebind("Add", new Add());
System.out.println("Server is connected and waiting for the client");
} catch (Exception e) {
System.out.println("Server could not connect: " + e);
}
}
}

//AddClient.java
package com.saif.exp2;

import java.rmi.Naming;
public class AddClient {
public static void main(String[] args) {
try {
AddInterface ai = (AddInterface) Naming.lookup("//localhost/Add");
System.out.println("The sum of 2 numbers is: " + ai.sum(10, 2));
} catch (Exception e) {
System.out.println("Client Exception: " + e);
}
}
}

Output:

EXPERIMENT NO. 3
Aim: To implement a program to demonstrate group communication.
Theory:
Communication between two processes in a distributed system is required to exchange various
data, such as code or a file, between the processes. When one source process tries to
communicate with multiple processes at once, it is called Group Communication. A group is a
collection of interconnected processes with abstraction. This abstraction is to hide the message
passing so that the communication looks like a normal procedure call. Group communication
also helps the processes from different hosts to work together and perform operations in a
synchronized manner, therefore increasing the overall performance of the system.

Types of Group Communication in a Distributed System:

● Broadcast Communication: When the host process communicates with every process
in the distributed system at the same time. Broadcast communication comes in handy when a
common stream of information is to be delivered to each and every process in the most efficient
manner possible. Since it requires no per-recipient processing, communication is very
fast in comparison to other modes of communication. However, it does not support a large
number of processes and cannot treat a specific process individually.

Fig. A broadcast Communication: P1 process communicating with every process in the system

● Multicast Communication: When the host process communicates with a designated
group of processes in the distributed system at the same time. This technique is mainly used to
address the problem of a high workload on the host system and of redundant information
reaching processes in the system. Multicasting can significantly decrease the time taken for
message handling (a minimal Java sketch appears after this list).

Fig. A multicast Communication: P1 process communicating with only a group of the process in
the system

● Unicast Communication: When the host process communicates with a single process
in the distributed system at a time, although the same information may later be passed to
multiple processes individually. This works best for two communicating processes, since the
sender only has to deal with one specific process. However, it adds overhead because the sender
must locate the exact process before exchanging information/data.

Fig. A unicast Communication: P1 process communicating with only P3 process
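
As a minimal sketch of the multicast mode described above, Java's MulticastSocket can join a group address and receive datagrams sent to it; the group address 230.0.0.1, the port 4446, and the class name here are illustrative assumptions, not part of the lab program below.

// MulticastSketch.java -- illustrative sketch of multicast group communication
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1"); // multicast group address (assumed)
        try (MulticastSocket socket = new MulticastSocket(4446)) {
            socket.joinGroup(group);                 // become a member of the group
            byte[] buf = new byte[256];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                  // any datagram sent to the group is delivered here
            System.out.println("Group message: " + new String(packet.getData(), 0, packet.getLength()));
            socket.leaveGroup(group);                // leave the group when done
        }
    }
}

A sender would simply address a DatagramPacket to 230.0.0.1 port 4446 and send it through a DatagramSocket; every joined receiver gets a copy.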

Group communication characteristics

Atomicity, often known as an all-or-nothing quality, is a crucial property in the group


communication mechanism. If one or more group members have a problem receiving the
message, the process that delivers it to them will get an error notice.

The ordering attribute of the messages is in charge of managing the order in which messages are
delivered. Message ordering types include:

● No order means messages are delivered to the group without regard to order.
● FIFO order means messages from the same sender are delivered in the order they were sent.
● Causal order means a message is delivered only after every message that causally precedes
it has been delivered.
● Total order means all messages are delivered to all group members in the same order.

Group organization
Group communication systems can be classified as either closed or open. Only members of the
closed group can send messages to the group. Users who are not group members can send
messages to each member separately. Non-members in the open group can send messages to the
group. The program's objective determines the use of a closed or open group.

The group's internal structure determines its organization. In egalitarian groups, all decisions
are made collectively, and in the event of a failure the group simply carries on without the
failed process. In hierarchical groups, a coordinator makes the decisions, so the loss of the
coordinator brings all processes to a standstill.

Program:
//GCServer.java
package com.saif.exp3;

import java.io.*;
import java.util.*;
import java.net.*;

public class GCServer {


static ArrayList<ClientHandler> clients = new ArrayList<ClientHandler>();

public static void main(String[] args) throws Exception {


ServerSocket server = new ServerSocket(25);
Message msg = new Message();
int count = 0;
while (true) {
Socket ss = server.accept();
DataInputStream din = new DataInputStream(ss.getInputStream());
DataOutputStream dout = new DataOutputStream(ss.getOutputStream());
ClientHandler chlr = new ClientHandler(ss, din, dout, msg);
Thread t = chlr;
clients.add(chlr);
count++;
t.start();
}
}
}

class Message {
String msg;

public void setMsg(String msg) {
this.msg = msg;
}

public void getMsg() {


System.out.println("\nNEW GROUP MESSAGE: " + this.msg);
for (int i = 0; i < GCServer.clients.size(); i++) {
try {
System.out.println("Client: " + GCServer.clients.get(i).ip + "; ");
GCServer.clients.get(i).out.writeUTF(this.msg);
GCServer.clients.get(i).out.flush();
} catch (Exception e) {
System.out.print(e);
}
}
}
}

class ClientHandler extends Thread {


DataInputStream in;
DataOutputStream out;
Socket socket;
int sum;
float res;
boolean conn;
Message msg;
String ip;

public ClientHandler(Socket s, DataInputStream din, DataOutputStream dout, Message msg) {


this.socket = s;
this.in = din;
this.out = dout;
this.conn = true;
this.msg = msg;
this.ip = (((InetSocketAddress)
this.socket.getRemoteSocketAddress()).getAddress()).toString().replace("/", "");
}

public void run() {


while (conn == true) {
try {
String input = this.in.readUTF();
this.msg.setMsg(input);
this.msg.getMsg();
} catch (Exception e) {

conn = false;
System.out.println(e);
}
}
closeConn();
}

public void closeConn() {


try {
this.out.close();
this.in.close();
this.socket.close();
} catch (Exception e) {
System.out.println(e);
}
}
}

//GCMaster.java
package com.saif.exp3;

import java.util.*;
import java.io.*;
import java.net.*;

public class GCMaster {


public static void main(String[] args) throws Exception {
Socket client = new Socket("127.0.0.1", 25);
DataInputStream din = new DataInputStream(client.getInputStream());
DataOutputStream dout = new DataOutputStream(client.getOutputStream());
System.out.println("Connected as Master");
Scanner sc = new Scanner(System.in);
String send = "";
do {
System.out.print("Message('stop' to stop): ");
send = sc.nextLine();
dout.writeUTF(send);
dout.flush();
} while (!send.equals("stop"));
dout.close();
din.close();
client.close();
}
}

//GCSlave.java
package com.saif.exp3;
import java.io.DataInputStream;
import java.net.Socket;

public class GCSlave {


public static void main(String[] args) throws Exception{
Socket client = new Socket("127.0.0.1",25);
DataInputStream din = new DataInputStream(client.getInputStream());
System.out.println("Connected as Slave");
String recv = "";
do{
recv = din.readUTF();
System.out.println("Master says: " + recv);
}while(!recv.equals("stop"));
din.close();
client.close();
}
}

Output:

EXPERIMENT NO. 4
Aim: To implement Lamport’s Clock Synchronization Algorithm.
Theory:

The algorithm of Lamport timestamps is a simple algorithm used to determine the order of
events in a distributed computer system. As different nodes or processes will typically not be
perfectly synchronized, this algorithm is used to provide a partial ordering of events with
minimal overhead, and conceptually provide a starting point for the more advanced vector clock
method. They are named after their creator, Leslie Lamport. Distributed algorithms such as
resource synchronization often depend on some method of ordering events to function. For
example, consider a system with two processes and a disk. The processes send messages to each
other, and also send messages to the disk requesting access. The disk grants access in the order
the messages were sent. For example process A sends a message to the disk requesting write
access, and then sends a read instruction message to process B. Process B receives the message,
and as a result sends its own read request message to the disk. If there is a timing delay causing
the disk to receive both messages at the same time, it can determine which message happened-
before the other. (A happens-before B if one can get from A to B by a sequence of moves of two
types: moving forward while remaining in the same process, and following a message from its
sending to its reception.) A logical clock algorithm provides a mechanism to determine facts
about the order of such events.

Lamport invented a simple mechanism by which the happened-before ordering can be captured
numerically. A Lamport logical clock is an incrementing software counter maintained in each
process. Conceptually, this logical clock can be thought of as a clock that only has meaning in
relation to messages moving between processes. When a process receives a message, it
resynchronizes its logical clock with that sender. The above-mentioned vector clock is a
generalization of the idea into the context of an arbitrary number of parallel, independent
processes. The algorithm follows some simple rules:

1. A process increments its counter before each event in that process;


2. When a process sends a message, it includes its counter value with the message;
3. On receiving a message, the counter of the recipient is updated, if necessary, to the
greater of its current counter and the timestamp in the received message. The counter is
then incremented by 1 before the message is considered received.
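
The three rules above can be captured in a few lines of Java; the following minimal sketch (the class and method names are illustrative and are not part of the lab program below) keeps one counter per process.

// LamportClock.java -- illustrative sketch of the three rules above
public class LamportClock {
    private int counter = 0;

    // Rule 1: increment the counter before each local event
    public synchronized int tick() {
        return ++counter;
    }

    // Rule 2: a send is an event; the returned value is attached to the outgoing message
    public synchronized int sendTimestamp() {
        return ++counter;
    }

    // Rule 3: on receive, take the maximum of the local counter and the message timestamp,
    // then increment before the message is considered received
    public synchronized int onReceive(int messageTimestamp) {
        counter = Math.max(counter, messageTimestamp) + 1;
        return counter;
    }
}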

Program:
//Lamport.java

import java.util.*;
import java.util.HashMap;
import java.util.Scanner;
import javax.swing.*;
import java.awt.*;
import java.awt.geom.*;

public class Lamport {

int e[][] = new int[10][10];


int en[][] = new int[10][10];
int ev[] = new int[10];
int i, p, j, k;

HashMap<Integer, Integer> hm = new HashMap<Integer, Integer>();


int xpoints[] = new int[5];
int ypoints[] = new int[5];

class draw extends JFrame {


private final int ARR_SIZE = 4;

void drawArrow(Graphics g1, int x1, int y1, int x2, int y2) {

Graphics2D g = (Graphics2D) g1.create();

double dx = x2 - x1, dy = y2 - y1;


double angle = Math.atan2(dy, dx);
int len = (int) Math.sqrt(dx * dx + dy * dy);
AffineTransform at = AffineTransform.getTranslateInstance(x1, y1);
at.concatenate(AffineTransform.getRotateInstance(angle));
g.transform(at);
// Draw horizontal arrow starting in (0,0)

g.drawLine(0, 0, len, 0);

g.fillPolygon(new int[]{len, len - ARR_SIZE, len - ARR_SIZE, len},
new int[]{0, -ARR_SIZE, ARR_SIZE, 0}, 4);
}

public void paintComponent(Graphics g) {


for (int x = 15; x < 200; x += 16) {
drawArrow(g, x, x, x, 150);
drawArrow(g, 30, 300, 300, 190);
}
}

public void paint(Graphics g) {

int h1, h11, h12;

Graphics2D go = (Graphics2D) g;
go.setPaint(Color.black);
for (i = 1; i <= p; i++) {
go.drawLine(50, 100 * i, 450, 100 * i);
}

for (i = 1; i <= p; i++) {

for (j = 1; j <= ev[i]; j++) {

k = i * 10 + j;

go.setPaint(Color.blue);

go.fillOval(50 * j, 100 * i - 3, 5, 5);


go.drawString("e" + i + j + "(" + en[i][j] + ")", 50 * j, 100 * i - 5);
h1 = hm.get(k);
if (h1 != 0) {
h11 = h1 / 10;
h12 = h1 % 10;
go.setPaint(Color.red);
drawArrow(go, 50 * h12 + 2, 100 * h11, 50 * j + 2, 100 * i);
}
}
}
}
}

public void calc() {

Scanner sc = new Scanner(System.in);

System.out.println("Enter the number of process:");
p = sc.nextInt();
System.out.println("Enter the no of events per process:");
for (i = 1; i <= p; i++) {

ev[i] = sc.nextInt();
}
System.out.println("Enter the relationship:");
for (i = 1; i <= p; i++) {
System.out.println("For process:" + i);
for (j = 1; j <= ev[i]; j++) {
System.out.println("For event:" + (j));
int input = sc.nextInt();
k = i * 10 + j;
hm.put(k, input);
if (j == 1) {
en[i][j] = 1;
}
}
}
for (i = 1; i <= p; i++) {

for (j = 2; j <= ev[i]; j++) {

k = i * 10 + j;
if (hm.get(k) == 0) {
en[i][j] = en[i][j - 1] + 1;
} else {
int a = hm.get(k);
int p1 = a / 10;
int e1 = a % 10;
if (en[p1][e1] > en[i][j - 1]) {
en[i][j] = en[p1][e1] + 1;
} else {
en[i][j] = en[i][j - 1] + 1;
}
}

}
}
for (i = 1; i <= p; i++) {

for (j = 1; j <= ev[i]; j++) {

System.out.println(en[i][j]);

}
}

JFrame jf = new draw();


jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jf.setSize(500,500);
jf.setVisible(true);
}
public static void main(String[] args) {
Lamport lam = new Lamport();
lam.calc();
}
}

Output:

EXPERIMENT NO. 5
Aim: To implement Bully Election Algorithm.
Theory:
Distributed Algorithm is an algorithm that runs on a distributed system. Distributed system is a
collection of independent computers that do not share their memory. Each processor has its own
memory and they communicate via communication networks. Communication in the network is
implemented as a process on one machine communicating with a process on another machine.
Many algorithms used in the distributed system require a coordinator that performs functions
needed by other processes in the system.

Election Algorithms

Election algorithms choose a process from a group of processes to act as a coordinator. If the
coordinator process crashes for some reason, a new coordinator is elected on another processor.
The election algorithm basically determines where a new copy of the coordinator should be
restarted. It assumes that every active process in the system has a unique priority number. The
process with the highest priority will be chosen as the new coordinator. Hence, when a
coordinator fails, the algorithm elects the active process with the highest priority number. This
number is then sent to every active process in the distributed system.

The Bully Algorithm – This algorithm applies to system where every process can send a
message to every other process in the system. Algorithm – Suppose process P sends a message to
the coordinator.

The Bully Election Process

1. P sends a message to the coordinator.


2. If the coordinator does not respond within a time interval T, it is assumed that the
coordinator has failed.
3. Process P then sends an election message to every process with a higher priority number.
4. It waits for responses; if no one responds within time interval T, process P elects itself as
the coordinator.

5. Then it sends a message to all lower priority number processes that it is elected as their
new coordinator.
6. However, if an answer is received within time T from any other process Q,
(I) Process P again waits for time interval T’ to receive another message from Q that it
has been elected as coordinator.
(II) If Q doesn’t respond within time interval T’ then it is assumed to have failed and
algorithm is restarted.
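
As an illustrative trace (the process numbers are assumed, not part of the algorithm statement above): suppose processes 1 to 5 exist, process 5 is the coordinator, and 5 crashes. If process 2 detects the failure, it sends election messages to 3, 4 and 5. Processes 3 and 4 reply OK and start their own elections; process 4 sends its election message only to 5, receives no reply within T, and therefore announces itself as the new coordinator to processes 1, 2 and 3.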

• Disadvantages

● A large number of messages are sent, this can overload the system.
● There may be cases in very large systems that multiple coordinators get elected.

Program:
//Bully.java
package com.saif.exp5;

import java.io.*;
import java.util.*;
public class Bully {
static int n;
static int pro[] = new int[100];
static int sta[] = new int[100];
static int co;

public static void main(String[] args) {


System.out.print("Enter the number of process: ");
Scanner sc = new Scanner(System.in);
n = sc.nextInt();
int i, j, c, cl = 1;
for (i = 0; i < n; i++) {
sta[i] = 1;
pro[i] = i;
}
boolean choice = true;
int ch;
do {
System.out.println("Enter Your Choice");
System.out.println("1. Crash Process");
System.out.println("2. Recover Process");
System.out.println("3. Exit");
System.out.print("> ");
ch = sc.nextInt();
switch (ch) {
case 1:
System.out.print("Enter the process number: ");
c = sc.nextInt();
sta[c - 1] = 0;
cl = 1;
break;
case 2:
System.out.print("Enter the process number: ");
c = sc.nextInt();
sta[c - 1] = 1;
cl = 1;
break;
case 3:
choice = false;
cl = 0;

break;
}
if (cl == 1) {
System.out.print("Which process will initiate election? = ");
int ele = sc.nextInt();
elect(ele);
}
System.out.println("Final coordinator is " + co);
} while (choice);
}

static void elect(int ele) {


ele = ele - 1;
co = ele + 1;
for (int i = 0; i < n; i++) {
if (pro[ele] < pro[i]) {
System.out.println("Election message is sent from " + (ele + 1) + " to " + (i + 1));
if (sta[i] == 1) {
System.out.println("Ok message is sent from " + (i + 1) + " to " + (ele + 1));
}
if (sta[i] == 1) {
elect(i + 1);
}
}
}
}
}

Output:

EXPERIMENT NO. 6
Aim: To implement program for Mutual Exclusion Algorithm.
Theory:
Mutual exclusion is a concurrency control property which is introduced to prevent race
conditions. It is the requirement that a process cannot enter its critical section while another
concurrent process is currently present or executing in its critical section, i.e., only one process is
allowed to execute the critical section at any given instant of time.

Mutual exclusion in single computer system vs. distributed system:

In single computer system, memory and other resources are shared between different processes.
The status of shared resources and the status of users is easily available in the shared memory so
with the help of shared variable (For example: Semaphores) mutual exclusion problem can be
easily solved.

In distributed systems, we have neither shared memory nor a common physical clock, and
therefore we cannot solve the mutual exclusion problem using shared variables. To eliminate the
mutual exclusion problem in a distributed system, an approach based on message passing is used.

A site in a distributed system does not have complete information about the state of the system,
due to the lack of shared memory and a common physical clock.

Requirements of Mutual exclusion Algorithm:

● No Deadlock: Two or more sites should not endlessly wait for a message that will
never arrive.
● No Starvation: Every site that wants to execute its critical section should get an
opportunity to do so in finite time. No site should wait indefinitely to execute the
critical section while other sites repeatedly execute it.
● Fairness: Each site should get a fair chance to execute the critical section. Requests to
execute the critical section must be executed in the order of their arrival in the system.
● Fault Tolerance: In case of a failure, the algorithm should be able to recognize it by itself
and continue functioning without any disruption.
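
The programs below demonstrate mutual exclusion with a simple token-passing scheme between two clients: a single token circulates between them, and a client may send data (its critical section) only while it holds the token. A minimal sketch of that loop is shown here; the class name and doCriticalWork() are hypothetical placeholders introduced only for illustration, and in/out stand for the reader and writer each client already opens.

// TokenLoop.java -- illustrative sketch of the token-passing idea used by the programs below
import java.io.BufferedReader;
import java.io.PrintStream;

public class TokenLoop {
    // in/out connect this node to its ring neighbours
    static void run(BufferedReader in, PrintStream out) throws Exception {
        while (true) {
            String token = in.readLine();          // block until the token arrives
            if (token == null) break;              // connection closed
            if ("Token".equalsIgnoreCase(token)) {
                doCriticalWork();                  // critical section: only the token holder runs this
                out.println("Token");              // pass the token to the next node in the ring
            }
        }
    }

    static void doCriticalWork() {                 // hypothetical placeholder for the protected work
        System.out.println("Inside critical section");
    }
}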

Program:
//MutualServer.java
package com.saif.exp6;

import java.io.*;
import java.net.*;
public class MutualServer implements Runnable {
Socket socket = null;
static ServerSocket ss;
MutualServer(Socket newSocket) {
this.socket = newSocket;
}

public static void main(String[] args) throws IOException {


ss = new ServerSocket(7000);
System.out.println("Server Started");
while (true) {
Socket s = ss.accept();
MutualServer es = new MutualServer(s);
Thread t = new Thread(es);
t.start();
}
}
@Override
public void run() {
try {
BufferedReader in = new BufferedReader(new
InputStreamReader(socket.getInputStream()));
while (true) {
System.out.println(in.readLine());
}
} catch (Exception e) {}
}
}

//ClientOne.java
package com.saif.exp6;

import java.io.*;
import java.net.*;

public class ClientOne {


public static void main(String[] args) throws IOException {
Socket s = new Socket("127.0.0.1", 7000);
PrintStream out = new PrintStream(s.getOutputStream());
ServerSocket ss = new ServerSocket(7001);
Socket s1 = ss.accept();
BufferedReader in1 = new BufferedReader(new InputStreamReader(s1.getInputStream()));
PrintStream out1 = new PrintStream(s1.getOutputStream());
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String str = "Token";
while (true) {
if (str.equalsIgnoreCase("Token")) {
System.out.println("Do you want to send some data");
System.out.println("Enter Yes or No");
str = br.readLine();
if (str.equalsIgnoreCase("Yes")) {
System.out.println("Enter the data");
str = br.readLine();
out.println(str);
}
out1.println("Token");
}
System.out.println("Waiting for Token");
str = in1.readLine();
}
}
}

//ClientTwo.java
package com.saif.exp6;

import java.io.*;
import java.net.*;
public class ClientTwo {
public static void main(String args[]) throws IOException {
Socket s = new Socket("127.0.0.1", 7000);
PrintStream out = new PrintStream(s.getOutputStream());
Socket s2 = new Socket("127.0.0.1", 7001);
BufferedReader in2 = new BufferedReader(new InputStreamReader(s2.getInputStream()));
PrintStream out2 = new PrintStream(s2.getOutputStream());
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String str = "Token";

while (true) {
System.out.println("Waiting for Token");
str = in2.readLine();
if (str.equalsIgnoreCase("Token")) {
System.out.println("Do you want to send some data");
System.out.println("Enter Yes or No");
}
str = br.readLine();
if (str.equalsIgnoreCase("Yes")) {
System.out.println("Enter the data");
str = br.readLine();
out.println(str);
}
out2.println("Token");
}
}
}
Output:

EXPERIMENT NO. 7
Aim: To implement Banker’s Algorithm for Deadlock Management.
Theory:
The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests for
safety by simulating the allocation for predetermined maximum possible amounts of all
resources, then makes an “s-state” check to test for possible activities, before deciding whether
allocation should be allowed to continue.

Why Banker’s algorithm is named so?

Banker’s algorithm is named so because it is used in banking system to check whether loan can
be sanctioned to a person or not. Suppose there are n number of account holders in a bank and
the total sum of their money is S. If a person applies for a loan then the bank first subtracts the
loan amount from the total money that bank has and if the remaining amount is greater than S
only then is the loan sanctioned. This is done because if all the account holders come to withdraw
their money, the bank can easily do it.

In other words, the bank would never allocate its money in such a way that it can no longer
satisfy the needs of all its customers. The bank would try to be in safe state always.

Following Data structures are used to implement the Banker’s Algorithm:

Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources types.

Available :

● It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
● Available[ j ] = k means there are ‘k’ instances of resource type Rj

Max :

● It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
● Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.

Allocation :

● It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
● Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource type
Rj

Need :

● It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
● Need [ i, j ] = k means process Pi currently need ‘k’ instances of resource type Rj
● Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]

Allocationi specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.

Banker’s algorithm consists of Safety algorithm and Resource request algorithm.

Safety Algorithm

The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.

Initialize: Work = Available

Finish[i] = false; for i=1, 2, 3, 4….n

2) Find an i such that both

a) Finish[i] = false

b) Needi <= Work

if no such i exists goto step (4)

3) Work = Work + Allocation[i]

Finish[i] = true

goto step (2)

4) if Finish [i] = true for all i

then the system is in a safe state
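
As an illustrative worked example (the numbers are assumed and are not taken from the program below): suppose there is a single resource type with Available = 3, and three processes P0, P1, P2 with Allocation = (5, 2, 2) and Max = (10, 4, 9), so Need = (5, 2, 7). Starting with Work = 3: P1's need 2 <= 3, so P1 can finish and releases its allocation, giving Work = 3 + 2 = 5; now P0's need 5 <= 5, so P0 finishes, giving Work = 10; finally P2's need 7 <= 10, so P2 finishes. Since all processes can finish in the order P1, P0, P2, the state is safe.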

Resource-Request Algorithm

Let Requesti be the request array for process Pi. Request i [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:

1) If Requesti <= Needi

Goto step (2) ; otherwise, raise an error condition, since the process has exceeded its maximum
claim.

2) If Requesti <= Available

Goto step (3); otherwise, Pi must wait, since the resources are not available.

3) Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as

follows:

Available = Available – Requesti

Allocationi = Allocationi + Requesti

Needi = Needi – Requesti

Program:
//Bankers.java

package com.saif.exp7;

import java.util.Scanner;
public class Bankers{
private int need[][],allocate[][],max[][],avail[][],np,nr;
private void input(){
Scanner sc=new Scanner(System.in);
System.out.print("Enter no. of processes and resources: ");
np=sc.nextInt(); //no. of process
nr=sc.nextInt(); //no. of resources
need=new int[np][nr]; //initializing arrays
max=new int[np][nr];
allocate= new int[np][nr];
avail=new int[1][nr];
System.out.println("Enter allocation matrix -->");
for(int i=0;i<np;i++){
for(int j=0;j<nr;j++){
allocate[i][j]=sc.nextInt(); //allocation matrix
}
}
System.out.println("Enter max matrix -->");
for(int i=0;i<np;i++) {
for (int j = 0; j < nr; j++){
max[i][j] = sc.nextInt(); //max matrix
}
}
System.out.println("Enter available matrix -->");
for(int j=0;j<nr;j++){
avail[0][j]=sc.nextInt(); //available matrix
}
sc.close();
}
private int[][] calc_need(){
for(int i=0;i<np;i++) {
for (int j = 0; j < nr; j++) { //calculating need matrix
need[i][j] = max[i][j] - allocate[i][j];
}
}
return need;
}
private boolean check(int i){
//checking if all resources for ith process can be allocated
for(int j=0;j<nr;j++){
if(avail[0][j]<need[i][j]){
return false;
}
}
return true;
}
public void isSafe(){
input();
calc_need();
boolean done[]=new boolean[np];

int j=0;
while(j<np) { //until all process allocated
boolean allocated=false;
for(int i=0;i<np;i++) {
if (!done[i] && check(i)) { //trying to allocate
for (int k = 0; k < nr; k++){
avail[0][k]=avail[0][k]-need[i][k]+max[i][k]; //release: available += allocation of process i
}
System.out.println("Allocated process : " + i);
allocated=done[i]=true; //mark process i finished only after a successful allocation
j++;
}
}
if(!allocated) break; //if no allocation

}
if(j==np) { //if all processes are allocated
System.out.println("\nSafely allocated");
}else {
System.out.println("All process cant be allocated safely");
}
}
public static void main(String[] args){
new Bankers().isSafe();
}
}

Output:

EXPERIMENT NO. 8
Aim: To implement the program for demonstrating a load-balancing approach in a
distributed environment.
Theory:
A load balancer is a device that acts as a reverse proxy and distributes network or application
traffic across a number of servers. Load balancing is the approach of distributing load units (i.e.,
jobs/tasks) across the network that connects the distributed system. Load balancing can be done
by a load balancer: a framework that can handle the load and is used to distribute the tasks to
the servers. The load balancer allocates the first task to the first server, the second task to the
second server, and so on.

Purpose of Load Balancing in Distributed Systems:

● Security: A load balancer provides security for your site with practically no changes to
your application.
● Protect applications from emerging threats: The Web Application Firewall (WAF) in
the load balancer shields your site.
● Authenticate User Access: The load balancer can request a username and password
before granting access to your site, to protect against unauthorized access.
● Protect against DDoS attacks: The load balancer can detect and drop distributed
denial-of-service (DDoS) traffic before it reaches your site.
● Performance: Load balancers can reduce the load on your web servers and optimize
traffic for a better user experience.
● SSL Offload: Terminating SSL (Secure Sockets Layer) traffic on the load balancer
removes that overhead from the web servers, leaving more resources available
for your web application.
● Traffic Compression: A load balancer can compress site traffic, giving your users a
much better experience of your site.

Load Balancing Approaches:

● Round Robin (a minimal sketch follows this list)
● Least Connections
● Least Time
● Hash
● IP Hash
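
As a minimal sketch of the round-robin approach named above, each incoming request is simply assigned to the next server in circular order; the class and field names here are illustrative assumptions.

// RoundRobinBalancer.java -- illustrative sketch of round-robin server selection
import java.util.List;

public class RoundRobinBalancer {
    private final List<String> servers;   // e.g. host names of the back-end servers
    private int next = 0;

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Return the server that should handle the next request, cycling through the list
    public synchronized String pick() {
        String server = servers.get(next);
        next = (next + 1) % servers.size();
        return server;
    }
}

For example, with servers A, B and C, successive calls to pick() return A, B, C, A, B, and so on.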

Classes of Load Balancing Algorithms:

Following are some of the different classes of load balancing algorithms.

● Static: In this model, if any node is found with a heavy load, a task can be taken
arbitrarily from it and moved to some other arbitrary node.
● Dynamic: It uses the current state information for load balancing. These are better
algorithms than static ones.
● Deterministic: These algorithms use processor and process characteristics to allocate
processes to the nodes.
● Centralized: The system state information is gathered by a single node.

Advantages of Load Balancing:

● Load balancers minimize server response time and maximize throughput.


● Load balancer ensures high availability and reliability by sending requests only to online
servers
● Load balancers do continuous health checks to monitor the server’s capability of handling
the request.

Migration:

Another important policy to be used by a distributed operating system that supports process
migration is to decide about the total number of times a process should be allowed to migrate.

Migration Models:

● Code section
● Resource section
● Execution section

● Code section: It contains the actual code.
● Resource section: It contains references to the external resources needed by the process.
● Execution section: It stores the current execution state of the process, comprising
private data, the stack, and the program counter.
● Weak migration: In weak migration only the code section is moved.
● Strong migration: In this migration, both the code segment and the execution segment are
moved. The migration can also be initiated by the source.

Program:
//LoadBalance.java
package com.saif.exp8;
import java.util.*;
public class LoadBalance {
static void printLoad(int servers, int processes){
int each = processes / servers;
int extra = processes % servers;
int total = 0;
int i = 0;
for (i = 0; i < extra; i++) {
System.out.println("Server "+(i+1)+" has "+(each+1)+" Processes");
}
for (;i<servers;i++){
System.out.println("Server "+(i+1)+" has "+each+" Processes");
}
}
public static void main(String[] args){
Scanner sc = new Scanner(System.in);
System.out.print("Enter the number of Servers: ");
int servers= sc.nextInt();
System.out.print("Enter the number of Processes: ");
int processes = sc.nextInt();
while (true){
printLoad(servers,processes);
System.out.println("\n1.Add Servers 2.Remove Server 3.Add Processes 4.Remove
Processes 5.Exit ");
System.out.print("> ");
switch(sc.nextInt()){
case 1:
System.out.print("How many more servers to add? ");
servers+=sc.nextInt();
break;
case 2:
System.out.print("How many more servers to remove? ");
servers-=sc.nextInt();

61
break;
case 3:
System.out.print("How many more Processes to add? ");
processes+=sc.nextInt();
break;
case 4:
System.out.print("How many more processes to remove? ");
processes-=sc.nextInt();
break;
case 5:
return;
}
}
}
}

Output:

EXPERIMENT NO. 9

Aim: Distributed Shared Memory.
Theory:
Distributed Shared Memory (DSM) implements the shared memory model in a distributed system
that has no physically shared memory. The shared model provides a virtual address space shared
among all nodes. To overcome the high cost of communication in a distributed system, DSM
systems move data to the location of access. Data moves between main memory and secondary
memory (within a node) and between the main memories of different nodes.

Every shared memory object is owned by a node. The initial owner is the node that created the
object. Ownership can change as the object moves from node to node. When a process accesses
data in the shared address space, the mapping manager maps the shared memory address to
physical memory (local or remote).

DSM permits programs running on separate machines to share data without the programmer
having to deal with sending messages; instead, the underlying technology sends the messages
needed to keep the DSM consistent between computers. DSM also allows programs that used to
run on the same computer to be easily adapted to run on separate machines. Programs access
what appears to them to be ordinary memory.

Hence, programs that use DSM are usually shorter and easier to understand than programs that
use message passing. However, DSM is not suitable for all situations. Client-server systems are
generally less suited to DSM, although a server may be used to assist in providing DSM
functionality for data shared between clients.

Architecture of Distributed Shared Memory (DSM):

Every node consists of one or more CPUs and a memory unit. A high-speed communication
network is employed for connecting the nodes. A simple message-passing system allows
processes on different nodes to exchange messages with one another.

Memory mapping manager unit:

The memory mapping manager routine in every node maps the local memory onto the shared
virtual memory. For the mapping operation, the shared memory space is divided into blocks.

Data caching is a well-known solution for dealing with memory access latency, and DSM uses
data caching to reduce network latency. The main memory of the individual nodes is used to
cache pieces of the shared memory space.

The memory mapping manager of every node treats its local memory as a big cache of the shared
memory space for its associated processors. The basic unit of caching is a memory block. In
systems that support DSM, data moves between secondary memory and main memory as well as
between the main memories of different nodes.

Communication Network Unit:

When a process accesses data in the shared address space, the mapping manager maps the shared
memory address to physical memory. The mapping layer of code is implemented either within
the operating system kernel or as a runtime routine.

Physical memory on every node holds pages of the shared virtual address space. Local pages are
present in that node's memory, while remote pages are in some other node's memory.

Program:
// SharedMemory.java

package com.saif.exp9;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SharedMemory {


static int a = 50;
static int count = 0;
public static int getA(PrintStream cout) {
count++;
cout.println(a);
return a;
}

public static void main(String[] args) throws IOException {


ServerSocket ss = new ServerSocket(2000);
while (true) {
Socket sk = ss.accept();
BufferedReader cin = new BufferedReader(new
InputStreamReader(sk.getInputStream()));
PrintStream cout = new PrintStream(sk.getOutputStream());
System.out.println("Client from " + sk.getInetAddress().getHostAddress() + "
Accepted");
String s = cin.readLine();
if (s.equalsIgnoreCase("show")) {
getA(cout);

} else {
cout.println("Check syntax");
//break;
}
System.out.println("Client count" + count);
}
}
}

// SharedMemoryClient.java

package com.saif.exp9;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.Socket;
import java.util.Scanner;

public class SharedMemoryClient {


public static void main(String[] args) throws IOException {
Socket sk = new Socket("localhost", 2000);
BufferedReader sin = new BufferedReader(new InputStreamReader(sk.getInputStream()));
PrintStream sout = new PrintStream(sk.getOutputStream());
Scanner stdin = new Scanner(System.in);
String s;
while (true) {
System.out.println("Type show");
System.out.print("Client: ");
s = stdin.nextLine();
sout.println(s);
// readLine() returns null once the server closes the connection.
s = sin.readLine();
if (s == null) break;
System.out.println("Answer: " + s);
}
}
}

Output:

EXPERIMENT NO. 10
Aim: Distributed File System (AFS/CODA).
Theory:
A Distributed File System (DFS), as the name suggests, is a file system that is distributed across multiple file servers or multiple locations. It allows programs to access and store remote files exactly as they do local ones, so users can access files from any computer on the network.

The main purpose of the Distributed File System (DFS) is to allow users of physically distributed systems to share their data and resources through a common file system. A collection of workstations and mainframes connected by a Local Area Network (LAN) is a typical configuration of a Distributed File System. A DFS is implemented as part of the operating system. In DFS, a namespace is created, and this process is transparent to the clients.

DFS has two components:

● Location Transparency –

Location transparency is achieved through the namespace component.

● Redundancy –

Redundancy is provided through a file replication component.

In the case of failure and heavy load, these components together improve data availability
by allowing the sharing of data in different locations to be logically grouped under one
folder, which is known as the “DFS root”.

It is not necessary to use both components of DFS together; the namespace component can be used without the file replication component, and the file replication component can be used between servers without the namespace component.

File system replication:

Early versions of DFS made use of Microsoft's File Replication Service (FRS), which allowed straightforward file replication between servers. FRS recognises new or updated files and distributes the latest version of the whole file to all servers.

Windows Server 2003 R2 introduced “DFS Replication” (DFSR). It improves on FRS by copying only the portions of files that have changed and by minimising network traffic with data compression. Additionally, it provides flexible configuration options to manage network traffic on a configurable schedule.
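
As a simplified illustration of replicating only the changed portions of a file (this is not DFSR's actual Remote Differential Compression algorithm), the sketch below splits the old and new copies of a file into fixed-size chunks, hashes each chunk, and reports which chunk indices differ and would therefore need to be sent to the other replica. The chunk size and class name are assumptions made for the example.

// ChunkDiff.java (illustrative sketch only)

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkDiff {

    private static final int CHUNK_SIZE = 64 * 1024;   // assumed chunk size

    // Returns the indices of the chunks that differ between the two copies.
    public static List<Integer> changedChunks(Path oldCopy, Path newCopy) throws Exception {
        byte[] oldBytes = Files.readAllBytes(oldCopy);
        byte[] newBytes = Files.readAllBytes(newCopy);
        int chunks = (Math.max(oldBytes.length, newBytes.length) + CHUNK_SIZE - 1) / CHUNK_SIZE;
        List<Integer> changed = new ArrayList<>();
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (int i = 0; i < chunks; i++) {
            byte[] oldChunk = slice(oldBytes, i);
            byte[] newChunk = slice(newBytes, i);
            if (!Arrays.equals(md.digest(oldChunk), md.digest(newChunk))) {
                changed.add(i);                         // this chunk would be replicated
            }
        }
        return changed;
    }

    // Extracts chunk number chunkIndex, or an empty array if the file is shorter.
    private static byte[] slice(byte[] data, int chunkIndex) {
        int from = Math.min(chunkIndex * CHUNK_SIZE, data.length);
        int to = Math.min(from + CHUNK_SIZE, data.length);
        return Arrays.copyOfRange(data, from, to);
    }
}

Calling changedChunks(oldPath, newPath) on two versions of the same file returns the indices of the chunks a replication engine would need to transfer.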

Features of DFS:

● Transparency:
o Structure transparency –

There is no need for the client to know about the number or locations of file
servers and the storage devices. Multiple file servers should be provided for
performance, adaptability, and dependability.

o Access transparency –

Both local and remote files should be accessible in the same manner. The file system should automatically locate the accessed file and deliver it to the client.

o Naming transparency –

There should not be any hint in the name of a file as to its location. Once a name is given to a file, it should not change while the file is transferred from one node to another.

o Replication transparency –

If a file is replicated on multiple nodes, the existence of the copies and their locations should be hidden from the users.

● User mobility:

The system should automatically bring the user's home directory to the node where the user logs in.

● Performance:

Performance is based on the average amount of time needed to satisfy client requests. This time covers the CPU time, the time taken to access secondary storage, and the network access time. It is advisable that the performance of the Distributed File System be comparable to that of a centralized file system.

● Simplicity and ease of use:

The user interface of the file system should be simple and the number of commands it offers should be small.

● High availability:

A Distributed File System should be able to continue functioning in the face of partial failures such as a link failure, a node failure, or a storage drive crash.

A highly available and adaptable distributed file system should have multiple independent file servers controlling multiple independent storage devices.

● Scalability:

Since growing the network by adding new machines or joining two networks together is
routine, the distributed system will inevitably grow over time. As a result, a good
distributed file system should be built to scale quickly as the number of nodes and users
in the system grows. Service should not be substantially disrupted as the number of nodes
and users grows.

● High reliability:

The likelihood of data loss should be minimized as much as possible in a good distributed file system. That is, users should not feel compelled to make backup copies of their files because of the system's unreliability; rather, the file system should itself create backup copies of key files that can be used if the originals are lost. Many file systems employ stable storage as a high-reliability strategy.

● Data integrity:

Multiple users frequently share a file system. The integrity of the data saved in a shared file must be guaranteed by the file system; that is, concurrent access requests from many users competing for the same file must be correctly synchronized using a concurrency control method (a minimal synchronization sketch follows this list). Atomic transactions are a high-level concurrency control mechanism for data integrity that file systems frequently offer to their users.

● Security:

A distributed file system should be secure so that its users may trust that their data will be
kept private. To safeguard the information contained in the file system from unwanted &
unauthorized access, security mechanisms must be implemented.

● Heterogeneity:

Heterogeneity in distributed systems is unavoidable as a result of their large scale. Users of heterogeneous distributed systems can use different computer platforms for different purposes.
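
The data-integrity point in the list above notes that concurrent accesses to a shared file must be synchronized by a concurrency-control method. As a purely illustrative sketch (not how any particular distributed file system implements it), the class below protects a shared record with a read-write lock so that readers can proceed concurrently while writers get exclusive access; the class and field names are made up for the example.

// SharedRecord.java (illustrative sketch only)

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedRecord {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String contents = "";

    // Many readers may hold the read lock at the same time.
    public String read() {
        lock.readLock().lock();
        try {
            return contents;
        } finally {
            lock.readLock().unlock();
        }
    }

    // A writer gets exclusive access, so readers never see a half-written update.
    public void write(String newContents) {
        lock.writeLock().lock();
        try {
            contents = newContents;
        } finally {
            lock.writeLock().unlock();
        }
    }
}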

History:

The server component of the Distributed File System was initially introduced as an add-on
feature. It was added to Windows NT 4.0 Server and was known as “DFS 4.1”. Then later on it
was included as a standard component in all editions of Windows 2000 Server. Client-side support has been included in Windows NT 4.0 and in later versions of Windows.

Linux kernels 2.6.14 and later include an SMB client VFS known as “cifs” which supports DFS. Mac OS X 10.7 (Lion) and later versions also support DFS.

Properties:

● File transparency: users can access files without knowing where they are physically
stored on the network.
● Load balancing: the file system can distribute file access requests across multiple
computers to improve performance and reliability.
● Data replication: the file system can store copies of files on multiple computers to
ensure that the files are available even if one of the computers fails.
● Security: the file system can enforce access control policies to ensure that only
authorized users can access files.
● Scalability: the file system can support a large number of users and a large number of
files.
● Concurrent access: multiple users can access and modify the same file at the same time.
● Fault tolerance: the file system can continue to operate even if one or more of its
components fail.
● Data integrity: the file system can ensure that the data stored in the files is accurate and
has not been corrupted.
● File migration: the file system can move files from one location to another without
interrupting access to the files.
● Data consistency: changes made to a file by one user are immediately visible to all other
users.
● Support for different file types: the file system can support a wide range of file types,
including text files, image files, and video files.

Applications:

● NFS –

NFS stands for Network File System. It is a client-server architecture that allows a
computer user to view, store, and update files remotely. The protocol of NFS is one of the
several distributed file system standards for Network-Attached Storage (NAS).

● CIFS –

CIFS stands for Common Internet File System. CIFS is a dialect of SMB; that is, CIFS is a particular implementation of the SMB protocol, designed by Microsoft.

● SMB –

SMB stands for Server Message Block. It is a file-sharing protocol that was invented by IBM. The SMB protocol was created to allow computers to perform read and write operations on files on a remote host over a Local Area Network (LAN). The directories on the remote host that can be accessed via SMB are called “shares”.

● Hadoop –

Hadoop is a collection of open-source software utilities. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. The core of Hadoop consists of a storage part, known as the Hadoop Distributed File System (HDFS), and a processing part based on the MapReduce programming model.

● NetWare –

NetWare is a discontinued computer network operating system developed by Novell, Inc. It primarily used cooperative multitasking to run different services on a personal computer, using the IPX network protocol.

Working of DFS:

There are two ways in which DFS can be implemented:

● Standalone DFS namespace –

It allows only for DFS roots that exist on the local computer and do not use Active Directory. A standalone DFS can only be accessed on the computer on which it is created. It does not provide any fault tolerance and cannot be linked to any other DFS. Standalone DFS roots are rarely encountered because of their limited advantages.

● Domain-based DFS namespace –

It stores the configuration of DFS in Active Directory, creating the DFS namespace root
accessible at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>
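
On a Windows client that is a member of the domain, such a namespace is browsed like any other UNC path. The short sketch below simply lists the entries under an assumed root \\example.local\dfsroot; the domain name and root name are placeholders, not values taken from this manual.

// ListDfsRoot.java (illustrative sketch only)

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListDfsRoot {
    public static void main(String[] args) throws IOException {
        // Placeholder domain-based DFS namespace root.
        Path root = Paths.get("\\\\example.local\\dfsroot");
        try (DirectoryStream<Path> links = Files.newDirectoryStream(root)) {
            for (Path link : links) {
                System.out.println(link.getFileName());
            }
        }
    }
}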

Advantages:
● DFS allows multiple users to access and store data.
● It allows data to be shared remotely.
● It improves file availability, access time, and network efficiency.
● It improves the ability to change the size of the data and to exchange data.
● A Distributed File System provides transparency of data even if a server or disk fails.
Disadvantages:
● In a Distributed File System, nodes and connections need to be secured, so security is a concern.
● Messages and data may be lost in the network while moving from one node to another.
● Database connectivity in a Distributed File System is complicated.
● Handling of the database is also harder in a Distributed File System than in a single-user system.
● Overloading may occur if all nodes try to send data at once.

Program:
// DistributedFileSystemServer.java
package com.saif.exp10;

import com.saif.exp11.DistributedFileSystem;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;

public class DistributedFileSystemServer {


private static final int SERVER_PORT = 12345;

public static void main(String[] args) {


DistributedFileSystem fileSystem = new DistributedFileSystem();
try (ServerSocket serverSocket = new ServerSocket(SERVER_PORT)) {
System.out.println("Server started on port " + SERVER_PORT);
while (true) {
try (Socket clientSocket = serverSocket.accept()) {
System.out.println("Client connected: " +
clientSocket.getInetAddress().getHostAddress());
handleClient(clientSocket, fileSystem);
}
}
} catch (IOException e) {
System.err.println("Server error: " + e.getMessage());
}
}

private static void handleClient(Socket clientSocket, DistributedFileSystem fileSystem) {


try (DataInputStream inputStream = new DataInputStream(clientSocket.getInputStream());
DataOutputStream outputStream = new
DataOutputStream(clientSocket.getOutputStream())) {
while (true) {
String command = inputStream.readUTF();
if (command.equals("add")) {
String fileName = inputStream.readUTF();
InetAddress address = InetAddress.getByName(inputStream.readUTF());
fileSystem.addFile(fileName, address);
outputStream.writeBoolean(true);
} else if (command.equals("remove")) {
String fileName = inputStream.readUTF();
InetAddress address = InetAddress.getByName(inputStream.readUTF());
fileSystem.removeFile(fileName, address);

outputStream.writeBoolean(true);
} else if (command.equals("get")) {
String fileName = inputStream.readUTF();
List<InetAddress> addresses = fileSystem.getFileAddresses(fileName);
outputStream.writeInt(addresses.size());
for (InetAddress address : addresses) {
outputStream.writeUTF(address.getHostAddress());
}
} else if (command.equals("exit")) {
break;
} else {
System.err.println("Invalid command: " + command);
outputStream.writeBoolean(false);
}
}
} catch (IOException e) {
System.err.println("Client error: " + e.getMessage());
}
System.out.println("Client disconnected: " +
clientSocket.getInetAddress().getHostAddress());
}
}
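
The server above imports com.saif.exp11.DistributedFileSystem, but that class is not listed in this manual. A minimal in-memory registry that matches the three methods the server calls (addFile, removeFile, getFileAddresses) might look like the sketch below; the internal data structure is an assumption, and the actual class used in the lab may differ.

// DistributedFileSystem.java (minimal sketch of the class imported by the server)

package com.saif.exp11;

import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class DistributedFileSystem {

    // Maps a file name to the addresses of the servers holding a copy of it.
    private final Map<String, List<InetAddress>> fileLocations = new ConcurrentHashMap<>();

    // Record that the given server holds a copy of the file.
    public void addFile(String fileName, InetAddress address) {
        fileLocations.computeIfAbsent(fileName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Remove the given server from the file's location list.
    public void removeFile(String fileName, InetAddress address) {
        List<InetAddress> addresses = fileLocations.get(fileName);
        if (addresses != null) {
            addresses.remove(address);
            if (addresses.isEmpty()) {
                fileLocations.remove(fileName);
            }
        }
    }

    // Return a snapshot of the servers that hold the file (empty if unknown).
    public List<InetAddress> getFileAddresses(String fileName) {
        return new ArrayList<>(fileLocations.getOrDefault(fileName, Collections.emptyList()));
    }
}

With a class like this on the classpath (adjusting the package name if needed), the server and client listings can be compiled and run together.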

// DistributedFileSystemClient.java

package com.saif.exp10;
import java.io.*;
import java.net.*;
import java.util.*;

public class DistributedFileSystemClient {


private static final String SERVER_HOST = "localhost";
private static final int SERVER_PORT = 12345;

public static void main(String[] args) {


try (Scanner scanner = new Scanner(System.in);
Socket socket = new Socket(SERVER_HOST, SERVER_PORT);
DataInputStream inputStream = new DataInputStream(socket.getInputStream());
DataOutputStream outputStream = new DataOutputStream(socket.getOutputStream())) {
while (true) {
System.out.print("Enter command (add, remove, get, exit): ");
String command = scanner.nextLine();
if (command.equals("add")) {
System.out.print("Enter file name: ");
String fileName = scanner.nextLine();

System.out.print("Enter server address: ");
InetAddress address = InetAddress.getByName(scanner.nextLine());
outputStream.writeUTF(command);
outputStream.writeUTF(fileName);
outputStream.writeUTF(address.getHostAddress());
boolean success = inputStream.readBoolean();
if (success) {
System.out.println("File added successfully");
} else {
System.out.println("Failed to add file");
}
} else if (command.equals("remove")) {
System.out.print("Enter file name: ");
String fileName = scanner.nextLine();
System.out.print("Enter server address: ");
InetAddress address = InetAddress.getByName(scanner.nextLine());
outputStream.writeUTF(command);
outputStream.writeUTF(fileName);
outputStream.writeUTF(address.getHostAddress());
boolean success = inputStream.readBoolean();
if (success) {
System.out.println("File removed successfully");
} else {
System.out.println("Failed to remove file");
}
} else if (command.equals("get")) {
System.out.print("Enter file name: ");
String fileName = scanner.nextLine();
outputStream.writeUTF(command);
outputStream.writeUTF(fileName);
int numAddresses = inputStream.readInt();
if (numAddresses == 0) {
System.out.println("File not found");
} else {
System.out.println("File found on " + numAddresses + " servers:");
for (int i = 0; i < numAddresses; i++) {
String address = inputStream.readUTF();
System.out.println("- " + address);
}
}
} else if (command.equals("exit")) {
outputStream.writeUTF(command);
break;
} else {
System.err.println("Invalid command: " + command);
}

}
} catch (IOException e) {
System.err.println("Client error: " + e.getMessage());
}
}
}

Output:

