
Tribhuwan University

Department of Humanities & Social Science


Prithvi Narayan Campus

Lab Report on
Distributed System (CACS352)

Submitted to:                          Submitted by:

Nilam Thapa                            Rakshya Basyal
BCA Program                            Roll No: 4802018
Prithvi Narayan Campus                 BCA 6th Semester

2025
S.N Name of experiments Submission date Remarks
Lab no.1
IP HASHING LOAD BALANCING SIMULATION

OBJECTIVE:
• To simulate the allocation of client IPs to servers using a hashing mechanism.
• To analyze the distribution of load across servers.

THEORY:
Load balancing is a method for evenly distributing incoming network traffic across
multiple servers to optimize resource usage and minimize response times. One
common approach is IP Hashing, which assigns requests to servers based on the
client's IP address. This is typically done by applying a hash function—often a simple
modulo operation—on the IP address to determine the target server.
Hash Function:
int ipHash(int ip) {
    return ip % NUM_SERVERS;
}
This function maps each IP address to a specific server by calculating the remainder
when the IP is divided by the number of servers.

SOURCE CODE:
#include <stdio.h>

#define NUM_SERVERS 3
#define NUM_CLIENTS 10

int ipHash(int ip) {
    return ip % NUM_SERVERS;
}

void ipHashing(int clientIPs[]) {
    int serverLoad[NUM_SERVERS] = {0};

    for (int i = 0; i < NUM_CLIENTS; i++) {
        int serverIndex = ipHash(clientIPs[i]);
        serverLoad[serverIndex]++;
        printf("Client IP %d assigned to Server %d\n", clientIPs[i], serverIndex + 1);
    }

    printf("\nFinal Server Load:\n");
    for (int i = 0; i < NUM_SERVERS; i++) {
        printf("Server %d handled %d clients\n", i + 1, serverLoad[i]);
    }
}

int main() {
    int clientIPs[NUM_CLIENTS] = {101, 202, 303, 404, 505, 606, 707, 808, 909, 1000};
    printf("IP Hashing Load Balancing Simulation:\n");
    ipHashing(clientIPs);
    return 0;
}

OUTPUT:

RESULT:
The program effectively distributes client IP addresses across three servers using a
modulo-based hash function. As a result, the load is approximately balanced among
all servers.

CONCLUSION:
This lab demonstrated IP hashing as a fast and simple load balancing method. While
effective, it lacks adaptability to server load changes—an issue better handled by
advanced techniques like round-robin or consistent hashing.
Lab no.2
WEIGHTED ROUND ROBIN LOAD BALANCING SIMULATION

OBJECTIVE:
• To simulate load balancing using server weights.
• To assign client requests proportionally based on server capabilities.

THEORY:
Weighted Round Robin (WRR) is an advanced load balancing algorithm designed to
distribute requests across multiple servers based on assigned weights. Unlike standard
Round Robin, WRR considers the processing capacity of servers, allowing those with
higher capabilities to handle more requests proportionally.
• In WRR, each server is assigned a weight based on its processing power or availability.
• Requests are distributed cyclically, but servers with higher weights receive more requests than lower-weighted servers.
• The algorithm ensures efficient resource utilization by directing traffic in proportion to each server's configured capacity.

SOURCE CODE:
#include <stdio.h>

#define NUM_SERVERS 3
#define NUM_REQUESTS 10

int weights[NUM_SERVERS] = {2, 1, 3};

void weightedRoundRobin(int requests[]) {
    int servers[NUM_SERVERS] = {0};
    int assigned = 0;

    while (assigned < NUM_REQUESTS) {
        for (int j = 0; j < NUM_SERVERS; j++) {
            // Each server receives a burst of requests equal to its weight
            for (int w = 0; w < weights[j] && assigned < NUM_REQUESTS; w++) {
                printf("Request %d assigned to Server %d\n", requests[assigned], j + 1);
                servers[j]++;
                assigned++;
            }
        }
    }

    printf("\nFinal Server Load:\n");
    for (int i = 0; i < NUM_SERVERS; i++) {
        printf("Server %d handled %d requests\n", i + 1, servers[i]);
    }
}

int main() {
    int requests[NUM_REQUESTS] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    printf("Weighted Round Robin Load Balancing Simulation:\n");
    weightedRoundRobin(requests);
    return 0;
}

OUTPUT:

RESULT:
The program accurately simulates request distribution according to server weights.
Server 3 (weight 3) handles more requests than Server 2 (weight 1), reflecting realistic
load balancing.

CONCLUSION:
Weighted Round Robin is effective for environments with heterogeneous servers. It
improves performance and efficiency by aligning request distribution with server
capacities.
Lab no.3
LEAST CONNECTIONS LOAD BALANCING SIMULATION

OBJECTIVE:
• To implement and demonstrate how the Least Connections algorithm distributes incoming client requests.
• To observe load distribution based on real-time server utilization.

THEORY:
The Least Connections algorithm distributes requests by selecting the server with the fewest active connections at any given time. It dynamically balances load based on real-time utilization, preventing any server from becoming overloaded.
Key features:

• Real-time load distribution – sends each request to the least busy server.
• Efficient resource utilization – ensures balanced traffic flow.
• Adaptive scaling – adjusts as connections open and close dynamically.
• Optimized performance – reduces response times for faster processing.

This method is ideal for systems handling variable workloads, ensuring fair allocation and optimized server efficiency.

SOURCE CODE:
#include <stdio.h>

#define NUM_SERVERS 3
#define NUM_REQUESTS 10

int findLeastConnections(int connections[]) {
    int minIndex = 0;
    for (int i = 1; i < NUM_SERVERS; i++) {
        if (connections[i] < connections[minIndex]) {
            minIndex = i;
        }
    }
    return minIndex;
}

void leastConnections(int requests[]) {
    int connections[NUM_SERVERS] = {0};

    for (int i = 0; i < NUM_REQUESTS; i++) {
        int serverIndex = findLeastConnections(connections);
        connections[serverIndex]++;
        printf("Request %d assigned to Server %d\n", requests[i], serverIndex + 1);
    }

    printf("\nFinal Server Load:\n");
    for (int i = 0; i < NUM_SERVERS; i++) {
        printf("Server %d handled %d requests\n", i + 1, connections[i]);
    }
}

int main() {
    int requests[NUM_REQUESTS] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    printf("Least Connections Load Balancing Simulation:\n");
    leastConnections(requests);
    return 0;
}

OUTPUT:

RESULT:
Requests are dynamically sent to the server handling the fewest current connections,
ensuring better real-time balancing than static methods like round robin.

CONCLUSION:
Least Connections is a load balancing method that sends new requests to the server
with the fewest active connections. This helps make sure that no server gets
overloaded, especially when requests vary in size or complexity. It ensures resources
are used efficiently by spreading the load evenly.
Lab no. 4
SIMPLE CLIENT-SERVER COMMUNICATION USING
SOCKETS

OBJECTIVE:

• To understand socket programming and TCP connections.
• To create a server that listens for client requests.
• To create a client that sends a message and receives a response from the server.

THEORY:

A socket is an endpoint of network communication between two programs. It enables applications such as chat servers and file transfers by establishing reliable connections. Using sockets, data can be sent and received efficiently across different systems. This forms the foundation of many networking applications.

TCP ensures reliable communication by establishing a connection before transmitting data. It guarantees data integrity by handling lost or corrupted packets through retransmission. TCP/IP is widely used in network applications, ensuring seamless and error-free communication. This protocol is essential for modern internet-based interactions.

SOURCE CODE:

Server Code (server.c)

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

#define PORT 8080

int main() {
    int server_fd, new_socket;
    char buffer[1024] = {0};
    char *message = "Hello from server";
    struct sockaddr_in address;
    int addrlen = sizeof(address);

    // Create a TCP socket
    server_fd = socket(AF_INET, SOCK_STREAM, 0);

    address.sin_family = AF_INET;
    address.sin_addr.s_addr = INADDR_ANY;   // Accept connections on any interface
    address.sin_port = htons(PORT);

    // Bind to the port and start listening
    bind(server_fd, (struct sockaddr *)&address, sizeof(address));
    listen(server_fd, 3);

    printf("Server waiting for connection...\n");
    new_socket = accept(server_fd, (struct sockaddr *)&address, (socklen_t *)&addrlen);

    // Receive the client's message, then reply
    read(new_socket, buffer, 1024);
    printf("Message from client: %s\n", buffer);
    send(new_socket, message, strlen(message), 0);
    printf("Message sent to client\n");

    close(new_socket);
    close(server_fd);
    return 0;
}

Client Code (client.c)

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>

#define PORT 8080

int main() {
    int sock = 0;
    struct sockaddr_in serv_addr;
    char *hello = "Hello from client";
    char buffer[1024] = {0};

    // Create a TCP socket
    sock = socket(AF_INET, SOCK_STREAM, 0);

    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(PORT);
    inet_pton(AF_INET, "127.0.0.1", &serv_addr.sin_addr);  // Server on localhost

    // Connect, send a message, and read the reply
    connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    send(sock, hello, strlen(hello), 0);
    printf("Message sent to server\n");

    read(sock, buffer, 1024);
    printf("Message from server: %s\n", buffer);

    close(sock);
    return 0;
}

COMPILATION & EXECUTION:

In Terminal 1 (Server):

gcc server.c -o server
./server

In Terminal 2 (Client):

gcc client.c -o client
./client

OUTPUT:

Server Side:

Server waiting for connection...
Message from client: Hello from client
Message sent to client

Client Side:

Message sent to server
Message from server: Hello from server

RESULT:

A reliable two-way communication is set up between the client and server using
sockets, allowing both sides to transmit and receive predefined messages seamlessly.

CONCLUSION:

This lab showcased how TCP socket programming in C enables reliable client-server
communication. It serves as a fundamental building block for advanced systems like
web servers, chat applications, and distributed computing frameworks.
Lab no. 5
CLOCK SYNCHRONIZATION – CRISTIAN’S ALGORITHM

OBJECTIVE:
• Synchronize a client's clock with a time server, accounting for network delays.

THEORY:
Cristian’s Algorithm is a method used in distributed systems to synchronize the clock
of a client computer with a trusted time server, especially when their clocks might be
running at different times. To do this, the client sends a request to the server asking
for the current time and notes the moment it sent the message. When the server
receives this request, it quickly responds with its current time. The client then records
the time it receives the reply and calculates the round-trip time (RTT), which is the
total time the message took to travel to the server and return. Assuming the delay is
roughly the same in both directions, the client adds half of the RTT to the server’s
time to estimate the actual current time, which it then uses to update its clock. This
approach is simple and works well when network delays are stable and symmetric.
Still, its accuracy can be affected if the delays in sending and receiving are not equal
or if the network is congested.

SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

double get_current_time() {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main() {
    double t1, t2, t3, estimated_time;

    srand(time(NULL));  // Seed before any call to rand()

    double server_time = get_current_time() + 5;    // Server is 5 sec ahead
    double delay = ((rand() % 150) + 50) / 1000.0;  // One-way delay: 50 ms to 200 ms

    t1 = get_current_time();      // Client records the send time
    usleep(delay * 1e6);          // Simulate the request travelling to the server
    t2 = server_time + delay;     // Server replies with its current time
    usleep(delay * 1e6);          // Simulate the reply travelling back
    t3 = get_current_time();      // Client records the receive time

    estimated_time = t2 + ((t3 - t1) / 2.0);  // Server time + RTT/2

    printf("Client syncs to estimated time: %.3f seconds\n", estimated_time);

    return 0;
}

OUTPUT:

RESULT:
The client’s clock was synchronized with the server’s time using Cristian’s Algorithm
by adjusting for the calculated round-trip delay.

CONCLUSION:
Cristian’s Algorithm proved to be a simple and effective method for clock
synchronization, though its accuracy was influenced by the stability and symmetry of
network delays.
Lab no. 6
LAMPORT’S LOGICAL CLOCK

OBJECTIVE:
• Establish a logical ordering of events in a distributed system without relying on synchronized physical clocks.

THEORY:
Lamport's Logical Clock is a method used in distributed systems to assign timestamps
to events in such a way that the order of events can be determined logically, even if
the computers don’t share a synchronized physical clock. Each process maintains a
local counter, which it increments before each event. When a message is sent, the
timestamp is included; the receiving process updates its clock to be greater than both
its current clock and the received timestamp. This helps maintain a consistent order of
events across different systems without needing real-time synchronization.

SOURCE CODE:
#include <stdio.h>

int max(int a, int b) {
    return (a > b) ? a : b;
}

// Structure to represent a Lamport Clock
typedef struct {
    int time;
} LamportClock;

// Tick (increment) the clock before a local event
void tick(LamportClock *clock) {
    clock->time++;
}

// Sending an event: tick, then attach the timestamp
int send_event(LamportClock *clock) {
    tick(clock);
    return clock->time;
}

// Receiving an event: advance past both the local clock and the received timestamp
void receive_event(LamportClock *clock, int received_time) {
    clock->time = max(clock->time, received_time) + 1;
}

int main() {
    LamportClock p1 = {0};
    LamportClock p2 = {0};

    printf("P1 sends a message...\n");
    int send_time = send_event(&p1);

    printf("P2 receives the message...\n");
    receive_event(&p2, send_time);

    printf("\nFinal Clock Values:\n");
    printf("P1 Clock: %d\n", p1.time);
    printf("P2 Clock: %d\n", p2.time);

    return 0;
}

OUTPUT:

RESULT:
The simulation successfully demonstrated the Lamport Logical Clock mechanism.
Process P1 incremented its clock and sent a message, which P2 received and used to
update its clock based on the received timestamp. The final clock values reflected the
correct logical ordering of events across the two processes.

CONCLUSION:
The implemented code effectively illustrated how Lamport’s Logical Clock
maintained a consistent event sequence in a distributed system without relying on
physical time synchronization. It showed that processes could determine the causal
relationship between events through logical timestamps, ensuring proper coordination
even in asynchronous environments.
Lab no. 7
MUTUAL EXCLUSION: RICART-AGRAWALA ALGORITHM

OBJECTIVE:
• Ensure that only one process accesses the critical section at a time in a distributed system.

THEORY:
The Ricart-Agrawala Algorithm is a decentralized mutual exclusion technique for
distributed systems. It avoids a central coordinator by having each process
communicate directly using timestamped messages. When a process wants to enter
the critical section (CS), it sends a REQUEST message with its timestamp to all
others. It can only enter the CS after receiving REPLY messages from every other
process. If a process receives a request while not interested in the CS, or if the sender
has higher priority (earlier timestamp), it immediately replies. Otherwise, it defers the
reply. This ensures that only one process enters the CS at a time, maintaining fairness
and avoiding conflict.

SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>

#define NUM_PROCESSES 2

typedef struct {
    int timestamp;
    int pid;
} Request;

typedef struct {
    int pid;
    int clock;
    Request queue[NUM_PROCESSES];
    int queue_size;
} Process;

int max(int a, int b) {
    return (a > b) ? a : b;
}

// Order pending requests by timestamp, breaking ties by process ID
void sort_queue(Request *queue, int size) {
    for (int i = 0; i < size - 1; i++) {
        for (int j = 0; j < size - i - 1; j++) {
            if (queue[j].timestamp > queue[j + 1].timestamp ||
                (queue[j].timestamp == queue[j + 1].timestamp &&
                 queue[j].pid > queue[j + 1].pid)) {
                Request temp = queue[j];
                queue[j] = queue[j + 1];
                queue[j + 1] = temp;
            }
        }
    }
}

Request request_cs(Process *p) {
    p->clock++;
    printf("Process %d requests CS at time %d\n", p->pid, p->clock);
    Request req = {p->clock, p->pid};
    p->queue[p->queue_size++] = req;
    return req;
}

void receive_request(Process *p, Request incoming) {
    p->clock = max(p->clock, incoming.timestamp) + 1;
    p->queue[p->queue_size++] = incoming;
    sort_queue(p->queue, p->queue_size);
}

void enter_cs(Process *p) {
    if (p->queue[0].pid == p->pid) {
        printf("Process %d enters CS at time %d\n", p->pid, p->clock);
    } else {
        printf("Process %d cannot enter CS now (lowest in queue: P%d)\n",
               p->pid, p->queue[0].pid);
    }
}

int main() {
    Process p1 = {1, 0, {}, 0};
    Process p2 = {2, 0, {}, 0};

    Request ts1 = request_cs(&p1);
    Request ts2 = request_cs(&p2);

    receive_request(&p1, ts2);
    receive_request(&p2, ts1);

    enter_cs(&p1);
    enter_cs(&p2);

    return 0;
}
OUTPUT:

RESULT:
The C code successfully simulated the Ricart-Agrawala Algorithm by managing
timestamped requests between two processes. Each process evaluated request priority
based on logical time and process ID, ensuring proper ordering of access to the
critical section.

CONCLUSION:
The implementation showed how mutual exclusion was achieved in a distributed
system without a central coordinator. By relying on logical clocks and message
ordering, the Ricart-Agrawala Algorithm ensured that only one process accessed the
critical section at a time, maintaining fairness and consistency.
Lab no. 8
ELECTION ALGORITHM: BULLY ALGORITHM

OBJECTIVE:
• Elect a coordinator among distributed processes, especially after a failure.

THEORY:
The Bully Algorithm is a leader election technique used in distributed systems to
select a coordinator among multiple processes, especially after a failure. It assumes
that each process has a unique ID and that processes are aware of all other
participants.
Working Steps:
• If a process detects that the coordinator has failed, it starts an election.
• It sends ELECTION messages to all processes with higher IDs.
• If no one responds, it becomes the new coordinator.
• If a higher process responds, it takes over the election.
• Eventually, the process with the highest ID declares itself coordinator.
• The new coordinator sends a COORDINATOR message to inform all others.

SOURCE CODE:
#include <stdio.h>

#define NUM_PROCESSES 3

typedef struct {
    int pid;
    int all_pids[4];
    int num_all;
    int coordinator;
} BullyProcess;

void start_election(BullyProcess *p) {
    printf("Process %d starts election...\n", p->pid);
    for (int i = 0; i < p->num_all; i++) {
        if (p->all_pids[i] > p->pid) {
            printf("Process %d is waiting for higher process %d...\n",
                   p->pid, p->all_pids[i]);
            return; // Higher process exists; election not won
        }
    }
    p->coordinator = p->pid;
    printf("Process %d becomes coordinator.\n", p->pid);
}

int main() {
    // Each process knows all possible process IDs (1..4); process 3 is absent
    BullyProcess processes[NUM_PROCESSES] = {
        {1, {1, 2, 3, 4}, 4, -1},
        {2, {1, 2, 3, 4}, 4, -1},
        {4, {1, 2, 3, 4}, 4, -1}
    };

    for (int i = 0; i < NUM_PROCESSES; i++) {
        start_election(&processes[i]);
    }

    return 0;
}

OUTPUT:

RESULT:
The C implementation of the Bully Algorithm successfully simulated the election
process among distributed processes. Each process checked for higher-ID processes
and only the one with the highest active ID declared itself as the coordinator, ensuring
proper leader selection.

CONCLUSION:
The simulation demonstrated how the Bully Algorithm effectively elects a new
coordinator after a failure, using only message logic and process IDs. It proved that
mutual agreement on leadership can be reached without a central controller,
maintaining coordination in a distributed system.
Lab no. 9
GOSSIP PROTOCOL

OBJECTIVE:
• Disseminate information across nodes in a distributed system efficiently.

THEORY:
The Gossip Protocol is a decentralized method for spreading information quickly and
reliably across all nodes in a distributed system. Inspired by how rumors propagate in
social groups, this protocol relies on each node randomly selecting peers to share new
data with. Once those peers receive the information, they repeat the process,
contacting others in turn. Over time, the message spreads rapidly and reaches the
entire network. One of the main strengths of the Gossip Protocol is its scalability—it
performs well even in large or dynamic systems where nodes frequently join or leave.
Additionally, it is fault-tolerant, meaning it can still function effectively even if some
messages are lost or certain nodes fail. This makes it particularly suitable for tasks
like data replication, system configuration updates, or membership tracking in peer-
to-peer applications, where maintaining strong consistency isn’t necessary but
eventual consistency is essential.

SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <stdbool.h>

#define NUM_NODES 5
#define NUM_ROUNDS 5

typedef struct {
    int id;
    bool informed;
} Node;

void gossip(Node nodes[], int index) {
    if (nodes[index].informed) {
        int target;
        do {
            target = rand() % NUM_NODES;
        } while (target == index); // Avoid gossiping to self

        if (!nodes[target].informed) {
            nodes[target].informed = true;
            printf("Node %d informed Node %d\n", nodes[index].id, nodes[target].id);
        }
    }
}

int main() {
    Node nodes[NUM_NODES];
    srand(time(NULL));

    // Initialize nodes
    for (int i = 0; i < NUM_NODES; i++) {
        nodes[i].id = i;
        nodes[i].informed = false;
    }

    // Start with Node 0 informed
    nodes[0].informed = true;

    // Run gossip rounds
    for (int round = 1; round <= NUM_ROUNDS; round++) {
        printf("\n--- Round %d ---\n", round);
        for (int i = 0; i < NUM_NODES; i++) {
            gossip(nodes, i);
        }
    }

    // Final informed status
    printf("\nFinal informed states: ");
    for (int i = 0; i < NUM_NODES; i++) {
        printf("%d ", nodes[i].informed);
    }
    printf("\n");

    return 0;
}

OUTPUT:
RESULT:
The C code successfully simulated the Gossip Protocol by allowing informed nodes to
randomly inform others over multiple rounds. As the simulation progressed, more
nodes became informed, demonstrating the spread of information in a decentralized
manner.

CONCLUSION:
The implementation showed how the Gossip Protocol efficiently disseminates data
across distributed systems. Its randomized communication and fault-tolerant design
ensure eventual consistency without needing centralized control, making it ideal for
scalable, resilient networks.
Lab no. 10
CHECKPOINT AND RECOVERY ALGORITHM
(COORDINATED CHECKPOINTING)

OBJECTIVE:
 Capture consistent global states to facilitate recovery after failures.

THEORY:
The Coordinated Checkpointing algorithm is a fault tolerance technique used in
distributed systems to capture a consistent global state of all processes. Its primary
objective is to enable recovery from failures by saving system snapshots
(checkpoints) at synchronized points across all nodes. When a process initiates a
checkpoint, it requests all other processes to take a checkpoint at the same logical
moment. This coordination ensures that no messages are lost or inconsistently
recorded during recovery. If a failure occurs, the system can roll back to the last
global checkpoint and resume from a known good state. This approach simplifies
recovery while avoiding the domino effect, making it an effective and reliable
solution for distributed environments.

SOURCE CODE:
#include <stdio.h>

typedef struct {
    int pid;
    int state;
    int checkpoint;
} Process;

// Simulates some work being done
void perform_action(Process *p, int action_value) {
    p->state += action_value;
    printf("Process %d state: %d\n", p->pid, p->state);
}

// Saves the current state as a checkpoint
void checkpoint_state(Process *p) {
    p->checkpoint = p->state;
    printf("Process %d checkpointed at: %d\n", p->pid, p->checkpoint);
}

// Restores state from the checkpoint
void recover(Process *p) {
    p->state = p->checkpoint;
    printf("Process %d recovered to: %d\n", p->pid, p->checkpoint);
}

int main() {
    Process p1 = {1, 0, 0};
    Process p2 = {2, 0, 0};

    perform_action(&p1, 10);
    perform_action(&p2, 20);

    checkpoint_state(&p1);
    checkpoint_state(&p2);

    perform_action(&p1, 5);
    perform_action(&p2, -10);

    printf("\n--- Simulate failure ---\n");
    recover(&p1);
    recover(&p2);

    return 0;
}

OUTPUT:

RESULT:
The simulation demonstrated that both processes successfully saved their states using
checkpoints. After performing additional actions and simulating a failure, each
process rolled back to its most recent consistent checkpoint, effectively recovering to
a stable state.

CONCLUSION:
The coordinated checkpointing algorithm ensured reliable recovery in a distributed
system. By synchronizing checkpoints across processes, the system avoided
inconsistent states and enabled safe restoration after failures, achieving fault tolerance
through consistent global state capturing.
Lab no. 11
TWO-PHASE COMMIT PROTOCOL

OBJECTIVE:
• Ensure atomic commitment of transactions across distributed systems.

THEORY:
The Two-Phase Commit (2PC) protocol is a distributed algorithm used to ensure the
atomic commitment of a transaction across multiple nodes or databases. Its main
goal is to guarantee that either all participants commit the transaction or none do,
even in the presence of failures, thereby maintaining consistency in distributed
systems.
1. Phase 1 – Voting Phase (Prepare Phase)
   • The coordinator sends a PREPARE message to all participants.
   • Each participant replies with either VOTE_COMMIT or VOTE_ABORT after ensuring it can safely commit.
2. Phase 2 – Commit Phase
   • If all participants vote to commit, the coordinator sends a COMMIT message.
   • If any participant votes to abort, the coordinator sends an ABORT message.
   • Participants then finalize the transaction accordingly.

SOURCE CODE:
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

// Define a Participant structure
typedef struct {
    char name[10];
    bool vote; // true for YES, false for NO
} Participant;

// Vote request phase
bool vote_request(Participant p) {
    printf("%s votes %s\n", p.name, p.vote ? "YES" : "NO");
    return p.vote;
}

// Commit phase
void commit(Participant p) {
    printf("%s commits.\n", p.name);
}

// Renamed to avoid conflict with stdlib.h abort()
void abort_transaction(Participant p) {
    printf("%s aborts.\n", p.name);
}

// Coordinator logic for Two-Phase Commit
void two_phase_commit(Participant participants[], int num) {
    bool all_commit = true;

    // Phase 1: Voting
    for (int i = 0; i < num; i++) {
        if (!vote_request(participants[i])) {
            all_commit = false;
        }
    }

    // Phase 2: Commit or Abort
    if (all_commit) {
        for (int i = 0; i < num; i++) {
            commit(participants[i]);
        }
    } else {
        for (int i = 0; i < num; i++) {
            abort_transaction(participants[i]);
        }
    }
}

int main() {
    Participant p1 = {"P1", true};
    Participant p2 = {"P2", false}; // One vote is NO

    Participant participants[] = {p1, p2};

    two_phase_commit(participants, 2);

    return 0;
}

OUTPUT:
RESULT:
The simulation clearly illustrated the two-phase commit process. In the voting phase,
P1 voted YES while P2 voted NO. Since not all participants agreed to commit, the
commit phase resulted in both participants executing an abort.

CONCLUSION:
This simulation validated how the Two-Phase Commit Protocol enforces atomicity in
distributed transactions. By requiring unanimous agreement before committing, it
prevents partial updates and maintains consistency, even when one participant
disagrees or fails.
Lab no. 12
THREE-PHASE COMMIT PROTOCOL
OBJECTIVE:
• Enhance the Two-Phase Commit by preventing blocking during coordinator failures.

THEORY:
The Three-Phase Commit Protocol (3PC) is an advanced atomic commitment protocol
designed to overcome the blocking issue present in the Two-Phase Commit (2PC)
protocol. Its objective is to ensure that all participating nodes in a distributed
transaction either commit or abort in a coordinated manner—even in the event of
coordinator failure. 3PC introduces an additional phase—the pre-commit phase—
between the voting and commit phases to provide a buffer that increases fault
tolerance. The protocol operates in three steps: (1) the canCommit phase, where
participants vote on whether they can commit, (2) the preCommit phase, where the
coordinator sends a readiness message after receiving unanimous agreement, and (3)
the doCommit phase, where the final commit is issued. This extra step allows
participants to safely commit even if the coordinator crashes, thus preventing
indefinite blocking and ensuring non-blocking atomicity in crash-prone environments.

SOURCE CODE:
#include <stdio.h>
#include <stdbool.h>

// Participant structure
typedef struct {
    char name[10];
    bool can_commit_flag;
} Participant;

// Step 1: Can Commit Phase
bool can_commit(Participant p) {
    printf("%s can_commit: %s\n", p.name, p.can_commit_flag ? "true" : "false");
    return p.can_commit_flag;
}

// Step 2: Pre-Commit Phase
void pre_commit(Participant p) {
    printf("%s enters PRE-COMMIT\n", p.name);
}

// Step 3: Do Commit Phase
void do_commit(Participant p) {
    printf("%s commits\n", p.name);
}

// Abort Phase (renamed to avoid conflict with stdlib abort())
void abort_transaction(Participant p) {
    printf("%s aborts\n", p.name);
}

// Coordinator logic
void run_3pc(Participant participants[], int count) {
    bool all_can_commit = true;

    // Phase 1: Can Commit
    for (int i = 0; i < count; i++) {
        if (!can_commit(participants[i])) {
            all_can_commit = false;
        }
    }

    if (!all_can_commit) {
        for (int i = 0; i < count; i++) {
            abort_transaction(participants[i]);
        }
        return;
    }

    // Phase 2: Pre-Commit
    for (int i = 0; i < count; i++) {
        pre_commit(participants[i]);
    }

    // Phase 3: Do Commit (assume all pre-commit acknowledgments received)
    for (int i = 0; i < count; i++) {
        do_commit(participants[i]);
    }
}

int main() {
    Participant participants[] = {
        {"P1", true},
        {"P2", true}
    };

    run_3pc(participants, 2);

    return 0;
}
OUTPUT:

RESULT:
The simulation successfully executed the Three-Phase Commit Protocol. Both
participants agreed to commit in the first phase, moved into the PRE-COMMIT
phase, and eventually committed the transaction in the final phase. No abort was
necessary, and the coordinator managed the process correctly.

CONCLUSION:
This simulation demonstrated how 3PC enhances the standard Two-Phase Commit by
introducing a pre-commit stage that reduces blocking during coordinator failures. It
ensures non-blocking atomicity, providing better fault tolerance and coordination
across distributed systems.
Lab no. 13
RELIABLE MULTICAST WITH RETRANSMISSION

OBJECTIVE:
 Ensure message delivery to all recipients in the presence of potential message
loss.
THEORY:
Reliable multicast ensures that all recipients in a distributed system receive a
message, even if some messages are lost. The sender waits for ACKs
(acknowledgments) from all receivers. If any don’t respond within a timeout, the
message is retransmitted to them. This mechanism provides fault tolerance,
consistency, and reliable delivery, especially useful in replicated databases or group
communication systems.

SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>

#define NUM_RECEIVERS 3

typedef struct {
    int id;
    bool received;
} Receiver;

// Simulate message delivery with an 80% success rate
bool receive(Receiver *r, const char *msg) {
    float chance = rand() / (float)RAND_MAX;
    if (chance < 0.8) {
        r->received = true;
        printf("Receiver %d received: %s\n", r->id, msg);
        return true;
    } else {
        printf("Receiver %d missed: %s\n", r->id, msg);
        return false;
    }
}

// Multicast with retransmission: keep resending until every
// receiver has acknowledged, so delivery is actually guaranteed
void reliable_multicast(Receiver receivers[], int count, const char *msg) {
    bool acks[NUM_RECEIVERS] = { false };
    int pending = count;
    int round = 0;

    while (pending > 0) {
        for (int i = 0; i < count; i++) {
            if (!acks[i]) {
                if (round > 0) {
                    printf("Retransmitting to Receiver %d...\n", receivers[i].id);
                }
                if (receive(&receivers[i], msg)) {
                    acks[i] = true;
                    pending--;
                }
            }
        }
        round++;
    }
}

int main() {
    srand(time(NULL));
    Receiver receivers[NUM_RECEIVERS];

    // Initialize receivers
    for (int i = 0; i < NUM_RECEIVERS; i++) {
        receivers[i].id = i;
        receivers[i].received = false;
    }

    reliable_multicast(receivers, NUM_RECEIVERS, "Update #42");

    return 0;
}

OUTPUT:

RESULT:
The simulation successfully showed how a message ("Update #42") was multicast to
three receivers. Some receivers may have initially missed the message due to
simulated loss, but the sender detected these and retransmitted the message, ensuring
that all receivers eventually received it.

CONCLUSION:
This implementation demonstrated the effectiveness of reliable multicast by handling
message loss through acknowledgments and retransmissions. It reinforced the
principle that delivery guarantees in distributed systems can be achieved even
over unreliable communication by implementing simple retransmission logic.

Lab no. 14
ENCRYPTION ALGORITHMS (RSA & AES)
OBJECTIVE:
 Secure data through encryption and decryption processes.
THEORY:
To keep our data safe, we use encryption—it hides the real message so that only the
right person can read it.
 RSA (Asymmetric Encryption):
o Uses two keys: one for locking (public key) and one for unlocking
(private key).
o Great for sending secrets safely and signing digital messages.
o Works by using large prime numbers—hard to guess, so it's secure.
 AES (Symmetric Encryption):
o Uses one secret key for both locking and unlocking the data.
o Very fast and well suited to encrypting large files like videos or backups.
o Comes in key sizes of 128, 192, or 256 bits—bigger = stronger.
These two are often used together: RSA to share the key securely, and AES to encrypt
the actual data quickly.
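The hybrid pattern described above can be sketched with the same cryptography library used in the source code below: RSA wraps a freshly generated Fernet (AES-based) key, and that symmetric key then encrypts the bulk data. This is an illustrative sketch, not part of the original lab listing.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Recipient's RSA key pair
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the bulk data with a fresh symmetric key,
# then wrap (RSA-encrypt) that key for the recipient
data = b"large payload ..." * 100
sym_key = Fernet.generate_key()
bulk_ciphertext = Fernet(sym_key).encrypt(data)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Recipient: unwrap the key with RSA, then decrypt the bulk data with it
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(bulk_ciphertext)
```

Only the short symmetric key goes through slow RSA; the large payload is handled by fast symmetric encryption.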

SOURCE CODE:
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# --- RSA (asymmetric) ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

message = b"Secret message"
ciphertext = public_key.encrypt(message, oaep)
plaintext = private_key.decrypt(ciphertext, oaep)

print(f"Encrypted: {ciphertext}")
print(f"Decrypted: {plaintext}")

# --- AES (symmetric, via Fernet) ---
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"hello world"
encrypted = cipher.encrypt(message)
decrypted = cipher.decrypt(encrypted)

print(f"Encrypted: {encrypted}")
print(f"Decrypted: {decrypted}")

OUTPUT:

RESULT:
Both RSA and AES encryption successfully converted the original messages into
encrypted formats and decrypted them back to plaintext.

CONCLUSION:
This proves that RSA (for secure key exchange) and AES (for fast data encryption)
are effective and reliable tools for protecting digital information.
Lab no. 15
AUTHENTICATION PROTOCOL: KERBEROS
SIMULATION

OBJECTIVE:
 Authenticate users and grant access to services securely.

THEORY:
Kerberos is a secure protocol that authenticates users without sending passwords over
the network. It uses tickets and symmetric encryption to verify identity.
 The client logs in and gets a Ticket-Granting Ticket (TGT) from the
Authentication Server.
 It then uses the TGT to request access to services from the Ticket Granting
Server.
 The client presents the service ticket to access the service securely.
This prevents replay attacks and protects credentials in distributed systems.

SOURCE CODE:
from cryptography.fernet import Fernet

# Shared symmetric keys (would be kept secret in a real deployment)
KEY_TGS = Fernet.generate_key()
KEY_SERVICE = Fernet.generate_key()
KEY_CLIENT = Fernet.generate_key()

fernet_tgs = Fernet(KEY_TGS)
fernet_service = Fernet(KEY_SERVICE)
fernet_client = Fernet(KEY_CLIENT)

# Step 1: Client contacts the Authentication Server (AS)
def authentication_server(client_id):
    print(f"[AS] Authenticating {client_id}...")

    # Create a TGT (Ticket Granting Ticket)
    tgt_data = f"{client_id}|TGS".encode()
    tgt = fernet_tgs.encrypt(tgt_data)

    # Return TGT + session key for the TGS (encrypted with the client key)
    session_key = Fernet.generate_key()
    client_package = fernet_client.encrypt(session_key + b"||" + tgt)
    return client_package

# Step 2: Client sends the TGT to the Ticket Granting Server (TGS)
def ticket_granting_server(client_package):
    decrypted = fernet_client.decrypt(client_package)
    session_key, tgt = decrypted.split(b"||")
    print("[Client] Extracted session key and TGT.")

    # TGS verifies the TGT
    tgt_data = fernet_tgs.decrypt(tgt)
    client_id, server = tgt_data.decode().split("|")
    print(f"[TGS] Valid TGT for {client_id}")

    # Create a service ticket
    service_ticket_data = f"{client_id}|ACCESS_SERVICE".encode()
    service_ticket = fernet_service.encrypt(service_ticket_data)

    # Encrypt the service ticket with the session key
    session = Fernet(session_key)
    client_service_package = session.encrypt(service_ticket)
    return session_key, client_service_package

# Step 3: Client accesses the Service using the service ticket
def service_server(service_ticket):
    # Service decrypts the ticket
    decrypted = fernet_service.decrypt(service_ticket)
    client_id, action = decrypted.decode().split("|")
    print(f"[Service] {client_id} is allowed to {action}")

# --- Simulate the flow ---
client_id = "Alice"

print("\n=== Step 1: Authentication Server (AS) ===")
client_package = authentication_server(client_id)

print("\n=== Step 2: Ticket Granting Server (TGS) ===")
session_key, service_ticket_encrypted = ticket_granting_server(client_package)

print("\n=== Step 3: Service Server ===")
# Client decrypts the service ticket before presenting it
session = Fernet(session_key)
service_ticket = session.decrypt(service_ticket_encrypted)
service_server(service_ticket)
OUTPUT:

RESULT:
The simulation successfully mimicked the Kerberos flow. The client authenticated
with the Authentication Server (AS), received a Ticket Granting Ticket (TGT), used it
to obtain a service ticket from the Ticket Granting Server (TGS), and finally accessed
the target service—all through secure, encrypted exchanges.

CONCLUSION:
This experiment demonstrated how the Kerberos protocol secures user authentication
in distributed systems. It showed how tickets and symmetric encryption prevent
password exposure and enable trusted access to services.
Lab no. 16
ACCESS CONTROL: ROLE-BASED ACCESS CONTROL (RBAC)

OBJECTIVE:
 Manage user permissions based on roles to enforce security policies.

THEORY:
RBAC is a security approach in which access permissions are assigned to roles rather
than individual users. Users are then given roles based on their responsibilities, and
each role has specific access rights to resources.
For example, in a company:
 A Manager role may have access to project files and reports.
 A Staff role might only access daily task documents.
 A Guest role could be restricted to view-only permissions.
This system makes access management scalable, consistent, and secure, especially in
large organizations. It reduces errors, avoids unnecessary access, and ensures that
users only have access to what their role requires.

SOURCE CODE:
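The original listing was not captured in this report; the following Python sketch reconstructs the behaviour described under RESULT. The users, roles, and actions come from that section, while the table layout and the helper name check_access are assumptions.

```python
# Hypothetical RBAC tables -- structure and names are illustrative assumptions
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {
    "Alice": "admin",
    "Bob": "editor",
    "Charlie": "viewer",
}

def check_access(user, action):
    """Return True if the user's role grants the requested action."""
    role = USER_ROLES.get(user)
    allowed = role is not None and action in ROLE_PERMISSIONS.get(role, set())
    status = "ALLOWED" if allowed else "DENIED"
    print(f"{user} ({role}): {action} -> {status}")
    return allowed

check_access("Alice", "delete")    # admin role includes delete
check_access("Bob", "delete")      # editor role lacks delete
check_access("Charlie", "read")    # viewer role includes read
```

Because permissions attach to roles rather than users, changing what editors may do requires editing one set, not every editor's account.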
OUTPUT:

RESULT:
The program correctly checked user permissions based on their assigned roles:
 Alice (admin) was allowed to delete.
 Bob (editor) was denied delete access.
 Charlie (viewer) was allowed to read.

CONCLUSION:
This simulation showed how RBAC effectively enforces access restrictions. By
assigning permissions to roles and users, the system ensures that users can only
perform actions appropriate to their responsibilities.
Lab no. 17
GOOGLE OAuth SETUP

OBJECTIVE:
 Provide a step-by-step Guide to create Google OAuth Credentials.

THEORY:
Google OAuth is an authorization framework that lets apps access user data securely
without needing their passwords. It works by letting users log in using their Google
account, and then gives the app limited access (like reading email or profile info)
through access tokens.
The process involves:
 OAuth Consent Screen: Shows users what the app wants to access.
 Client ID & Secret: Credentials generated in Google Cloud Console.
 Redirect URI: Where Google sends users back after login with the token.
OAuth keeps apps secure and users in control of what data gets shared.

STEPS & OUTPUT:


1. Go to Google Cloud Console
 Open: https://console.cloud.google.com/
 Log in with your Google account.

2. Create or Select a Project


 Click on the project dropdown at the top.
 Select an existing project or click "New Project" to create a new one.

3. Enable OAuth APIs


 In the left sidebar, go to APIs & Services > Library.
 Search for and enable the following APIs:
 Google People API (for user profile info)
 Any other API your app will use
4. Create OAuth 2.0 Credentials
 Go to APIs & Services > Credentials
Click "+ CREATE CREDENTIALS" > OAuth client ID

5. Configure Consent Screen (if not already done)


 You'll be prompted to configure the OAuth consent screen:
 Choose External (for public access).
 Fill in required fields: app name, support email, etc.
 Add scopes (e.g., email, profile).
 Add test users (for development; skip for production).

6. Choose Application Type


 Now choose your app type (most common):
 Web application – for web apps
 Desktop app – for desktop programs
 Android/iOS – for mobile
 Enter a name (e.g., “My OAuth App”).

7. Set Redirect URIs


 For web apps: Add Authorized redirect URIs like:
http://localhost:3000/auth/google/callback
https://yourdomain.com/auth/google/callback
 These are the URIs Google redirects to after authentication.

8. Get Your Client ID and Client Secret


 After creating, you’ll see:
 Client ID
 Client Secret
 Copy and store them securely.
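As a minimal sketch of how the credentials from steps 7 and 8 are used, the snippet below builds the Google authorization URL a web app would redirect users to. The client ID and redirect URI are placeholders, not real credentials.

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your real Client ID and a registered redirect URI
CLIENT_ID = "1234567890-example.apps.googleusercontent.com"
REDIRECT_URI = "http://localhost:3000/auth/google/callback"

params = {
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "response_type": "code",          # ask Google for an authorization code
    "scope": "openid email profile",  # scopes added on the consent screen
    "access_type": "offline",         # also request a refresh token
}

# Google's OAuth 2.0 authorization endpoint
auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
```

After the user consents, Google redirects to the REDIRECT_URI with a one-time code, which the app exchanges (together with the Client Secret) for an access token.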
OUTPUT:
RESULT:
The Google OAuth setup successfully allowed the application to redirect users for
authentication, retrieve an access token, and securely access basic profile information
using the credentials provided in the Google Cloud Console.

CONCLUSION:
This lab demonstrated how OAuth 2.0 enables secure and user-consented access to
Google services. By using client credentials, consent screens, and access tokens, it
ensures that sensitive user data remains protected while granting limited access to
authorized applications.
