OS Lab Programs 508

CPU scheduling algorithms

Aim: To write programs to implement CPU scheduling algorithms.

Description:
CPU scheduling algorithms are an essential component of operating systems that manage how
processes (tasks or threads) are allocated CPU time for execution. These algorithms ensure efficient
and fair utilization of the CPU, maximizing system throughput and responsiveness. Here is a brief
description of some common CPU scheduling algorithms:

1. First-Come, First-Served (FCFS):


This is the simplest CPU scheduling algorithm: processes are executed in the order in which they arrive in
the ready queue. The first process to arrive gets the CPU first and runs until it completes, since FCFS is
non-preemptive. FCFS may lead to poor average waiting times, especially if long processes arrive before
shorter ones (the convoy effect).

2. Shortest Job Next (SJN) / Shortest Job First (SJF):


In this algorithm, the process with the shortest burst time (execution time) is chosen to run next. It
aims to minimize average waiting time and overall turnaround time. However, predicting the exact
burst times of processes in advance is usually difficult, which may require estimates or assumptions.

3. Priority Scheduling:
Each process is assigned a priority, and the CPU is allocated to the process with the highest priority.
Priority can be predefined based on process characteristics or dynamic, changing during execution
based on various factors. This algorithm can be either preemptive or non-preemptive.

4. Round Robin (RR):


In Round Robin scheduling, each process is allocated a fixed time slice or quantum of CPU time. The
processes are executed in a circular order, and if a process does not complete within its time slice, it is
preempted and moved to the back of the queue. RR provides fair execution to all processes but may
suffer from high context switch overhead for small time slices.
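
As a quick worked example, with the inputs used in the Round Robin sample run later in this record
(arrival times 0, 1, 2, burst times 4, 5, 2 and a time quantum of 3), the schedule is: p0 runs 0-3 (1 unit
left), p1 runs 3-6 (2 left), p2 runs 6-8 (finished), p0 runs 8-9 (finished) and p1 runs 9-11 (finished).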

5. Shortest Remaining Time First:


Shortest Remaining Time First (SRTF), also known as Shortest Remaining Time Next (SRTN), is a CPU
scheduling algorithm and a preemptive version of the Shortest Job Next (SJN) algorithm. SRTF aims to
minimize the average waiting time and turnaround time of processes by giving preference to the
process with the shortest remaining execution time.
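
All of the programs below report the same derived quantities for each process: completion time (CT),
turnaround time (TAT = CT - arrival time) and waiting time (WT = TAT - burst time). As a minimal sketch
of this bookkeeping, here is the calculation for the same inputs as the FCFS sample run that follows:

# Hand check of the scheduling metrics (FCFS order p0, p1, p2; same data as the FCFS sample run)
arrival = [0, 1, 2]
burst = [5, 3, 2]
completion = [5, 8, 10]                                     # processes run back to back from time 0
turnaround = [c - a for c, a in zip(completion, arrival)]   # [5, 7, 8]
waiting = [t - b for t, b in zip(turnaround, burst)]        # [0, 4, 6]
print(round(sum(waiting) / len(waiting), 2))                # 3.33, the average waiting time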


Algorithms:
1) First Come First Serve:
Program:
def fcfs_scheduling(processes, arrival_time, burst_time):
n = len(processes)
waiting_time = [0] * n
completion_time = [0] * n
turnaround_time = [0] * n
current_time = 0

# Calculate completion time, waiting time, and turnaround time
# (processes are assumed to be entered in order of arrival, since FCFS serves them in input order)


for i in range(n):
if current_time < arrival_time[i]:
current_time = arrival_time[i]
completion_time[i] = current_time + burst_time[i]
turnaround_time[i] = completion_time[i] - arrival_time[i]
waiting_time[i] = turnaround_time[i] - burst_time[i]
current_time = completion_time[i]

# Print the Gantt chart


print("\nGantt chart:")
for i in range(n):
print(f"| {processes[i]} {completion_time[i]} ", end="")
print("|")

# Print the process information


print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
for i in range(n):
print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\
t\t{turnaround_time[i]}")

avg_waiting_time = sum(waiting_time) / n
print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
n = int(input("Enter the number of processes: "))
processes = [f"p{i}" for i in range(n)]
arrival_time = []
burst_time = []

for i in range(n):
at = int(input(f"Enter the arrival time of {processes[i]}: "))
bt = int(input(f"Enter the burst time of {processes[i]}: "))
arrival_time.append(at)
burst_time.append(bt)

return processes, arrival_time, burst_time


processes, arrival_time, burst_time = get_inputs()


fcfs_scheduling(processes, arrival_time, burst_time)
SAMPLE OUTPUT:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 5
Enter the arrival time of p1: 1
Enter the burst time of p1: 3
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Gantt chart:
| p0 5 | p1 8 | p2 10 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 5 5 0 5
p1 1 3 8 4 7
p2 2 2 10 6 8

Average waiting time: 3.33


Executed Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 5
Enter the arrival time of p1: 1
Enter the burst time of p1: 3
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Gantt chart:
| p0 5 | p1 8 | p2 10 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 5 5 0 5
p1 1 3 8 4 7
p2 2 2 10 6 8

Average waiting time: 3.33

Shortest Job first:


Program:
def sjf_scheduling(processes, arrival_time, burst_time):
n = len(processes)
waiting_time = [0] * n
completion_time = [0] * n
turnaround_time = [0] * n
remaining_time = burst_time.copy()
current_time = 0
completed_processes = 0

# Process the jobs


while completed_processes < n:
# Find the process with the shortest burst time that has arrived
min_time = float('inf')
min_index = -1
for i in range(n):
if arrival_time[i] <= current_time and remaining_time[i] < min_time and remaining_time[i] > 0:
min_time = remaining_time[i]
min_index = i

if min_index == -1:
current_time += 1
continue

# Update time and remaining time


current_time += remaining_time[min_index]
completion_time[min_index] = current_time
turnaround_time[min_index] = completion_time[min_index] - arrival_time[min_index]
waiting_time[min_index] = turnaround_time[min_index] - burst_time[min_index]
remaining_time[min_index] = 0
completed_processes += 1

# Print the Gantt chart


print("\nGantt chart:")
print("|", end="")
current_time = 0
for i in sorted(range(n), key=lambda k: completion_time[k]):  # print in order of completion (execution order)
print(f" {processes[i]} {completion_time[i]} |", end="")
print()

# Print the process information


print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
for i in range(n):
print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\
t\t{turnaround_time[i]}")

avg_waiting_time = sum(waiting_time) / n
print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
n = int(input("Enter the number of processes: "))


processes = [f"p{i}" for i in range(n)]


arrival_time = []
burst_time = []

for i in range(n):
at = int(input(f"Enter the arrival time of {processes[i]}: "))
bt = int(input(f"Enter the burst time of {processes[i]}: "))
arrival_time.append(at)
burst_time.append(bt)

return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()


sjf_scheduling(processes, arrival_time, burst_time)
Sample Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 9
Enter the arrival time of p3: 3
Enter the burst time of p3: 5
Gantt chart:
| p0 8 | p1 12 | p3 17 | p2 26 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 8 0 8
p1 1 4 12 7 11
p2 2 9 26 15 24
p3 3 5 17 9 14

Average waiting time: 7.75


Executed Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 9
Enter the arrival time of p3: 3
Enter the burst time of p3: 5
Gantt chart:
| p0 8 | p1 12 | p3 17 | p2 26 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 8 0 8
p1 1 4 12 7 11
p2 2 9 26 15 24
p3 3 5 17 9 14


Average waiting time: 7.75

Priority scheduling algorithm:


Program:


def priority_scheduling(processes, arrival_time, burst_time, priority):


n = len(processes)
waiting_time = [0] * n
completion_time = [0] * n
turnaround_time = [0] * n
sorted_processes = sorted(range(n), key=lambda i: (arrival_time[i], -priority[i]))

current_time = 0
completed_processes = 0

while completed_processes < n:


# Select the process with the highest priority that has arrived
available_processes = [i for i in sorted_processes if arrival_time[i] <= current_time and completion_time[i] == 0]
if not available_processes:
current_time += 1
continue

# Find the process with the highest priority (lowest number)


current_process = min(available_processes, key=lambda i: priority[i])
burst = burst_time[current_process]

# Update times
completion_time[current_process] = current_time + burst
turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
current_time += burst
completed_processes += 1

# Print the Gantt chart


print("\nGantt chart:")
for i in sorted(range(n), key=lambda k: completion_time[k]):  # print in order of completion (execution order)
print(f"| {processes[i]} {completion_time[i]} ", end="")
print("|")

# Print the process information


print("\nProcess\tArrival Time\tBurst Time\tPriority\tCompletion Time\tWaiting Time\tTurnaround
Time")
for i in range(n):
print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{priority[i]}\t\t{completion_time[i]}\t\
t{waiting_time[i]}\t\t{turnaround_time[i]}")

avg_waiting_time = sum(waiting_time) / n
print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
n = int(input("Enter the number of processes: "))


processes = [f"p{i}" for i in range(n)]


arrival_time = []
burst_time = []
priority = []

for i in range(n):
at = int(input(f"Enter the arrival time of {processes[i]}: "))
bt = int(input(f"Enter the burst time of {processes[i]}: "))
pr = int(input(f"Enter the priority of {processes[i]} (lower number means higher priority): "))
arrival_time.append(at)
burst_time.append(bt)
priority.append(pr)

return processes, arrival_time, burst_time, priority

processes, arrival_time, burst_time, priority = get_inputs()


priority_scheduling(processes, arrival_time, burst_time, priority)
Sample Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the priority of p0 (lower number means higher priority): 3
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the priority of p1 (lower number means higher priority): 1
Enter the arrival time of p2: 2
Enter the burst time of p2: 9
Enter the priority of p2 (lower number means higher priority): 4
Enter the arrival time of p3: 3
Enter the burst time of p3: 5
Enter the priority of p3 (lower number means higher priority): 2
Gantt chart:
| p0 8 | p1 12 | p3 17 | p2 26 |

Process Arrival Time Burst Time Priority Completion Time Waiting Time Turnaround Time
p0 0 8 3 8 0 8
p1 1 4 1 12 7 11
p2 2 9 4 26 15 24
p3 3 5 2 17 9 14

Average waiting time: 7.75

Executed Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the priority of p0 (lower number means higher priority): 3
Enter the arrival time of p1: 1


Enter the burst time of p1: 4


Enter the priority of p1 (lower number means higher priority): 1
Enter the arrival time of p2: 2
Enter the burst time of p2: 9
Enter the priority of p2 (lower number means higher priority): 4
Enter the arrival time of p3: 3
Enter the burst time of p3: 5
Enter the priority of p3 (lower number means higher priority): 2
Gantt chart:
| p0 8 | p1 12 | p3 17 | p2 26 |

Process Arrival Time Burst Time Priority Completion Time Waiting Time Turnaround Time
p0 0 8 3 8 0 8
p1 1 4 1 12 7 11
p2 2 9 4 26 15 24
p3 3 5 2 17 9 14

Average waiting time: 7.75

Round robin scheduling:


Program:
def round_robin_scheduling(processes, arrival_time, burst_time, quantum):


n = len(processes)
remaining_time = burst_time.copy()
waiting_time = [0] * n
turnaround_time = [0] * n
completion_time = [0] * n

current_time = 0
queue = []
process_indices = list(range(n))
gantt = []  # (process name, end time) recorded after each executed time slice

while process_indices or queue:


# Add all processes that have arrived to the queue
for i in list(process_indices):
if arrival_time[i] <= current_time:
queue.append(i)
process_indices.remove(i)

if not queue:
# If no processes are in the queue, advance time
current_time += 1
continue

# Get the next process from the queue


current_process = queue.pop(0)

# Calculate the execution time for the current quantum


execute_time = min(quantum, remaining_time[current_process])
remaining_time[current_process] -= execute_time
current_time += execute_time
gantt.append((processes[current_process], current_time))  # record this slice for the Gantt chart

# Processes that arrived during this slice join the queue before a preempted process is re-queued
for i in list(process_indices):
    if arrival_time[i] <= current_time:
        queue.append(i)
        process_indices.remove(i)

if remaining_time[current_process] == 0:
    completion_time[current_process] = current_time
    turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
    waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
else:
    # If the process is not completed, put it back at the end of the queue
    queue.append(current_process)

# Print the Gantt chart


print("\nGantt chart:")
for name, t in gantt:
print(f"| {name} {t} ", end="")
print("|")

# Print the process information


print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
for i in range(n):
print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\
t{waiting_time[i]}\t\t{turnaround_time[i]}")

avg_waiting_time = sum(waiting_time) / n


print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
n = int(input("Enter the number of processes: "))
processes = [f"p{i}" for i in range(n)]
arrival_time = []
burst_time = []

for i in range(n):
at = int(input(f"Enter the arrival time of {processes[i]}: "))
bt = int(input(f"Enter the burst time of {processes[i]}: "))
arrival_time.append(at)
burst_time.append(bt)

quantum = int(input("Enter the time quantum for Round Robin scheduling: "))
return processes, arrival_time, burst_time, quantum

processes, arrival_time, burst_time, quantum = get_inputs()


round_robin_scheduling(processes, arrival_time, burst_time, quantum)

Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 4
Enter the arrival time of p1: 1
Enter the burst time of p1: 5
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Enter the time quantum for Round Robin scheduling: 3
Gantt chart:
| p0 3 | p1 6 | p2 8 | p0 9 | p1 11 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 4 9 5 9
p1 1 5 11 5 10
p2 2 2 8 4 6

Average waiting time: 4.67


Executed Output:

Enter the number of processes: 3


Enter the arrival time of p0: 0
Enter the burst time of p0: 4
Enter the arrival time of p1: 1
Enter the burst time of p1: 5
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Enter the time quantum for Round Robin scheduling: 3
Gantt chart:
| p0 3 | p1 6 | p2 8 | p0 9 | p1 11 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 4 9 5 9
p1 1 5 11 5 10
p2 2 2 8 4 6


Average waiting time: 4.67


Shortest remaining time first:


Program:
def srtf_scheduling(processes, arrival_time, burst_time):
n = len(processes)
remaining_time = burst_time.copy()
waiting_time = [0] * n
turnaround_time = [0] * n
completion_time = [0] * n

current_time = 0
completed_processes = 0
gantt = []  # (process name, time) recorded after every executed time unit

while completed_processes < n:


# Rebuild the ready queue with every unfinished process that has arrived
process_queue = [i for i in range(n) if arrival_time[i] <= current_time and remaining_time[i] > 0]

if not process_queue:
current_time += 1
continue

# Find the process with the shortest remaining time


current_process = min(process_queue, key=lambda i: remaining_time[i])

# Execute the current process for 1 unit of time


remaining_time[current_process] -= 1
current_time += 1

# Record this time unit for the Gantt chart
gantt.append((processes[current_process], current_time))

# If the current process is completed


if remaining_time[current_process] == 0:
completion_time[current_process] = current_time
turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
completed_processes += 1

# Print the Gantt chart, merging consecutive time units of the same process
for idx, (name, t) in enumerate(gantt):
    if idx == len(gantt) - 1 or gantt[idx + 1][0] != name:
        print(f"| {name} {t} ", end="")
print("|")

# Print the process information


print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
for i in range(n):
print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\
t\t{turnaround_time[i]}")

avg_waiting_time = sum(waiting_time) / n
print(f"\nAverage waiting time: {avg_waiting_time:.2f}")


def get_inputs():
n = int(input("Enter the number of processes: "))
processes = [f"p{i}" for i in range(n)]
arrival_time = []
burst_time = []

for i in range(n):
at = int(input(f"Enter the arrival time of {processes[i]}: "))
bt = int(input(f"Enter the burst time of {processes[i]}: "))
arrival_time.append(at)
burst_time.append(bt)

return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()


srtf_scheduling(processes, arrival_time, burst_time)
Sample Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 9
Enter the arrival time of p3: 3
Enter the burst time of p3: 5
| p0 1 | p1 5 | p3 10 | p0 17 | p2 26 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 17 9 17
p1 1 4 5 0 4
p2 2 9 26 15 24
p3 3 5 10 2 7

Average waiting time: 6.50


Executed Output:
Enter the number of processes: 4
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 9
Enter the arrival time of p3: 3
Enter the burst time of p3: 5
| p0 1 | p1 5 | p3 10 | p0 17 | p2 26 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 17 9 17
p1 1 4 5 0 4
p2 2 9 26 15 24


p3 3 5 10 2 7

Average waiting time: 6.50


Result:
Thus, Python programs to implement CPU scheduling algorithms were successfully executed.


Write a program for implementation of semaphore

Aim:
To write a program for the implementation of semaphores
Description:
Semaphores are a synchronization mechanism used in concurrent programming to control access to
shared resources and coordinate the execution of multiple processes or threads. They were introduced
by Edsger W. Dijkstra in 1965 and have since become a fundamental concept in operating systems and
parallel computing.

A semaphore is a simple integer variable, often referred to as a "counting semaphore." It can take
on non-negative integer values and supports two main operations:

1. **Wait (P) Operation**: When a process (or thread) wants to access a shared resource, it must first
perform a "wait" operation on the semaphore. If the semaphore value is greater than zero, the
process decrements the semaphore value and continues its execution, indicating that the resource is
available. If the semaphore value is zero, the process is blocked (put to sleep) until the semaphore
value becomes positive again.

2. **Signal (V) Operation**: After a process finishes using a shared resource, it performs a "signal"
operation on the semaphore. This operation increments the semaphore value, indicating that the
resource is now available for other processes or threads to use. If there were blocked processes waiting
for the semaphore to become positive (due to previous "wait" operations), one of them will be
awakened and granted access to the resource.

Semaphores help prevent race conditions and ensure that critical sections of code (regions of code that
access shared resources) are mutually exclusive, meaning only one process or thread can access the
shared resource at a time.

There are two main types of semaphores:


1. **Binary Semaphore**: A binary semaphore is a special type of semaphore that can only take on
two values, typically 0 and 1. It is often used for signaling purposes, where 0 means the resource is
unavailable, and 1 means the resource is available.

2. **Counting Semaphore**: As mentioned earlier, a counting semaphore can take on non-negative
integer values, allowing more flexible coordination among multiple processes or threads.

However, it's essential to use semaphores carefully to avoid potential issues like deadlocks (circular
waiting) or race conditions. More advanced synchronization mechanisms, such as mutexes and
condition variables, are often used in modern programming languages and libraries to manage
concurrency more effectively.
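
For comparison, Python's standard library already provides a counting semaphore with exactly these two
operations (acquire corresponds to wait/P and release to signal/V). The sketch below is shown only as a
reference alongside the hand-rolled, list-based version used in the program that follows:

import threading

sem = threading.Semaphore(1)        # counting semaphore initialised to 1 (binary behaviour)

def critical_section(task_id):
    sem.acquire()                   # wait (P): blocks while the count is 0, then decrements it
    try:
        print(f"Task {task_id} inside the critical section")
    finally:
        sem.release()               # signal (V): increments the count and wakes a waiting thread

threads = [threading.Thread(target=critical_section, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()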

Program:
import time
import threading

# Semaphore functions
def semaphore_wait(semaphore):
while semaphore[0] <= 0:


pass
semaphore[0] -= 1
return semaphore

def semaphore_signal(semaphore):
semaphore[0] += 1
return semaphore

# Shared variable to act as a semaphore


shared_semaphore = [1] # Using a list to allow modification within threads

# Function for the tasks


def task_function(task_id):
global shared_semaphore

print(f"Task {task_id} is trying to acquire the semaphore.")


semaphore_wait(shared_semaphore)
print(f"Task {task_id} has acquired the semaphore.")

# Simulate some work


print(f"Task {task_id} is performing some work.")
time.sleep(2)

# Release the semaphore


print(f"Task {task_id} is releasing the semaphore.")
semaphore_signal(shared_semaphore)

# Create and start multiple tasks


k = int(input("Enter the number of tasks that want to share resources: "))
tasks = []

# Create and start threads for each task


for i in range(k):
thread = threading.Thread(target=task_function, args=(i,))
tasks.append(thread)
thread.start()

# Wait for all threads to complete


for task in tasks:
task.join()

print("All tasks have completed.")


Sample Output:
Enter the number of tasks that want to share resources: 3
Task 0 is trying to acquire the semaphore.
Task 0 has acquired the semaphore.
Task 0 is performing some work.
Task 1 is trying to acquire the semaphore.
Task 2 is trying to acquire the semaphore.
Task 0 is releasing the semaphore.
Task 1 has acquired the semaphore.
Task 1 is performing some work.


Task 2 is trying to acquire the semaphore.


Task 1 is releasing the semaphore.
Task 2 has acquired the semaphore.
Task 2 is performing some work.
Task 2 is releasing the semaphore.
All tasks have completed.
Executed output:
Enter the number of tasks that want to share resources: 3
Task 0 is trying to acquire the semaphore.
Task 0 has acquired the semaphore.
Task 0 is performing some work.
Task 1 is trying to acquire the semaphore.
Task 2 is trying to acquire the semaphore.
Task 0 is releasing the semaphore.
Task 1 has acquired the semaphore.
Task 1 is performing some work.
Task 2 is trying to acquire the semaphore.
Task 1 is releasing the semaphore.
Task 2 has acquired the semaphore.
Task 2 is performing some work.
Task 2 is releasing the semaphore.
All tasks have completed.
Result:
Thus, a Python program for the implementation of semaphores was successfully executed.


Write a program for implementation of Shared memory and IPC

Implementation of Shared memory:
Aim:
To write a python program for the implementation of shared memory.
Description:
Shared memory is a communication and synchronization mechanism that allows multiple processes or
threads to access the same region of memory in a concurrent manner. It enables efficient data sharing
and interprocess communication (IPC) among different entities running on the same system.

In shared memory, a region of memory is mapped into the address space of multiple processes or
threads. These processes/threads can then read from or write to the shared memory region just like
accessing regular memory. This shared memory area acts as a shared buffer, facilitating the exchange of
data between different processes without the need for copying data between them.

However, shared memory also requires careful synchronization to avoid data inconsistencies and race
conditions. Developers need to use synchronization primitives like semaphores, mutexes, or atomic
operations to ensure that multiple processes or threads access the shared memory in a coordinated
and controlled manner. Proper synchronization helps maintain data integrity and prevent conflicts
when multiple entities attempt to modify the shared data concurrently.

To use shared memory, operating systems provide APIs for creating and managing shared memory
regions. Developers need to be cautious while using shared memory to prevent data corruption and
ensure proper synchronization to avoid race conditions and data inconsistencies.
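
Besides the multiprocessing.Value object used in the program below, Python 3.8 and later also expose
named shared-memory blocks through the multiprocessing.shared_memory module. A minimal,
illustrative sketch of creating a block, attaching to it by name and releasing it:

from multiprocessing import shared_memory

# Create a named block of raw shared memory and write one byte into it
block = shared_memory.SharedMemory(create=True, size=4)
block.buf[0] = 42

# Another process could attach to the same block using its name
view = shared_memory.SharedMemory(name=block.name)
print(view.buf[0])   # 42

view.close()
block.close()
block.unlink()       # free the underlying shared-memory segment
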
Program:
import multiprocessing
import time

def worker(shared_data, lock):


"""Function to be run by the worker process."""
for _ in range(5):
# Acquire the lock to ensure exclusive access to shared data
with lock:
# Modify shared data
shared_data.value += 1
print(f"Worker process: {shared_data.value}")

# Simulate some work


time.sleep(1)

def main():
# Create a shared integer with initial value 0
shared_data = multiprocessing.Value('i', 0)

# Create a lock for synchronizing access to shared memory


lock = multiprocessing.Lock()

# Create worker processes


process1 = multiprocessing.Process(target=worker, args=(shared_data, lock))
process2 = multiprocessing.Process(target=worker, args=(shared_data, lock))


# Start the processes


process1.start()
process2.start()

# Wait for both processes to finish


process1.join()
process2.join()

print(f"Final shared data value: {shared_data.value}")

if __name__ == "__main__":
main()
Sample Output:

Worker process: 1
Worker process: 2
Worker process: 3
Worker process: 4
Worker process: 5
Worker process: 6
Worker process: 7
Worker process: 8
Worker process: 9
Worker process: 10
Final shared data value: 10
Executed Output:
Worker process: 1
Worker process: 2
Worker process: 3
Worker process: 4
Worker process: 5
Worker process: 6
Worker process: 7
Worker process: 8
Worker process: 9
Worker process: 10
Final shared data value: 10
Result:
Thus, a Python program for the implementation of shared memory was successfully executed.


Implementation of IPC:
Aim:
To write a program for the implementation of Inter Process Communication (IPC)

Description:
Inter-Process Communication (IPC) facilitates communication and data sharing among concurrent
processes on a computer. Shared memory allows direct access to a common memory area for high-
speed data exchange, while message passing involves sending messages through channels like pipes or
sockets. Synchronization tools like semaphores and mutexes prevent conflicts and ensure data integrity.
IPC finds applications in parallel computing, client-server systems, and real-time applications. Proper
design and implementation are essential to handle complexities and maintain security in IPC
mechanisms.

Inter-Process Communication (IPC) enables data exchange and coordination between concurrent
processes in a computer system. It uses shared memory or message passing channels for
communication. Synchronization mechanisms like semaphores ensure proper resource access. IPC is
crucial for parallel computing, client-server models, and process coordination. However, it requires
careful design to avoid complexities and security issues.
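
The program below uses a multiprocessing.Queue as the message-passing channel; another common
channel is a pipe. A minimal, illustrative sketch using multiprocessing.Pipe:

from multiprocessing import Process, Pipe

def child(conn):
    conn.send("hello from the child process")    # write one message into the pipe
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()             # two connected endpoints
    p = Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())                    # read the message sent by the child
    p.join()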

Program:
import multiprocessing
import time

def producer(queue):
"""Function to be run by the producer process."""
for i in range(5):
item = f"item-{i}"
print(f"Producer putting {item} into queue")
queue.put(item) # Put item into the queue
time.sleep(1) # Simulate work

def consumer(queue):
"""Function to be run by the consumer process."""
while True:
item = queue.get() # Get item from the queue
if item is None: # Sentinel value to end the consumer process
break
print(f"Consumer got {item} from queue")
time.sleep(2) # Simulate work

def main():
# Create a queue for IPC
queue = multiprocessing.Queue()

# Create producer and consumer processes


producer_process = multiprocessing.Process(target=producer, args=(queue,))
consumer_process = multiprocessing.Process(target=consumer, args=(queue,))

# Start the processes


producer_process.start()
consumer_process.start()


# Wait for the producer process to finish


producer_process.join()

# Add sentinel values to the queue to signal the consumer to stop


queue.put(None)

# Wait for the consumer process to finish


consumer_process.join()

print("Both producer and consumer processes have completed.")

if __name__ == "__main__":
main()

Sample Output:
Producer putting item-0 into queue
Producer putting item-1 into queue
Producer putting item-2 into queue
Producer putting item-3 into queue
Producer putting item-4 into queue
Consumer got item-0 from queue
Consumer got item-1 from queue
Consumer got item-2 from queue
Consumer got item-3 from queue
Consumer got item-4 from queue
Both producer and consumer processes have completed.
Executed Output:
Producer putting item-0 into queue
Producer putting item-1 into queue
Producer putting item-2 into queue
Producer putting item-3 into queue
Producer putting item-4 into queue
Consumer got item-0 from queue
Consumer got item-1 from queue
Consumer got item-2 from queue
Consumer got item-3 from queue
Consumer got item-4 from queue
Both producer and consumer processes have completed.
Result:
Thus, a Python program for the implementation of IPC was successfully executed.


Write a program to implement Banker’s algorithm for deadlock avoidance

Aim:
To write a program to implement Banker’s algorithm for deadlock avoidance
Description:
The Banker's algorithm is a deadlock avoidance algorithm used to prevent deadlocks in a resource
allocation system. It is primarily employed in operating systems to ensure that processes' resource
requests are granted in a way that avoids deadlock situations.
Key points about the Banker's algorithm:

1. **Resource Allocation Model**: The algorithm operates under the assumption that the system has
a fixed number of resources of different types (e.g., memory, CPU, printers) and multiple processes
compete for these resources.

2. **Maximum Claim**: Each process declares its maximum possible resource requirements
(maximum claim) before it begins its execution. This information is known to the system.

3. **Available Resources**: The system maintains a record of the available resources for each
resource type at any given time.

4. **Allocation and Need Matrices**: The algorithm maintains an allocation matrix (the resources each
process currently holds) and a need matrix (maximum claim minus current allocation), together with the
vector of available resources. These structures drive the safety check that decides whether resources can
be granted to processes without the risk of deadlock.

5. **Deadlock Avoidance**: The Banker's algorithm employs a conservative approach to resource
allocation. When a process makes a resource request, the system simulates the allocation and checks if
it can still maintain a safe state (no deadlock) after granting the requested resources. If the state
remains safe, the request is granted; otherwise, the process must wait until sufficient resources
become available.

6. **Safe State**: A system state is considered safe if there exists a sequence of resource allocations
where each process can complete its execution and release resources, allowing other processes to
complete without getting stuck in a deadlock.

7. **Resource Allocation and Deallocation**: The algorithm allows resource allocation to
processes and resource deallocation when processes release resources upon completion. It
continuously updates the available resources and request matrices based on these allocations and
deallocations.

8. **Dynamic Resource Requests**: The Banker's algorithm allows processes to make multiple
resource requests during their execution. It checks each request's safety before granting it.

The Banker's algorithm is a preventive measure against deadlocks, ensuring that resource allocations are
done in a manner that avoids deadlock scenarios. It provides a way for processes to request resources
safely, considering the system's current resource availability and avoiding situations that could lead to
deadlock.
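
As a small illustration of the safety check, using made-up numbers (these are not the sample input used
further below):

import numpy as np

available  = np.array([2, 1])                # free instances of each resource type
max_claim  = np.array([[4, 2], [3, 1]])      # maximum demand of P0 and P1
allocation = np.array([[2, 1], [1, 0]])      # what P0 and P1 currently hold
need = max_claim - allocation                # [[2, 1], [2, 1]]

# P0's need [2, 1] fits in the available [2, 1], so P0 can finish and return its allocation:
# work becomes [2, 1] + [2, 1] = [4, 2]. P1's need [2, 1] now fits as well,
# so the state is safe with safe sequence [P0, P1].
print(need)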

Program:
import numpy as np


def is_safe_state(available, max_claim, allocation, need, sequence):


""" Check if the system is in a safe state using Banker's Algorithm """
n = len(allocation)  # number of processes
work = available.copy()
finish = np.zeros(n, dtype=bool)
safe_sequence = []

for _ in range(n):
for i in range(n):
if not finish[i] and all(need[i] <= work):
work += allocation[i]
finish[i] = True
safe_sequence.append(i)
break
else:
return False, []

return True, safe_sequence

def bankers_algorithm(available, max_claim, allocation):


n = len(allocation)  # number of processes
need = max_claim - allocation

# Check if initial state is safe


safe, sequence = is_safe_state(available, max_claim, allocation, need, [])
if not safe:
print("Initial state is not safe. Deadlock detected.")
return

print("Initial state is safe. Executing processes...")

# Simulate execution by requesting resources and releasing them


for i in range(n):
# Simulate resource request and release
print(f"Process {i} is requesting resources...")

request = np.random.randint(0, need[i] + 1)  # random request within the remaining need (may be 0 for a resource)


if all(request <= available):
available -= request
allocation[i] += request
need[i] -= request
print(f"Process {i} acquired resources. New state:")
print(f"Available: {available}")
print(f"Allocation: {allocation}")
print(f"Need: {need}")
else:
print(f"Process {i} must wait. Insufficient resources available.")

# Simulate release of resources after some processing


release = np.random.randint(0, allocation[i] + 1)  # random release within what the process currently holds


available += release
allocation[i] -= release
need[i] += release
print(f"Process {i} released resources. New state:")
print(f"Available: {available}")
print(f"Allocation: {allocation}")
print(f"Need: {need}")

# Check if system is still in a safe state after each iteration


safe, sequence = is_safe_state(available, max_claim, allocation, need, sequence)
if safe:
print(f"System is in a safe state after Process {i}. Safe sequence: {sequence}")
else:
print(f"System is not in a safe state after Process {i}. Deadlock detected.")
break

# Dynamic input from user


n = int(input("Enter the number of processes: "))
m = int(input("Enter the number of resource types: "))

max_claim = np.zeros((n, m), dtype=int)


allocation = np.zeros((n, m), dtype=int)
available = np.zeros(m, dtype=int)

print("Enter the maximum claim matrix:")


for i in range(n):
for j in range(m):
max_claim[i][j] = int(input(f"Process {i}, Resource {j}: "))

print("Enter the allocation matrix:")


for i in range(n):
for j in range(m):
allocation[i][j] = int(input(f"Process {i}, Resource {j}: "))

print("Enter the available resources:")


for j in range(m):
available[j] = int(input(f"Resource {j}: "))

print("\nExecuting Banker's Algorithm...\n")


bankers_algorithm(available, max_claim, allocation)
Sample Output:
Enter the number of processes: 3
Enter the number of resource types: 2

Enter the maximum claim matrix:


Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 1, Resource 0: 3
Process 1, Resource 1: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0


Enter the allocation matrix:


Process 0, Resource 0: 0
Process 0, Resource 1: 1
Process 1, Resource 0: 2
Process 1, Resource 1: 0
Process 2, Resource 0: 3
Process 2, Resource 1: 0

Enter the available resources:


Resource 0: 3
Resource 1: 2

Executing Banker's Algorithm...

Initial state is safe. Executing processes...


Process 0 is requesting resources...
Process 0 acquired resources. New state:
Available: [2 2]
Allocation: [[1 1]
[2 0]
[3 0]]
Need: [[6 4]
[1 2]
[6 0]]
Process 0 released resources. New state:
Available: [2 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]
System is in a safe state after Process 0. Safe sequence: [0]

Process 1 is requesting resources...


Process 1 acquired resources. New state:
Available: [2 2]
Allocation: [[0 1]
[3 0]
[3 0]]
Need: [[7 4]
[0 2]
[6 0]]
Process 1 released resources. New state:
Available: [3 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]


System is in a safe state after Process 1. Safe sequence: [0, 1]

Process 2 is requesting resources...


Process 2 acquired resources. New state:
Available: [2 2]
Allocation: [[0 1]
[2 0]
[4 0]]
Need: [[7 4]
[1 2]
[5 0]]
Process 2 released resources. New state:
Available: [3 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]
System is in a safe state after Process 2. Safe sequence: [0, 1, 2]

Both processes have completed.


Executed Output:
Enter the number of processes: 3
Enter the number of resource types: 2

Enter the maximum claim matrix:


Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 1, Resource 0: 3
Process 1, Resource 1: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0

Enter the allocation matrix:


Process 0, Resource 0: 0
Process 0, Resource 1: 1
Process 1, Resource 0: 2
Process 1, Resource 1: 0
Process 2, Resource 0: 3
Process 2, Resource 1: 0

Enter the available resources:


Resource 0: 3
Resource 1: 2

Executing Banker's Algorithm...

Initial state is safe. Executing processes...


Process 0 is requesting resources...
Process 0 acquired resources. New state:
Available: [2 2]
Allocation: [[1 1]
[2 0]
[3 0]]
Need: [[6 4]
[1 2]
[6 0]]
Process 0 released resources. New state:


Available: [2 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]
System is in a safe state after Process 0. Safe sequence: [0]

Process 1 is requesting resources...


Process 1 acquired resources. New state:
Available: [2 2]
Allocation: [[0 1]
[3 0]
[3 0]]
Need: [[7 4]
[0 2]
[6 0]]
Process 1 released resources. New state:
Available: [3 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]
System is in a safe state after Process 1. Safe sequence: [0, 1]

Process 2 is requesting resources...


Process 2 acquired resources. New state:
Available: [2 2]
Allocation: [[0 1]
[2 0]
[4 0]]
Need: [[7 4]
[1 2]
[5 0]]
Process 2 released resources. New state:
Available: [3 2]
Allocation: [[0 1]
[2 0]
[3 0]]
Need: [[7 4]
[1 2]
[6 0]]
System is in a safe state after Process 2. Safe sequence: [0, 1, 2]

Both processes have completed.


Result:
Thus, a Python program for the implementation of Banker’s algorithm for deadlock avoidance was
successfully executed.


Write a program for implementation of deadlock detection

Aim:
To write a program for implementation of deadlock detection
Description:
Deadlock detection algorithms identify deadlocks in a resource allocation system, for example by
periodically scanning the resource allocation graph for cycles or, as in the program below, by running a
matrix-based (work/finish) check over the allocation, need and available data. Once a deadlock is
detected, appropriate actions are taken to resolve it, such as process termination or resource
preemption. These algorithms do not prevent deadlocks; they reactively handle them after they occur.
Program:
import numpy as np

def detect_deadlock(allocated, max_claim, available):


""" Check for deadlock using the Resource Allocation Graph method. """
n = len(allocated)  # Number of processes
m = len(available)  # Number of resources

# Calculate the need matrix


need = max_claim - allocated

# Initialize work and finish arrays


work = available.copy()
finish = np.zeros(n, dtype=bool)

while True:
progress_made = False
for i in range(n):
if not finish[i] and all(need[i] <= work):
# If process i can finish, update work and finish arrays
work += allocated[i]
finish[i] = True
progress_made = True
print(f"Process {i} can finish; updated work: {work}")

if not progress_made:
break

# Check if all processes are finished


if all(finish):
print("No deadlock detected. All processes can finish.")
else:
print("Deadlock detected. Not all processes can finish.")

# Function to take dynamic input from the user


def main():
n = int(input("Enter the number of processes: "))
m = int(input("Enter the number of resources: "))

max_claim = np.zeros((n, m), dtype=int)


allocated = np.zeros((n, m), dtype=int)

available = np.zeros(m, dtype=int)

print("Enter the maximum claim matrix:")


for i in range(n):
for j in range(m):
max_claim[i][j] = int(input(f"Process {i}, Resource {j}: "))

print("Enter the allocation matrix:")


for i in range(n):
for j in range(m):
allocated[i][j] = int(input(f"Process {i}, Resource {j}: "))

print("Enter the available resources:")


for j in range(m):
available[j] = int(input(f"Resource {j}: "))

print("\nDetecting deadlock...\n")
detect_deadlock(allocated, max_claim, available)

if __name__ == "__main__":
main()
Sample Output:
Enter the number of processes: 3
Enter the number of resources: 2

Enter the maximum claim matrix:


Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 1, Resource 0: 3
Process 1, Resource 1: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0

Enter the allocation matrix:


Process 0, Resource 0: 2
Process 0, Resource 1: 1
Process 1, Resource 0: 2
Process 1, Resource 1: 1
Process 2, Resource 0: 3
Process 2, Resource 1: 0

Enter the available resources:


Resource 0: 2
Resource 1: 1

Detecting deadlock...

Process 0 can finish; updated work: [4 2]


Process 1 can finish; updated work: [6 3]
Process 2 can finish; updated work: [9 3]
No deadlock detected. All processes can finish.


Executed Output:
Enter the number of processes: 3
Enter the number of resources: 2

Enter the maximum claim matrix:


Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 1, Resource 0: 3
Process 1, Resource 1: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0

Enter the allocation matrix:


Process 0, Resource 0: 2
Process 0, Resource 1: 1
Process 1, Resource 0: 2
Process 1, Resource 1: 1
Process 2, Resource 0: 3
Process 2, Resource 1: 0

Enter the available resources:


Resource 0: 2
Resource 1: 1

Detecting deadlock...

Process 0 can finish; updated work: [4 2]


Process 1 can finish; updated work: [6 3]
Process 2 can finish; updated work: [9 3]
No deadlock detected. All processes can finish.

Result:
Thus, a Python program for the implementation of the deadlock detection algorithm was successfully
executed.


Write a program for implementation of threading and synchronization
Aim:
To write a program for implementation of threading and synchronization
Description:
Multithreading and synchronization are concepts related to concurrent programming in computer
systems:

1. **Multithreading**: Multithreading is the ability of an operating system or programming language
to support the execution of multiple threads within a single process. Threads are lightweight units of
execution, and multithreading allows a program to perform multiple tasks concurrently, making more
efficient use of the available CPU resources.

2. **Benefits of Multithreading**: Multithreading can improve performance and
responsiveness in applications by enabling tasks to be performed in parallel. It allows for better
utilization of multi-core processors and can be particularly beneficial in tasks involving I/O
operations or parallel processing.

3. **Thread Creation and Management**: Threads are created and managed by the operating system
or the programming language's runtime environment. They share the same memory space within a
process, making communication and data sharing between threads easier.

4. **Synchronization**: Synchronization is the process of coordinating the execution of multiple
threads to avoid race conditions and ensure data integrity. When multiple threads access shared
resources simultaneously, synchronization mechanisms, such as mutexes, semaphores, and condition
variables, are used to enforce mutual exclusion and control access to the shared data.

5. **Race Conditions**: Race conditions occur when multiple threads access shared resources in an
uncontrolled manner, leading to unpredictable and potentially incorrect behavior. Synchronization
mechanisms help prevent race conditions by allowing only one thread to access the shared resource at a
time.

6. **Deadlocks**: Deadlocks can occur when two or more threads are each waiting for a resource that
is held by another thread, resulting in a circular dependency. Properly designed synchronization can
help avoid deadlocks by ensuring that threads release resources in a coordinated manner.

7. **Challenges**: Multithreading introduces challenges like thread coordination, resource
sharing, and avoiding synchronization overhead. Developers need to carefully design and manage
threads and synchronization to avoid issues like deadlocks, livelocks, and priority inversion.

In summary, multithreading allows a program to execute multiple tasks concurrently, improving
performance and responsiveness. However, proper synchronization is essential to prevent race
conditions and deadlocks and ensure correct behavior in multithreaded applications.
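
A quick way to see why the lock used in the program below matters is to run the same kind of counter
update without any synchronization; depending on the interpreter version and timing, increments from
different threads can overlap and be lost (this sketch is illustrative only, and results vary between runs):

import threading

counter = 0

def unsafe_increment(times):
    global counter
    for _ in range(times):
        counter += 1   # read-modify-write of a shared variable, not atomic across threads

threads = [threading.Thread(target=unsafe_increment, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)         # may be less than 400000 because some increments are lost
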
Program:
import threading
import time

# Shared counter and a lock for synchronization


shared_counter = 0
lock = threading.Lock()


def increment_counter(thread_id, increments):


global shared_counter
for _ in range(increments):
# Acquire the lock to ensure exclusive access to the shared resource
with lock:
current_value = shared_counter
print(f"Thread {thread_id} is incrementing counter from {current_value}", end=" ")
time.sleep(0.1) # Simulate some processing time
shared_counter = current_value + 1
print(f"to {shared_counter}")

def main():
num_threads = int(input("Enter the number of threads: "))
increments_per_thread = int(input("Enter the number of increments per thread: "))

threads = []

# Create and start threads


for i in range(num_threads):
thread = threading.Thread(target=increment_counter, args=(i, increments_per_thread))
threads.append(thread)
thread.start()

# Wait for all threads to complete


for thread in threads:
thread.join()

print(f"Final value of shared_counter: {shared_counter}")

if __name__ == "__main__":
main()
Sample Output:
Enter the number of threads: 3
Enter the number of increments per thread: 5
Thread 0 is incrementing counter from 0 to 1
Thread 1 is incrementing counter from 1 to 2
Thread 2 is incrementing counter from 2 to 3
Thread 0 is incrementing counter from 3 to 4
Thread 1 is incrementing counter from 4 to 5
Thread 2 is incrementing counter from 5 to 6
Thread 0 is incrementing counter from 6 to 7
Thread 1 is incrementing counter from 7 to 8
Thread 2 is incrementing counter from 8 to 9
Thread 0 is incrementing counter from 9 to 10
Thread 1 is incrementing counter from 10 to 11
Thread 2 is incrementing counter from 11 to 12
Thread 0 is incrementing counter from 12 to 13
Thread 1 is incrementing counter from 13 to 14
Thread 2 is incrementing counter from 14 to 15
Thread 0 is incrementing counter from 15 to 16
Thread 1 is incrementing counter from 16 to 17
Thread 2 is incrementing counter from 17 to 18


Thread 0 is incrementing counter from 18 to 19


Thread 1 is incrementing counter from 19 to 20
Thread 2 is incrementing counter from 20 to 21
Final value of shared_counter: 21
Executed Output:
Enter the number of threads: 3
Enter the number of increments per thread: 5
Thread 0 is incrementing counter from 0 to 1
Thread 1 is incrementing counter from 1 to 2
Thread 2 is incrementing counter from 2 to 3
Thread 0 is incrementing counter from 3 to 4
Thread 1 is incrementing counter from 4 to 5
Thread 2 is incrementing counter from 5 to 6
Thread 0 is incrementing counter from 6 to 7
Thread 1 is incrementing counter from 7 to 8
Thread 2 is incrementing counter from 8 to 9
Thread 0 is incrementing counter from 9 to 10
Thread 1 is incrementing counter from 10 to 11
Thread 2 is incrementing counter from 11 to 12
Thread 0 is incrementing counter from 12 to 13
Thread 1 is incrementing counter from 13 to 14
Thread 2 is incrementing counter from 14 to 15
Thread 0 is incrementing counter from 15 to 16
Thread 1 is incrementing counter from 16 to 17
Thread 2 is incrementing counter from 17 to 18
Thread 0 is incrementing counter from 18 to 19
Thread 1 is incrementing counter from 19 to 20
Thread 2 is incrementing counter from 20 to 21
Final value of shared_counter: 21
Result:
Thus, a Python program for implementation of threading and synchronization was successfully
executed.


Write a program for implementation of memory allocation methods for fixed partitions:
a) First fit
b) Best fit
c) Worst fit
Aim:
To write a program for implementation of memory allocation methods for fixed partitions:
a) First fit b) Best fit c) Worst fit

Description:
Memory allocation methods for fixed partitions are used in operating systems to manage memory in a
system with a fixed number of memory partitions of different sizes. These methods determine how
processes are allocated to these fixed partitions based on their size requirements.

1. **First Fit**:
- In the first-fit memory allocation method, when a new process arrives and needs memory, the
system searches the memory partitions sequentially from the beginning.
- The first partition that is large enough to accommodate the process is allocated to it.
- This method is relatively simple and efficient in terms of time complexity since it stops searching
once a suitable partition is found.
- However, it may lead to fragmentation, where small blocks of unused memory are scattered
across the memory, making it challenging to allocate larger processes in the future.

2. **Best Fit**:
- In the best-fit memory allocation method, when a new process arrives, the system searches
for the smallest partition that can hold the process size.
- Among all the partitions that are large enough to accommodate the process, the one with the
smallest size is chosen for allocation.
- This method helps in reducing fragmentation, as it tries to fit processes into the smallest possible
available partitions.
- However, it may be slightly slower than the first-fit method, as it needs to search the entire
list of partitions to find the best fit.

3. **Worst Fit**:
- In the worst-fit memory allocation method, when a new process arrives, the system searches
for the largest partition available that can hold the process size.
- The largest partition among all the partitions that can accommodate the process is allocated to it.
- This method may lead to more fragmentation compared to first-fit and best-fit, as larger
partitions are used for smaller processes, leaving behind smaller unused spaces.
- It is less commonly used than first-fit and best-fit because it does not efficiently utilize memory space.

In summary, memory allocation methods for fixed partitions, such as first-fit, best-fit, and worst-fit,
determine how processes are assigned to available memory partitions based on their size requirements.
Each method has its advantages and disadvantages, affecting fragmentation and memory utilization in
the system. The choice of the allocation method depends on the specific requirements and
characteristics of the system.

Program:
def first_fit(partitions, processes):
allocation = [-1] * len(processes)
for i, process in enumerate(processes):
for j, partition in enumerate(partitions):


if partition >= process:


allocation[i] = j
partitions[j] -= process
break
return allocation

def best_fit(partitions, processes):


allocation = [-1] * len(processes)
for i, process in enumerate(processes):
best_index = -1
for j, partition in enumerate(partitions):
if partition >= process and (best_index == -1 or partition < partitions[best_index]):
best_index = j
if best_index != -1:
allocation[i] = best_index
partitions[best_index] -= process
return allocation

def worst_fit(partitions, processes):


allocation = [-1] * len(processes)
for i, process in enumerate(processes):
worst_index = -1
for j, partition in enumerate(partitions):
if partition >= process and (worst_index == -1 or partition > partitions[worst_index]):
worst_index = j
if worst_index != -1:
allocation[i] = worst_index
partitions[worst_index] -= process
return allocation

def print_allocation(allocation, processes, method):


print(f"\n{method} Allocation:")
for i, alloc in enumerate(allocation):
if alloc != -1:
print(f"Process {i} allocated to Partition {alloc}")
else:
print(f"Process {i} not allocated")

def main():
num_partitions = int(input("Enter the number of partitions: "))
partitions = [int(input(f"Enter size of partition {i}: ")) for i in range(num_partitions)]

num_processes = int(input("Enter the number of processes: "))


processes = [int(input(f"Enter size of process {i}: ")) for i in range(num_processes)]

# Copy of partitions for each method


partitions_copy = partitions.copy()

# First Fit
allocation_first_fit = first_fit(partitions_copy, processes)
print_allocation(allocation_first_fit, processes, "First Fit")

Operating Systems Laboratory JNTUA College of Engineering (Autonomous) Pulivendula


Date: ExP.No.: P a g e | 37

# Restore partitions and allocate using Best Fit


partitions_copy = partitions.copy()
allocation_best_fit = best_fit(partitions_copy, processes)
print_allocation(allocation_best_fit, processes, "Best Fit")

# Restore partitions and allocate using Worst Fit


partitions_copy = partitions.copy()
allocation_worst_fit = worst_fit(partitions_copy, processes)
print_allocation(allocation_worst_fit, processes, "Worst Fit")

if __name__ == "__main__":
main()
Sample Output:
Enter the number of partitions: 3
Enter size of partition 0: 100
Enter size of partition 1: 500
Enter size of partition 2: 200
Enter the number of processes: 4
Enter size of process 0: 212
Enter size of process 1: 417
Enter size of process 2: 112
Enter size of process 3: 426

First Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 1
Process 3 not allocated

Best Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 2
Process 3 not allocated

Worst Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 1
Process 3 not allocated
Executed Output:
Enter the number of partitions: 3
Enter size of partition 0: 100
Enter size of partition 1: 500
Enter size of partition 2: 200
Enter the number of processes: 4
Enter size of process 0: 212
Enter size of process 1: 417
Enter size of process 2: 112
Enter size of process 3: 426

First Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 1
Process 3 not allocated

Best Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 2
Process 3 not allocated

Worst Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 1
Process 3 not allocated
Result:
Thus a Python program for the implementation of memory allocation methods for fixed partitions is
successfully executed.


Write a program for the implementation of paging technique of memory management:
Aim:
To write a program for the implementation of the paging technique of memory management.
Description:
Paging is a memory management technique used by operating systems to handle the organization and
allocation of physical memory. It allows the operating system to present a uniform, logical view of
memory to processes, while efficiently managing the available physical memory.

In the paging technique, the physical memory is divided into fixed-size blocks called "frames," and the
logical memory used by processes is divided into fixed-size blocks called "pages." These pages are usually
of the same size as the frames. The size of a page is typically a power of 2, such as 4 KB or 8 KB.

When a process needs to access a specific memory address, the virtual address generated by the
CPU is divided into two parts: a "page number" and an "offset." The page number is used as an
index to access a page table, which is a data structure maintained by the operating system to keep
track of the mapping between virtual pages and physical frames.

The page table provides the corresponding physical frame number for the given page number. The
offset is used to determine the exact location within the physical frame where the data is stored.
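For example, with the 4-byte pages used in the program below, virtual address 14 splits into page number 14 // 4 = 3 and offset 14 % 4 = 2; if the page table maps page 3 to frame 1, the corresponding physical address is 1 * 4 + 2 = 6. A minimal sketch of this translation (the page_table contents are an assumed example):

PAGE_SIZE = 4
page_table = {3: 1}  # assumed mapping: virtual page 3 -> physical frame 1

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE    # page number
    offset = virtual_address % PAGE_SIZE   # offset within the page
    frame = page_table[page]               # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset      # physical address

print(translate(14))  # prints 6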

If a page is not currently present in physical memory (i.e., it's not loaded into a frame), the CPU
generates a page fault. The operating system then retrieves the required page from the secondary
storage (e.g., hard disk) and loads it into an available frame in physical memory. The page table is
updated to reflect the new mapping between the virtual page and the physical frame.

Paging allows for several advantages in memory management:

1. **Simplified Address Translation:** With paging, the CPU only needs to perform a single
level of translation, from virtual addresses to physical addresses, making the address translation
process more efficient.

2. **Non-contiguous Allocation:** Processes can be allocated non-contiguous sections of physical
memory, as pages from a single process can be scattered across various frames in memory.

3. **Protection and Isolation:** Each page is assigned specific access permissions, allowing the
operating system to enforce memory protection and isolate processes from one another.

4. **Memory Sharing:** Multiple processes can share the same physical page if they need access to the
same data, such as code libraries or read-only data.

Paging, however, requires the overhead of maintaining the page table, and page faults can lead to
performance penalties due to the need to fetch data from secondary storage. To mitigate these
issues, modern processors often incorporate hardware support, such as Translation Lookaside
Buffers (TLBs), to speed up address translation and reduce the impact of page faults.

Program:
import random

# Constants
PAGE_SIZE = 4                         # Size of a page in bytes
MEMORY_SIZE = 16                      # Total size of memory in bytes
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE  # Number of frames in physical memory

# Initialize memory
memory = [-1] * NUM_PAGES  # -1 indicates an empty frame

def page_number(address):
    """Returns the page number for a given address."""
    return address // PAGE_SIZE

def frame_number(page):
    """Returns the frame number where a page is stored, or -1 if it is not resident."""
    return memory.index(page) if page in memory else -1

def page_fault(address, page_table):
    """Loads the page for an address into memory if it is not already resident."""
    page = page_number(address)
    frame = frame_number(page)

    if frame == -1:  # Page not in memory
        empty_frame = memory.index(-1) if -1 in memory else None
        if empty_frame is not None:
            memory[empty_frame] = page
            print(f"Page {page} loaded into frame {empty_frame}.")
        else:
            # No empty frame: replace the page in frame 0 (simple FIFO-style placeholder)
            replaced_page = memory[0]
            memory[0] = page
            page_table.pop(replaced_page, None)
            print(f"Page {replaced_page} replaced with page {page} in frame 0.")
        page_table[page] = memory.index(page)
    else:
        print(f"Page {page} found in frame {frame}.")

def access_memory(address, page_table):
    """Accesses a memory address and handles page faults."""
    page = page_number(address)
    print(f"Accessing address {address} (Page {page})")
    page_fault(address, page_table)

def main():
    page_table = {}  # Dictionary to store the page table

    # Simulate memory access with random addresses
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")

    for address in addresses:
        access_memory(address, page_table)

    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main()

Sample Output:
Accessing memory addresses: [6, 1, 3, 7, 14, 5, 12, 11, 2, 9]

Accessing address 6 (Page 1)


Page 1 loaded into frame 0.
Accessing address 1 (Page 0)
Page 0 loaded into frame 1.
Accessing address 3 (Page 0)
Page 0 found in frame 1.
Accessing address 7 (Page 1)
Page 1 found in frame 0.
Accessing address 14 (Page 3)
Page 3 loaded into frame 2.
Accessing address 5 (Page 1)
Page 1 found in frame 0.
Accessing address 12 (Page 3)
Page 3 found in frame 2.
Accessing address 11 (Page 2)
Page 2 loaded into frame 3.
Accessing address 2 (Page 0)
Page 0 found in frame 1.
Accessing address 9 (Page 2)
Page 2 found in frame 3.

Final state of memory frames:


Frame 0: Page 1
Frame 1: Page 0
Frame 2: Page 3
Frame 3: Page 2


Executed Output:
Accessing memory addresses: [6, 1, 3, 7, 14, 5, 12, 11, 2, 9]

Accessing address 6 (Page 1)


Page 1 loaded into frame 0.
Accessing address 1 (Page 0)
Page 0 loaded into frame 1.
Accessing address 3 (Page 0)
Page 0 found in frame 1.
Accessing address 7 (Page 1)
Page 1 found in frame 0.
Accessing address 14 (Page 3)
Page 3 loaded into frame 2.
Accessing address 5 (Page 1)
Page 1 found in frame 0.
Accessing address 12 (Page 3)
Page 3 found in frame 2.
Accessing address 11 (Page 2)
Page 2 loaded into frame 3.
Accessing address 2 (Page 0)
Page 0 found in frame 1.
Accessing address 9 (Page 2)
Page 2 found in frame 3.

Final state of memory frames:


Frame 0: Page 1
Frame 1: Page 0
Frame 2: Page 3
Frame 3: Page 2
Result:
Thus a Python program to implement the paging technique of memory management is successfully
executed.


Write a program for the implementation of the following page replacement algorithms:
Aim:
To write a program for the implementation of page replacement algorithms.
Description:
Page replacement algorithms are used in computer operating systems to manage the memory
efficiently when there is a need to swap pages between the main memory (RAM) and the secondary
storage (usually a hard disk) due to limited physical memory. These algorithms decide which page to
evict from memory when there is a page fault (a requested page is not present in RAM) and a new
page needs to be loaded.

1. FIFO (First-In-First-Out):
FIFO is one of the simplest page replacement algorithms. It works based on the principle of the first
page that entered the memory will be the first one to be replaced. In other words, the page that has
been in memory the longest will be evicted. This algorithm uses a queue data structure to keep track of
the order in which pages were brought into memory. However, FIFO may suffer from the "Belady's
Anomaly," where increasing the number of frames can lead to more page faults.

2. LRU (Least Recently Used):


The LRU algorithm replaces the page that has not been accessed for the longest period of time. It is
based on the idea that the least recently used page is the best candidate for replacement since it is
likely that pages accessed long ago are less likely to be needed in the near future. To implement LRU, a
stack or a linked list of pages ordered by their recent usage timestamp is maintained. The main
challenge with LRU is that it requires tracking and updating the usage information for all pages, which
can be computationally expensive.
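A compact way to count LRU faults is to keep the resident pages in an OrderedDict ordered by recency, as in the minimal sketch below (the reference string is an assumed example; the full frame-level LRU simulation appears later in this experiment).

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    recency = OrderedDict()   # least recently used page sits at the front
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(recency) == num_frames:
                recency.popitem(last=False)  # evict the least recently used page
            recency[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6 faults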

3. LFU (Least Frequently Used):


LFU algorithm replaces the page that has been used the least number of times. It works on the
assumption that the pages with the least frequency of access are less likely to be used in the future.
Each page has a counter associated with it that is incremented each time the page is accessed. The page
with the lowest count is selected for replacement when needed. However, LFU can face issues when a
page is accessed frequently for a short period and then not accessed again, leading to unnecessary
retention of such pages.

4. MFU (Most Frequently Used):


MFU is the opposite of LFU, where it selects the page with the highest access count for replacement.
The idea is to retain pages that have been heavily used, assuming they are likely to be needed again
soon. MFU can work well in scenarios where some pages are accessed very frequently, but it might not
be the best choice in all cases, especially if a page is used heavily initially but becomes obsolete later.

5. OPTIMAL (Optimal Page Replacement):


The OPTIMAL algorithm is a theoretical algorithm that serves as the upper bound for other page
replacement algorithms. It makes the assumption that it knows the future sequence of page requests
and selects the page that will not be used for the longest time in the future for replacement. In practice,
this algorithm is impossible to implement since it requires knowledge of future events. However,
OPTIMAL is often used as a benchmark to compare the performance of other algorithms.
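Although OPTIMAL cannot be used online, it can be simulated offline when the whole reference string is known, as in the minimal sketch below (the reference string is an assumed example).

def optimal_faults(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        future = reference_string[i + 1:]
        victim = max(frames, key=lambda p: future.index(p) if p in future else float('inf'))
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 6 faults (the minimum possible here)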

Each page replacement algorithm has its advantages and disadvantages. The best choice of algorithm
depends on the specific use case, workload characteristics, and available hardware resources. Real-world
implementations often involve a trade-off between algorithm complexity and performance.


Algorithms:
FIFO page replacement algorithm:
Program:
import random
from collections import deque

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_queue = deque()  # To keep track of pages for FIFO

def fifo_page_replacement(address, page_table):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_queue) >= NUM_PAGES:
            # Evict the oldest page (first in, first out)
            old_page = page_queue.popleft()
            memory[frame_number(old_page)] = -1
            page_table.pop(old_page, None)
        empty_frame = memory.index(-1)  # An empty frame exists at this point
        memory[empty_frame] = page
        page_queue.append(page)
        page_table[page] = empty_frame
        print(f"Page {page} loaded into frame {page_table[page]}.")

def frame_number(page):
    return memory.index(page) if page in memory else -1

def access_memory_fifo(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        fifo_page_replacement(address, page_table)

def main_fifo():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_fifo(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_fifo()
Sample Output:
Accessing memory addresses: [9, 5, 8, 13, 3, 14, 2, 6, 11, 7]

Page fault at address 9 (Page 2)
Page 2 loaded into frame 0.
Page fault at address 5 (Page 1)
Page 1 loaded into frame 1.
Page 2 found in frame 0.
Page fault at address 13 (Page 3)
Page 3 loaded into frame 2.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 3.
Page 3 found in frame 2.
Page 0 found in frame 3.
Page 1 found in frame 1.
Page 2 found in frame 0.
Page 1 found in frame 1.

Final state of memory frames:
Frame 0: Page 2
Frame 1: Page 1
Frame 2: Page 3
Frame 3: Page 0
Executed Output:
Accessing memory addresses: [9, 5, 8, 13, 3, 14, 2, 6, 11, 7]

Page fault at address 9 (Page 2)
Page 2 loaded into frame 0.
Page fault at address 5 (Page 1)
Page 1 loaded into frame 1.
Page 2 found in frame 0.
Page fault at address 13 (Page 3)
Page 3 loaded into frame 2.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 3.
Page 3 found in frame 2.
Page 0 found in frame 3.
Page 1 found in frame 1.
Page 2 found in frame 0.
Page 1 found in frame 1.

Final state of memory frames:
Frame 0: Page 2
Frame 1: Page 1
Frame 2: Page 3
Frame 3: Page 0


LFU:
Program:
from collections import defaultdict

n = int(input("Enter the number of pages: "))
capacity = int(input("Enter the number of page frames: "))
pages = [0] * n
print("Enter the pages: ")
for i in range(n):
    pages[i] = int(input())

pf = 0                 # Page fault count
v = []                 # Pages currently in frames, ordered by frequency
mp = defaultdict(int)  # Access frequency of each page

for i in range(n):
    if pages[i] not in v:
        # Page fault: evict the least frequently used page if the frames are full
        if len(v) == capacity:
            mp[v[0]] = mp[v[0]] - 1
            v.pop(0)
        v.append(pages[i])
        mp[pages[i]] += 1
        pf = pf + 1
    else:
        # Page hit: update the frequency and move the page to the most recent position
        mp[pages[i]] += 1
        v.remove(pages[i])
        v.append(pages[i])
    # Keep the list ordered by frequency, least frequently used at the front
    k = len(v) - 2
    while k >= 0 and mp[v[k]] > mp[v[k + 1]]:
        v[k], v[k + 1] = v[k + 1], v[k]
        k -= 1

print("The number of page faults for LFU algorithm is:", pf)


Sample Output:
Enter the number of pages: 7
Enter the number of page frames: 3
Enter the pages:
7
0
1
2
0
3
0
The number of page faults for LFU algorithm is: 5
Executed Output:

Enter the number of pages: 7


Enter the number of page frames: 3
Enter the pages:
7
0
1
2
0
3
0
The number of page faults for LFU algorithm is: 5


LRU:
Program:
import random
from collections import OrderedDict

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_order = OrderedDict()  # To keep track of pages for LRU (least recently used first)

def lru_page_replacement(address, page_table):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_order) >= NUM_PAGES:
            # Evict the least recently used page
            oldest_page, frame_to_replace = page_order.popitem(last=False)
            page_table.pop(oldest_page, None)
            memory[frame_to_replace] = -1
        empty_frame = memory.index(-1)
        memory[empty_frame] = page
        page_table[page] = empty_frame
        page_order[page] = empty_frame
        print(f"Page {page} loaded into frame {page_table[page]}.")

def access_memory_lru(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        page_order.move_to_end(page)  # Mark the page as most recently used
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        lru_page_replacement(address, page_table)

def main_lru():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_lru(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_lru()
Sample Output:

Accessing memory addresses: [7, 12, 3, 0, 11, 8, 7, 1, 14, 6]

Page fault at address 7 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page 2 found in frame 3.
Page 2 found in frame 3.
Page 1 found in frame 0.
Page 0 found in frame 2.
Page 3 found in frame 1.
Page 1 found in frame 0.

Final state of memory frames:
Frame 0: Page 1
Frame 1: Page 3
Frame 2: Page 0
Frame 3: Page 2

Executed Output:

Accessing memory addresses: [7, 12, 3, 0, 11, 8, 7, 1, 14, 6]

Page fault at address 7 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page 2 found in frame 3.
Page 2 found in frame 3.
Page 1 found in frame 0.
Page 0 found in frame 2.
Page 3 found in frame 1.
Page 1 found in frame 0.

Final state of memory frames:
Frame 0: Page 1
Frame 1: Page 3
Frame 2: Page 0
Frame 3: Page 2


Optimal page replacement algorithm:
Program:
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES

def optimal_page_replacement(page, page_table, future_pages):
    """Loads a page, evicting the resident page that is not needed for the longest time."""
    if len(page_table) >= NUM_PAGES:
        # For every resident page, find when it will next be used
        future_use = []
        for frame_page in memory:
            if frame_page in future_pages:
                future_use.append(future_pages.index(frame_page))
            else:
                future_use.append(float('inf'))  # Never used again
        replace_index = future_use.index(max(future_use))
        replaced_page = memory[replace_index]
        memory[replace_index] = page
        page_table.pop(replaced_page, None)
        print(f"Page {replaced_page} replaced with page {page} in frame {replace_index}.")
    else:
        empty_frame = memory.index(-1)
        memory[empty_frame] = page
    page_table[page] = memory.index(page)
    print(f"Page {page} loaded into frame {page_table[page]}.")

def access_memory_optimal(address, page_table, future_pages):
    page = address // PAGE_SIZE
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        optimal_page_replacement(page, page_table, future_pages)

def main_optimal():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    pages = [address // PAGE_SIZE for address in addresses]
    print(f"Accessing memory addresses: {addresses}")
    for i, address in enumerate(addresses):
        # Pages that will be referenced after the current access
        access_memory_optimal(address, page_table, pages[i + 1:])
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_optimal()
Sample Output:
Accessing memory addresses: [7, 12, 3, 0, 11, 8, 7, 1, 14, 6]
Page fault at address 7 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page 2 found in frame 3.
Page 2 found in frame 3.
Page 1 found in frame 0.
Page 0 found in frame 2.
Page 3 found in frame 1.
Page 1 found in frame 0.

Final state of memory frames:
Frame 0: Page 1
Frame 1: Page 3
Frame 2: Page 0
Frame 3: Page 2
Executed Output:
Accessing memory addresses: [7, 12, 3, 0, 11, 8, 7, 1, 14, 6]
Page fault at address 7 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 3 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page 2 found in frame 3.
Page 2 found in frame 3.
Page 1 found in frame 0.
Page 0 found in frame 2.
Page 3 found in frame 1.
Page 1 found in frame 0.

Final state of memory frames:
Frame 0: Page 1
Frame 1: Page 3
Frame 2: Page 0
Frame 3: Page 2


MFU page replacement algorithm:


Program:
import random
from collections import defaultdict

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_frequency = defaultdict(int)  # Dictionary to keep track of page usage frequency

def mfu_page_replacement(address, page_table):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_table) >= NUM_PAGES:
            # Find the most frequently used resident page and replace it
            most_frequent_page = max(page_table, key=lambda p: page_frequency[p])
            frame_to_replace = page_table.pop(most_frequent_page)
            memory[frame_to_replace] = -1
            page_frequency.pop(most_frequent_page, None)
            print(f"Page {most_frequent_page} replaced with page {page} in frame {frame_to_replace}.")
        frame_to_replace = memory.index(-1)
        memory[frame_to_replace] = page
        page_table[page] = frame_to_replace
        page_frequency[page] += 1
        print(f"Page {page} loaded into frame {page_table[page]}.")

def access_memory_mfu(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        page_frequency[page] += 1  # Count this access
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        mfu_page_replacement(address, page_table)

def main_mfu():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_mfu(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_mfu()
Sample Output:
Accessing memory addresses: [6, 12, 2, 0, 8, 6, 3, 12, 7, 1]
Page fault at address 6 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 2 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page 2 found in frame 3.
Page 1 found in frame 0.
Page 0 found in frame 2.
Page 3 found in frame 1.
Page 1 found in frame 0.
Page 0 found in frame 2.

Final state of memory frames:
Frame 0: Page 1
Frame 1: Page 3
Frame 2: Page 0
Frame 3: Page 2
Executed Output:
Accessing memory addresses: [6, 12, 2, 0, 8, 6, 3, 12, 7, 1]
Page fault at address 6 (Page 1)
Page 1 loaded into frame 0.
Page fault at address 12 (Page 3)
Page 3 loaded into frame 1.
Page fault at address 2 (Page 0)
Page 0 loaded into frame 2.
Page fault at address 0 (Page 2)
Page 2 loaded into frame 3.
Page 2 found in frame 3.
Page 1 found in frame 0.
Page 0 found in frame 2.
Page 3 found in frame 1.
Page 1 found in frame 0.
Page 0 found in frame 2.

Final state of memory frames:
Frame 0: Page 1
Frame 1: Page 3
Frame 2: Page 0
Frame 3: Page 2

Result:
Thus Python programs to implement page replacement algorithms are successfully executed.
