

Basics of UNIX commands:


Aim:
To implement basic UNIX commands.
Description:
UNIX commands are the fundamental tools used in Unix-like operating systems, including Linux and macOS, to interact with the system, manage files, manipulate data, and perform various administrative tasks. Here is a list of some common UNIX commands along with brief explanations:
Commands/program:
1. ls: Lists the files and directories in the current directory.
2. cd: Changes the current directory to the specified directory.
3. pwd: Prints the current working directory.
4. mkdir: Creates a new directory.
5. rmdir: Removes an empty directory.
6. cp: Copies files or directories.
7. mv: Moves or renames files or directories.
8. rm: Removes files or directories.
9. cat: Concatenates and displays the contents of files.
10. less: Views the contents of a file interactively.
11. head: Displays the first few lines of a file.
12. tail: Displays the last few lines of a file.
13. grep: Searches for a specific pattern in files.
14. find: Searches for files and directories based on various criteria.
15. chmod: Changes the permissions of files and directories.
16. chown: Changes the owner of files and directories.
17. ps: Lists the currently running processes.
18. top: Displays real-time information about system processes.
19. man: Displays the manual pages for commands.
20. tar: Archives files into a tarball or extracts files from a tarball.
21. gzip: Compresses files using gzip compression.
22. unzip: Extracts files from a ZIP archive.
23. ssh: Connects to a remote server using the Secure Shell (SSH) protocol.
24. scp: Copies files securely between hosts over SSH.
25. sed: Stream editor for text manipulation.
26. awk: A programming language used for text processing and data extraction.
27. du: Displays disk usage information for files and directories.
28. df: Displays information about available disk space.
29. history: Shows the command history of the current user.
30. ping: Sends ICMP echo requests to a specified host for network troubleshooting.

These are just a few examples of the many commands available in UNIX-like systems. Each command has its
own set of options and arguments that can be used to customize its behavior. For more detailed
information on a command, refer to its manual page by using the "man" command
followed by the command name (e.g., "man ls" for the manual page of the "ls" command).


Output:


Result:
Thus, some of the basic commands of the UNIX operating system were executed successfully.


Write programs using the following system calls of the UNIX operating system: fork, exec, getpid, exit, wait,
close, stat, opendir, readdir.

Aim:
To write a program using the following system calls of the UNIX operating system: fork, exec, getpid, exit, wait,
close, stat, opendir, and readdir.

Description:
1. `fork`: The `fork` system call creates a new process (child process) that is an exact copy of the calling
process (parent process). After calling `fork`, the child process has its own unique process ID (PID) and runs
independently of the parent process. It is commonly used to implement parallel processing and concurrent
execution.
2. `exec`: The `exec` family of system calls (e.g., `execl`, `execv`, `execve`, etc.) is used to replace the current
process's memory image with a new program. It loads a new program into the current process's memory
space, replacing the previous program's code and data. This allows the process to run a different program
while keeping the same PID.
3. `getpid`: The `getpid` system call returns the process ID (PID) of the calling process. It is often used to
identify the current process uniquely.
4. `exit`: The `exit` system call is used to terminate the current process voluntarily. When a process calls `exit`,
it cleans up its resources and returns an exit status to its parent process, indicating the termination condition
(usually 0 for success and non-zero for error).
5. `wait`: The `wait` system call is used by a parent process to wait for the termination of its child process(es).
When a process calls `wait`, it suspends execution until one of its child processes exits. It also allows the
parent process to retrieve the exit status of the terminated child.
6. `close`: The `close` system call is used to close a file descriptor. File descriptors are small integers that
represent open files in a process. When a file is no longer needed, the `close` call releases the associated
resources and makes the file descriptor available for reuse.
7. `stat`: The `stat` system call retrieves information about a file (e.g., size, permissions, timestamps, etc.). It
takes the file path as an argument and fills a `struct stat` with the file's metadata.
8. `opendir`: The `opendir` system call opens a directory stream corresponding to the given directory path. It
allows a process to read the contents of the directory using subsequent calls to `readdir`.
9. `readdir`: The `readdir` system call is used to read the contents of a directory opened with `opendir`. It
returns a pointer to a `struct dirent` that contains information about the next directory entry (e.g., file name,
file type).

These system calls are fundamental to the UNIX operating system's functionality and provide an interface for
user programs to interact with the kernel and perform various operations related to processes, file
management, and directory handling.

Program:
import os

# fork and getpid
pid = os.fork()
if pid > 0:
    print(f"Parent process (PID: {os.getpid()})")
    os.wait()
else:
    print(f"Child process (PID: {os.getpid()})")
    os._exit(0)

# exec: replace a child process's image with the "ls -l" program
pid = os.fork()
if pid == 0:
    os.execvp("ls", ["ls", "-l"])
os.wait()

# wait
pid = os.fork()
if pid > 0:
    child_pid, exit_status = os.wait()
    print(f"Child process (PID: {child_pid}) exited with status: {exit_status}")
else:
    print(f"Child process (PID: {os.getpid()})")
    os._exit(0)

# close
file_descriptor = os.open("file.txt", os.O_CREAT | os.O_WRONLY)
os.close(file_descriptor)

# stat
file_info = os.stat("file.txt")
print(f"File size: {file_info.st_size} bytes")
print(f"Last accessed: {file_info.st_atime}")

# opendir and readdir (exposed in Python through os.scandir)
with os.scandir(".") as directory:
    for entry in directory:
        print(entry.name)

# exit
print("Exiting the program...")
os._exit(0)

Output:

Result:
Thus, a Python program implementing the UNIX system calls fork, exec, getpid, exit, wait, close, stat, opendir,
and readdir was executed successfully.

Write a program to simulate UNIX commands like cp, ls, grep.


Aim:
To write a program to simulate UNIX commands like cp, ls, and grep.

Description:
1. `cp` (Copy): The `cp` command is used to copy files or directories from one location to another. It takes two
arguments: the source file/directory and the destination file/directory. When copying a single file, the syntax
is `cp source_file destination_file`, and when copying a directory and its contents, the syntax is `cp -r
source_directory destination_directory`.

2. `ls` (List): The `ls` command is used to list files and directories in a specified directory. Without any
arguments, it displays the contents of the current directory. It can be used with various options to customize
the output, such as showing hidden files, displaying file sizes, sorting files by different criteria, etc.

3. `grep` (Search): The `grep` command searches files for lines that match a given pattern (plain text or a
regular expression) and prints the matching lines.

Program:
import shutil
import os
import re

# Function to copy a file
def copy_file(source, destination):
    shutil.copy2(source, destination)
    print(f"File {source} copied to {destination}.")

# Function to list files in a directory
def list_files(directory="."):
    files = os.listdir(directory)
    for file in files:
        print(file)

# Function to search for a pattern in files
def grep(pattern, file):
    with open(file, "r") as f:
        lines = f.readlines()
    for line in lines:
        if re.search(pattern, line):
            print(line.strip())

# Copy a file
print("Copying is done here")
copy_file("source_file.txt", "destination_file.txt")

# List files in a directory
print("The list of files in the directory are ")
list_files()

# Search for a pattern in a file
grep("pattern", "file.txt")

Output:

Result:
Thus, a Python program to simulate UNIX commands like cp, ls, and grep was executed successfully.


SHELL programming
Aim:
To write some programs and implement them using shell programming.

Description:
Shell programming refers to writing scripts or programs using shell scripting languages to interact with the
operating system's shell (command-line interface). In Unix-like operating systems, the default shell is usually
the Bash shell (Bourne Again SHell), although there are other shells like Zsh, Csh, and Fish.

Shell scripts are plain text files containing a series of commands that are executed in sequence by the shell.
Shell programming is widely used for automating tasks, managing system configurations, and creating custom
utilities.
Here's a brief description of some key aspects of shell programming:
1. **Script Shebang:** The first line of a shell script typically starts with a "shebang" (#!) followed by the path
to the shell interpreter. For example, #!/bin/bash indicates that the script should be executed using the Bash
shell.

2. **Comments:** Shell scripts support comments using the '#' symbol. Comments are ignored by the shell
and are used to provide explanations or context within the script.

3. **Variables:** Shell scripts use variables to store data, such as strings or numbers. Variables are defined
without spaces around the '=' sign and are accessed using the '$' symbol. Example: `name="John"`, `echo
"Hello, $name!"` will output "Hello, John!"

4. **Command Substitution:** Enclosing a command within backticks (`) or $(...) allows you to capture its
output into a variable. Example: `files=$(ls)` stores the output of the `ls` command in the variable `files`.

5. **Control Structures:** Shell scripts support control structures like if-else, for loops, while loops, and case
statements. These structures enable conditional branching and repetitive execution of commands.

6. **Functions:** Shell scripts can define functions to encapsulate a set of commands that can be reused
throughout the script.

7. **Input/Output Redirection:** Shell scripts can redirect input and output streams. For example,
`command > output.txt` will redirect the output of `command` to a file called "output.txt."

8. **Command-Line Arguments:** Shell scripts can accept command-line arguments passed when executing
the script. The arguments are accessed using variables like $1, $2, etc.

9. **Exit Status:** Each command in a shell script returns an exit status. The special variable `$?` holds the
exit status of the last executed command.

10. **Conditional Execution:** Commands can be executed conditionally based on the success or failure of
preceding commands. For example, `command1 && command2` will execute `command2` only if `command1`
succeeds.

Shell programming is lightweight and efficient for automating repetitive tasks and managing system
configurations. It is especially useful for system administrators, developers, and power users who work
extensively on the command-line interface.


Programs:
1) Factorial.sh:
Aim: To write a shell scripting program to find factorial of a number
Program:
#!/bin/bash
factorial() {
    local num=$1
    local fact=1
    for (( i=1; i<=num; i++ ))
    do
        fact=$((fact * i))
    done
    echo $fact
}
echo "Enter a number: "
read num
result=$(factorial $num)
echo "The factorial of $num is: $result"
Output:

2) number_sum.sh:
Aim: To write a shell program to find sum of given list of numbers
Program:
#!/bin/bash
sum=0
while true
do
    echo "Enter a number (0 to exit): "
    read num
    if [ $num -eq 0 ]; then
        break
    fi
    sum=$((sum + num))
done
echo "The sum is: $sum"

Output:


3) word_count.sh:
Aim: To write a shell program to find the number of words in sentence
Program:
#!/bin/bash
echo "Enter a sentence: "
read sentence
word_count=$(echo "$sentence" | wc -w)
echo "Number of words in the sentence: $word_count"

Sample Output:

4) Password_generator.sh:
Aim: To write shell program to generate random password.
Program:
#!/bin/bash
echo "Enter the desired password length: "
read length
password=$(openssl rand -base64 $((length * 3 / 4)) | tr -d '\n' | head -c $length ; echo)
echo "Generated password: $password"

Output:


5) System_information.sh:
Aim: To write a shell program to display system information
Program:
#!/bin/bash
echo "System
Information:" echo
"Hostname: $(hostname)"
echo "Operating System:
$(uname -s)"echo "Kernel
Version: $(uname -r)"
echo "Available Memory: $(free -h | awk '/^Mem:/ {print
$4}')"echo "Disk Usage: $(df -h / | awk '/^\// {print $5}')"
echo "Logged-in Users: $(who | wc -l)"

Output:

Result:
Thus, some shell scripting programs were implemented and executed successfully.


CPU scheduling algorithms

Aim:
To write programs to implement CPU scheduling algorithms.

Description:
CPU scheduling algorithms are essential mechanisms in operating systems that determine which process gets to use the CPU at any given time. These algorithms play a crucial role in enhancing system efficiency and responsiveness. Here's an overview of common CPU scheduling algorithms:

1. First-Come, First-Served (FCFS): FCFS is a non-preemptive scheduling algorithm where processes
are executed in the order they arrive in the ready queue. It is simple to implement but may lead to
longer average waiting times, particularly if longer processes arrive first (a short worked example follows this list).

2. Shortest Job Next (SJN) / Shortest Job First (SJF): SJN or SJF selects the process with the
smallest burst time (execution time) next. This algorithm aims to minimize average waiting time and
turnaround time by prioritizing shorter jobs. However, it requires accurate estimation of burst times to
be effective.

3. Priority Scheduling: Priority scheduling assigns a priority to each process, and the CPU is allocated
to the highest priority process available. It can be preemptive (priorities may change dynamically) or
non-preemptive (once a process starts, it continues until it finishes). Priority can be based on factors
like process characteristics, deadlines, or system policies.

4. Round Robin (RR): RR allocates a fixed time slice (quantum) of CPU time to each process in a
circular order. If a process doesn't complete within its time quantum, it's preempted and placed at the
end of the queue. RR ensures fair CPU allocation among processes and is suitable for time-sharing
systems. However, it may lead to increased overhead due to frequent context switches with smaller
time slices.

5. Shortest Remaining Time First (SRTF): SRTF is a preemptive version of SJN where the process
with the smallest remaining execution time is selected for execution. It aims to minimize waiting time
by allowing shorter jobs to complete quickly. SRTF requires accurate prediction or estimation of
remaining execution times, which can be challenging in dynamic environments.
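A short worked example of FCFS: if P1 (burst 5) arrives at time 0 and P2 (burst 3) arrives at time 1, FCFS runs P1 from 0 to 5 and P2 from 5 to 8. P2's turnaround time is 8 - 1 = 7 and its waiting time is 7 - 3 = 4 time units, while P1 does not wait at all.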


Algorithms:
1) First Come First Serve:
Program:
def fcfs_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    waiting_time = [0] * n
    completion_time = [0] * n
    turnaround_time = [0] * n
    current_time = 0

    # Calculate completion time, waiting time, and turnaround time
    for i in range(n):
        if current_time < arrival_time[i]:
            current_time = arrival_time[i]
        completion_time[i] = current_time + burst_time[i]
        turnaround_time[i] = completion_time[i] - arrival_time[i]
        waiting_time[i] = turnaround_time[i] - burst_time[i]
        current_time = completion_time[i]

    # Print the Gantt chart
    print("\nGantt chart:")
    for i in range(n):
        print(f"| {processes[i]} {completion_time[i]} ", end="")
    print("|")

    # Print the process information
    print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")

    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []

    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)

    return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()
fcfs_scheduling(processes, arrival_time, burst_time)


SAMPLE OUTPUT:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 5
Enter the arrival time of p1: 2
Enter the burst time of p1: 3
Enter the arrival time of p2: 4
Enter the burst time of p2: 2

Gantt chart:
| p0 5 | p1 8 | p2 10 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 5 5 0 5
p1 2 3 8 3 6
p2 4 2 10 4 6

Average waiting time: 2.33


Executed Output:


Shortest Job first:


Program:
def sjf_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    waiting_time = [0] * n
    completion_time = [0] * n
    turnaround_time = [0] * n
    remaining_time = burst_time.copy()
    current_time = 0
    completed_processes = 0

    # Process the jobs
    while completed_processes < n:
        # Find the process with the shortest burst time that has arrived
        min_time = float('inf')
        min_index = -1
        for i in range(n):
            if arrival_time[i] <= current_time and remaining_time[i] < min_time and remaining_time[i] > 0:
                min_time = remaining_time[i]
                min_index = i

        if min_index == -1:
            current_time += 1
            continue

        # Update time and remaining time
        current_time += remaining_time[min_index]
        completion_time[min_index] = current_time
        turnaround_time[min_index] = completion_time[min_index] - arrival_time[min_index]
        waiting_time[min_index] = turnaround_time[min_index] - burst_time[min_index]
        remaining_time[min_index] = 0
        completed_processes += 1

    # Print the Gantt chart (in order of completion)
    print("\nGantt chart:")
    print("|", end="")
    for i in sorted(range(n), key=lambda i: completion_time[i]):
        print(f" {processes[i]} {completion_time[i]} |", end="")
    print()

    # Print the process information
    print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")

    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []

    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)

    return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()
sjf_scheduling(processes, arrival_time, burst_time)
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 1

Gantt chart:
| p0 8 | p2 9 | p1 13 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 8 0 8
p1 1 4 13 8 12
p2 2 1 9 6 7

Average waiting time: 4.67


Executed Output:


Priority scheduling algorithm:


Program:
def priority_scheduling(processes, arrival_time, burst_time, priority):
    n = len(processes)
    waiting_time = [0] * n
    completion_time = [0] * n
    turnaround_time = [0] * n
    sorted_processes = sorted(range(n), key=lambda i: (arrival_time[i], -priority[i]))

    current_time = 0
    completed_processes = 0

    while completed_processes < n:
        # Select the processes that have arrived and are not yet completed
        available_processes = [i for i in sorted_processes
                               if arrival_time[i] <= current_time and completion_time[i] == 0]
        if not available_processes:
            current_time += 1
            continue

        # Find the process with the highest priority (lowest number)
        current_process = min(available_processes, key=lambda i: priority[i])
        burst = burst_time[current_process]

        # Update times
        completion_time[current_process] = current_time + burst
        turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
        waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
        current_time += burst
        completed_processes += 1

    # Print the Gantt chart (in order of completion)
    print("\nGantt chart:")
    for i in sorted(range(n), key=lambda i: completion_time[i]):
        print(f"| {processes[i]} {completion_time[i]} ", end="")
    print("|")

    # Print the process information
    print("\nProcess\tArrival Time\tBurst Time\tPriority\tCompletion Time\tWaiting Time\tTurnaround Time")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{priority[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")

    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []
    priority = []

    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        pr = int(input(f"Enter the priority of {processes[i]} (lower number means higher priority): "))
        arrival_time.append(at)
        burst_time.append(bt)
        priority.append(pr)

    return processes, arrival_time, burst_time, priority

processes, arrival_time, burst_time, priority = get_inputs()
priority_scheduling(processes, arrival_time, burst_time, priority)
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the priority of p0 (lower number means higher priority): 2
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the priority of p1 (lower number means higher priority): 1
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Enter the priority of p2 (lower number means higher priority): 3

Gantt chart:
| p0 8 | p1 12 | p2 14 |

Process Arrival Time Burst Time Priority Completion Time Waiting Time Turnaround Time
p0 0 8 2 8 0 8
p1 1 4 1 12 7 11
p2 2 2 3 14 10 12

Average waiting time: 5.67


Executed Output:


Round robin scheduling:


Program:
def round_robin_scheduling(processes, arrival_time, burst_time, quantum):
    n = len(processes)
    remaining_time = burst_time.copy()
    waiting_time = [0] * n
    turnaround_time = [0] * n
    completion_time = [0] * n

    current_time = 0
    queue = []
    process_indices = list(range(n))

    while process_indices or queue:
        # Add all processes that have arrived to the queue
        for i in list(process_indices):
            if arrival_time[i] <= current_time:
                queue.append(i)
                process_indices.remove(i)

        if not queue:
            # If no processes are in the queue, advance time
            current_time += 1
            continue

        # Get the next process from the queue
        current_process = queue.pop(0)

        # Calculate the execution time for the current quantum
        execute_time = min(quantum, remaining_time[current_process])
        remaining_time[current_process] -= execute_time
        current_time += execute_time

        if remaining_time[current_process] == 0:
            completion_time[current_process] = current_time
            turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
            waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
        else:
            # If process is not completed, put it back in the queue
            queue.append(current_process)

    # Print the Gantt chart (in order of completion)
    print("\nGantt chart:")
    for i in sorted(range(n), key=lambda i: completion_time[i]):
        print(f"| {processes[i]} {completion_time[i]} ", end="")
    print("|")

    # Print the process information
    print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")

    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []

    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)

    quantum = int(input("Enter the time quantum for Round Robin scheduling: "))
    return processes, arrival_time, burst_time, quantum

processes, arrival_time, burst_time, quantum = get_inputs()
round_robin_scheduling(processes, arrival_time, burst_time, quantum)

Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 10
Enter the arrival time of p1: 2
Enter the burst time of p1: 5
Enter the arrival time of p2: 4
Enter the burst time of p2: 8
Enter the time quantum for Round Robin scheduling: 4

Gantt chart:
| p0 18 | p1 19 | p2 23 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 10 18 8 18
p1 2 5 19 12 17
p2 4 8 23 11 19

Average waiting time: 10.33


Executed Output:


Shortest remaining time first:


Program:
def srtf_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    remaining_time = burst_time.copy()
    waiting_time = [0] * n
    turnaround_time = [0] * n
    completion_time = [0] * n

    current_time = 0
    completed_processes = 0

    while completed_processes < n:
        # Collect all processes that have arrived and still need CPU time
        ready = [i for i in range(n) if arrival_time[i] <= current_time and remaining_time[i] > 0]

        if not ready:
            current_time += 1
            continue

        # Find the process with the shortest remaining time
        current_process = min(ready, key=lambda i: remaining_time[i])

        # Execute the current process for 1 unit of time
        remaining_time[current_process] -= 1
        current_time += 1

        # Print Gantt chart segment
        print(f"| {processes[current_process]} {current_time} ", end="")

        # If the current process is completed
        if remaining_time[current_process] == 0:
            completion_time[current_process] = current_time
            turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
            waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
            completed_processes += 1

    # Close the Gantt chart
    print("|")

    # Print the process information
    print("\nProcess\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")

    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []

    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)

    return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()
srtf_scheduling(processes, arrival_time, burst_time)

Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
| p0 1 | p1 2 | p2 3 | p2 4 | p1 5 | p1 6 | p1 7 | p0 8 | p0 9 | p0 10 | p0 11 | p0 12 | p0 13 | p0 14 |

Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 14 6 14
p1 1 4 7 2 6
p2 2 2 4 0 2

Average waiting time: 2.67


Executed Output:

Result:
Thus, Python programs implementing the CPU scheduling algorithms were executed successfully.


Write a program for implementation of semaphore

Aim:
To write a program for the implementation of semaphores
Description:
Semaphores are a synchronization mechanism used in concurrent programming to control access to
shared resources and coordinate the execution of multiple processes or threads. They were introduced
by Edsger W. Dijkstra in 1965 and have since become a fundamental concept in operating systems and
parallel computing.

A semaphore is a simple integer variable, often referred to as a "counting semaphore." It can take
on non-negative integer values and supports two main operations:

1. **Wait (P) Operation**: When a process (or thread) wants to access a shared resource, it must first
perform a "wait" operation on the semaphore. If the semaphore value is greater than zero, the
process decrements the semaphore value and continues its execution, indicating that the resource is
available. If the semaphore value is zero, the process is blocked (put to sleep) until the semaphore
value becomes positive again.

2. **Signal (V) Operation**: After a process finishes using a shared resource, it performs a "signal"
operation on the semaphore. This operation increments the semaphore value, indicating that the
resource is now available for other processes or threads to use. If there were blocked processes waiting
for the semaphore to become positive (due to previous "wait" operations), one of them will be
awakened and granted access to the resource.

Semaphores help prevent race conditions and ensure that critical sections of code (regions of code that
access shared resources) are mutually exclusive, meaning only one process or thread can access the
shared resource at a time.

There are two main types of semaphores:


1. **Binary Semaphore**: A binary semaphore is a special type of semaphore that can only take on
two values, typically 0 and 1. It is often used for signaling purposes, where 0 means the resource is
unavailable, and 1 means the resource is available.

2. **Counting Semaphore**: As mentioned earlier, a counting semaphore can take on non-negative
integer values, allowing more flexible coordination among multiple processes or threads.

However, it's essential to use semaphores carefully to avoid potential issues like deadlocks (circular
waiting) or race conditions. More advanced synchronization mechanisms, such as mutexes and
condition variables, are often used in modern programming languages and libraries to manage
concurrency more effectively.
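For comparison, Python's standard library already provides a counting semaphore, threading.Semaphore, whose acquire() and release() methods correspond to the wait (P) and signal (V) operations described above. A minimal sketch (the semaphore count, worker count, and sleep time are illustrative, not part of the lab exercise):

import threading
import time

resource = threading.Semaphore(2)   # at most two threads may hold the resource at once

def worker(worker_id):
    with resource:                   # acquire() on entry (wait/P), release() on exit (signal/V)
        print(f"Worker {worker_id} is using the resource")
        time.sleep(1)                # simulate some work
    print(f"Worker {worker_id} released the resource")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()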

Program:
import time
import threading

# Semaphore functions
def semaphore_wait(semaphore):
    while semaphore[0] <= 0:
        pass
    semaphore[0] -= 1
    return semaphore

def semaphore_signal(semaphore):
    semaphore[0] += 1
    return semaphore

# Shared variable to act as a semaphore
shared_semaphore = [1]  # Using a list to allow modification within threads

# Function for the tasks
def task_function(task_id):
    global shared_semaphore

    print(f"Task {task_id} is trying to acquire the semaphore.")
    semaphore_wait(shared_semaphore)
    print(f"Task {task_id} has acquired the semaphore.")

    # Simulate some work
    print(f"Task {task_id} is performing some work.")
    time.sleep(2)

    # Release the semaphore
    print(f"Task {task_id} is releasing the semaphore.")
    semaphore_signal(shared_semaphore)

# Create and start multiple tasks
k = int(input("Enter the number of tasks that want to share resources: "))
tasks = []

# Create and start threads for each task
for i in range(k):
    thread = threading.Thread(target=task_function, args=(i,))
    tasks.append(thread)
    thread.start()

# Wait for all threads to complete
for task in tasks:
    task.join()

print("All tasks have completed.")


Sample Output:
Enter the number of tasks that want to share resources: 3
Task 0 is trying to acquire the semaphore.
Task 0 has acquired the semaphore.
Task 0 is performing some work.
Task 1 is trying to acquire the semaphore.
Task 2 is trying to acquire the semaphore.
Task 0 is releasing the semaphore.
Task 1 has acquired the semaphore.
Task 1 is performing some work.
Task 1 is releasing the semaphore.


Task 2 has acquired the semaphore.


Task 2 is performing some work.
Task 2 is releasing the semaphore.
All tasks have completed.

Executed output:

Result:
Thus, a Python program for the implementation of semaphores was executed successfully.


Write programs for implementation of shared memory and IPC.

Implementation of Shared Memory:
Aim:
To write a Python program for the implementation of shared memory.
Description:
Shared memory is a communication and synchronization mechanism that allows multiple processes or
threads to access the same region of memory in a concurrent manner. It enables efficient data sharing
and interprocess communication (IPC) among different entities running on the same system.

In shared memory, a region of memory is mapped into the address space of multiple processes or
threads. These processes/threads can then read from or write to the shared memory region just like
accessing regular memory. This shared memory area acts as a shared buffer, facilitating the exchange of
data between different processes without the need for copying data between them.

However, shared memory also requires careful synchronization to avoid data inconsistencies and race
conditions. Developers need to use synchronization primitives like semaphores, mutexes, or atomic
operations to ensure that multiple processes or threads access the shared memory in a coordinated
and controlled manner. Proper synchronization helps maintain data integrity and prevent conflicts
when multiple entities attempt to modify the shared data concurrently.

To use shared memory, operating systems provide APIs for creating and managing shared memory
regions. Developers need to be cautious while using shared memory to prevent data corruption and
ensure proper synchronization to avoid race conditions and data inconsistencies.
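In addition to the multiprocessing.Value approach used in the program below, Python 3.8+ also exposes raw shared memory blocks through the multiprocessing.shared_memory module. A minimal sketch of that API (the five-byte message and the 16-byte block size are illustrative only):

from multiprocessing import shared_memory

# Create a 16-byte shared memory block and write into it
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Another process could attach with shared_memory.SharedMemory(name=shm.name)
# and read the same bytes through its own .buf view.
print(bytes(shm.buf[:5]))

shm.close()    # detach this process's view
shm.unlink()   # free the block (done once, by the creating process)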
Program:
import multiprocessing
import time

def worker(shared_data, lock):
    """Function to be run by the worker process."""
    for _ in range(5):
        # Acquire the lock to ensure exclusive access to shared data
        with lock:
            # Modify shared data
            shared_data.value += 1
            print(f"Worker process: {shared_data.value}")

        # Simulate some work
        time.sleep(1)

def main():
    # Create a shared integer with initial value 0
    shared_data = multiprocessing.Value('i', 0)

    # Create a lock for synchronizing access to shared memory
    lock = multiprocessing.Lock()

    # Create worker processes
    process1 = multiprocessing.Process(target=worker, args=(shared_data, lock))
    process2 = multiprocessing.Process(target=worker, args=(shared_data, lock))

    # Start the processes
    process1.start()
    process2.start()

    # Wait for both processes to finish
    process1.join()
    process2.join()

    print(f"Final shared data value: {shared_data.value}")

if __name__ == "__main__":
    main()
Sample Output:

Worker process: 1
Worker process: 2
Worker process: 3
Worker process: 4
Worker process: 5
Worker process: 6
Worker process: 7
Worker process: 8
Worker process: 9
Worker process: 10
Final shared data value: 10

Executed Output:

Result:
Thus, a Python program for the implementation of shared memory was executed successfully.


Implementation of IPC:
Aim:
To write a program for the implementation of Inter Process Communication (IPC)

Description:
Inter-Process Communication (IPC) facilitates communication and data sharing among concurrent
processes on a computer. Shared memory allows direct access to a common memory area for high-speed
data exchange, while message passing involves sending messages through channels like pipes, message
queues, or sockets. Synchronization tools like semaphores and mutexes prevent conflicts and ensure data
integrity. IPC finds applications in parallel computing, client-server systems, and real-time applications;
however, it requires careful design and implementation to handle complexity and maintain security.
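Besides the multiprocessing.Queue used in the program below, a classic UNIX IPC channel is the pipe. A minimal sketch using os.pipe() and os.fork(), with an illustrative message, might look like this:

import os

# Create a pipe: read_fd and write_fd are file descriptors
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: write a message into the pipe and exit
    os.close(read_fd)
    os.write(write_fd, b"hello from the child process")
    os.close(write_fd)
    os._exit(0)
else:
    # Parent: read the message written by the child
    os.close(write_fd)
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.wait()
    print(f"Parent received: {message.decode()}")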

Program:
import multiprocessing
import time

def producer(queue):
    """Function to be run by the producer process."""
    for i in range(5):
        item = f"item-{i}"
        print(f"Producer putting {item} into queue")
        queue.put(item)   # Put item into the queue
        time.sleep(1)     # Simulate work

def consumer(queue):
    """Function to be run by the consumer process."""
    while True:
        item = queue.get()   # Get item from the queue
        if item is None:     # Sentinel value to end the consumer process
            break
        print(f"Consumer got {item} from queue")
        time.sleep(2)        # Simulate work

def main():
    # Create a queue for IPC
    queue = multiprocessing.Queue()

    # Create producer and consumer processes
    producer_process = multiprocessing.Process(target=producer, args=(queue,))
    consumer_process = multiprocessing.Process(target=consumer, args=(queue,))

    # Start the processes
    producer_process.start()
    consumer_process.start()

    # Wait for the producer process to finish
    producer_process.join()

    # Add a sentinel value to the queue to signal the consumer to stop
    queue.put(None)

    # Wait for the consumer process to finish
    consumer_process.join()

    print("Both producer and consumer processes have completed.")

if __name__ == "__main__":
    main()

Sample Output:
Producer putting item-0 into queue
Consumer got item-0 from queue
Producer putting item-1 into queue
Producer putting item-2 into queue
Consumer got item-1 from queue
Producer putting item-3 into queue
Producer putting item-4 into queue
Consumer got item-2 from queue
Consumer got item-3 from queue
Consumer got item-4 from queue
Both producer and consumer processes have completed.
Executed Output:

Result:
Thus, a Python program for the implementation of IPC was executed successfully.


Write a program to implement Banker’s algorithm for deadlock avoidance

Aim:
To write a program to implement Banker’s algorithm for deadlock avoidance.
Description:
The Banker's algorithm is a deadlock avoidance algorithm used to prevent deadlocks in a resource
allocation system. It is primarily employed in operating systems to ensure that processes' resource
requests are granted in a way that avoids deadlock situations.
Key points about the Banker's algorithm:

1. **Resource Allocation Model**: The algorithm operates under the assumption that the system has
a fixed number of resources of different types (e.g., memory, CPU, printers) and multiple processes
compete for these resources.

2. **Maximum Claim**: Each process declares its maximum possible resource requirements
(maximum claim) before it begins its execution. This information is known to the system.

3. **Available Resources**: The system maintains a record of the available resources for each
resource type at any given time.

4. **Safety and Request Matrices**: The algorithm uses two matrices: the safety matrix and the
request matrix. The safety matrix is used to assess the system's ability to allocate resources safely to
processes without causing deadlock. The request matrix stores the resource requests made by each
process during its execution.

5. **Deadlock Avoidance**: The Banker's algorithm employs a conservative approach to resource
allocation. When a process makes a resource request, the system simulates the allocation and checks if
it can still maintain a safe state (no deadlock) after granting the requested resources. If the state
remains safe, the request is granted; otherwise, the process must wait until sufficient resources
become available.

6. **Safe State**: A system state is considered safe if there exists a sequence of resource allocations
where each process can complete its execution and release resources, allowing other processes to
complete without getting stuck in a deadlock.

7. **Resource Allocation and Deallocation**: The algorithm allows resource allocation to
processes and resource deallocation when processes release resources upon completion. It
continuously updates the available resources and request matrices based on these allocations and
deallocations.

8. **Dynamic Resource Requests**: The Banker's algorithm allows processes to make multiple
resource requests during their execution. It checks each request's safety before granting it.

The Banker's algorithm is a preventive measure against deadlocks, ensuring that resource allocations are
done in a manner that avoids deadlock scenarios. It provides a way for processes to request resources
safely, considering the system's current resource availability and avoiding situations that could lead to
deadlock.
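As a small illustration of the safe-state test described in point 6, the sketch below uses a made-up two-process, two-resource state (these numbers are illustrative, not the lab's input data) and looks for an order in which every process can obtain its remaining need from the currently free resources:

# Hypothetical example: 2 processes, 2 resource types
available = [3, 2]                 # currently free instances of each resource
allocation = [[1, 0], [2, 1]]      # what each process already holds
need = [[2, 2], [1, 1]]            # max claim minus allocation

work = available[:]
finish = [False, False]
safe_sequence = []

# Repeatedly pick a process whose remaining need fits in 'work'
for _ in range(len(finish)):
    for i in range(len(finish)):
        if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
            # Pretend the process runs to completion and returns its allocation
            work = [work[j] + allocation[i][j] for j in range(len(work))]
            finish[i] = True
            safe_sequence.append(i)
            break

print("Safe" if all(finish) else "Unsafe", safe_sequence)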

Program:
import numpy as np

def is_safe_state(available, max_claim, allocation, need, sequence):
    """ Check if the system is in a safe state using Banker's Algorithm """
    n = len(allocation)          # number of processes
    work = available.copy()
    finish = np.zeros(n, dtype=bool)
    safe_sequence = []

    for _ in range(n):
        for i in range(n):
            if not finish[i] and all(need[i] <= work):
                work += allocation[i]
                finish[i] = True
                safe_sequence.append(i)
                break
        else:
            return False, []

    return True, safe_sequence

def bankers_algorithm(available, max_claim, allocation):
    n = len(allocation)          # number of processes
    need = max_claim - allocation

    # Check if initial state is safe
    safe, sequence = is_safe_state(available, max_claim, allocation, need, [])
    if not safe:
        print("Initial state is not safe. Deadlock detected.")
        return

    print("Initial state is safe. Executing processes...")

    # Simulate execution by requesting resources and releasing them
    for i in range(n):
        # Simulate resource request and release
        print(f"Process {i} is requesting resources...")
        request = np.random.randint(0, need[i] + 1)   # Random request within need

        if all(request <= available):
            available -= request
            allocation[i] += request
            need[i] -= request
            print(f"Process {i} acquired resources. New state:")
            print(f"Available: {available}")
            print(f"Allocation: {allocation}")
            print(f"Need: {need}")
        else:
            print(f"Process {i} must wait. Insufficient resources available.")

        # Simulate release of resources after some processing
        release = np.random.randint(0, allocation[i] + 1)   # Random release within allocation

        available += release
        allocation[i] -= release
        need[i] += release
        print(f"Process {i} released resources. New state:")
        print(f"Available: {available}")
        print(f"Allocation: {allocation}")
        print(f"Need: {need}")

        # Check if system is still in a safe state after each iteration
        safe, sequence = is_safe_state(available, max_claim, allocation, need, sequence)
        if safe:
            print(f"System is in a safe state after Process {i}. Safe sequence: {sequence}")
        else:
            print(f"System is not in a safe state after Process {i}. Deadlock detected.")
            break

# Dynamic input from user
n = int(input("Enter the number of processes: "))
m = int(input("Enter the number of resource types: "))

max_claim = np.zeros((n, m), dtype=int)
allocation = np.zeros((n, m), dtype=int)
available = np.zeros(m, dtype=int)

print("Enter the maximum claim matrix:")
for i in range(n):
    for j in range(m):
        max_claim[i][j] = int(input(f"Process {i}, Resource {j}: "))

print("Enter the allocation matrix:")
for i in range(n):
    for j in range(m):
        allocation[i][j] = int(input(f"Process {i}, Resource {j}: "))

print("Enter the available resources:")
for j in range(m):
    available[j] = int(input(f"Resource {j}: "))

print("\nExecuting Banker's Algorithm...\n")
bankers_algorithm(available, max_claim, allocation)
Sample Output:
Enter the number of processes: 3
Enter the number of resource types: 4
Enter the maximum claim matrix:
Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 0, Resource 2: 3
Process 0, Resource 3: 2
Process 1, Resource 0: 3
Process 1, Resource 1: 2


Process 1, Resource 2: 2
Process 1, Resource 3: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Process 2, Resource 3: 2
Enter the allocation matrix:
Process 0, Resource 0: 0
Process 0, Resource 1: 1
Process 0, Resource 2: 0
Process 0, Resource 3: 0
Process 1, Resource 0: 2
Process 1, Resource 1: 0
Process 1, Resource 2: 0
Process 1, Resource 3: 1
Process 2, Resource 0: 3
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Process 2, Resource 3: 1
Enter the available resources:
Resource 0: 3
Resource 1: 3
Resource 2: 2
Resource 3: 2
Executing Banker's Algorithm...

Initial state is safe. Executing processes...


Process 0 is requesting resources...
Process 0 acquired resources. New state:
Available: [3 2 2 2]
Allocation: [[0 1 0 0]
[2 0 0 1]
[3 0 2 1]]
Need: [[7 4 3 2]
[1 2 2 1]
[6 0 0 1]]
System is in a safe state after Process 0. Safe sequence: [0]
Process 0 released resources. New state:
Available: [3 3 2 2]
Allocation: [[0 0 0 0]
[2 0 0 1]
[3 0 2 1]]
Need: [[7 5 3 2]
[1 2 2 1]
[6 0 0 1]]
System is in a safe state after Process 0. Safe sequence: [0, 1]
Process 1 is requesting resources...
Process 1 must wait. Insufficient resources available.
Process 1 released resources. New state:
Available: [3 3 2 3]
Allocation: [[0 0 0 0]


[0 0 0 0]
[3 0 2 1]]
Need: [[7 5 3 2]
[3 2 2 1]
[6 0 0 1]]
System is in a safe state after Process 1. Safe sequence: [0, 1, 2]
Process 2 is requesting resources...
Process 2 must wait. Insufficient resources available.
Process 2 released resources. New state:
Available: [3 3 2 4]
Allocation: [[0 0 0 0]
[0 0 0 0]
[0 0 0 0]]
Need: [[7 5 3 2]
[3 2 2 1]
[9 0 2 1]]
System is in a safe state after Process 2. Safe sequence: [0, 1, 2]
Executed Output:


Result:
Thus, a Python program for the implementation of Banker’s algorithm for deadlock avoidance was executed
successfully.


Write a program for implementation of deadlock detection

Aim:
To write a program for implementation of deadlock detection.
Description:
Deadlock detection algorithms identify deadlocks in a resource allocation system by periodically
scanning the resource allocation graph for cycles. Once a deadlock is detected, appropriate
actions are taken to resolve it, such as process termination or resource preemption. However, these
algorithms do not prevent deadlocks; they reactively handle them after they occur.
Program:
import numpy as np

def detect_deadlock(allocated, max_claim, available):
    """ Check for deadlock using the Resource Allocation Graph method. """
    n = len(allocated)    # Number of processes
    m = len(available)    # Number of resources

    # Calculate the need matrix
    need = max_claim - allocated

    # Initialize work and finish arrays
    work = available.copy()
    finish = np.zeros(n, dtype=bool)

    while True:
        progress_made = False
        for i in range(n):
            if not finish[i] and all(need[i] <= work):
                # If process i can finish, update work and finish arrays
                work += allocated[i]
                finish[i] = True
                progress_made = True
                print(f"Process {i} can finish; updated work: {work}")

        if not progress_made:
            break

    # Check if all processes are finished
    if all(finish):
        print("No deadlock detected. All processes can finish.")
    else:
        print("Deadlock detected. Not all processes can finish.")

# Function to take dynamic input from the user
def main():
    n = int(input("Enter the number of processes: "))
    m = int(input("Enter the number of resources: "))

    max_claim = np.zeros((n, m), dtype=int)
    allocated = np.zeros((n, m), dtype=int)
    available = np.zeros(m, dtype=int)

    print("Enter the maximum claim matrix:")
    for i in range(n):
        for j in range(m):
            max_claim[i][j] = int(input(f"Process {i}, Resource {j}: "))

    print("Enter the allocation matrix:")
    for i in range(n):
        for j in range(m):
            allocated[i][j] = int(input(f"Process {i}, Resource {j}: "))

    print("Enter the available resources:")
    for j in range(m):
        available[j] = int(input(f"Resource {j}: "))

    print("\nDetecting deadlock...\n")
    detect_deadlock(allocated, max_claim, available)

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of processes: 3
Enter the number of resources: 3
Enter the maximum claim matrix:
Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 0, Resource 2: 3
Process 1, Resource 0: 3
Process 1, Resource 1: 2
Process 1, Resource 2: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Enter the allocation matrix:
Process 0, Resource 0: 0
Process 0, Resource 1: 1
Process 0, Resource 2: 0
Process 1, Resource 0: 2
Process 1, Resource 1: 0
Process 1, Resource 2: 0
Process 2, Resource 0: 3
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Enter the available resources:
Resource 0: 3
Resource 1: 3
Resource 2: 2
Detecting deadlock...

Process 1 can finish; updated work: [5 3 2]
Deadlock detected. Not all processes can finish.

Executed Output:

Result:
Thus, a Python program for the implementation of the deadlock detection algorithm was executed
successfully.


Write a program for implementation of threading and synchronization

Aim:
To write a program for implementation of threading and synchronization.
Description:
Multithreading and synchronization are concepts related to concurrent programming in computer
systems:

1. **Multithreading**: Multithreading is the ability of an operating system or programming language
to support the execution of multiple threads within a single process. Threads are lightweight units of
execution, and multithreading allows a program to perform multiple tasks concurrently, making more
efficient use of the available CPU resources.

2. **Benefits of Multithreading**: Multithreading can improve performance and responsiveness in
applications by enabling tasks to be performed in parallel. It allows for better utilization of multi-core
processors and can be particularly beneficial in tasks involving I/O operations or parallel processing.

3. **Thread Creation and Management**: Threads are created and managed by the operating system
or the programming language's runtime environment. They share the same memory space within a
process, making communication and data sharing between threads easier.

4. **Synchronization**: Synchronization is the process of coordinating the execution of multiple
threads to avoid race conditions and ensure data integrity. When multiple threads access shared
resources simultaneously, synchronization mechanisms, such as mutexes, semaphores, and condition
variables, are used to enforce mutual exclusion and control access to the shared data.

5. **Race Conditions**: Race conditions occur when multiple threads access shared resources in an
uncontrolled manner, leading to unpredictable and potentially incorrect behavior. Synchronization
mechanisms help prevent race conditions by allowing only one thread to access the shared resource at a
time.

6. **Deadlocks**: Deadlocks can occur when two or more threads are each waiting for a resource that
is held by another thread, resulting in a circular dependency. Properly designed synchronization can help
avoid deadlocks by ensuring that threads release resources in a coordinated manner.

7. **Challenges**: Multithreading introduces challenges like thread coordination, resource sharing, and
avoiding synchronization overhead. Developers need to carefully design and manage threads and
synchronization to avoid issues like deadlocks, livelocks, and priority inversion.

In summary, multithreading allows a program to execute multiple tasks concurrently, improving
performance and responsiveness. However, proper synchronization is essential to prevent race
conditions and deadlocks and ensure correct behavior in multithreaded applications.
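To see the race condition described in point 5, the small sketch below omits the lock and can be compared with the synchronized program that follows; with enough increments, the unsynchronized version may lose updates and print a total smaller than expected (the thread count and iteration count here are illustrative):

import threading

counter = 0

def unsafe_increment(iterations):
    global counter
    for _ in range(iterations):
        # Read-modify-write without a lock: two threads can read the same
        # old value and overwrite each other's update.
        counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(500_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Expected 2000000, got {counter}")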
Program:
import threading
import time

# Shared counter and a lock for synchronization
shared_counter = 0
lock = threading.Lock()

def increment_counter(thread_id, increments):
    global shared_counter
    for _ in range(increments):
        # Acquire the lock to ensure exclusive access to the shared resource
        with lock:
            current_value = shared_counter
            print(f"Thread {thread_id} is incrementing counter from {current_value}", end=" ")
            time.sleep(0.1)  # Simulate some processing time
            shared_counter = current_value + 1
            print(f"to {shared_counter}")

def main():
    num_threads = int(input("Enter the number of threads: "))
    increments_per_thread = int(input("Enter the number of increments per thread: "))

    threads = []

    # Create and start threads
    for i in range(num_threads):
        thread = threading.Thread(target=increment_counter, args=(i, increments_per_thread))
        threads.append(thread)
        thread.start()

    # Wait for all threads to complete
    for thread in threads:
        thread.join()

    print(f"Final value of shared_counter: {shared_counter}")

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of threads: 3
Enter the number of increments per thread: 5
Thread 0 is incrementing counter from 0 to 1
Thread 1 is incrementing counter from 1 to 2
Thread 2 is incrementing counter from 2 to 3
Thread 0 is incrementing counter from 3 to 4
Thread 1 is incrementing counter from 4 to 5
Thread 2 is incrementing counter from 5 to 6
Thread 0 is incrementing counter from 6 to 7
Thread 1 is incrementing counter from 7 to 8
Thread 2 is incrementing counter from 8 to 9
Thread 0 is incrementing counter from 9 to 10
Thread 1 is incrementing counter from 10 to 11
Thread 2 is incrementing counter from 11 to 12
Final value of shared_counter: 12
Executed Output:


Result:
Thus a Python program for the implementation of threading and synchronization was successfully executed.



Date: ExP.No.: P a g e | 45

Write a program for Implementation of memory allocation methods for fixed partitions:
a) First fit
b) Best fit
c) Worst fit
Aim:
To write a program for the implementation of memory allocation methods for fixed partitions:
a) First fit b) Best fit c) Worst fit

Description:
Memory allocation methods for fixed partitions are used in operating systems to manage memory in a system with a fixed number of memory partitions of different sizes. These methods determine how processes are allocated to these fixed partitions based on their size requirements.

1. **First Fit**:
- In the first-fit memory allocation method, when a new process arrives and needs memory, the system searches the memory partitions sequentially from the beginning.
- The first partition that is large enough to accommodate the process is allocated to it.
- This method is relatively simple and efficient in terms of time complexity since it stops searching once a suitable partition is found.
- However, it may lead to fragmentation, where small blocks of unused memory are scattered across the memory, making it challenging to allocate larger processes in the future.

2. **Best Fit**:
- In the best-fit memory allocation method, when a new process arrives, the system searches for the smallest partition that can hold the process size.
- Among all the partitions that are large enough to accommodate the process, the one with the smallest size is chosen for allocation.
- This method helps in reducing fragmentation, as it tries to fit processes into the smallest possible available partitions.
- However, it may be slightly slower than the first-fit method, as it needs to search the entire list of partitions to find the best fit.

3. **Worst Fit**:
- In the worst-fit memory allocation method, when a new process arrives, the system searches for the largest partition available that can hold the process size.
- The largest partition among all the partitions that can accommodate the process is allocated to it.
- This method may lead to more fragmentation compared to first-fit and best-fit, as larger partitions are used for smaller processes, leaving behind smaller unused spaces.
- It is less commonly used than first-fit and best-fit because it does not efficiently utilize memory space.

In summary, memory allocation methods for fixed partitions, such as first-fit, best-fit, and worst-fit, determine how processes are assigned to available memory partitions based on their size requirements. Each method has its advantages and disadvantages, affecting fragmentation and memory utilization in the system. The choice of the allocation method depends on the specific requirements and characteristics of the system.
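As a quick illustration of how the three strategies differ (a supplementary sketch, not part of the prescribed lab program), the fragment below places a single 212 KB process into an example partition set; the partition sizes and process size are just sample values:

partitions = [100, 500, 200, 300]   # example fixed partition sizes (KB)
process = 212                        # example process size (KB)

candidates = [i for i, p in enumerate(partitions) if p >= process]

# First fit: the first partition large enough for the process
first = next((i for i, p in enumerate(partitions) if p >= process), None)
# Best fit: the smallest partition that still fits the process
best = min(candidates, key=lambda i: partitions[i]) if candidates else None
# Worst fit: the largest partition that fits the process
worst = max(candidates, key=lambda i: partitions[i]) if candidates else None

print(first, best, worst)   # 1 3 1 -> partitions of size 500, 300 and 500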

Program:
def first_fit(partitions, processes):
    allocation = [-1] * len(processes)
    for i, process in enumerate(processes):
        for j, partition in enumerate(partitions):
            if partition >= process:
                allocation[i] = j
                partitions[j] -= process
                break
    return allocation

def best_fit(partitions, processes):
    allocation = [-1] * len(processes)
    for i, process in enumerate(processes):
        best_index = -1
        for j, partition in enumerate(partitions):
            if partition >= process and (best_index == -1 or partition < partitions[best_index]):
                best_index = j
        if best_index != -1:
            allocation[i] = best_index
            partitions[best_index] -= process
    return allocation

def worst_fit(partitions, processes):
    allocation = [-1] * len(processes)
    for i, process in enumerate(processes):
        worst_index = -1
        for j, partition in enumerate(partitions):
            if partition >= process and (worst_index == -1 or partition > partitions[worst_index]):
                worst_index = j
        if worst_index != -1:
            allocation[i] = worst_index
            partitions[worst_index] -= process
    return allocation

def print_allocation(allocation, processes, method):
    print(f"\n{method} Allocation:")
    for i, alloc in enumerate(allocation):
        if alloc != -1:
            print(f"Process {i} allocated to Partition {alloc}")
        else:
            print(f"Process {i} not allocated")

def main():
    num_partitions = int(input("Enter the number of partitions: "))
    partitions = [int(input(f"Enter size of partition {i}: ")) for i in range(num_partitions)]

    num_processes = int(input("Enter the number of processes: "))
    processes = [int(input(f"Enter size of process {i}: ")) for i in range(num_processes)]

    # Copy of partitions for each method
    partitions_copy = partitions.copy()

    # First Fit
    allocation_first_fit = first_fit(partitions_copy, processes)
    print_allocation(allocation_first_fit, processes, "First Fit")

    # Restore partitions and allocate using Best Fit
    partitions_copy = partitions.copy()
    allocation_best_fit = best_fit(partitions_copy, processes)
    print_allocation(allocation_best_fit, processes, "Best Fit")

    # Restore partitions and allocate using Worst Fit
    partitions_copy = partitions.copy()
    allocation_worst_fit = worst_fit(partitions_copy, processes)
    print_allocation(allocation_worst_fit, processes, "Worst Fit")

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of partitions: 4
Enter size of partition 0: 100
Enter size of partition 1: 500
Enter size of partition 2: 200
Enter size of partition 3: 300
Enter the number of processes: 5
Enter size of process 0: 212
Enter size of process 1: 417
Enter size of process 2: 112
Enter size of process 3: 426
Enter size of process 4: 50
First Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 1
Process 3 not allocated
Process 4 allocated to Partition 0

Best Fit Allocation:
Process 0 allocated to Partition 3
Process 1 allocated to Partition 1
Process 2 allocated to Partition 2
Process 3 not allocated
Process 4 allocated to Partition 1

Worst Fit Allocation:
Process 0 allocated to Partition 1
Process 1 not allocated
Process 2 allocated to Partition 3
Process 3 not allocated
Process 4 allocated to Partition 1


Executed Output:

Result:
Thus a Python program for the implementation of memory allocation methods for fixed partitions was successfully executed.



Date: ExP.No.: P a g e | 49

Write a program for the implementation of the paging technique of memory management:
Aim:
To write a program for the implementation of paging technique of memory management
Description:
Paging is a memory management technique used by operating systems to handle the organization and allocation of physical memory. It allows the operating system to present a uniform, logical view of memory to processes, while efficiently managing the available physical memory.

In the paging technique, the physical memory is divided into fixed-size blocks called "frames," and the logical memory used by processes is divided into fixed-size blocks called "pages." These pages are usually of the same size as the frames. The size of a page is typically a power of 2, such as 4 KB or 8 KB.

When a process needs to access a specific memory address, the virtual address generated by the CPU is divided into two parts: a "page number" and an "offset." The page number is used as an index to access a page table, which is a data structure maintained by the operating system to keep track of the mapping between virtual pages and physical frames.

The page table provides the corresponding physical frame number for the given page number. The offset is used to determine the exact location within the physical frame where the data is stored.

If a page is not currently present in physical memory (i.e., it's not loaded into a frame), the CPU generates a page fault. The operating system then retrieves the required page from the secondary storage (e.g., hard disk) and loads it into an available frame in physical memory. The page table is updated to reflect the new mapping between the virtual page and the physical frame.

Paging allows for several advantages in memory management:

1. **Simplified Address Translation:** With paging, the CPU only needs to perform a single level of translation, from virtual addresses to physical addresses, making the address translation process more efficient.

2. **Non-contiguous Allocation:** Processes can be allocated non-contiguous sections of physical memory, as pages from a single process can be scattered across various frames in memory.

3. **Protection and Isolation:** Each page is assigned specific access permissions, allowing the operating system to enforce memory protection and isolate processes from one another.

4. **Memory Sharing:** Multiple processes can share the same physical page if they need access to the same data, such as code libraries or read-only data.

Paging, however, requires the overhead of maintaining the page table, and page faults can lead to performance penalties due to the need to fetch data from secondary storage. To mitigate these issues, modern processors often incorporate hardware support, such as Translation Lookaside Buffers (TLBs), to speed up address translation and reduce the impact of page faults.
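To make the page-number/offset split concrete, here is a small illustrative sketch (an addition for clarity, not part of the lab program below) that splits a virtual address for a 4-byte page size, matching the constants used in the program:

PAGE_SIZE = 4  # bytes per page (must match the page size assumed by the page table)

def split_address(virtual_address):
    """Split a virtual address into (page number, offset within the page)."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    return page_number, offset

# Example: address 13 with 4-byte pages lies on page 3 at offset 1.
print(split_address(13))   # (3, 1)
print(split_address(5))    # (1, 1)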

Program:
import random

# Constants
PAGE_SIZE = 4  # Size of a page in bytes
MEMORY_SIZE = 16  # Total size of memory in bytes
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE  # Number of pages

# Initialize memory
memory = [-1] * NUM_PAGES  # -1 indicates an empty frame

def page_number(address):
    """Returns the page number for a given address."""
    return address // PAGE_SIZE

def frame_number(page):
    """Returns the frame number where a page is stored."""
    return memory.index(page) if page in memory else -1

def page_fault(address, page_table):
    """Simulates a page fault by loading a page into memory."""
    page = page_number(address)
    frame = frame_number(page)

    if frame == -1:  # Page not in memory
        empty_frame = memory.index(-1) if -1 in memory else None
        if empty_frame is not None:
            memory[empty_frame] = page
            print(f"Page {page} loaded into frame {empty_frame}.")
        else:
            # If no empty frame, replace a page using the FIFO (First In First Out) algorithm
            replaced_page = memory.pop(0)
            memory.append(page)
            print(f"Page {replaced_page} replaced with page {page} in frame 0.")
        page_table[page] = memory.index(page)
    else:
        print(f"Page {page} found in frame {frame}.")

def access_memory(address, page_table):
    """Accesses a memory address and handles page faults."""
    page = page_number(address)
    if page in page_table:
        print(f"Accessing address {address} (Page {page})")
        page_fault(address, page_table)
    else:
        print(f"Page fault at address {address} (Page {page})")
        page_fault(address, page_table)

def main():
    page_table = {}  # Dictionary to store the page table

    # Simulate memory access
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]  # Random addresses
    print(f"Accessing memory addresses: {addresses}")

    for address in addresses:
        access_memory(address, page_table)

    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main()

Sample Output:

Accessing memory addresses: [0, 5, 8, 2, 10, 7, 3, 14, 6, 1]

Accessing address 0 (Page 0)


Page 0 loaded into frame 0.
Page 0 found in frame 0.

Accessing address 5 (Page 1)


Page 1 loaded into frame 1.
Page 1 found in frame 1.

Accessing address 8 (Page 2)


Page 2 loaded into frame 2.
Page 2 found in frame 2.

Accessing address 2 (Page 0)


Page 0 found in frame 0.

Accessing address 10 (Page 2)


Page 2 found in frame 2.

Accessing address 7 (Page 1)


Page 1 found in frame 1.

Accessing address 3 (Page 0)


Page 0 found in frame 0.

Accessing address 14 (Page 3)


Page 3 loaded into frame 3.
Page 3 found in frame 3.

Accessing address 6 (Page 1)


Page 1 found in frame 1.

Accessing address 1 (Page 0)


Page 0 found in frame 0.

Final state of memory frames:


Frame 0: Page 0
Frame 1: Page 1
Frame 2: Page 2
Frame 3: Page 3


Executed Output:

Result:
Thus a Python program to implement the paging technique of memory management was successfully executed.



Date: ExP.No.: P a g e | 53

Write a program for the Implementation of the following page replacement algorithms
Aim:
To write a program for the implementation of page replacement algorithms.
Description:
Page replacement algorithms are used in computer operating systems to manage the memory efficiently when there is a need to swap pages between the main memory (RAM) and the secondary storage (usually a hard disk) due to limited physical memory. These algorithms decide which page to evict from memory when there is a page fault (a requested page is not present in RAM) and a new page needs to be loaded.

1. FIFO (First-In-First-Out):
FIFO is one of the simplest page replacement algorithms. It works on the principle that the first page that entered the memory will be the first one to be replaced. In other words, the page that has been in memory the longest will be evicted. This algorithm uses a queue data structure to keep track of the order in which pages were brought into memory. However, FIFO may suffer from "Belady's Anomaly," where increasing the number of frames can lead to more page faults.

2. LRU (Least Recently Used):

The LRU algorithm replaces the page that has not been accessed for the longest period of time. It is based on the idea that the least recently used page is the best candidate for replacement, since pages accessed long ago are less likely to be needed in the near future. To implement LRU, a stack or a linked list of pages ordered by their recent usage timestamp is maintained. The main challenge with LRU is that it requires tracking and updating the usage information for all pages, which can be computationally expensive.

3. LFU (Least Frequently Used):

The LFU algorithm replaces the page that has been used the least number of times. It works on the assumption that the pages with the least frequency of access are less likely to be used in the future. Each page has a counter associated with it that is incremented each time the page is accessed. The page with the lowest count is selected for replacement when needed. However, LFU can face issues when a page is accessed frequently for a short period and then not accessed again, leading to unnecessary retention of such pages.

4. MFU (Most Frequently Used):

MFU is the opposite of LFU: it selects the page with the highest access count for replacement. The idea is to retain pages that have been heavily used, assuming they are likely to be needed again soon. MFU can work well in scenarios where some pages are accessed very frequently, but it might not be the best choice in all cases, especially if a page is used heavily initially but becomes obsolete later.

5. OPTIMAL (Optimal Page Replacement):

The OPTIMAL algorithm is a theoretical algorithm that serves as the upper bound for other page replacement algorithms. It assumes that the future sequence of page requests is known and selects for replacement the page that will not be used for the longest time in the future. In practice, this algorithm is impossible to implement since it requires knowledge of future events. However, OPTIMAL is often used as a benchmark to compare the performance of other algorithms.

Each page replacement algorithm has its advantages and disadvantages. The best choice of algorithm depends on the specific use case, workload characteristics, and available hardware resources. Real-world implementations often involve a trade-off between algorithm complexity and performance.
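To complement the programs that follow (which work on random addresses), here is a small supplementary sketch that counts page faults for the same reference string under FIFO and LRU, so the two policies can be compared directly. The reference string and frame count are arbitrary example values, not part of the prescribed exercise.

from collections import deque, OrderedDict

def fifo_faults(reference, frames):
    memory, faults = deque(), 0
    for page in reference:
        if page not in memory:
            if len(memory) == frames:
                memory.popleft()            # evict the page loaded earliest
            memory.append(page)
            faults += 1
    return faults

def lru_faults(reference, frames):
    memory, faults = OrderedDict(), 0
    for page in reference:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used page
            memory[page] = True
            faults += 1
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
# For this reference string with 3 frames, FIFO gives 10 faults and LRU gives 9.
print("FIFO faults:", fifo_faults(ref, 3))
print("LRU faults:", lru_faults(ref, 3))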


Algorithms:
FIFO page replacement algorithms:
Program:
import random
from collections import deque

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_queue = deque()  # To keep track of pages for FIFO

def fifo_page_replacement(address, page_table):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_queue) >= NUM_PAGES:
            # Evict the page that was loaded first (FIFO)
            old_page = page_queue.popleft()
            memory[frame_number(old_page)] = -1
            page_table.pop(old_page, None)
        empty_frame = memory.index(-1) if -1 in memory else None
        if empty_frame is not None:
            memory[empty_frame] = page
        else:
            empty_frame = page_queue.index(page_table.pop(next(iter(page_table)), -1))
            memory[empty_frame] = page
        page_queue.append(page)
        page_table[page] = empty_frame
        print(f"Page {page} loaded into frame {page_table[page]}.")

def frame_number(page):
    return memory.index(page) if page in memory else -1

def access_memory_fifo(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        fifo_page_replacement(address, page_table)

def main_fifo():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_fifo(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_fifo()
Sample Output:
Accessing memory addresses: [9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.

Accessing address 0 (Page 0)


Page 0 loaded into frame 1.

Accessing address 13 (Page 3)


Page 3 loaded into frame 2.

Accessing address 4 (Page 1)


Page 1 loaded into frame 3.

Accessing address 2 (Page 0)


Page 0 found in frame 1.

Accessing address 8 (Page 2)


Page 2 found in frame 0.

Accessing address 6 (Page 1)


Page 1 found in frame 3.

Accessing address 1 (Page 0)


Page 0 found in frame 1.

Accessing address 14 (Page 3)


Page 3 found in frame 2.

Accessing address 7 (Page 1)


Page 1 found in frame 3.

Final state of memory frames:


Frame 0: Page 2
Frame 1: Page 0
Frame 2: Page 3
Frame 3: Page 1


Executed Output:


LFU:
Program:
from collections import defaultdict

n = int(input("Enter the number of pages: "))
capacity = int(input("Enter the number of page frames: "))
pages = [0] * n
print("Enter the pages: ")
for i in range(n):
    pages[i] = int(input())

pf = 0                  # page fault count
v = []                  # pages currently held in the frames
mp = defaultdict(int)   # usage frequency of each page

for i in range(n):
    if pages[i] not in v:
        # Page fault: if the frames are full, evict the page at the front,
        # which the ordering below keeps as the least frequently used one
        if len(v) == capacity:
            mp[v[0]] = mp[v[0]] - 1
            v.pop(0)
        v.append(pages[i])
        mp[pages[i]] += 1
        pf = pf + 1
    else:
        # Page hit: update its frequency and move it to the end (most recent)
        mp[pages[i]] += 1
        v.remove(pages[i])
        v.append(pages[i])
    # Keep the frame list ordered by frequency so the least frequently used
    # page stays at the front
    k = len(v) - 2
    while k >= 0 and mp[v[k]] > mp[v[k + 1]]:
        v[k], v[k + 1] = v[k + 1], v[k]
        k -= 1

print("The number of page faults for LFU algorithm is:", pf)


Sample Output:
Enter the number of pages: 13
Enter the number of page frames: 4
Enter the pages:
7
0
1
2
0
3
0
4
2
3
0
3
2
The number of page faults for LFU algorithm is: 6

Executed Output:


LRU:
Program:
import random
from collections import OrderedDict

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_order = OrderedDict()  # To keep track of pages for LRU (least recently used first)

def lru_page_replacement(address, page_table):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_order) >= NUM_PAGES:
            # Evict the least recently used page (the first entry in page_order)
            oldest_page = next(iter(page_order))
            frame_to_replace = page_table.pop(oldest_page, None)
            if frame_to_replace is not None:
                memory[frame_to_replace] = -1
            page_order.pop(oldest_page)
        empty_frame = memory.index(-1) if -1 in memory else None
        if empty_frame is not None:
            memory[empty_frame] = page
        else:
            empty_frame = page_order.popitem(last=False)[1]
            memory[empty_frame] = page
        page_table[page] = empty_frame
        page_order[page] = empty_frame
    else:
        page_order.move_to_end(page)
    print(f"Page {page} loaded into frame {page_table[page]}.")

def access_memory_lru(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        # Page hit: mark the page as most recently used
        page_order.move_to_end(page)
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        lru_page_replacement(address, page_table)

def main_lru():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_lru(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_lru()
Sample Output:
Accessing memory addresses:
[9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.

Accessing address 0 (Page 0)


Page 0 loaded into frame 1.

Accessing address 13 (Page 3)


Page 3 loaded into frame 2.

Accessing address 4 (Page 1)


Page 1 loaded into frame 3.

Accessing address 2 (Page 0)


Page 0 found in frame 1.

Accessing address 8 (Page 2)


Page 2 found in frame 0.

Accessing address 6 (Page 1)


Page 1 found in frame 3.

Accessing address 1 (Page 0)


Page 0 found in frame 1.

Accessing address 14 (Page 3)


Page 3 found in frame 2.

Accessing address 7 (Page 1)


Page 1 found in frame 3.

Final state of memory frames:


Frame 0: Page 2
Frame 1: Page 0
Frame 2: Page 3
Frame 3: Page 1


Executed Output:


Optimal page replacement:


Program:
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES

def optimal_page_replacement(address, page_table, addresses):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_table) >= NUM_PAGES:
            # Look ahead: for every resident page, find how soon it will be
            # referenced again, and evict the one used farthest in the future
            start = addresses.index(address)
            future_pages = [a // PAGE_SIZE for a in addresses[start:]]
            future_use = [float('inf')] * NUM_PAGES
            for i, frame_page in enumerate(memory):
                if frame_page in future_pages:
                    future_use[i] = future_pages.index(frame_page)
            replace_index = future_use.index(max(future_use))
            replaced_page = memory[replace_index]
            memory[replace_index] = page
            page_table.pop(replaced_page, None)
            print(f"Page {replaced_page} replaced with page {page} in frame {replace_index}.")
        else:
            empty_frame = memory.index(-1)
            memory[empty_frame] = page
        page_table[page] = memory.index(page)
        print(f"Page {page} loaded into frame {page_table[page]}.")

def access_memory_optimal(address, page_table, addresses):
    page = address // PAGE_SIZE
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        optimal_page_replacement(address, page_table, addresses)

def main_optimal():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_optimal(address, page_table, addresses)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_optimal()
Sample Output:
Accessing memory addresses: [9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.

Accessing address 0 (Page 0)


Page 0 loaded into frame 1.

Accessing address 13 (Page 3)


Page 3 loaded into frame 2.

Accessing address 4 (Page 1)


Page 1 loaded into frame 3.

Accessing address 2 (Page 0)


Page 0 found in frame 1.

Accessing address 8 (Page 2)


Page 2 found in frame 0.

Accessing address 6 (Page 1)


Page 1 found in frame 3.

Accessing address 1 (Page 0)


Page 0 found in frame 1.

Accessing address 14 (Page 3)


Page 3 found in frame 2.

Accessing address 7 (Page 1)


Page 1 found in frame 3.

Final state of memory frames:


Frame 0: Page 2
Frame 1: Page 0
Frame 2: Page 3
Frame 3: Page 1

Executed Output:


MFU page replacement algorithm:


Program:
from collections import defaultdict
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_frequency = defaultdict(int)  # Dictionary to keep track of page usage frequency

def mfu_page_replacement(address, page_table):
    page = address // PAGE_SIZE
    if page not in page_table:
        if len(page_table) >= NUM_PAGES:
            # Find the most frequently used page to replace
            most_frequent_page = max(page_frequency, key=page_frequency.get)
            frame_to_replace = page_table.pop(most_frequent_page, None)
            if frame_to_replace is not None:
                memory[frame_to_replace] = -1
                print(f"Page {most_frequent_page} replaced with page {page} in frame {frame_to_replace}.")
            else:
                frame_to_replace = memory.index(-1)
            page_frequency.pop(most_frequent_page, None)
        else:
            frame_to_replace = memory.index(-1)

        memory[frame_to_replace] = page
        page_table[page] = frame_to_replace

    page_frequency[page] += 1
    print(f"Page {page} loaded into frame {page_table[page]}.")

def access_memory_mfu(address, page_table):
    page = address // PAGE_SIZE
    if page in page_table:
        print(f"Page {page} found in frame {page_table[page]}.")
    else:
        print(f"Page fault at address {address} (Page {page})")
        mfu_page_replacement(address, page_table)

def main_mfu():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_mfu(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_mfu()
Sample Output:
Accessing memory addresses:
[9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.

Accessing address 0 (Page 0)


Page 0 loaded into frame 1.

Accessing address 13 (Page 3)


Page 3 loaded into frame 2.

Accessing address 4 (Page 1)


Page 1 loaded into frame 3.

Accessing address 2 (Page 0)


Page 0 found in frame 1.

Accessing address 8 (Page 2)


Page 2 found in frame 0.

Accessing address 6 (Page 1)


Page 1 found in frame 3.

Accessing address 1 (Page 0)


Page 0 found in frame 1.

Accessing address 14 (Page 3)


Page 3 found in frame 2.

Accessing address 7 (Page 1)


Page 1 found in frame 3.

Final state of memory frames:


Frame 0: Page 2
Frame 1: Page 0
Frame 2: Page 3
Frame 3: Page 1

Executed Output:


Result:
Thus Python programs to implement page replacement algorithms were successfully executed.
