OS Lab Programs 555
These are just a few examples of the many commands available in UNIX-like systems. Each command has its
own set of options and arguments that can be used to customize its behavior. For more detailed
information on each command, you can refer to their respective manual pages by using the "man" command
followed by the command name (e.g., "man ls" for the manual page of the "ls" command).
Output:
Result:
Thus some of the basic commands of UNIX operating system are successfully executed.
Write programs using the following system calls of the UNIX operating system: fork, exec, getpid, exit, wait,
close, stat, opendir, readdir.
Aim:
To write a program using the following system calls of the UNIX operating system: fork, exec, getpid, exit,
wait, close, stat, opendir, readdir.
Description:
1. `fork`: The `fork` system call creates a new process (child process) that is an exact copy of the calling
process (parent process). After calling `fork`, the child process has its own unique process ID (PID) and runs
independently of the parent process. It is commonly used to implement parallel processing and concurrent
execution.
2. `exec`: The `exec` family of system calls (e.g., `execl`, `execv`, `execve`, etc.) is used to replace the current
process's memory image with a new program. It loads a new program into the current process's memory
space, replacing the previous program's code and data. This allows the process to run a different program
while keeping the same PID.
3. `getpid`: The `getpid` system call returns the process ID (PID) of the calling process. It is often used to
identify the current process uniquely.
4. `exit`: The `exit` system call is used to terminate the current process voluntarily. When a process calls `exit`,
it cleans up its resources and returns an exit status to its parent process, indicating the termination condition
(usually 0 for success and non-zero for error).
5. `wait`: The `wait` system call is used by a parent process to wait for the termination of its child process(es).
When a process calls `wait`, it suspends execution until one of its child processes exits. It also allows the
parent process to retrieve the exit status of the terminated child.
6. `close`: The `close` system call is used to close a file descriptor. File descriptors are small integers that
represent open files in a process. When a file is no longer needed, the `close` call releases the associated
resources and makes the file descriptor available for reuse.
7. `stat`: The `stat` system call retrieves information about a file (e.g., size, permissions, timestamps, etc.). It
takes the file path as an argument and fills a `struct stat` with the file's metadata.
8. `opendir`: The `opendir` system call opens a directory stream corresponding to the given directory path. It
allows a process to read the contents of the directory using subsequent calls to `readdir`.
9. `readdir`: The `readdir` system call is used to read the contents of a directory opened with `opendir`. It
returns a pointer to a `struct dirent` that contains information about the next directory entry (e.g., file name,
file type).
These system calls are fundamental to the UNIX operating system's functionality and provide an interface for
user programs to interact with the kernel and perform various operations related to processes, file
management, and directory handling.
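The program below exercises most of these calls; `opendir` and `readdir` have no direct wrappers in Python's os module, but os.scandir uses them internally. A minimal sketch of listing a directory this way (the directory "." is only an example):

```python
import os

# os.scandir wraps opendir/readdir: it yields one directory entry at a time,
# each carrying the name and type information from struct dirent
with os.scandir(".") as entries:
    for entry in entries:
        kind = "dir" if entry.is_dir() else "file"
        print(f"{entry.name}\t{kind}")
```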
Program:
import os

# fork and getpid
pid = os.fork()
if pid > 0:
    # wait: the parent suspends until the child terminates
    child_pid, exit_status = os.wait()
    print(f"Child process (PID: {child_pid}) exited with status: {exit_status}")
else:
    print(f"Child process (PID: {os.getpid()})")
    # exit: terminate the child voluntarily
    os._exit(0)

# close: open a file descriptor, then release it
file_descriptor = os.open("file.txt", os.O_CREAT | os.O_WRONLY)
os.close(file_descriptor)

# stat
file_info = os.stat("file.txt")
print(f"File size: {file_info.st_size} bytes")
print(f"Last accessed: {file_info.st_atime}")

# exec: replace this process image with "ls -l"
# (this must come last -- nothing after a successful exec runs)
print("Exiting the program...")
os.execvp("ls", ["ls", "-l"])
Output:
Result:
Thus a python program to implement the system calls of the UNIX operating system (fork, exec, getpid, exit,
wait, close, stat, opendir, readdir) is successfully executed.
Operating Systems Laboratory JNTUA College of Engineering (Autonomous) Pulivendula
Date: Exp. No.: Page | 6
Aim:
To write a program to simulate UNIX commands like cp, ls, grep
Description:
1. `cp` (Copy): The `cp` command is used to copy files or directories from one location to another. It takes two
arguments: the source file/directory and the destination file/directory. When copying a single file, the syntax
is `cp source_file destination_file`, and when copying a directory and its contents, the syntax is `cp -r
source_directory destination_directory`.
2. `ls` (List): The `ls` command is used to list files and directories in a specified directory. Without any
arguments, it displays the contents of the current directory. It can be used with various options to customize
the output, such as showing hidden files, displaying file sizes, sorting files by different criteria, etc.
Program:
import shutil
import os
import re
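The listing above is truncated to its imports; a minimal sketch of the three simulated commands (the function names cp, ls, and grep are illustrative) could be:

```python
import os
import re
import shutil

def cp(src, dst):
    # simulate `cp src dst`
    shutil.copy(src, dst)

def ls(path="."):
    # simulate `ls`: print directory entries in sorted order
    for entry in sorted(os.listdir(path)):
        print(entry)

def grep(pattern, filename):
    # simulate `grep pattern file`: print lines matching the pattern
    regex = re.compile(pattern)
    with open(filename) as f:
        for line in f:
            if regex.search(line):
                print(line, end="")
```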
Result:
Thus a python program to simulate UNIX commands like cp, ls, grep is successfully executed.
SHELL programming
Aim:
To write some programs and implement them using shell programming.
Description:
Shell programming refers to writing scripts or programs using shell scripting languages to interact with the
operating system's shell (command-line interface). In Unix-like operating systems, the default shell is usually
the Bash shell (Bourne Again SHell), although there are other shells like Zsh, Csh, and Fish.
Shell scripts are plain text files containing a series of commands that are executed in sequence by the shell.
Shell programming is widely used for automating tasks, managing system configurations, and creating custom
utilities.
Here's a brief description of some key aspects of shell programming:
1. **Script Shebang:** The first line of a shell script typically starts with a "shebang" (#!) followed by the path
to the shell interpreter. For example, #!/bin/bash indicates that the script should be executed using the Bash
shell.
2. **Comments:** Shell scripts support comments using the '#' symbol. Comments are ignored by the shell
and are used to provide explanations or context within the script.
3. **Variables:** Shell scripts use variables to store data, such as strings or numbers. Variables are defined
without spaces around the '=' sign and are accessed using the '$' symbol. Example: `name="John"`, `echo
"Hello, $name!"` will output "Hello, John!"
4. **Command Substitution:** Enclosing a command within backticks (`) or $(...) allows you to capture its
output into a variable. Example: `files=$(ls)` stores the output of the `ls` command in the variable `files`.
5. **Control Structures:** Shell scripts support control structures like if-else, for loops, while loops, and case
statements. These structures enable conditional branching and repetitive execution of commands.
6. **Functions:** Shell scripts can define functions to encapsulate a set of commands that can be reused
throughout the script.
7. **Input/Output Redirection:** Shell scripts can redirect input and output streams. For example,
`command > output.txt` will redirect the output of `command` to a file called "output.txt."
8. **Command-Line Arguments:** Shell scripts can accept command-line arguments passed when executing
the script. The arguments are accessed using variables like $1, $2, etc.
9. **Exit Status:** Each command in a shell script returns an exit status. The special variable `$?` holds the
exit status of the last executed command.
10. **Conditional Execution:** Commands can be executed conditionally based on the success or failure of
preceding commands. For example, `command1 && command2` will execute `command2` only if `command1`
succeeds.
Shell programming is lightweight and efficient for automating repetitive tasks and managing system
configurations. It is especially useful for system administrators, developers, and power users who work
extensively on the command-line interface.
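Several of these features (command-line arguments, variables, exit status, conditional execution) can be seen together in one short script (the file name greet.sh is illustrative):

```shell
#!/bin/bash
# greet.sh -- demonstrates arguments, default values, exit status, and &&
name=${1:-World}                  # $1 is the first command-line argument
greeting="Hello, $name!"          # variable assignment (no spaces around =)
echo "$greeting"
echo "$greeting" | grep -q "World" && echo "No name was supplied"
echo "Exit status of the last command: $?"
```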
Programs:
1) Factorial.sh:
Aim: To write a shell scripting program to find factorial of a number
Program:
#!/bin/bash
factorial() {
    local num=$1
    local fact=1
    for (( i=1; i<=num; i++ ))
    do
        fact=$((fact * i))
    done
    echo $fact
}
echo "Enter a number: "
read num
result=$(factorial $num)
echo "The factorial of $num is: $result"
Output:
2) number_sum.sh:
Aim: To write a shell program to find sum of given list of numbers
Program:
#!/bin/bash
sum=0
while true
do
    echo "Enter a number (0 to exit): "
    read num
    if [ "$num" -eq 0 ]; then
        break
    fi
    sum=$((sum + num))
done
echo "The sum is: $sum"
Output:
3) word_count.sh:
Aim: To write a shell program to find the number of words in sentence
Program:
#!/bin/bash
echo "Enter a sentence: "
read sentence
word_count=$(echo "$sentence" | wc -w)
echo "Number of words in the sentence: $word_count"
Sample Output:
4) Password_generator.sh:
Aim: To write shell program to generate random password.
Program:
#!/bin/bash
echo "Enter the desired password length: "
read length
password=$(openssl rand -base64 $((length * 3 / 4)) | tr -d '\n' | head -c $length ; echo)
echo "Generated password: $password"
Output:
5) System_information.sh:
Aim: To write a shell program to display system information
Program:
#!/bin/bash
echo "System Information:"
echo "Hostname: $(hostname)"
echo "Operating System: $(uname -s)"
echo "Kernel Version: $(uname -r)"
echo "Available Memory: $(free -h | awk '/^Mem:/ {print $4}')"
echo "Disk Usage: $(df -h / | awk '/^\// {print $5}')"
echo "Logged-in Users: $(who | wc -l)"
Output:
Result:
Thus some shell scripting programs are implemented and executed successfully.
Aim:
To write programs to implement CPU scheduling algorithms
Description:
CPU scheduling algorithms are essential mechanisms in operating systems that determine which process gets
to use the CPU at any given time. These algorithms play a crucial role in enhancing system efficiency and
responsiveness. Here's an overview of common CPU scheduling algorithms:
1. First Come First Serve (FCFS): FCFS allocates the CPU to processes in the order in which they arrive
in the ready queue. It is simple and non-preemptive, but a long process can delay every process behind it
(the convoy effect), which increases the average waiting time.
2. Shortest Job Next (SJN) / Shortest Job First (SJF): SJN or SJF selects the process with the
smallest burst time (execution time) next. This algorithm aims to minimize average waiting time and
turnaround time by prioritizing shorter jobs. However, it requires accurate estimation of burst times to
be effective.
3. Priority Scheduling: Priority scheduling assigns a priority to each process, and the CPU is allocated
to the highest priority process available. It can be preemptive (priorities may change dynamically) or
non-preemptive (once a process starts, it continues until it finishes). Priority can be based on factors
like process characteristics, deadlines, or system policies.
4. Round Robin (RR): RR allocates a fixed time slice (quantum) of CPU time to each process in a
circular order. If a process doesn't complete within its time quantum, it's preempted and placed at the
end of the queue. RR ensures fair CPU allocation among processes and is suitable for time-sharing
systems. However, it may lead to increased overhead due to frequent context switches with smaller
time slices.
5. Shortest Remaining Time First (SRTF): SRTF is a preemptive version of SJN where the process
with the smallest remaining execution time is selected for execution. It aims to minimize waiting time
by allowing shorter jobs to complete quickly. SRTF requires accurate prediction or estimation of
remaining execution times, which can be challenging in dynamic environments.
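All the programs below report the same derived metrics: for each process, turnaround time is completion time minus arrival time, and waiting time is turnaround time minus burst time. A tiny helper illustrating the arithmetic (the function name is illustrative):

```python
def metrics(arrival, burst, completion):
    # turnaround = completion - arrival; waiting = turnaround - burst
    turnaround = completion - arrival
    waiting = turnaround - burst
    return waiting, turnaround

# a process that arrives at t=2, needs 3 units, and finishes at t=8
print(metrics(2, 3, 8))  # (3, 6): it waited 3 units and spent 6 in the system
```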
Algorithms:
1) First Come First Serve:
Program:
def fcfs_scheduling(processes, arrival_time, burst_time):
    # assumes processes are entered in order of arrival
    n = len(processes)
    waiting_time = [0] * n
    completion_time = [0] * n
    turnaround_time = [0] * n
    current_time = 0
    print("Gantt chart:")
    for i in range(n):
        # the CPU idles until the process arrives, then runs it to completion
        current_time = max(current_time, arrival_time[i]) + burst_time[i]
        completion_time[i] = current_time
        turnaround_time[i] = completion_time[i] - arrival_time[i]
        waiting_time[i] = turnaround_time[i] - burst_time[i]
        print(f"| {processes[i]} {completion_time[i]} ", end="")
    print("|")
    print("Process\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time = []
    burst_time = []
    for i in range(n):
        at = int(input(f"Enter the arrival time of {processes[i]}: "))
        bt = int(input(f"Enter the burst time of {processes[i]}: "))
        arrival_time.append(at)
        burst_time.append(bt)
    return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()
fcfs_scheduling(processes, arrival_time, burst_time)
SAMPLE OUTPUT:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 5
Enter the arrival time of p1: 2
Enter the burst time of p1: 3
Enter the arrival time of p2: 4
Enter the burst time of p2: 2
Gantt chart:
| p0 5 | p1 8 | p2 10 |
Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 5 5 0 5
p1 2 3 8 3 6
p2 4 2 10 4 6
2) Shortest Job First (SJF):
Program:
def sjf_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    waiting_time, completion_time, turnaround_time = [0] * n, [0] * n, [0] * n
    done = [False] * n
    current_time, completed = 0, 0
    print("Gantt chart:")
    while completed < n:
        # choose the shortest job among those that have arrived
        min_index = -1
        for i in range(n):
            if not done[i] and arrival_time[i] <= current_time:
                if min_index == -1 or burst_time[i] < burst_time[min_index]:
                    min_index = i
        if min_index == -1:
            current_time += 1
            continue
        current_time += burst_time[min_index]
        completion_time[min_index] = current_time
        turnaround_time[min_index] = current_time - arrival_time[min_index]
        waiting_time[min_index] = turnaround_time[min_index] - burst_time[min_index]
        done[min_index] = True
        completed += 1
        print(f"| {processes[min_index]} {current_time} ", end="")
    print("|")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time, burst_time = [], []
    for i in range(n):
        arrival_time.append(int(input(f"Enter the arrival time of {processes[i]}: ")))
        burst_time.append(int(input(f"Enter the burst time of {processes[i]}: ")))
    return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()
sjf_scheduling(processes, arrival_time, burst_time)
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 1
Gantt chart:
| p0 8 | p2 9 | p1 13 |
Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 8 0 8
p1 1 4 13 8 12
p2 2 1 9 6 7
Average waiting time: 4.67
3) Priority Scheduling (non-preemptive):
Program:
def priority_scheduling(processes, arrival_time, burst_time, priority):
    n = len(processes)
    waiting_time, completion_time, turnaround_time = [0] * n, [0] * n, [0] * n
    done = [False] * n
    current_time = 0
    completed_processes = 0
    print("Gantt chart:")
    while completed_processes < n:
        # choose the highest-priority process (lowest number) that has arrived
        current_process = -1
        for i in range(n):
            if not done[i] and arrival_time[i] <= current_time:
                if current_process == -1 or priority[i] < priority[current_process]:
                    current_process = i
        if current_process == -1:
            current_time += 1
            continue
        burst = burst_time[current_process]
        # Update times
        completion_time[current_process] = current_time + burst
        turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
        waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
        current_time += burst
        done[current_process] = True
        completed_processes += 1
        print(f"| {processes[current_process]} {current_time} ", end="")
    print("|")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{priority[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time, burst_time, priority = [], [], []
    for i in range(n):
        arrival_time.append(int(input(f"Enter the arrival time of {processes[i]}: ")))
        burst_time.append(int(input(f"Enter the burst time of {processes[i]}: ")))
        priority.append(int(input(f"Enter the priority of {processes[i]} (lower number means higher priority): ")))
    return processes, arrival_time, burst_time, priority

processes, arrival_time, burst_time, priority = get_inputs()
priority_scheduling(processes, arrival_time, burst_time, priority)
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the priority of p0 (lower number means higher priority): 2
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the priority of p1 (lower number means higher priority): 1
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Enter the priority of p2 (lower number means higher priority): 3
Gantt chart:
| p0 8 | p1 12 | p2 14 |
Process Arrival Time Burst Time Priority Completion Time Waiting Time Turnaround Time
p0 0 8 2 8 0 8
p1 1 4 1 12 7 11
p2 2 2 3 14 10 12
Average waiting time: 5.67
Executed Output:
4) Round Robin (RR):
Program:
def round_robin_scheduling(processes, arrival_time, burst_time, quantum):
    n = len(processes)
    waiting_time, completion_time, turnaround_time = [0] * n, [0] * n, [0] * n
    remaining_time = burst_time[:]
    current_time = 0
    queue = []
    arrived = [False] * n
    completed = 0
    print("Gantt chart:")
    while completed < n:
        # enqueue processes that have arrived by now
        for i in range(n):
            if not arrived[i] and arrival_time[i] <= current_time:
                queue.append(i)
                arrived[i] = True
        if not queue:
            # If no processes are in the queue, advance time
            current_time += 1
            continue
        current_process = queue.pop(0)
        run = min(quantum, remaining_time[current_process])
        current_time += run
        remaining_time[current_process] -= run
        print(f"| {processes[current_process]} {current_time} ", end="")
        # processes that arrived during this time slice join the queue first
        for i in range(n):
            if not arrived[i] and arrival_time[i] <= current_time:
                queue.append(i)
                arrived[i] = True
        if remaining_time[current_process] == 0:
            completion_time[current_process] = current_time
            turnaround_time[current_process] = completion_time[current_process] - arrival_time[current_process]
            waiting_time[current_process] = turnaround_time[current_process] - burst_time[current_process]
            completed += 1
        else:
            # If process is not completed, put it back in the queue
            queue.append(current_process)
    print("|")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")
    avg_waiting_time = sum(waiting_time) / n
    print(f"\nAverage waiting time: {avg_waiting_time:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time, burst_time = [], []
    for i in range(n):
        arrival_time.append(int(input(f"Enter the arrival time of {processes[i]}: ")))
        burst_time.append(int(input(f"Enter the burst time of {processes[i]}: ")))
    quantum = int(input("Enter the time quantum for Round Robin scheduling: "))
    return processes, arrival_time, burst_time, quantum

processes, arrival_time, burst_time, quantum = get_inputs()
round_robin_scheduling(processes, arrival_time, burst_time, quantum)
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 10
Enter the arrival time of p1: 2
Enter the burst time of p1: 5
Enter the arrival time of p2: 4
Enter the burst time of p2: 8
Enter the time quantum for Round Robin scheduling: 4
Gantt chart:
| p0 4 | p1 8 | p2 12 | p0 16 | p1 17 | p2 21 | p0 23 |
Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 10 23 13 23
p1 2 5 17 10 15
p2 4 8 21 9 17
Average waiting time: 10.67
Executed Output:
5) Shortest Remaining Time First (SRTF):
Program:
def srtf_scheduling(processes, arrival_time, burst_time):
    n = len(processes)
    waiting_time, completion_time, turnaround_time = [0] * n, [0] * n, [0] * n
    remaining_time = burst_time[:]
    current_time, completed_processes = 0, 0
    gantt = []
    while completed_processes < n:
        # pick the arrived process with the least remaining time
        current = -1
        for i in range(n):
            if remaining_time[i] > 0 and arrival_time[i] <= current_time:
                if current == -1 or remaining_time[i] < remaining_time[current]:
                    current = i
        if current == -1:
            current_time += 1
            continue
        remaining_time[current] -= 1      # run for one time unit (preemptible)
        current_time += 1
        if gantt and gantt[-1][0] == current:
            gantt[-1][1] = current_time   # extend the current Gantt segment
        else:
            gantt.append([current, current_time])
        if remaining_time[current] == 0:
            completion_time[current] = current_time
            turnaround_time[current] = completion_time[current] - arrival_time[current]
            waiting_time[current] = turnaround_time[current] - burst_time[current]
            completed_processes += 1
    print("Gantt chart:")
    print("".join(f"| {processes[p]} {t} " for p, t in gantt) + "|")
    for i in range(n):
        print(f"{processes[i]}\t{arrival_time[i]}\t\t{burst_time[i]}\t\t{completion_time[i]}\t\t{waiting_time[i]}\t\t{turnaround_time[i]}")
    print(f"\nAverage waiting time: {sum(waiting_time) / n:.2f}")

def get_inputs():
    n = int(input("Enter the number of processes: "))
    processes = [f"p{i}" for i in range(n)]
    arrival_time, burst_time = [], []
    for i in range(n):
        arrival_time.append(int(input(f"Enter the arrival time of {processes[i]}: ")))
        burst_time.append(int(input(f"Enter the burst time of {processes[i]}: ")))
    return processes, arrival_time, burst_time

processes, arrival_time, burst_time = get_inputs()
srtf_scheduling(processes, arrival_time, burst_time)
Sample Output:
Enter the number of processes: 3
Enter the arrival time of p0: 0
Enter the burst time of p0: 8
Enter the arrival time of p1: 1
Enter the burst time of p1: 4
Enter the arrival time of p2: 2
Enter the burst time of p2: 2
Gantt chart:
| p0 1 | p1 2 | p2 4 | p1 7 | p0 14 |
Process Arrival Time Burst Time Completion Time Waiting Time Turnaround Time
p0 0 8 14 6 14
p1 1 4 7 2 6
p2 2 2 4 0 2
Average waiting time: 2.67
Executed Output:
Result:
Thus python programs to implement CPU scheduling algorithms successfully executed.
Aim:
To write a program for the implementation of semaphores
Description:
Semaphores are a synchronization mechanism used in concurrent programming to control access to
shared resources and coordinate the execution of multiple processes or threads. They were introduced
by Edsger W.Dijkstra in 1965 and have since become a fundamental concept in operating systems and
parallel computing.
A semaphore is a simple integer variable, often referred to as a "counting semaphore." It can take
on non-negative integer values and supports two main operations:
1. **Wait (P) Operation**: When a process (or thread) wants to access a shared resource, it must first
perform a "wait" operation on the semaphore. If the semaphore value is greater than zero, the
process decrements the semaphore value and continues its execution, indicating that the resource is
available. If the semaphore value is zero, the process is blocked (put to sleep) until the semaphore
value becomes positive again.
2. **Signal (V) Operation**: After a process finishes using a shared resource, it performs a "signal"
operation on the semaphore. This operation increments the semaphore value, indicating that the
resource is now available for other processes or threads to use. If there were blocked processes waiting
for the semaphore to become positive (due to previous "wait" operations), one of them will be
awakened and granted access to the resource.
Semaphores help prevent race conditions and ensure that critical sections of code (regions of code that
access shared resources) are mutually exclusive, meaning only one process or thread can access the
shared resource at a time.
However, it's essential to use semaphores carefully to avoid potential issues like deadlocks (circular
waiting) or race conditions. More advanced synchronization mechanisms, such as mutexes and
condition variables, are often used in modern programming languages and libraries to manage
concurrency more effectively.
Program:
import threading

# Semaphore functions
# (a one-element list holds the counter so it can be mutated in place;
# this busy-wait version is illustrative only -- the check-then-decrement
# sequence is not atomic, unlike a real semaphore)
def semaphore_wait(semaphore):
    while semaphore[0] <= 0:
        pass              # spin until the semaphore becomes positive
    semaphore[0] -= 1
    return semaphore

def semaphore_signal(semaphore):
    semaphore[0] += 1
    return semaphore
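The wait/signal pair above spins on a plain counter; Python's built-in threading.Semaphore provides the same P/V semantics atomically. A short usage sketch (the worker function and iteration counts are illustrative):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore guarding the critical section
counter = 0

def worker():
    global counter
    for _ in range(1000):
        sem.acquire()          # wait (P)
        counter += 1           # critical section
        sem.release()          # signal (V)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Final counter value: {counter}")  # 4000 with correct synchronization
```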
Executed output:
Result:
Thus a python program for the implementation of semaphores is successfully executed.
Implementation of Shared memory:
Aim:
To write a program for the implementation of shared memory
Description:
In shared memory, a region of memory is mapped into the address space of multiple processes or
threads. These processes/threads can then read from or write to the shared memory region just like
accessing regular memory. This shared memory area acts as a shared buffer, facilitating the exchange of
data between different processes without the need for copying data between them.
However, shared memory also requires careful synchronization to avoid data inconsistencies and race
conditions. Developers need to use synchronization primitives like semaphores, mutexes, or atomic
operations to ensure that multiple processes or threads access the shared memory in a coordinated
and controlled manner. Proper synchronization helps maintain data integrity and prevent conflicts
when multiple entities attempt to modify the shared data concurrently.
To use shared memory, operating systems provide APIs for creating and managing shared memory
regions. Developers need to be cautious while using shared memory to prevent data corruption and
ensure proper synchronization to avoid race conditions and data inconsistencies.
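One such API in Python (3.8+) is multiprocessing.shared_memory, which wraps the operating system's shared-memory facility; a minimal sketch:

```python
from multiprocessing import shared_memory

# create a named shared-memory block backed by the OS
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[0] = 42                   # write a byte into the shared region

# attach a second handle by name, as another process would
other = shared_memory.SharedMemory(name=shm.name)
print(other.buf[0])               # both views see the same memory

other.close()
shm.close()
shm.unlink()                      # release the OS resources
```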
Program:
import multiprocessing

def worker(shared_data):
    for _ in range(5):
        with shared_data.get_lock():   # synchronize access to the shared value
            shared_data.value += 1
            print(f"Worker process: {shared_data.value}")

def main():
    # Create a shared integer with initial value 0
    shared_data = multiprocessing.Value('i', 0)
    workers = [multiprocessing.Process(target=worker, args=(shared_data,)) for _ in range(2)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    print(f"Final shared data value: {shared_data.value}")

if __name__ == "__main__":
    main()
Sample Output:
Worker process: 1
Worker process: 2
Worker process: 3
Worker process: 4
Worker process: 5
Worker process: 6
Worker process: 7
Worker process: 8
Worker process: 9
Worker process: 10
Final shared data value: 10
Executed Output:
Result:
Thus a python program for the implementation of shared memory is successfully executed.
Implementation of IPC:
Aim:
To write a program for the implementation of Inter Process Communication (IPC)
Description:
Inter-Process Communication (IPC) facilitates communication and data sharing among concurrent
processes on a computer. Shared memory allows direct access to a common memory area for high-speed
data exchange, while message passing involves sending messages through channels like pipes or sockets.
Synchronization tools like semaphores and mutexes prevent conflicts and ensure data integrity. IPC finds
applications in parallel computing, client-server systems, and real-time applications. Proper design and
implementation are essential to handle complexities and maintain security in IPC mechanisms.
Inter-Process Communication (IPC) enables data exchange and coordination between concurrent
processes in a computer system. It uses shared memory or message passing channels for communication.
Synchronization mechanisms like semaphores ensure proper resource access. IPC is crucial for parallel
computing, client-server models, and process coordination. However, it requires careful design to avoid
complexities and security issues.
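Besides queues, the pipes mentioned above can be used directly; a minimal sketch with os.pipe and fork (POSIX-only):

```python
import os

read_fd, write_fd = os.pipe()      # one-way channel: child writes, parent reads
pid = os.fork()
if pid == 0:
    os.close(read_fd)              # child uses only the write end
    os.write(write_fd, b"hello from child")
    os.close(write_fd)
    os._exit(0)
else:
    os.close(write_fd)             # parent uses only the read end
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.wait()
    print(f"Parent received: {message.decode()}")
```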
Program:
import multiprocessing
import time
def producer(queue):
"""Function to be run by the producer process."""
for i in range(5):
item = f"item-{i}"
print(f"Producer putting {item} into queue")
queue.put(item) # Put item into the queue
time.sleep(1) # Simulate work
def consumer(queue):
"""Function to be run by the consumer process."""
while True:
item = queue.get() # Get item from the queue
if item is None: # Sentinel value to end the consumer process
break
print(f"Consumer got {item} from queue")
time.sleep(2) # Simulate work
def main():
    # Create a queue for IPC
    queue = multiprocessing.Queue()
    producer_process = multiprocessing.Process(target=producer, args=(queue,))
    consumer_process = multiprocessing.Process(target=consumer, args=(queue,))
    producer_process.start()
    consumer_process.start()
    producer_process.join()
    queue.put(None)      # sentinel value tells the consumer to stop
    consumer_process.join()
    print("Both producer and consumer processes have completed.")

if __name__ == "__main__":
    main()
Sample Output:
Producer putting item-0 into queue
Consumer got item-0 from queue
Producer putting item-1 into queue
Producer putting item-2 into queue
Consumer got item-1 from queue
Producer putting item-3 into queue
Producer putting item-4 into queue
Consumer got item-2 from queue
Consumer got item-3 from queue
Consumer got item-4 from queue
Both producer and consumer processes have completed.
Executed Output:
Result:
Thus a python program for the implementation of IPC successfully executed.
Implementation of Banker's algorithm for deadlock avoidance:
Aim:
To write a program for the implementation of Banker's algorithm for deadlock avoidance
Description:
1. **Resource Allocation Model**: The algorithm operates under the assumption that the system has
a fixed number of resources of different types (e.g., memory, CPU, printers) and multiple processes
compete for these resources.
2. **Maximum Claim**: Each process declares its maximum possible resource requirements
(maximum claim) before it begins its execution. This information is known to the system.
3. **Available Resources**: The system maintains a record of the available resources for each
resource type at any given time.
4. **Safety and Request Matrices**: The algorithm uses two matrices: the safety matrix and the
request matrix. The safety matrix is used to assess the system's ability to allocate resources safely to
processes without causing deadlock. The request matrix stores the resource requests made by each
process during its execution.
5. **Safe State**: A system state is considered safe if there exists a sequence of resource allocations
where each process can complete its execution and release resources, allowing other processes to
complete without getting stuck in a deadlock.
6. **Dynamic Resource Requests**: The Banker's algorithm allows processes to make multiple
resource requests during their execution. It checks each request's safety before granting it.
The Banker's algorithm is a preventive measure against deadlocks, ensuring that resource allocations are
done in a manner that avoids deadlock scenarios. It provides a way for processes to request resources
safely, considering the system's current resource availability and avoiding situations that could lead to
deadlock.
Program:
import numpy as np

def is_safe(available, allocation, need):
    # Banker's safety algorithm: look for an order in which every process can finish
    n = len(allocation)
    work = available.copy()
    finish = [False] * n
    safe_sequence = []
    for _ in range(n):
        for i in range(n):
            if not finish[i] and all(need[i] <= work):
                work += allocation[i]   # process i finishes and releases its resources
                finish[i] = True
                safe_sequence.append(i)
                break
        else:
            return False, []            # no remaining process can finish safely
    return True, safe_sequence

def release_resources(i, release, available, allocation, need):
    # return resources held by process i to the available pool
    available += release
    allocation[i] -= release
    need[i] += release
    print(f"Process {i} released resources. New state:")
    print(f"Available: {available}")
    print(f"Allocation: {allocation}")
    print(f"Need: {need}")
Process 1, Resource 2: 2
Process 1, Resource 3: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Process 2, Resource 3: 2
Enter the allocation matrix:
Process 0, Resource 0: 0
Process 0, Resource 1: 1
Process 0, Resource 2: 0
Process 0, Resource 3: 0
Process 1, Resource 0: 2
Process 1, Resource 1: 0
Process 1, Resource 2: 0
Process 1, Resource 3: 1
Process 2, Resource 0: 3
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Process 2, Resource 3: 1
Enter the available resources:
Resource 0: 3
Resource 1: 3
Resource 2: 2
Resource 3: 2
Executing Banker's Algorithm...
[0 0 0 0]
[3 0 2 1]]
Need: [[7 5 3 2]
[3 2 2 1]
[6 0 0 1]]
System is in a safe state after Process 1. Safe sequence: [0, 1, 2]
Process 2 is requesting resources...
Process 2 must wait. Insufficient resources available.
Process 2 released resources. New state:
Available: [3 3 2 4]
Allocation: [[0 0 0 0]
[0 0 0 0]
[0 0 0 0]]
Need: [[7 5 3 2]
[3 2 2 1]
[9 0 2 1]]
System is in a safe state after Process 2. Safe sequence: [0, 1, 2]
Executed Output:
Result:
Thus a python program for the implementation of Banker's algorithm for deadlock avoidance is
successfully executed.
Implementation of deadlock detection algorithm:
Aim:
To write a program for the implementation of a deadlock detection algorithm
Program:
import numpy as np

def detect_deadlock(allocated, max_claim, available):
    n = len(allocated)
    need = max_claim - allocated
    work = available.copy()
    finish = [False] * n
    while True:
        progress_made = False
        for i in range(n):
            if not finish[i] and all(need[i] <= work):
                # If process i can finish, update work and finish arrays
                work += allocated[i]
                finish[i] = True
                progress_made = True
                print(f"Process {i} can finish; updated work: {work}")
        if not progress_made:
            break
    deadlocked = [i for i in range(n) if not finish[i]]
    if deadlocked:
        print(f"Deadlock detected among processes: {deadlocked}")
    else:
        print("No deadlock detected.")

def read_matrix(name, n, m):
    print(f"Enter the {name} matrix:")
    return np.array([[int(input(f"Process {i}, Resource {j}: ")) for j in range(m)] for i in range(n)])

def main():
    n = int(input("Enter the number of processes: "))
    m = int(input("Enter the number of resources: "))
    max_claim = read_matrix("maximum claim", n, m)
    allocated = read_matrix("allocation", n, m)
    print("Enter the available resources:")
    available = np.array([int(input(f"Resource {j}: ")) for j in range(m)])
    print("\nDetecting deadlock...\n")
    detect_deadlock(allocated, max_claim, available)

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of processes: 3
Enter the number of resources: 3
Enter the maximum claim matrix:
Process 0, Resource 0: 7
Process 0, Resource 1: 5
Process 0, Resource 2: 3
Process 1, Resource 0: 3
Process 1, Resource 1: 2
Process 1, Resource 2: 2
Process 2, Resource 0: 9
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Enter the allocation matrix:
Process 0, Resource 0: 0
Process 0, Resource 1: 1
Process 0, Resource 2: 0
Process 1, Resource 0: 2
Process 1, Resource 1: 0
Process 1, Resource 2: 0
Process 2, Resource 0: 3
Process 2, Resource 1: 0
Process 2, Resource 2: 2
Enter the available resources:
Resource 0: 3
Resource 1: 3
Resource 2: 2
Detecting deadlock...
Executed Output:
Result:
Thus a python program for the implementation of the deadlock detection algorithm is successfully
executed.
Implementation of threading and synchronization:
Aim:
To write a program for the implementation of threading and synchronization
Description:
1. **Thread Creation and Management**: Threads are created and managed by the operating system
or the programming language's runtime environment. They share the same memory space within a
process, making communication and data sharing between threads easier.
2. **Race Conditions**: Race conditions occur when multiple threads access shared resources in an
uncontrolled manner, leading to unpredictable and potentially incorrect behavior. Synchronization
mechanisms help prevent race conditions by allowing only one thread to access the shared resource at a
time.
3. **Deadlocks**: Deadlocks can occur when two or more threads are each waiting for a resource that
is held by another thread, resulting in a circular dependency. Properly designed synchronization can help
avoid deadlocks by ensuring that threads release resources in a coordinated manner.
Program:
import threading

shared_counter = 0
counter_lock = threading.Lock()

def worker(thread_id, increments):
    global shared_counter
    for _ in range(increments):
        with counter_lock:          # synchronize access to the shared counter
            old = shared_counter
            shared_counter = old + 1
            print(f"Thread {thread_id} is incrementing counter from {old} to {shared_counter}")

def main():
    num_threads = int(input("Enter the number of threads: "))
    increments_per_thread = int(input("Enter the number of increments per thread: "))
    threads = []
    for i in range(num_threads):
        t = threading.Thread(target=worker, args=(i, increments_per_thread))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    print(f"Final value of shared_counter: {shared_counter}")

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of threads: 3
Enter the number of increments per thread: 5
Thread 0 is incrementing counter from 0 to 1
Thread 1 is incrementing counter from 1 to 2
Thread 2 is incrementing counter from 2 to 3
Thread 0 is incrementing counter from 3 to 4
Thread 1 is incrementing counter from 4 to 5
Thread 2 is incrementing counter from 5 to 6
Thread 0 is incrementing counter from 6 to 7
Thread 1 is incrementing counter from 7 to 8
Thread 2 is incrementing counter from 8 to 9
Thread 0 is incrementing counter from 9 to 10
Thread 1 is incrementing counter from 10 to 11
Thread 2 is incrementing counter from 11 to 12
Final value of shared_counter: 12
Executed Output:
Result:
Thus a python program for implementation of threading and synchronization successfully
executed.
Write a program for Implementation of memory allocation methods for fixed partitions:
a) First fit
b) Best fit
c) Worst fit
Aim:
To write a program for Implementation of memory allocation methods for fixed partitions:
a) First fit b) Best fit c) Worst fit
Description:
Memory allocation methods for fixed partitions are used in operating systems to manage memory in a
system with a fixed number of memory partitions of different sizes. These methods determine how
processes are allocated to these fixed partitions based on their size requirements.
1. **First Fit**:
- In the first-fit memory allocation method, when a new process arrives and needs memory, the
system searches the memory partitions sequentially from the beginning.
- The first partition that is large enough to accommodate the process is allocated to it.
- This method is relatively simple and efficient in terms of time complexity, since it stops searching
as soon as a suitable partition is found.
- However, it may lead to fragmentation, where small blocks of unused memory are scattered
across the memory, making it difficult to allocate larger processes in the future.
2. **Best Fit**:
- In the best-fit memory allocation method, when a new process arrives, the system searches
for the smallest partition that can hold the process.
- Among all the partitions that are large enough to accommodate the process, the one with the
smallest size is chosen for allocation.
- This method helps in reducing wasted space, as it tries to fit each process into the smallest
available partition that can hold it.
- However, it may be slower than the first-fit method, as it needs to search the entire
list of partitions to find the best fit.
3. **Worst Fit**:
- In the worst-fit memory allocation method, when a new process arrives, the system searches
for the largest partition available that can hold the process.
- The largest partition among all the partitions that can accommodate the process is allocated to it.
- This method may waste more memory than first fit and best fit, as large
partitions are used for small processes, leaving behind large unused spaces inside the partitions.
- It is less commonly used than first fit and best fit because it does not utilize memory space efficiently.
In summary, memory allocation methods for fixed partitions, such as first fit, best fit, and worst fit,
determine how processes are assigned to available memory partitions based on their size requirements.
Each method has its advantages and disadvantages, affecting fragmentation and memory utilization in
the system. The choice of allocation method depends on the specific requirements and
characteristics of the system.
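The best-fit and worst-fit strategies described above differ from first fit only in how a partition is chosen. A minimal sketch under the same fixed-partition model (one process per partition; the function names are illustrative, not part of the program below):

```python
def best_fit(partitions, processes):
    """Allocate each process to the smallest free partition that can hold it."""
    allocation = [-1] * len(processes)
    used = [False] * len(partitions)
    for i, process in enumerate(processes):
        candidates = [j for j, size in enumerate(partitions)
                      if not used[j] and size >= process]
        if candidates:
            j = min(candidates, key=lambda j: partitions[j])  # smallest fitting
            allocation[i], used[j] = j, True
    return allocation

def worst_fit(partitions, processes):
    """Allocate each process to the largest free partition that can hold it."""
    allocation = [-1] * len(processes)
    used = [False] * len(partitions)
    for i, process in enumerate(processes):
        candidates = [j for j, size in enumerate(partitions)
                      if not used[j] and size >= process]
        if candidates:
            j = max(candidates, key=lambda j: partitions[j])  # largest fitting
            allocation[i], used[j] = j, True
    return allocation

print(best_fit([100, 500, 200, 300], [212, 417, 112, 426, 50]))   # [3, 1, 2, -1, 0]
print(worst_fit([100, 500, 200, 300], [212, 417, 112, 426, 50]))  # [1, -1, 3, -1, 2]
```

With the same inputs as the sample run below, best fit places process 0 (212) in the 300-unit partition, while worst fit places it in the 500-unit partition and can then no longer fit process 1 (417).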
Program:
def first_fit(partitions, processes):
    allocation = [-1] * len(processes)
    used = [False] * len(partitions)
    for i, process in enumerate(processes):
        for j, partition in enumerate(partitions):
            if not used[j] and partition >= process:
                allocation[i] = j   # fixed partition j now holds process i
                used[j] = True
                break
    return allocation

def print_allocation(allocation, processes, method):
    print(f"\n{method} Allocation:")
    for i in range(len(processes)):
        if allocation[i] != -1:
            print(f"Process {i} allocated to Partition {allocation[i]}")
        else:
            print(f"Process {i} not allocated")

def main():
    num_partitions = int(input("Enter the number of partitions: "))
    partitions = [int(input(f"Enter size of partition {i}: ")) for i in range(num_partitions)]
    num_processes = int(input("Enter the number of processes: "))
    processes = [int(input(f"Enter size of process {i}: ")) for i in range(num_processes)]
    # First Fit
    allocation_first_fit = first_fit(partitions.copy(), processes)
    print_allocation(allocation_first_fit, processes, "First Fit")

if __name__ == "__main__":
    main()
Sample Output:
Enter the number of partitions: 4
Enter size of partition 0: 100
Enter size of partition 1: 500
Enter size of partition 2: 200
Enter size of partition 3: 300
Enter the number of processes: 5
Enter size of process 0: 212
Enter size of process 1: 417
Enter size of process 2: 112
Enter size of process 3: 426
Enter size of process 4: 50
First Fit Allocation:
Process 0 allocated to Partition 2
Process 1 allocated to Partition 1
Process 2 allocated to Partition 3
Process 3 not allocated
Process 4 allocated to Partition 0
Executed Output:
Result:
Thus a python program for Implementation of memory allocation methods for fixed partitions is
successfully executed.
Write a program to implement paging technique of memory management.
Aim:
To write a python program to implement the paging technique of memory management.
Description:
In the paging technique, the physical memory is divided into fixed-size blocks called "frames," and the
logical memory used by processes is divided into fixed-size blocks called "pages." These pages are usually
the same size as the frames. The size of a page is typically a power of 2, such as 4 KB or 8 KB.
When a process needs to access a specific memory address, the virtual address generated by the
CPU is divided into two parts: a "page number" and an "offset." The page number is used as an
index into a page table, which is a data structure maintained by the operating system to keep
track of the mapping between virtual pages and physical frames.
The page table provides the corresponding physical frame number for the given page number. The
offset is used to determine the exact location within the physical frame where the data is stored.
If a page is not currently present in physical memory (i.e., it is not loaded into a frame), the CPU
generates a page fault. The operating system then retrieves the required page from secondary
storage (e.g., a hard disk) and loads it into an available frame in physical memory. The page table is
updated to reflect the new mapping between the virtual page and the physical frame.
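When the page size is a power of two, the page-number/offset split described above is simple bit arithmetic. A minimal sketch with an illustrative 4 KB page size and a hypothetical page table (these values are not part of the program below):

```python
PAGE_SIZE = 4096   # 4 KB pages (a power of two)
OFFSET_BITS = 12   # log2(PAGE_SIZE)

# Hypothetical mapping: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    page = virtual_address >> OFFSET_BITS        # high bits select the page
    offset = virtual_address & (PAGE_SIZE - 1)   # low bits locate data within the frame
    if page not in page_table:
        raise LookupError(f"page fault on page {page}")  # OS would load the page here
    return (page_table[page] << OFFSET_BITS) | offset

# Virtual 0x1ABC lies in page 1, which maps to frame 2 -> physical 0x2ABC
print(hex(translate(0x1ABC)))  # 0x2abc
```

The offset bits pass through translation unchanged; only the page-number bits are replaced by the frame number.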
Paging offers several advantages:
1. **Simplified Address Translation:** With paging, the CPU only needs to perform a single
level of translation, from virtual addresses to physical addresses, making the address translation
process more efficient.
2. **Protection and Isolation:** Each page is assigned specific access permissions, allowing the
operating system to enforce memory protection and isolate processes from one another.
3. **Memory Sharing:** Multiple processes can share the same physical page if they need access to the
same data, such as code libraries or read-only data.
Paging, however, requires the overhead of maintaining the page table, and page faults can lead to
performance penalties due to the need to fetch data from secondary storage. To mitigate these
issues, modern processors often incorporate hardware support, such as Translation Lookaside
Buffers (TLBs), to speed up address translation and reduce the impact of page faults.
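A TLB is essentially a small cache sitting in front of the page table. A toy software model of the idea (real TLBs are hardware; the capacity and the FIFO replacement policy here are illustrative assumptions):

```python
from collections import OrderedDict

class TLB:
    """Tiny fully-associative TLB model with FIFO replacement."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # page -> frame

    def lookup(self, page):
        return self.entries.get(page)  # None signals a TLB miss

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest mapping
        self.entries[page] = frame

tlb = TLB()
tlb.insert(1, 2)
print(tlb.lookup(1))  # hit: frame 2
print(tlb.lookup(3))  # miss: None -> fall back to the page table walk
```

On a hit, translation skips the page-table walk entirely; on a miss, the translation result is inserted so subsequent accesses to the same page are fast.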
Program:
import random

# Constants
PAGE_SIZE = 4                          # Size of a page in bytes
MEMORY_SIZE = 16                       # Size of physical memory in bytes
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE   # Number of frames in physical memory

# Initialize memory
memory = [-1] * NUM_PAGES  # -1 indicates an empty frame

def page_number(address):
    """Returns the page number for a given address."""
    return address // PAGE_SIZE

def frame_number(page):
    """Returns the frame number where a page is stored."""
    return memory.index(page) if page in memory else -1

def access_memory(address, page_table):
    """Looks up an address, loading its page into a free frame on a page fault."""
    page = page_number(address)
    print(f"Accessing address {address} (Page {page})")
    if page not in page_table:    # page fault
        frame = memory.index(-1)  # first free frame
        memory[frame] = page
        page_table[page] = frame
        print(f"Page {page} loaded into frame {frame}.")

def main():
    page_table = {}  # Dictionary to store the page table
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(8)]
    for address in addresses:
        access_memory(address, page_table)

if __name__ == "__main__":
    main()
Sample Output:
Executed Output:
Result:
Thus a python program to implement paging technique of memory management is successfully
executed.
Write a program to implement page replacement algorithms (FIFO, LFU, LRU, Optimal, MFU).
Aim:
To write python programs to implement page replacement algorithms.
Description:
1. FIFO (First-In-First-Out):
FIFO is one of the simplest page replacement algorithms. It works on the principle that the first page
that entered memory will be the first one to be replaced. In other words, the page that has been in
memory the longest will be evicted. This algorithm uses a queue data structure to keep track of the
order in which pages were brought into memory. However, FIFO may suffer from "Belady's
Anomaly," where increasing the number of frames can lead to more page faults.
Each page replacement algorithm has its advantages and disadvantages. The best choice of algorithm
depends on the specific use case, workload characteristics, and available hardware resources. Real-world
implementations often involve a trade-off between algorithm complexity and performance.
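Belady's Anomaly can be demonstrated with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. A small FIFO fault counter (written for this illustration, separate from the program below) shows more faults with 4 frames than with 3:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Counts page faults for FIFO replacement over a reference string."""
    q, faults = deque(), 0
    for page in refs:
        if page not in q:
            faults += 1
            if len(q) == frames:
                q.popleft()   # evict the page that entered memory first
            q.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: Belady's Anomaly
```

Adding a frame normally reduces faults, but FIFO ignores how recently or frequently a page is used, so a larger memory can evict exactly the pages that are about to be referenced again.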
Algorithms:
FIFO page replacement algorithm:
Program:
import random
from collections import deque

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_queue = deque()  # To keep track of pages for FIFO

def frame_number(page):
    return memory.index(page) if page in memory else -1

def access_memory_fifo(address, page_table):
    page = address // PAGE_SIZE
    print(f"Accessing address {address} (Page {page})")
    if page in page_table:
        return  # page hit
    # Page fault: use a free frame, or evict the oldest page (FIFO)
    if -1 in memory:
        frame = memory.index(-1)
    else:
        victim = page_queue.popleft()
        frame = page_table.pop(victim)
    memory[frame] = page
    page_table[page] = frame
    page_queue.append(page)
    print(f"Page {page} loaded into frame {frame}.")

def main_fifo():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_fifo(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_fifo()
Sample Output:
Accessing memory addresses: [9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.
Executed Output:
LFU:
Program:
from collections import defaultdict

n = int(input("Enter the number of pages: "))
capacity = int(input("Enter the number of page frames: "))
pages = [0] * n
print("Enter the pages: ")
for i in range(n):
    pages[i] = int(input())
pf = 0                 # page fault count
v = []                 # frames, kept with the least frequently used page at the front
mp = defaultdict(int)  # page -> reference frequency
for i in range(n):
    if pages[i] not in v:
        # Page fault: evict the least frequently used page if memory is full
        if len(v) == capacity:
            mp[v[0]] = mp[v[0]] - 1
            v.pop(0)
        v.append(pages[i])
        mp[pages[i]] += 1
        pf = pf + 1
    else:
        # Page hit: update frequency and move the page to the back
        mp[pages[i]] += 1
        v.remove(pages[i])
        v.append(pages[i])
    # Restore the frequency ordering (ties broken by recency)
    k = len(v) - 2
    while k >= 0 and mp[v[k]] > mp[v[k + 1]]:
        v[k], v[k + 1] = v[k + 1], v[k]
        k -= 1
print("The number of page faults for LFU algorithm is:", pf)
Sample Output:
Enter the number of pages:
13 Enter the number of page
frames: 4Enter the pages:
7
0
1
2
0
3
0
4
2
3
0
3
2
The number of page faults for LFU algorithm is: 6
Executed Output:
LRU:
Program:
import random
from collections import OrderedDict

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_order = OrderedDict()  # page -> frame, least recently used first

def access_memory_lru(address, page_table):
    page = address // PAGE_SIZE
    print(f"Accessing address {address} (Page {page})")
    if page in page_table:
        page_order.move_to_end(page)  # mark as most recently used
        return
    # Page fault: use a free frame, or evict the least recently used page
    if -1 in memory:
        frame = memory.index(-1)
    else:
        victim, frame = page_order.popitem(last=False)
        del page_table[victim]
    memory[frame] = page
    page_table[page] = frame
    page_order[page] = frame
    print(f"Page {page} loaded into frame {frame}.")

def main_lru():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_lru(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_lru()
Sample Output:
Accessing memory addresses: [9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.
Executed Output:
Optimal:
Program:
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES

def access_memory_optimal(address, page_table, future_addresses):
    page = address // PAGE_SIZE
    print(f"Accessing address {address} (Page {page})")
    if page in page_table:
        return  # page hit
    # Page fault: use a free frame, or evict the page whose next use is farthest away
    if -1 in memory:
        frame = memory.index(-1)
    else:
        future_pages = [a // PAGE_SIZE for a in future_addresses]
        victim = max(page_table, key=lambda p: future_pages.index(p)
                     if p in future_pages else len(future_pages))
        frame = page_table.pop(victim)
    memory[frame] = page
    page_table[page] = frame
    print(f"Page {page} loaded into frame {frame}.")

def main_optimal():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for i, address in enumerate(addresses):
        access_memory_optimal(address, page_table, addresses[i + 1:])
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_optimal()
Sample Output:
Accessing memory addresses: [9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.
Executed Output:
MFU:
Program:
from collections import defaultdict
import random

PAGE_SIZE = 4
MEMORY_SIZE = 16
NUM_PAGES = MEMORY_SIZE // PAGE_SIZE

# Initialize memory
memory = [-1] * NUM_PAGES
page_frequency = defaultdict(int)  # Dictionary to keep track of page usage frequency

def access_memory_mfu(address, page_table):
    page = address // PAGE_SIZE
    print(f"Accessing address {address} (Page {page})")
    if page in page_table:
        page_frequency[page] += 1
        return  # page hit
    # Page fault: use a free frame, or evict the most frequently used page
    if -1 in memory:
        frame_to_replace = memory.index(-1)
    else:
        victim = max(page_table, key=lambda p: page_frequency[p])
        frame_to_replace = page_table.pop(victim)
    memory[frame_to_replace] = page
    page_table[page] = frame_to_replace
    page_frequency[page] += 1
    print(f"Page {page} loaded into frame {page_table[page]}.")

def main_mfu():
    page_table = {}
    addresses = [random.randint(0, MEMORY_SIZE - 1) for _ in range(10)]
    print(f"Accessing memory addresses: {addresses}")
    for address in addresses:
        access_memory_mfu(address, page_table)
    print("\nFinal state of memory frames:")
    for i, frame in enumerate(memory):
        print(f"Frame {i}: Page {frame}")

if __name__ == "__main__":
    main_mfu()
Sample Output:
Accessing memory addresses: [9, 0, 13, 4, 2, 8, 6, 1, 14, 7]
Accessing address 9 (Page 2)
Page 2 loaded into frame 0.
Executed Output:
Result:
Thus python programs to implement page replacement algorithms are successfully executed.