OPERATING SYSTEM ASSIGNMENT

Q1. Here is a Python program that simulates the Banker's algorithm for deadlock avoidance:
class Resource:
    def __init__(self, name, available):
        self.name = name
        self.available = available   # free units of this resource

class Process:
    def __init__(self, pid, name, need, allocate, hold):
        self.pid = pid
        self.name = name
        self.need = need         # maximum units the process may request, per resource name
        self.allocate = allocate
        self.hold = hold         # units currently held, per resource name

def request_resource(resource, process):
    # Grant one unit if the process has not yet reached its declared need.
    if process.hold[resource.name] < process.need[resource.name]:
        process.hold[resource.name] += 1
        resource.available -= 1
        print(f"Process {process.name} requested resource {resource.name} and was allocated 1 unit.")
    else:
        print(f"Process {process.name} cannot request resource {resource.name} as it does not need any more units.")

def release_resource(resource, process):
    # Return one unit if the process currently holds any.
    if process.hold[resource.name] > 0:
        process.hold[resource.name] -= 1
        resource.available += 1
        print(f"Process {process.name} released resource {resource.name} and has {process.hold[resource.name]} units left.")
    else:
        print(f"Process {process.name} cannot release resource {resource.name} as it does not hold any units.")

def check_deadlock(resource, process):
    # A negative availability means more units have been granted than exist.
    if resource.available < 0:
        print(f"Deadlock detected! Process {process.name} is waiting for resource {resource.name}.")
        return True
    return False

def simulate_bankers_algorithm():
    num_resources = int(input("Enter the number of resources: "))
    num_processes = int(input("Enter the number of processes: "))
    resources = [Resource(f"resource{i}", 0) for i in range(num_resources)]
    processes = []
    for i in range(num_processes):
        pid = i + 1
        name = f"process{i + 1}"
        values = [int(x) for x in input(f"Enter the need for {name} (separated by spaces): ").split()]
        need = {resources[j].name: values[j] for j in range(num_resources)}
        allocate = {r.name: 0 for r in resources}
        hold = {r.name: 0 for r in resources}
        processes.append(Process(pid, name, need, allocate, hold))
    while True:
        for process in processes:
            for resource in resources:
                if check_deadlock(resource, process):
                    print("Banker's algorithm terminated due to deadlock.")
                    return
                request_resource(resource, process)
                release_resource(resource, process)

if __name__ == "__main__":
    simulate_bankers_algorithm()
This program allows the user to input the number of resources and processes, and then
simulates the banker’s algorithm for deadlock avoidance. The program creates a list of
resource objects and a list of process objects. The request_resource function is called
when a process requests a resource, and the release_resource function is called when a
process releases a resource.
The check_deadlock function checks whether a deadlock has occurred (a resource with negative availability), and the simulate_bankers_algorithm function runs the simulation until a deadlock is detected or the program is manually terminated.

Q2.
Multilevel Queue Scheduling:
Multilevel queue scheduling is a CPU scheduling algorithm that partitions the ready queue into multiple separate queues, each with its own scheduling algorithm. Each queue has a different priority level, and every process is permanently assigned to one queue, typically according to its type or priority. The scheduler always selects a process from the highest-priority queue that is not empty, so processes in lower-priority queues run only when all higher-priority queues are empty; unlike the feedback variant described next, processes never move between queues.
Multilevel Feedback Queue Scheduling:
Multilevel feedback queue scheduling is a variation of multilevel queue scheduling in which processes are allowed to move between queues. Every process starts in the highest-priority (first-level) queue. A process that uses up its entire time quantum is treated as CPU-bound and is demoted to a lower-level queue, while a process that gives up the CPU early (for example, to wait for I/O) stays in, or is promoted back toward, a higher-priority queue. Each queue applies its own scheduling algorithm, and lower-level queues are typically given larger time quanta.
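The movement between queues can be illustrated with a minimal sketch (written for this note, not part of the assignment; the three queue levels, the quantum values, and the job names are illustrative assumptions, and only demotion of CPU-bound jobs is shown):

from collections import deque

QUANTA = [4, 8, 16]   # illustrative time quantum for queue 0 (highest priority) down to queue 2

class Job:
    def __init__(self, name, burst):
        self.name = name
        self.remaining = burst

def mlfq(jobs):
    # Every job starts in the top queue; a job that uses a full quantum is demoted.
    queues = [deque(jobs), deque(), deque()]
    time = 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest-priority non-empty queue
        job = queues[level].popleft()
        slice_ = min(QUANTA[level], job.remaining)
        time += slice_
        job.remaining -= slice_
        print(f"{job.name} ran {slice_}ms in queue {level}, t = {time}ms")
        if job.remaining > 0:
            # Used its whole quantum: treat it as CPU-bound and demote it (or keep it in the lowest queue).
            queues[min(level + 1, len(queues) - 1)].append(job)

mlfq([Job("interactive_job", 6), Job("cpu_bound_job", 20)])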
Scenario for Multilevel Queue Scheduling:
Suppose a system has three types of processes: real-time processes, interactive processes, and batch processes. Real-time processes have strict timing requirements and receive the highest priority, interactive processes need a fast response and receive medium priority, and batch processes can be delayed and receive the lowest priority. Multilevel queue scheduling fits this scenario: each type of process is placed in its own queue according to its priority, and the scheduler always selects a process from the highest-priority non-empty queue, ensuring that real-time and interactive processes are served before batch work.
Scenario for Multilevel Feedback Queue Scheduling:
Suppose a system has a mix of foreground and background processes. Foreground processes are interactive and require fast response, while background processes can be delayed and have lower priority. Multilevel feedback queue scheduling suits this scenario: all processes start in the first-level queue, processes that give up the CPU quickly (the interactive foreground jobs) stay near the top, and processes that repeatedly use their full time quantum (the CPU-bound background jobs) drift down to lower-priority queues. This keeps foreground processes responsive while still giving background processes a fair share of the CPU.

Q3
The critical section problem is a classic problem in computer science that arises when
multiple processes or threads need to access a shared resource, such as a variable or a
piece of data, in a mutually exclusive manner. The critical section is the portion of code
where the shared resource is accessed, and it must be executed in a way that ensures that
only one process or thread can access the resource at any given time.
The critical section problem can lead to various issues, such as race conditions, where the
outcome of a program depends on the order in which the processes or threads execute the
critical section, and deadlock, where processes or threads are blocked waiting for each
other to release the shared resource.
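A race condition of this kind can be reproduced with a small sketch (written for this note, not part of the assignment): two threads increment a shared counter with no protection, so their read-modify-write steps can interleave and updates can be lost.

import threading

counter = 0   # shared resource accessed by both threads

def worker():
    global counter
    for _ in range(100000):
        counter += 1   # read, add, store: not atomic, so the two threads can interleave here

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The final value depends on how the threads happened to interleave;
# it is often less than the expected 200000 because some increments were lost.
print(counter)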
To solve the critical section problem, there are several conditions that must be satisfied:
1. Mutual exclusion: Only one process or thread can execute the critical section at any given
time.
2. Progress: If no process or thread is executing in its critical section and some processes or threads wish to enter, the choice of which one enters next cannot be postponed indefinitely.
3. Bounded waiting: There must be a bound on the number of times other processes or threads are allowed to enter the critical section after a process or thread has requested entry and before that request is granted.
Several mechanisms can be used to solve the critical section problem, such as semaphores, mutex locks, monitors, and condition variables. These mechanisms ensure that the three conditions above are satisfied and that the critical section is executed in a mutually exclusive manner; a small semaphore-based example follows.
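As one illustration (a minimal sketch, not the only possible solution), a binary semaphore from Python's threading module can guard the critical section of the counter example above, satisfying mutual exclusion:

import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore: at most one thread may hold it

def worker():
    global counter
    for _ in range(100000):
        mutex.acquire()    # entry section: block until the critical section is free
        counter += 1       # critical section: only one thread executes this at a time
        mutex.release()    # exit section: allow a waiting thread to proceed

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # always 200000, because increments can no longer be lost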

Q4
The fork() system call is a fundamental facility in Unix/Linux operating systems that allows a parent process to create a new child process. It creates the new process by duplicating the calling process, so the child begins as an almost exact copy of the parent.
In the parent process, fork() returns the process ID (PID) of the newly created child, which the parent can use to identify and synchronize with it. In the child process, fork() returns 0, and if the call fails it returns -1.
Here is an example program that demonstrates process creation and the parent-child
relationship:
#include <unistd.h>
#include <sys/wait.h>
#include <stdio.h>

int main(void) {
    pid_t pid = fork();   // duplicate the calling process
    if (pid == 0) {
        // Child process: fork() returned 0
        printf("Hello from child process, PID = %d\n", (int)getpid());
    } else if (pid > 0) {
        // Parent process: fork() returned the child's PID
        printf("Hello from parent process, PID = %d, Child PID = %d\n", (int)getpid(), (int)pid);
        // Wait for the child process to finish
        wait(NULL);
    } else {
        // Error: fork() failed
        printf("Error creating child process\n");
    }
    return 0;
}
In this program, the fork() system call is called, and the resulting child process is created.
The child process prints a message indicating that it is the child process and its process ID.
The parent process prints a message indicating that it is the parent process, its process ID,
and the process ID of the child process.
The parent process then waits for the child process to finish using the wait() system call.
Once the child process has finished, the parent process continues execution.
This program demonstrates the parent-child relationship between processes in Unix/Linux.
The parent process creates a child process using the fork() system call, and the child
process inherits a copy of the parent process's memory and execution context. The child
process can then execute independently of the parent process, and the parent process can
use the process ID of the child process to synchronize with it.

Q5
To analyze the performance of the Round Robin CPU scheduling algorithm with a time
quantum of 4ms, we need to create a Gantt chart and calculate the average waiting time
and turnaround time for the given processes.

Let's assume the following processes with their arrival times and execution times:

Process    Arrival Time (ms)    Execution Time (ms)
P1         0                    7
P2         2                    4
P3         3                    9
P4         5                    3

1. Gantt Chart:
To create the Gantt chart, the processes are scheduled in order of arrival, and each runs for at most the 4ms time quantum. If a process does not complete within its quantum, it is preempted and moved to the back of the ready queue; a process that arrives during a quantum joins the queue ahead of the preempted process.

| P1 | P2 | P3 | P1 | P4 | P3 |
0    4    8    12   15   18   23

P1 runs from 0-4ms and again from 12-15ms, P2 from 4-8ms, P3 from 8-12ms, P4 from 15-18ms, and P3 then keeps the CPU from 18ms until it finishes at 23ms, since it is the only remaining process.

2. Average Waiting Time and Turnaround Time:


Waiting time is the amount of time a process spends waiting in the ready queue before it
starts execution. Turnaround time is the total time taken from the arrival of a process to its
completion.

To calculate the average waiting time and turnaround time, we need to find the waiting time
and turnaround time for each process and then take the average.

Process    Arrival Time    Execution Time    Waiting Time    Turnaround Time
P1         0               7                 8               15
P2         2               4                 2               6
P3         3               9                 11              20
P4         5               3                 10              13

Here turnaround time = completion time - arrival time, using the completion times 15, 8, 23, and 18ms from the Gantt chart, and waiting time = turnaround time - execution time.

Average Waiting Time = (8 + 2 + 11 + 10) / 4 = 31 / 4 = 7.75ms
Average Turnaround Time = (15 + 6 + 20 + 13) / 4 = 54 / 4 = 13.5ms
In this example, the Round Robin CPU scheduling algorithm with a time quantum of 4ms results in an average waiting time of 7.75ms and an average turnaround time of 13.5ms for the given processes.
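These figures can be checked with a short simulation (a sketch written for this note, assuming that a process arriving during a quantum joins the ready queue ahead of the preempted process):

from collections import deque

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 3, 9), ("P4", 5, 3)]   # (pid, arrival, burst)
quantum = 4

remaining = {pid: burst for pid, _, burst in procs}
finish = {}
ready = deque()
gantt = []
time = 0
arrived = 0

while len(finish) < len(procs):
    # Admit every process that has arrived by the current time.
    while arrived < len(procs) and procs[arrived][1] <= time:
        ready.append(procs[arrived][0])
        arrived += 1
    if not ready:
        time = procs[arrived][1]   # CPU idle until the next arrival
        continue
    pid = ready.popleft()
    run = min(quantum, remaining[pid])
    gantt.append((pid, time, time + run))
    time += run
    remaining[pid] -= run
    # Processes that arrived during this quantum are queued before the preempted process.
    while arrived < len(procs) and procs[arrived][1] <= time:
        ready.append(procs[arrived][0])
        arrived += 1
    if remaining[pid] > 0:
        ready.append(pid)       # preempted: back of the ready queue
    else:
        finish[pid] = time      # completed

print("Gantt chart:", gantt)
for pid, arrival, burst in procs:
    turnaround = finish[pid] - arrival
    print(pid, "turnaround =", turnaround, "waiting =", turnaround - burst)

Running the sketch reproduces the time slices shown in the Gantt chart and the waiting and turnaround values in the table above.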
