IOS Unit 2
Scheduling Algorithms:
A process scheduler assigns processes to the CPU according to a particular scheduling algorithm. There are six popular process scheduling algorithms, which we are going to discuss in this chapter.
• Shortest Remaining Time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.
• It is impossible to implement in interactive systems where the required CPU time is not known in advance.
• It is often used in batch environments where short jobs must be given preference.
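The SRT policy above can be sketched as a small tick-by-tick simulator (a minimal illustration; the process names, arrival times, and burst times below are made up for the example):

```python
def srt_schedule(procs):
    """Simulate Shortest Remaining Time scheduling one time unit at a time.
    procs: dict of name -> (arrival_time, burst_time).
    Returns dict of name -> waiting_time."""
    remaining = {name: burst for name, (arrival, burst) in procs.items()}
    completion = {}
    t = 0
    while remaining:
        # Among arrived, unfinished processes, pick the one closest to completion.
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1          # CPU idle until the next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1   # run it for one time unit (preemptible next tick)
        t += 1
        if remaining[current] == 0:
            completion[current] = t
            del remaining[current]
    # waiting time = completion - arrival - burst
    return {n: completion[n] - procs[n][0] - procs[n][1] for n in procs}

waits = srt_schedule({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits)  # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
```

Note how P2's arrival at t=1 preempts P1, which still has 7 units remaining, so P2 finishes with zero waiting time.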
2. Process Synchronization:
Process synchronization is needed when multiple processes run at the same time and more than one of them accesses the same data or resources. It is generally used in multi-process systems. When two or more processes access the same data or resources concurrently, the result can be data inconsistency; to remove this inconsistency, the processes must be synchronized with each other.
In the picture above, we take the example of a bank account with a current balance of 500 and two users who have access to that account. User 1 and User 2 both try to access the balance at the same time. If process 1 performs a withdrawal while process 2 checks the balance, and both occur at the same time, a user might read a wrong current balance. Process synchronization in the OS helps avoid this kind of data inconsistency.
We will see how process synchronization in the OS works with the help of an example of different processes trying to access the same data at the same time. In the example above, there are three processes: Process 1 is trying to write the shared data while Process 2 and Process 3 are trying to read it, so there is a high chance that Process 2 and Process 3 will read wrong data.
• Entry Section:- This section decides whether a process may enter the critical section.
• Critical Section:- This section makes sure that only one process at a time accesses and modifies the shared data or resources.
• Exit Section:- This section allows a process waiting in the entry section to proceed and makes sure that finished processes are removed from the critical section.
• Remainder Section:- The remainder section contains the other parts of the code, which are not in the entry, critical, or exit sections.
A race condition occurs when more than one process tries to access and modify the same shared data or resources at the same time. Because many processes modify the shared data concurrently, there is a high chance that a process ends up with a wrong result or wrong data. Each process effectively races to claim that it has the correct data, and this is called a race condition.
The value of the shared data depends on the execution order of the processes, since many processes try to modify the data or resources at the same time. The race condition is associated with the critical section. The question now is how to handle a race condition. We can tackle this problem by adding logic so that only one process at a time can access the critical section; such a section is called an atomic section.
What the critical section does is make sure that only one process at a time has access to the shared data or resources, and only that process can modify them. Thus, when many processes try to modify the shared data or resources, the critical section allows only a single process in at a time. Two functions are very important for the critical section: wait() and signal(). The wait() function handles the entry of processes into the critical section, while the signal() function is used to release the critical section when a process has finished.
What happens if we remove the critical section? If we remove it, all processes can access and modify the shared data at the same time, so we cannot guarantee that the outcome will be correct. Next we will see the essential conditions for solving critical section problems.
There are basically three rules which need to be followed to solve critical section
problems.
• Mutual Exclusion:- If one process is running in the critical section, i.e., accessing the shared data or resources, then no other process may enter the critical section at the same time.
• Progress:- If no process is in the critical section and some processes are waiting to enter it, then the decision of which process enters next is made by those waiting processes, and it cannot be postponed indefinitely.
• Bounded Waiting:- When a process requests entry into the critical section, there must be a bound on its waiting time, i.e., a limit on the number of other processes that are allowed to enter the critical section before it.
Solutions to the Critical Section Problem:-
• Peterson’s solution:-
The computer scientist Peterson gave a widely used approach to solving the critical section problem. It is a classical software-based solution.
In this solution, while one process is executing in the critical section, the other processes can execute the rest of their code, and vice versa. The important point is that this solution makes sure that only one process executes the critical section at a time. Let's understand this solution with the help of an example.
do {
    flag[i] = true;     // process i announces its interest
    turn = j;           // give the other process, j, priority
    while (flag[j] && turn == j)
        ;               // busy wait while j is interested and has priority
    // critical section
    flag[i] = false;    // exit section: allow the waiting process to enter
    // remainder section
} while (true);
In the above example, two processes, i and j, share a boolean flag array (initialized to false) and a turn variable. When process i wants to enter the critical section, it sets flag[i] to true and gives the turn to j; it then waits as long as process j is also interested and it is j's turn. When process i finishes, it sets flag[i] back to false so that the waiting process can enter.
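Peterson's algorithm can be sketched as a two-thread demonstration in Python (illustrative only: this relies on CPython executing these statements sequentially consistently; on real hardware, plain loads and stores would need atomic or volatile semantics, and the switch-interval tweak is just to keep the busy-waits short):

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so busy-waits hand over quickly

flag = [False, False]   # flag[i] is True while process i wants the critical section
turn = 0                # which process must yield when both are interested
counter = 0             # shared data protected by the algorithm
N = 1000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True          # entry section
        turn = j
        while flag[j] and turn == j:
            pass                # busy wait while the other side has priority
        counter += 1            # critical section
        flag[i] = False         # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000: no increments are lost
```

Without the entry and exit sections, the two threads could interleave their updates of counter and lose increments.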
• Synchronization Hardware:-
As the name suggests, this approach tries to solve the critical section problem using hardware support. Some operating systems provide a locking facility: when a process enters the critical section it acquires a lock, and the lock is released when the process exits the critical section. This locking functionality makes sure that only one process at a time can be inside the critical section, because any other process that tries to enter while it is locked must wait.
• Mutex Lock:-
A mutex lock was introduced because the above method (synchronization hardware) is not easy to use. The mutex locking mechanism is used to synchronize access to the resources in the critical section. In this method, we use a lock that is set (acquired) when a process enters the critical section and unset (released) when the process exits.
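A minimal sketch of this mechanism using Python's standard threading.Lock, reusing the bank-balance example from earlier (the deposit function and amounts are illustrative):

```python
import threading

balance = 500              # shared bank balance from the earlier example
lock = threading.Lock()    # the mutex protecting the balance

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:               # lock is set on entering the critical section
            balance += amount    # critical section: modify the shared data
        # lock is unset automatically when the with-block exits

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 20500: no updates are lost
```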
• Semaphores:-
In this method, a process sends a signal to another process that is waiting on a semaphore. Semaphores are variables shared between processes. For synchronization among processes, semaphores are manipulated with the wait() and signal() operations.
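In Python's standard threading.Semaphore, acquire() plays the role of wait() and release() plays the role of signal(). A small sketch of one process signaling another that is waiting (the producer/consumer roles and item counts are illustrative):

```python
import threading

items = []
filled = threading.Semaphore(0)   # counts available items; starts at 0

def producer():
    for i in range(5):
        items.append(i)
        filled.release()          # signal(): announce that a new item exists

def consumer(results):
    for _ in range(5):
        filled.acquire()          # wait(): block until an item is available
        results.append(items.pop(0))

results = []
c = threading.Thread(target=consumer, args=(results,))
p = threading.Thread(target=producer)
c.start(); p.start()
p.join(); c.join()
print(results)  # [0, 1, 2, 3, 4]
```

The consumer never touches the list before the producer has signaled, so the items arrive in order.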
3. Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Categories of Scheduling
1. Non-preemptive: Here the resource can’t be taken from a process until the
process completes execution. The switching of resources occurs when the
running process terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process may switch from the running state to the ready state or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the running process with a higher-priority one.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the state of
a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
• Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready and run
queues which can only have one entry per processor core on the system; in the above
diagram, it has been merged with the CPU.
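The PCB bookkeeping described above, with one queue per state and PCBs moved between queues on state changes, can be sketched as a toy model (queue names and PCB strings are illustrative):

```python
from collections import deque

# One FIFO queue per process state, as the OS keeps PCBs of each state together.
queues = {"ready": deque(), "running": deque(), "waiting": deque()}

def admit(pcb):
    queues["ready"].append(pcb)          # a new process is always put in the ready queue

def change_state(pcb, old, new):
    queues[old].remove(pcb)              # unlink the PCB from its current queue
    queues[new].append(pcb)              # ...and link it into its new state queue

admit("P1"); admit("P2")
change_state("P1", "ready", "running")   # scheduler dispatches P1
change_state("P1", "running", "waiting") # P1 blocks on an I/O device
print({k: list(v) for k, v in queues.items()})
# {'ready': ['P2'], 'running': [], 'waiting': ['P1']}
```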
The two-state process model refers to the running and not-running states: a process currently being executed on the CPU is in the running state, while every other process (waiting or ready) is in the not-running state.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term Scheduler: It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must
be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.
Short-Term Scheduler: Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
4. Deadlock:
In the diagram above, process 1 holds resource 1 and needs to acquire resource 2, while process 2 holds resource 2 and needs to acquire resource 1. Process 1 and process 2 are in deadlock: each needs the other's resource to complete its execution, but neither is willing to relinquish the resource it holds.
Coffman Conditions
A deadlock occurs if the four Coffman conditions hold true simultaneously. These conditions are not mutually exclusive.
• Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the
diagram below, there is a single instance of Resource 1 and it is held by Process
1 only.
• Hold and Wait
A process can hold multiple resources and still request more resources from
other processes which are holding them. In the diagram given below, Process 2
holds Resource 2 and Resource 3 and is requesting the Resource 1 which is
held by Process 1.
• No Preemption
A resource cannot be preempted from a process by force. A process can only
release a resource voluntarily. In the diagram below, Process 2 cannot preempt
Resource 1 from Process 1. It will only be released when Process 1 relinquishes
it voluntarily after its execution is complete.
• Circular Wait
A process is waiting for the resource held by the second process, which is
waiting for the resource held by the third process and so on, till the last process
is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and requests Resource 1, while Process 2 is allocated Resource 1 and requests Resource 2. This forms a circular wait loop.
5. Deadlock Prevention:
If we model deadlock as a table standing on four legs, then the four legs correspond to the four conditions which, when they occur simultaneously, cause the deadlock.
If we break one of the table's legs, the table will definitely fall. The same happens with deadlock: if we can violate one of the four necessary conditions and prevent them from occurring together, we can prevent the deadlock.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever have to wait for it.
Spooling
For a device like a printer, spooling can work. There is memory associated with the printer which stores the jobs submitted by each process. The printer then collects the jobs and prints each of them in FCFS order. With this mechanism, a process does not have to wait for the printer and can continue whatever it was doing; later, it collects the output when it is produced.
However, we cannot force every resource to be used by more than one process at the same time, since that would not be fair and could cause serious correctness and performance problems. Therefore, in practice we cannot violate mutual exclusion.
2. Hold and Wait
The hold and wait condition arises when a process holds one resource while waiting for some other resource to complete its task. Deadlock occurs because more than one process can each hold one resource while waiting for another in cyclic order.
To prevent this, we need a mechanism by which a process either doesn't hold any resource or doesn't wait. That means a process must be assigned all the necessary resources before its execution starts, and must not wait for any resource once execution has begun.
!(Hold and wait) = !hold or !wait (negation of hold and wait is, either you don't
hold or you don't wait)
This can be implemented if a process declares all of its resources initially. However, although this sounds simple, it cannot be done in a real computer system, because a process cannot determine all of its necessary resources in advance.
A process is a set of instructions executed by the CPU. Each instruction may demand multiple resources at multiple times, so the need cannot be fixed by the OS ahead of time.
3. No Preemption
Deadlock arises partly from the fact that a resource cannot be taken away from a process once allocated. If we could take a resource away from a process that is causing deadlock, we could prevent the deadlock.
This is not a good approach, however: if we take away a resource that is being used by a process, all the work it has done so far can become inconsistent. Consider a printer being used by some process. If we take the printer away from that process and assign it to another, the data already printed can become inconsistent and useless, and the process cannot resume printing from where it left off, which causes performance inefficiency.
4. Circular Wait
To violate circular wait, we can assign a priority number to each resource and require that a process cannot request a resource with a lower priority than one it already holds. This ensures that resources are always requested in a fixed order, so no cycle can be formed.
Among all the methods, violating circular wait is the only approach that can be implemented practically.
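The resource-ordering idea can be sketched with two locks that every thread acquires in the same fixed order (a hypothetical example; the numbering of the locks plays the role of the resource priorities):

```python
import threading

# Assign each resource a fixed order number; r1 comes before r2.
r1 = threading.Lock()   # priority 1
r2 = threading.Lock()   # priority 2
log = []

def task(name):
    # Every thread acquires the lower-numbered resource first, so no
    # thread can hold r2 while waiting for r1: circular wait is impossible.
    with r1:
        with r2:
            log.append(name)

threads = [threading.Thread(target=task, args=(f"P{i}",)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))  # ['P0', 'P1', 'P2', 'P3'] - every task completes
```

If one thread instead acquired r2 before r1, the two acquisition orders could interlock and deadlock the program.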
6. Deadlock avoidance:
In deadlock avoidance, the request for any resource will be granted if the resulting state
of the system doesn't cause deadlock in the system. The state of the system will
continuously be checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to complete its execution.
The simplest and most useful approach states that each process should declare the maximum number of resources of each type it may ever need. The deadlock avoidance algorithm then examines resource allocations so that a circular wait condition can never arise.
The resource allocation state of a system can be defined by the instances of available
and allocated resources, and the maximum instance of the resources demanded by the
processes.
E = (7 6 8 4)
P = (6 2 8 3)
A = (1 4 0 1)
The tables and the vectors E, P, and A describe the resource allocation state of a system. There are 4 processes and 4 types of resources in the system. Table 1 shows the instances of each resource assigned to each process, and Table 2 shows the instances each process still needs. Vector E represents the total instances of each resource in the system, vector P represents the instances that have been assigned to processes, and vector A represents the instances that are not in use (A = E - P).
A state of the system is called safe if the system can allocate all the resources requested by all processes without entering a deadlock. If the system cannot fulfill the requests of all processes, the state is called unsafe.
The key to the deadlock avoidance approach is that when a request for resources is made, it is approved only if the resulting state is also a safe state.
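The safe-state check can be sketched as the safety part of the Banker's algorithm (a minimal version; the allocation and need matrices below are hypothetical illustration values, not derived from the vectors above):

```python
def is_safe(available, allocation, need):
    """Return True if the state is safe: some order exists in which every
    process can obtain its remaining need, finish, and release its resources."""
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            # Pick any unfinished process whose remaining need fits in work.
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # Assume it runs to completion and releases its allocation.
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                progressed = True
    return all(finish)

# Hypothetical state with 5 processes and 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```

A request is then granted only if tentatively applying it still leaves is_safe returning True.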
7. Deadlock Detection:
Deadlock detection algorithms are used to identify the presence of deadlocks in computer systems. These algorithms examine the system's processes and resources to determine whether there is a circular wait situation that could lead to a deadlock. If a deadlock is detected, the algorithm can take steps to resolve it and prevent it from occurring again. There are several popular deadlock detection algorithms. Here we will explore the necessary conditions for deadlock, the purpose of the deadlock detection algorithm, each of the main algorithms in detail, and the situations in which they are most effective.
1. Resource Allocation Graph (RAG) Algorithm
• Build a RAG − The first step is to build a Resource Allocation Graph (RAG) that shows the allocation and request of resources in the system. Each resource type is represented by a rectangle, and each process is represented by a circle.
• Check for cycles − Look for cycles in the RAG. If there is a cycle, it indicates
that the system is deadlocked.
• Identify deadlocked processes − Identify the processes involved in the cycle.
These processes are deadlocked and waiting for resources held by other
processes.
• Determine resource types − Determine the resource types involved in the
deadlock, as well as the resources held and requested by each process.
• Take corrective action − Take corrective action to break the deadlock by
releasing resources, aborting processes, or preempting resources. Once the
deadlock is broken, the system can continue with normal operations.
• Recheck for cycles − After corrective action has been taken, recheck the RAG
for cycles. If there are no more cycles, the system is no longer deadlocked, and
normal operations can resume.
Example
Consider a system with two processes, P1 and P2, and two resources, R1 and R2. P1 requests R1 and P2 requests R2, while R1 is assigned to P2 and R2 is assigned to P1:
P1 -> R1
P2 -> R2
R1 -> P2
R2 -> P1
The graph contains the cycle P1 -> R1 -> P2 -> R2 -> P1, so the system is deadlocked.
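The cycle check on such a graph can be sketched with a depth-first search (a generic directed-graph cycle detector; the edge conventions follow the example, with process-to-resource edges as requests and resource-to-process edges as assignments):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# The RAG from the example above.
rag = {"P1": ["R1"], "P2": ["R2"], "R1": ["P2"], "R2": ["P1"]}
print(has_cycle(rag))  # True: P1 -> R1 -> P2 -> R2 -> P1
```

The same function applies unchanged to a wait-for graph, whose nodes are all processes.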
2. Wait-For Graph (WFG) Algorithm
• Build a WFG − The first step is to build a Wait-for Graph (WFG) that shows the wait-for relationships between processes. Each process is represented by a circle, and an arrow is drawn from one process to another if the former is waiting for a resource held by the latter.
• Check for cycles − Look for cycles in the WFG. If there is a cycle, it indicates
that the system is deadlocked.
• Identify deadlocked processes − Identify the processes involved in the cycle.
These processes are deadlocked and waiting for resources held by other
processes.
• Determine resource types − Determine the resource types involved in the
deadlock, as well as the resources held and requested by each process.
• Take corrective action − Take corrective action to break the deadlock by
releasing resources, aborting processes, or preempting resources. Once the
deadlock is broken, the system can continue with normal operations.
• Recheck for cycles − After corrective action has been taken, recheck the WFG
for cycles. If there are no more cycles, the system is no longer deadlocked, and
normal operations can resume.
Example
Consider three processes, P1, P2, and P3, and two resources, R1 and R2. The wait-for graph (WFG) for this system can be represented as follows −
P1 -> P3
P3 -> P2
P2 -> P3
The cycle P3 -> P2 -> P3 shows that P2 and P3 are deadlocked, while P1 is blocked waiting on the deadlocked processes.
3. Banker's Algorithm
The Banker's algorithm, described under deadlock avoidance, can also be used as a deadlock detection algorithm. In fact, it is one of the most well-known such algorithms in operating systems.