
Unit-3

CPU Scheduling & Deadlocks

Shyam B. Verma
(Assistant Professor)
Syllabus [Unit-3]
CPU Scheduling: Scheduling Concepts,
Performance Criteria, Process States, Process
Transition Diagram, Schedulers, Process
Control Block (PCB), Process address space,
Process identification information, Threads
and their management, Scheduling
Algorithms, Multiprocessor Scheduling.
Deadlock: System model, Deadlock
characterization, Prevention, Avoidance and
detection, Recovery from deadlock.
CPU Scheduling
Basic Concepts
Scheduling Criteria
Process States
Process Transition Diagram
Schedulers
Process Control Block (PCB)
Process address space
Process identification information
Threads and their management
Scheduling Algorithms
Multiple-Processor Scheduling
Objectives
To introduce CPU scheduling, which is
the basis for multiprogramming
operating systems
To describe various CPU-scheduling
algorithms
To discuss evaluation criteria for
selecting a CPU-scheduling algorithm for
a particular system
To examine the scheduling algorithms of
several operating systems
Introduction
 In uniprogramming systems like MS-DOS, when a process waits for an I/O operation to complete, the CPU remains idle. This is an overhead, since it wastes CPU time and can cause starvation. In multiprogramming systems, however, the CPU does not remain idle while a process waits: the operating system decides which process the CPU will be given next and starts executing it.
 CPU scheduling is the basis of a multiprogramming O/S. By switching the CPU among processes, the O/S can make the computer more productive.
 Multiple processes can be kept in memory and executed concurrently by context switching.
 The rest of the processes wait until the CPU is free and can be rescheduled.
Basic Concepts
Maximum CPU utilization is obtained with multiprogramming.
CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait: a CPU burst is followed by an I/O burst.
Process States
As a process executes, it changes its state:
 New: The process is being created
 Ready: The process is waiting to be assigned to a
processor
 Running: Instructions are being executed
 Block or Waiting: The process is waiting for some
event to occur (e.g., I/O completion or reception of
a signal)
 Exit/Terminated: The process has finished
execution. The PCB will be deleted.
 Suspended Ready: To make room for new
processes in main memory, some processes will be
shifted to secondary memory.
 Suspended Wait or Suspended Block: To make
room in Block or Waiting state, some processes will
be shifted to secondary memory.
Process Scheduling
The process of handling the removal of an active
process from the CPU and selecting another process
based on a specific strategy.
Process scheduling is an integral part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into memory at a time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
 Long term or Job Scheduler
 Short term or CPU Scheduler
 Medium-term Scheduler
Long term Scheduler
Long term scheduler is also known as job scheduler. It
chooses the processes from the pool (secondary
memory) and keeps them in the ready queue
maintained in the primary memory.
Long Term scheduler mainly controls the degree of
Multiprogramming. The purpose of long term
scheduler is to choose a perfect mix of I/O bound and
CPU bound processes among the jobs present in the
pool.
If the job scheduler chooses more I/O bound processes
then all of the jobs may reside in the blocked state all
the time and the CPU will remain idle most of the time.
This will reduce the degree of Multiprogramming.
Therefore, the Job of long term scheduler is very
critical and may affect the system for a very long time.
Short-term Scheduler
 The short-term scheduler, also known as the CPU
scheduler, selects a job from among the processes
in the ready queue and allocates the CPU to it
 Queue may be ordered in various ways.
 CPU scheduling decisions may take place when a
process:
1. Process admitted from new to ready state
2. Switches from suspended ready to ready state
3. Switches from running to ready state
4. Switches from waiting to ready state
5. When a process terminates after completion
 Scheduling under 1 and 5 is nonpreemptive
 All other scheduling is preemptive
 Consider interrupts occurring during crucial OS
activities
Preemptive Scheduling
Preemptive Scheduling is an approach where
tasks are assigned based on their priorities.
In this type of scheduling, the execution of a
higher priority task takes precedence over a
lower priority one, even if the latter is already
running. The lower priority task is put on hold
until the higher priority task is completed.
Non-Preemptive Scheduling
Non-preemptive Scheduling is a method of CPU
scheduling in which a process takes control of a
resource and retains it until the process is
terminated or transitions to a waiting state.
Medium Term Scheduler
The medium-term scheduler takes care of the
swapped-out processes. If a running process needs
some I/O time for its completion, its state must
change from running to waiting.
It reduces the degree of multiprogramming. Swapping
is necessary to maintain a good mix of processes in
the ready queue.
The medium-term scheduler removes processes from
main memory and places them in secondary memory.
This technique is called swapping.
Swapping is necessary to:
 Improve the process mix
 To improve memory utilization
 To make memory available for other processes
Dispatcher
 A dispatcher is a special program which comes
into play after the scheduler. When the
scheduler completes its job of selecting a process,
it is the dispatcher which takes that process to
the desired state/queue. The dispatcher is the
module that gives a process control over the CPU
after it has been selected by the short-term
scheduler. This function involves the following:
switching context
switching to user mode
jumping to the proper location in the user
program to restart that program
 Dispatch latency – the time taken by the dispatcher to stop
one process and start another. The dispatcher should be
as fast as possible because it is invoked during every
process switch.
Scheduling Criteria
CPU utilization – keep the CPU as busy as
possible
Throughput – No. of processes that
complete their execution per time unit
Turnaround time – amount of time to
execute a particular process
Waiting time – amount of time a process
has been waiting in the ready queue
Response time – amount of time it takes
from when a request was submitted until
the first response is produced, not output
(for time-sharing environment)
Goals of Scheduling Algorithms
Utilization of CPU at maximum level. Keep CPU as
busy as possible.
Allocation of CPU should be fair.
Throughput should be Maximum. i.e. Number of
processes that complete their execution per time
unit should be maximized.
Minimum turnaround time (TAT), i.e. time taken
by a process to finish execution should be the least.
There should be a minimum waiting time
(WT) and the process should not starve in the
ready queue.
Minimum response time (RT). The time at which a
process produces its first response should be as
short as possible.
Process Concept
An operating system executes a variety of
programs:
 Batch system – jobs
 Time-shared systems – user programs or
tasks
Textbook uses the terms job and process almost
interchangeably
A Program does nothing unless its instructions are
executed by a CPU. A program in execution is
called a process. In order to accomplish its task,
process needs the computer resources.
There may exist more than one process in the
system which may require the same resource at
the same time. Therefore, the operating system
has to manage all the processes and the
resources in a convenient and efficient way.
A program is a passive entity stored on disk
(an executable file); a process is active
 Program becomes process when executable file
loaded into memory
Execution of program started via GUI mouse
clicks, command line entry of its name, etc
One program can be several processes
 Consider multiple users executing the same
program
The task of O/S is to:
Schedule processes and threads on the CPUs.
Create and delete both user and system processes.
Suspend and resume processes.
Provide mechanisms for process synchronization.
Provide mechanisms for process communication.
Process in Memory
•Every process in memory
has a process boundary.
•The data section contains
global variables.
•The text field contains the
program code
•The Heap is used for
dynamic allocation of
memory
•The stack contains
temporary data such as:
• Method parameters
• Return address
• Local variables
Attributes of a Process
1. Process ID: When a process is created, a unique id is assigned to
the process which is used for unique identification of the process
in the system.
2. Program Counter: The program counter stores the address of the
instruction at which the process was suspended. The CPU uses this
address when the execution of this process is resumed.
3. Process State: The Process, from its creation to the completion,
goes through various states which are new, ready, running and
waiting.
4. Priority: Every process has its own priority. The process with the
highest priority among the processes gets the CPU first. This is
also stored on the PCB.
5. General Purpose Registers: Every process has its own set of
registers which are used to hold the data which is generated
during the execution of the process.
6. List of open files: During the Execution, Every process uses
some files which need to be present in the main memory. OS also
maintains a list of open files in the PCB.
7. List of open devices: The OS also maintains a list of all open
devices which are used during the execution of the process.
Operations on a Process
Creation of Process
Scheduling of Process
Execution of Process
Killing/Deletion of Process
Process Control Block (PCB)
Information associated with each
process
(also called task control block)
 Process state – running, waiting,
etc
 Program counter – location of
next instruction to execute
 CPU registers – contents of all
process-centric registers
 CPU scheduling information-
priorities, scheduling queue
pointers
 Memory-management
information – memory allocated to
the process
 Accounting information –
Information about the resources
used by the process, such as CPU
time and memory usage
 I/O status information – I/O
devices allocated to process, list of
open files
Process Control Block (PCB) is an important data
structure used by the OS to manage and control the
execution of processes. It is also known as Task Control
Block (TCB) in some OS. A PCB contains all the necessary
information about a process, including its process state,
program counter, memory allocation, open files, and
CPU scheduling information.
The main purpose of a PCB is to enable the OS to manage
multiple processes efficiently by keeping track of the state
of each process and allocating system resources
accordingly.
When a process is created, the OS creates a PCB for that
process and stores all the necessary information about the
process in it. The OS then uses the information in the PCB
to manage the process and ensure that it runs efficiently.
PCBs are used by the OS for many tasks, including
process scheduling, process synchronization, and process
communication.
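To make the field list concrete, here is a minimal sketch of a PCB as a Python data class. The field names simply mirror the attributes listed above; they are illustrative and do not reflect the layout of any real kernel's PCB.

# Illustrative PCB sketch; field names mirror the attributes above,
# not the structure used by any real kernel.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                          # unique process identifier
    state: str = "new"                # new, ready, running, waiting, terminated
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                 # scheduling priority
    memory_info: tuple = (0, 0)       # e.g., base and limit of allocated memory
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0        # accounting information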
CPU Switch From Process to
Process
Scheduling Queue
As processes enter the system, they are put
into a job queue.
The processes that are ready and waiting to
execute are kept in ready queue, generally
stored as a linked-list.
The header of the queue contains pointer to
the first and last PCBs in the list.
Each PCB contains a pointer to store the
address of the next PCB in the ready queue.
Each device has its own queue to handle
the requests of many processes.
Scheduling Algorithms
A Process Scheduler schedules different
processes to be assigned to the CPU based on
particular scheduling algorithms. There are
several popular CPU scheduling algorithms:
First-Come, First-Served (FCFS) scheduling
Shortest-Job-First (SJF) scheduling
Priority scheduling
Shortest Remaining Time First (SRTF) scheduling
Round Robin (RR) scheduling
Highest Response Ratio Next (HRRN) scheduling
Multiple-Level Queues scheduling
Multilevel Feedback Queue Scheduling (MLFQ)
These algorithms are either non-
preemptive or preemptive.
Non-preemptive algorithms are
designed so that once a process enters the
running state, it cannot be preempted
until it terminates or switches to the waiting state.
Preemptive scheduling is based on
priority where a scheduler may preempt a
low priority running process anytime when
a high priority process enters into the
ready state.
First- Come, First-Served (FCFS)
Scheduling
FCFS is considered the simplest of all
scheduling algorithms. The first-come, first-served
scheduling algorithm states that the process
that requests the CPU first is allocated the CPU
first; it is implemented using a FIFO queue.
Characteristics of FCFS:
FCFS is a non-preemptive CPU scheduling algorithm.
Tasks are always executed on a First-come,
First-serve concept.
FCFS is easy to implement and use.
This algorithm is not much efficient in
performance, and the wait time is quite high.
Advantages of FCFS:
Easy to implement
First come, first serve method
Disadvantages of FCFS:
FCFS suffers from Convoy effect.
The average waiting time is much higher than
the other algorithms.
FCFS is very simple and easy to implement, but
not very efficient.
Convoy Effect
 FCFS serves as a non-preemptive scheduling
algorithm where the CPU is managing a
process until it is completed, which implies
that other processes will have to wait, which
leads to slowing down of the operating system.
The Convoy Effect is a phenomenon in which
the entire Operating System slows down owing
to a few slower processes in the system. Here,
resource utilization is low because the CPU or
I/O modules must sit idle.
Example FCFS Scheduling

Process ID  Process Name  Arrival Time  Burst Time
P1          A             0             6
P2          B             2             2
P3          C             3             1
P4          D             4             9
P5          E             5             8

Completion times: P1 = 6, P2 = 8, P3 = 9, P4 = 18, P5 = 26
Average CT = (6 + 8 + 9 + 18 + 26) / 5 = 13.4
Average TAT = (6 + 6 + 6 + 14 + 21) / 5 = 10.6   (TAT = CT − Arrival Time)
Average WT = (0 + 4 + 5 + 5 + 13) / 5 = 5.4   (WT = TAT − Burst Time)
Example on FCFS Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive at time 0 in the order: P1, P2, P3
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case
Convoy effect – short processes behind a long process may have a higher waiting time
Consider one CPU-bound process and many I/O-bound processes
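A minimal FCFS sketch in Python, assuming each process is given as a (name, arrival, burst) tuple; with the three processes above it reproduces the waiting times just computed. Illustrative only, not an OS implementation.

# Minimal FCFS sketch: serve processes in arrival order (FIFO).
def fcfs(processes):                       # list of (name, arrival, burst)
    clock, results = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)        # CPU idles until the process arrives
        completion = start + burst
        waiting = start - arrival
        results.append((name, completion, completion - arrival, waiting))
        clock = completion
    return results                         # (name, CT, TAT, WT)

jobs = [("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]
print([r[3] for r in fcfs(jobs)])          # waiting times [0, 24, 27] -> avg 17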
Shortest-Job-First (SJF)
Scheduling
The problem which we found in the FCFS algorithm can
be resolved by using a new CPU Scheduling Techniques
named Shortest Job First (SJF).
In this CPU Scheduling Algorithm, the CPU allots its
resources to the process which is in ready queue and
which has least Burst Time.
If we face a condition where two processes are present in
the Ready Queue and their Burst Times are same, we can
choose any process for execution.
Associate with each process the length of its next CPU
burst
 Use these lengths to schedule the process with the shortest
time
SJF is optimal – gives minimum average waiting time for
a given set of processes. The difficulty is knowing the
length of the next CPU request
Characteristics:
 Long processes are executed last, which prevents
starvation of short processes.
 If shorter processes keep arriving, longer processes
may starve. The aging technique can be
used to overcome this issue.
 SJF can be implemented in both preemptive and
non-preemptive ways. The preemptive version is called
Shortest-Remaining-Time-First (SRTF).
Advantages
 SJF gives the lowest average waiting time of all
CPU scheduling algorithms.
Disadvantages
 Starvation is one of the negative traits the SJF algorithm
exhibits.
 It is often difficult to forecast how long the
next CPU burst will take.
SJF [Non-preemptive]
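The worked example for this slide is a figure; the sketch below captures the same idea in Python, under the assumption that processes are (name, arrival, burst) tuples: at each decision point, among the processes that have arrived, run the one with the smallest burst to completion.

# Non-preemptive SJF sketch: always pick the arrived process
# with the shortest burst, and run it to completion.
def sjf(processes):                        # list of (name, arrival, burst)
    pending = sorted(processes, key=lambda p: p[1])
    clock, results = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                      # nothing has arrived yet: idle
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst time
        pending.remove(job)
        name, arrival, burst = job
        clock += burst
        results.append((name, clock, clock - arrival, clock - arrival - burst))
    return results                         # (name, CT, TAT, WT)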
Shortest Remaining Time First
[SRTF]
Shortest remaining time first (SRTF) is the
preemptive version of the SJF algorithm.
The processor is allocated to the job closest to
completion but it can be preempted by a newer
ready job with shorter time to completion.
It is impossible to implement in interactive systems
where the required CPU time is not known.
It is often used in batch environments where short
jobs need to be given preference.
In SRTF the short processes are handled very fast.
Like the shortest job first, it also has the potential
for process starvation. Long processes may be
held off indefinitely if short processes are
continually added.
Determining Length of Next CPU Burst
We can only estimate the length of the next CPU burst, based on the lengths of previous bursts, using exponential averaging:

τ_{n+1} = α·t_n + (1 − α)·τ_n

1. t_n = actual length of the nth CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
Commonly, α is set to 1/2
Examples of Exponential Averaging
α = 0
  τ_{n+1} = τ_n
  Recent history does not count
α = 1
  τ_{n+1} = t_n
  Only the actual last CPU burst counts
If we expand the formula, we get:
  τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
Prediction of the Length of the Next CPU Burst
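A short worked example of the formula: the burst history and the initial guess τ0 = 10 below follow the classic textbook figure with α = 1/2, but any values could be substituted.

# Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predict_next_burst(bursts, tau0=10.0, alpha=0.5):
    tau = tau0                             # initial prediction (a guess)
    for t in bursts:                       # fold in each observed burst
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Successive predictions: 10 -> 8 -> 6 -> 6 -> 5 -> 9 -> 11 -> 12
print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # 12.0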
Example of Shortest-Remaining-Time-First
Now we add the concepts of varying arrival times and preemption to the analysis

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Preemptive SJF Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 msec
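A unit-time SRTF sketch in Python; with the table above it reproduces the 6.5 msec average waiting time. Stepping the clock one unit at a time is simple but inefficient, so treat it as illustration only.

# SRTF sketch: every time unit, run the arrived process with the
# least remaining time; preemption falls out of re-picking each tick.
def srtf(processes):                       # list of (name, arrival, burst)
    remaining = {n: b for n, _, b in processes}
    arrival = {n: a for n, a, _ in processes}
    clock, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1                     # idle tick
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = clock
    return completion

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
done = srtf(procs)
waits = [done[n] - a - b for n, a, b in procs]
print(sum(waits) / len(waits))             # 6.5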
Priority Scheduling
 A priority number (integer) is associated with
each process
 The CPU is allocated to the process with the
highest priority (smallest integer ≡ highest
priority)
 Preemptive
 Nonpreemptive

 SJF is priority scheduling where the priority is the
inverse of the predicted next CPU burst time
 Problem ⇒ Starvation – low-priority processes
may never execute
 Solution ⇒ Aging – as time progresses, increase
the priority of the process
Example of Priority Scheduling

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

Priority scheduling Gantt chart (smallest number = highest priority):

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
Round Robin (RR)
Each process gets a small unit of CPU time (time
quantum q), usually 10-100 milliseconds. After
this time has elapsed, the process is preempted and
added to the end of the ready queue.
If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of
the CPU time in chunks of at most q time units at
once. No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule next
process
Performance
q large ⇒ behaves like FIFO (FCFS)
q small ⇒ q must be large with respect to context-switch
time, otherwise the overhead is too high
Example of RR with Time Quantum = 4

Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Typically RR gives a higher average turnaround time than SJF, but better response time
q should be large compared to context-switch time
q is usually 10 ms to 100 ms; a context switch takes less than 10 μs
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
A rule of thumb: 80% of CPU bursts should be shorter than q
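A Round Robin sketch assuming all processes arrive at time 0, as in the example above; with q = 4 it reproduces the Gantt chart shown.

# Round Robin sketch: run each process for at most q units,
# then move it to the back of the ready queue if unfinished.
from collections import deque

def round_robin(processes, q):             # list of (name, burst), all at t=0
    queue = deque(processes)
    clock, timeline = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)
        timeline.append((name, clock, clock + run))
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: requeue
    return timeline

for name, start, end in round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4):
    print(name, start, end)                # P1 0-4, P2 4-7, P3 7-10, P1 10-14, ...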
Multilevel Queue
 Ready queue is partitioned into separate queues, e.g.:
 foreground (interactive)
 background (batch)
 Processes are permanently assigned to one queue,
generally based on some priority of the process, such
as memory size, process priority or process type.
 Each queue has its own scheduling algorithm:
foreground – RR, background – FCFS
 Scheduling must be done between the queues:
 Fixed priority scheduling: All processes in higher queue
will be executed first then processes in lower queue will be
executed. Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR
 20% to background in FCFS
Multilevel Queue Scheduling
Advantages and
Disadvantages:
•Low scheduling overhead
•Not flexible
Multilevel Feedback Queue
 A process can move between the various queues;
aging can be implemented this way.
 The scheduler first executes all processes in
queue-0 then queue-1 and so on.
 A process that arrives in queue-1 will preempt a
running process in queue-2.
 Multilevel-feedback-queue scheduler defined by
the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will
enter when that process needs service
Example of Multilevel Feedback Queue
Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR with time quantum 16 milliseconds
 Q2 – FCFS
Scheduling
 A new job enters queue Q0 which is
served using RR scheduling
 When it gains CPU, job receives 8
milliseconds
 If it does not finish in 8 milliseconds,
job is moved to queue Q1
 At Q1 job is again served using RR
scheduling and receives 16 additional
milliseconds
 If it still does not complete, it is
preempted and moved to queue Q2
using FCFS scheduling
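A simplified sketch of the three-queue example above, assuming a single batch of jobs with no new arrivals (so the scheduler can drain Q0, then Q1, then Q2). Real MLFQ schedulers interleave arrivals and preemption; this only demonstrates the demotion rule.

# Simplified MLFQ sketch: Q0 (RR, q=8) -> Q1 (RR, q=16) -> Q2 (FCFS).
# Unfinished work is demoted one level; no new arrivals are modeled.
from collections import deque

def mlfq(jobs):                            # list of (name, burst)
    quanta = [8, 16, None]                 # None = run to completion (FCFS)
    queues = [deque(jobs), deque(), deque()]
    clock, timeline = 0, []
    for level, q in enumerate(quanta):
        while queues[level]:
            name, rem = queues[level].popleft()
            run = rem if q is None else min(q, rem)
            timeline.append((name, level, clock, clock + run))
            clock += run
            if rem > run:                  # quantum expired: demote
                queues[level + 1].append((name, rem - run))
    return timeline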
Multiple-Processor Scheduling
 A multiprocessor is a system that has more than
one processor but shares the same memory, bus,
and input/output devices. In multiprocessor
scheduling, more than one processor (CPU) shares the
load to handle the execution of processes smoothly.
 The scheduling process of a multiprocessor is more
complex than that of a single-processor system for
the following reasons:
Load balancing is a problem since more than one
processor is present.
Processes executing simultaneously may require
access to shared data.
Cache affinity should be considered in scheduling.
Cache affinity enables the binding and unbinding of a
process to a CPU or a range of CPUs, so that the
process will execute only on the designated CPU or
CPUs rather than any CPU.
Approaches to Multiple Processor
Scheduling
 The CPUs may be of the same kind (homogeneous)
or different types (heterogeneous). There are two
approaches to multi-processor scheduling:
Asymmetric Multi-processor scheduling
Symmetric Multi-processor scheduling
 Asymmetric Multiprocessing: In asymmetric
multi-processor scheduling, there is a master
server, and the rest of them are slave servers.
The master server handles all the scheduling
processes and I/O processes, and the slave servers
handle the users' processes. If the master server
goes down, the whole system comes to a halt.
However, if one of the slave servers goes down,
the rest of the system keeps working.
 Symmetric Multiprocessing: In symmetric multi-processor
scheduling, the processors are self-scheduling. The scheduler for
each processor checks the ready queue and selects
a process to execute. Each of the processors works
on the same copy of the operating system and
communicates with each other. If one of the
processors goes down, the rest of the system keeps
working.
 Symmetrical Scheduling with global queues: If
the processes to be executed are in a common queue
or a global queue, the scheduler for each processor
checks this global-ready queue and selects a process
to execute.
 Symmetrical Scheduling with one queue per
CPU: If the processors in the system have their own
private ready queues, the scheduler for each processor
checks their own private queue to select a process.
Symmetric multiprocessor
 There are three types of conflict that may arise in a
symmetric multi-processor system. These are as follows.
 Locking system: The resources in a multi-processor
are shared among the processors. To ensure safe
access of these resources, a locking system is
required. This is done to serialize the access of the
resources by the processors.
 Shared data: Since multiple processors are accessing
the same data at any given time, the data may not be
consistent across all of these processors. To avoid this,
we must use some kind of strategy or locking scheme.
 Cache Coherence: When the data is stored in
multiple local caches and shared by many clients, it
may be rendered invalid if one of the clients changes
the memory block. This can be resolved by
maintaining a consistent view of the data.
Master-Slave
Multiprocessor
In a master-slave multi-processor, one CPU
works as a master while all others work as slave
processors.
This means the master processor handles all
the scheduling processes and the I/O processes
while the slave processors handle the user's
processes.
The memory and I/O devices are shared
among all the processors, and all the
processors are connected to a common bus. It
uses asymmetric multiprocessing to schedule
processes.
Processor Affinity
 A process has an affinity for a processor on which it runs. This is
called processor affinity. When a process runs on a processor, the
recent data accessed by the process is populated in the cache memory
of this processor.
 However, if this process is migrated to another processor for some
reason, the content of the cache memory of the first processor is
invalidated, and the second processor's cache memory has to be
repopulated.
 To avoid the cost of invalidating and repopulating the cache
memory, the Migration of processes from one processor to another is
avoided.
 There are two types of processor affinity.
 Soft Affinity: The system has a rule of trying to keep running a
process on the same processor but does not guarantee it. This is
called soft affinity.
 Hard Affinity: The system allows the process to specify the subset of
processors on which it may run, i.e., each process can run only on
some of the processors. Systems such as Linux implement soft
affinity, but they also provide system calls such as
sched_setaffinity() to support hard affinity.
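On Linux, Python exposes the same hard-affinity mechanism through os.sched_getaffinity / os.sched_setaffinity, which wrap the sched_setaffinity() system call mentioned above. Pinning to CPUs 0 and 1 here is just an example mask.

import os

# pid 0 means "the calling process"
print(os.sched_getaffinity(0))     # CPUs the process may currently run on
os.sched_setaffinity(0, {0, 1})    # hard affinity: restrict to CPUs 0 and 1
print(os.sched_getaffinity(0))     # now {0, 1}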
Load Balancing
 In a multi-processor system, all processors may not have the
same workload. Some may have a long ready queue, while
others may be sitting idle. To solve this problem, load
balancing comes into the picture. Load Balancing is the
phenomenon of distributing workload so that the processors
have an even workload in a symmetric multi-processor system.
 In symmetric multiprocessing systems which have a global
queue, load balancing is not required. In such a system, a
processor examines the global ready queue and selects a
process as soon as it becomes idle.
 There are two ways to solve this.
 Push Migration: In push migration, a task routinely checks
the load on each processor. If the workload is unevenly
distributed, it will push the load from the overloaded
processor and assign the load to an idle or a less busy
processor.
 Pull Migration: In pull migration, an idle processor will
extract the load from an overloaded processor itself.
Deadlocks
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
System Model
 In a computer system a deadlock is where two or
more processes are unable to proceed because
each process is waiting for the other to release a
resource that it needs to continue execution.
 In other words, a deadlock occurs when two or
more processes are in a circular wait state, and
none of them can release the resources they hold
until they receive the resources they are waiting
for.
Resources − The system has a set of different types of
resources that are shared among processes.
These resources can be hardware or software components,
such as memory, files, printers, or network connections. Each
resource is identified by a unique name or identifier R1,
R2, . . ., Rm. Each resource type Ri has Wi instances.
Processes − The system has a set of processes that
request and release resources. Processes are units of
execution that can be started, suspended, resumed, and
terminated. Each process is identified by a unique process ID
P0,P1,….,Pn.
 Resource Allocation − Each resource can be in one of two
states, allocated or available. A resource that is allocated to a
process cannot be used by any other process until it is released.
Each process utilizes a resource as follows:
 Request
 Use
 Release
 Resource Dependency − Some processes may
require multiple resources to complete their tasks.
A resource dependency graph can be used to
represent the relationships between processes and
resources and to detect potential deadlocks.
 Resource preemption is a technique used by the
operating system to preempt resources from one or
more processes involved in the deadlock and allocate
them to the processes that need them. Preemption can
be done either selectively or globally. In selective
preemption, only the resources that are required to
resolve the deadlock are preempted, while in global
preemption, all the resources held by the deadlocked
processes are preempted.
 When a process is terminated, all the resources held
by the process are released, and other processes
can proceed.
Introduction
In multiprogramming environment, several processes
may compete for a finite no. of resources.
If the resources are not available, the process enters a
wait state.
If the waiting processes never get a chance to
change state because the requested resources are
held by other waiting processes, this situation is
called deadlock.
A process must request a resource before using it and
must release it after use.
The number of resources requested may not exceed the
total number of resources available in the system.
Request and release of other resources can be
accomplished through wait and signal operations on
semaphores.
Deadlock Characteristics
Necessary Conditions:
 Mutual exclusion: Only one process at a time can
use a resource. Another process will wait until the
resource is released by the process who is using it.
 Hold and wait: A process holding at least one
resource is waiting to acquire additional resources held
by other processes.
 No preemption: Resources can’t be preempted. A
resource can be released only voluntarily by the
process holding it, after that process has completed its
task.
 Circular wait: There exists a set {P0, P1, …, Pn} of
waiting processes such that P0 is waiting for a resource
that is held by P1, P1 is waiting for a resource that is
held by P2, …, Pn−1 is waiting for a resource that is held
by Pn, and Pn is waiting for a resource that is held by P0.
Deadlock can arise if all four conditions hold simultaneously.
Resource-Allocation Graph
A directed graph having a set of vertices V and a set of edges E such that:
V is partitioned into two types:
 P = {P1, P2, …, Pn}, the set consisting of all the active processes in the system
 R = {R1, R2, …, Rm}, the set consisting of all the resource types in the system
This graph is called a system resource-allocation graph if:
 request edge – directed edge Pi → Rj (process Pi has requested an instance of resource type Rj and is waiting for that resource)
 assignment edge – directed edge Rj → Pi (an instance of resource type Rj has been allocated to process Pi)
Resource-Allocation Graph (Cont.)
Process: node Pi
Resource type with 4 instances: node Rj
Pi requests an instance of Rj: edge Pi → Rj
Pi is holding an instance of Rj: edge Rj → Pi
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Graph With A Cycle But No
Deadlock
Basic Facts
If the graph contains no cycles ⇒ no deadlock
If the graph contains a cycle ⇒
if only one instance per resource type, then
deadlock. A cycle in the graph is a necessary
and sufficient condition for the existence of
deadlock.
if several instances per resource type, then a
possibility of deadlock. A cycle in the graph is a
necessary but not sufficient condition for the
existence of deadlock.
Methods for Handling
Deadlocks
We can use protocols to prevent or avoid
deadlock, ensuring that the system will never
enter a deadlock state:
Deadlock Prevention
Deadlock Avoidance
Allow the system to enter a deadlock state:
Detect deadlock
Recover from deadlock
Ignore the problem and pretend that deadlocks
never occur in the system; used by most
operating systems, including UNIX.
Deadlock Prevention
Deadlock prevention is a set of methods for ensuring that at least one of the four necessary conditions cannot hold.
Mutual Exclusion – The mutual-exclusion condition
must hold for non-sharable resources; e.g., a printer
cannot be shared. Sharable resources (e.g., read-only
files) do not require mutually exclusive access, so a
process never needs to wait for a sharable resource.
Hold and Wait – must guarantee that whenever a
process requests a resource, it does not hold any
other resources
 Protocol-1: Require processes to request all their
resources before execution.
 Protocol-2: Allow process to request resources only
when the process has none allocated to it.
 Low resource utilization; starvation possible
Deadlock Prevention (Cont.)
No Preemption To ensure this condition does not
hold we can use following protocols:
 If a process is holding some resources and requests
another resources and if they are not available, we
check whether they are allocated to some other process
that is waiting for additional resources.
 If so, we preempt the desired resources from the
waiting process and allocate them to the requesting
process.
 If the resources are neither available nor held by a
waiting process, then the requesting process must wait.
 While it is waiting, some of its resources may be
preempted and allocated to requesting processes.
 A process can be restarted only when it is allocated the
requested resources and recovers all its preempted
resources.
 [Preemption of allocated resources are allowed].
Circular Wait – To ensure that this condition
never holds, impose a total ordering of all
resource types, and require that each process
requests resources in an increasing order of
enumeration.
If a process first requests resource type Ri and
then requests Rj, then Rj ≥ Ri must hold in the enumeration.
Low device utilization and reduced system throughput may result.

Deadlock Avoidance
To avoid deadlock additional a priori information about how resources are
to be requested is required.
The system must keep track of resources currently
available, currently allocated and future request
and release of each process to decide whether current
request can be satisfied or not.
Simplest and most useful model requires that each
process declare the maximum number of resources of
each type that it may need.
The deadlock-avoidance algorithm dynamically
examines the resource-allocation state to ensure that
there can never be a circular-wait condition.
Resource-allocation state is defined by the number of
available and allocated resources, and the maximum
demands of the processes.
Safe State
When a process requests an available resource, system
must decide if immediate allocation leaves the system in a
safe state.
System is in safe state if there exists a sequence <P1, P2,
…, Pn> of ALL the processes in the systems such that for
each Pi, the resources that Pi can still request can be
satisfied by currently available resources + resources held
by all the Pj, with j < i
That is:
 If Pi resource needs are not immediately available, then Pi can
wait until all Pj have finished
 When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate
 When Pi terminates, Pi +1 can obtain its needed resources, and
so on
 If no such sequence exists, then the system is in unsafe
state.
Basic Facts
If a system is in a safe state ⇒ no deadlocks
If a system is in an unsafe state ⇒ possibility of
deadlock
Avoidance ⇒ ensure that the system will never
enter an unsafe state
Consider a system with 12 magnetic tape drives:

Process  Max. Needs  Currently Allocated  Remaining Need
P0       10          5                    5
P1       4           2                    2
P2       9           2                    7

Total allocated = 5 + 2 + 2 = 9
Currently available = 12 − 9 = 3
The sequence <P1, P0, P2> satisfies the safety condition, so the state is safe.
Suppose that after some time P2 requests one more resource and it is allocated:

Process  Max. Needs  Currently Allocated  Remaining Need
P0       10          5                    5
P1       4           2                    2
P2       9           3                    6

Currently available = 12 − 10 = 2
P1 requires 2 tape drives for completion. After P1 completes, 4 tape drives are available.
But P0 and P2 require 5 and 6 tape drives respectively. So the system has entered an unsafe state, and a deadlock may occur.
Avoidance Algorithms
Single instance of a resource type:
 Use a resource-allocation graph
Multiple instances of a resource type:
 Use the banker's algorithm
Resource-Allocation Graph
Scheme
A new edge called Claim edge Pi  Rj indicated
that process Pi may request resource Rj at some
time in future; represented by a dashed line
Claim edge converts to request edge when a
process requests a resource
Request edge converted to an assignment
edge when the resource is allocated to the
process
When a resource is released by a process,
assignment edge reconverts to a claim edge
Resources must be claimed a priori in the
system
Resource-Allocation Graph
R2 is currently free. We can allocate R2 to one of the
claiming processes, but we cannot allocate it to P2,
because that may create a cycle in the graph.
Unsafe State In Resource-Allocation Graph
Resource-Allocation Graph Algorithm
Suppose that process Pi requests a resource Rj.
The request can be granted only if converting
the request edge to an assignment edge does
not result in the formation of a cycle in the
resource-allocation graph.
Banker’s Algorithm
A bank never allocates its available cash in such a way
that it can no longer satisfy the needs of all its customers.
This algorithm is used when a system has
multiple instances of each resource type.
Algorithm:
When a process enters the system, it must declare
the max. no. of instances of each type it may need.
This request should not be greater than total no. of
resources in the system.
The system must determine whether the allocation
of these resources will leave the system in safe
state. If it will, allocate them; otherwise the process
must wait until some other processes release
enough resources.
When a process gets all its resources it must return
them in a finite amount of time.
Data Structures for the Banker’s
Algorithm
Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If Available[j] = k,
there are k instances of resource type Rj available
Max: n x m matrix. If Max[i,j] = k, then process
Pi may request at most k instances of resource
type Rj
Allocation: n x m matrix. If Allocation[i,j] = k,
then Pi is currently allocated k instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may
need k more instances of Rj to complete its task
Need[i,j] = Max[i,j] – Allocation[i,j]
Let X and Y be vectors of length n. We say
that, X<=Y if and only if:
X[i]<=Y[i] for all i=1,2,3…n
For example:
X=[1,7,3,2] and Y=[0,3,2,1], we say that Y<=X.
We say, Y<X if Y<=X and Y!=X
For example:
X=[1,2,3,4] and Y=[1,2,3,1], we say that Y<X.
Safety Algorithm
1. Let Work and Finish be vectors of length m
and n, respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 0, 1, …, n−1
2. Find an i such that both:
   (a) Finish[i] == false
   (b) Need_i ≤ Work
   If no such i exists, go to step 4
3. Work = Work + Allocation_i
   Finish[i] = true
   go to step 2
4. If Finish[i] == true for all i, then the
system is in a safe state
This algorithm requires on the order of m × n² operations.
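A direct Python transcription of the safety algorithm, as a sketch: Available is a vector, and Allocation and Need are lists of per-process vectors. It returns whether a safe sequence exists, plus one such sequence.

# Safety algorithm sketch: find a sequence in which every process
# can obtain its remaining need and then release its allocation.
def is_safe(available, allocation, need):
    work = available[:]                        # step 1: Work = Available
    finish = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):       # step 2: find a runnable i
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # step 3
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence               # step 4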
Resource-Request Algorithm for Process Pi
When a request for resources is made by process Pi,
the following actions are taken:
Request_i = request vector for process Pi.
1. If Request_i ≤ Need_i, go to step 2. Otherwise,
raise an error condition, since the process has exceeded its
maximum claim
2. If Request_i ≤ Available, go to step 3.
Otherwise Pi must wait, since the resources are not
available
3. Pretend to allocate the requested resources to Pi by
modifying the state as follows:
   Available = Available – Request_i
   Allocation_i = Allocation_i + Request_i
   Need_i = Need_i – Request_i
 If safe ⇒ the resources are allocated to Pi
 If unsafe ⇒ Pi must wait, and the old resource-allocation state
is restored
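A sketch of the resource-request algorithm, built on the is_safe() helper above: tentatively grant the request, then roll the state back if the resulting state is unsafe. This mirrors the three steps just listed.

# Resource-request sketch for process i (vectors as Python lists).
def request_resources(i, request, available, allocation, need):
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")  # step 1
    if any(r > a for r, a in zip(request, available)):
        return False                                            # step 2: wait
    # step 3: pretend to allocate
    available[:]  = [a - r for a, r in zip(available, request)]
    allocation[i] = [a + r for a, r in zip(allocation[i], request)]
    need[i]       = [n - r for n, r in zip(need[i], request)]
    safe, _ = is_safe(available, allocation, need)
    if not safe:                               # unsafe: restore the old state
        available[:]  = [a + r for a, r in zip(available, request)]
        allocation[i] = [a - r for a, r in zip(allocation[i], request)]
        need[i]       = [n + r for n, r in zip(need[i], request)]
    return safe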
Example of Banker's Algorithm
5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances)
Snapshot at time T0:

         Allocation   Max     Available
         A B C        A B C   A B C
P0       0 1 0        7 5 3   3 3 2
P1       2 0 0        3 2 2
P2       3 0 2        9 0 2
P3       2 1 1        2 2 2
P4       0 0 2        4 3 3

The content of the Need matrix is defined to be Max – Allocation:

         Need
         A B C
P0       7 4 3
P1       1 2 2
P2       6 0 0
P3       0 1 1
P4       4 3 1

The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
Example: P1 Requests (1,0,2)
Check that Request ≤ Available: (1,0,2) ≤ (3,3,2) ⇒ true. Pretend to allocate:

         Allocation   Need    Available
         A B C        A B C   A B C
P0       0 1 0        7 4 3   2 3 0
P1       3 0 2        0 2 0
P2       3 0 2        6 0 0
P3       2 1 1        0 1 1
P4       0 0 2        4 3 1

Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. So P1 immediately gets its requested resources.
Can a request for (3,3,0) by P4 be granted?
Can a request for (0,2,0) by P0 be granted?
Deadlock Detection
 In this approach, The OS doesn't apply any
mechanism to avoid or prevent the deadlocks.
Therefore the system considers that the deadlock
will definitely occur.
 In order to get rid of deadlocks, the OS periodically
checks the system for deadlock. If it finds one, the OS
recovers the system using some recovery technique.
 The primary responsibility of OS is to detect
deadlocks. With the help of the resource allocation
graph, the OS can detect deadlocks.
Single Instance of Each Resource Type
Draw a wait-for graph for the given resource-allocation graph:
Processes are represented by nodes.
Add an edge Pi → Pj if Pi is waiting for process Pj to
release a resource.
Periodically invoke an algorithm that searches
for a cycle in the graph. If there is a cycle,
there exists a deadlock.
An algorithm to detect a cycle in a graph
requires on the order of n² operations, where n is
the number of vertices in the graph.
Resource-Allocation Graph and Wait-for Graph
Resource-Allocation Graph / Corresponding wait-for graph
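A DFS-based cycle check over a wait-for graph, represented here as a plain dict from each process to the processes it waits for; this encoding is illustrative, not an OS API.

# Deadlock exists in a wait-for graph iff the graph has a cycle.
def has_deadlock(wait_for):                # {process: [processes it waits for]}
    visiting, done = set(), set()
    def dfs(p):
        visiting.add(p)
        for q in wait_for.get(p, ()):
            if q in visiting:              # back edge -> cycle -> deadlock
                return True
            if q not in done and dfs(q):
                return True
        visiting.discard(p)
        done.add(p)
        return False
    return any(p not in done and dfs(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True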


Several Instances of a Resource
Type
Available: A vector of length m indicates
the number of available resources of each
type
Allocation: An n x m matrix defines the
number of resources of each type currently
allocated to each process
Request: An n x m matrix indicates the
current request of each process. If Request
[i][j] = k, then process Pi is requesting k
more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length
m and n, respectively. Initialize:
   (a) Work = Available
   (b) For i = 1, 2, …, n: if Allocation_i ≠ 0, then
   Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
   (a) Finish[i] == false
   (b) Request_i ≤ Work
   If no such i exists, go to step 4
3. Work = Work + Allocation_i
   Finish[i] = true
   go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n,
then the system is in a deadlock state. Moreover, if
Finish[i] == false, then Pi is deadlocked
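The detection algorithm differs from the safety algorithm in two ways: a process holding nothing finishes trivially, and the current Request (not the remaining Need) is compared with Work. A sketch, returning the indices of deadlocked processes; the usage line takes the matrices from the continued example below.

# Deadlock-detection sketch for multiple instances per resource type.
def detect_deadlock(available, allocation, request):
    work = available[:]                                            # step 1(a)
    finish = [all(a == 0 for a in alloc) for alloc in allocation]  # step 1(b)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):                           # step 2
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # step 3
                finish[i] = True
                progressed = True
    return [i for i, f in enumerate(finish) if not f]              # step 4

alloc = [[0,1,0],[2,0,0],[3,0,3],[2,1,1],[0,0,2]]
req   = [[0,0,0],[2,0,2],[0,0,1],[1,0,0],[0,0,2]]
print(detect_deadlock([0,0,0], alloc, req))   # [1, 2, 3, 4] -> P1..P4 deadlocked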
Example of Detection Algorithm
Five processes P0 through P4; three resource types: A (7 instances), B (2 instances), and C (6 instances)
Snapshot at time T0:

         Allocation   Request   Available
         A B C        A B C     A B C
P0       0 1 0        0 0 0     0 0 0
P1       2 0 0        2 0 2
P2       3 0 3        0 0 0
P3       2 1 1        1 0 0
P4       0 0 2        0 0 2

The sequence <P0, P2, P3, P1, P4> results in Finish[i] = true for all i, so the system is not deadlocked.
Example (Cont.)
P2 requests an additional instance of type C:

         Request
         A B C
P0       0 0 0
P1       2 0 2
P2       0 0 1
P3       1 0 0
P4       0 0 2

State of the system? We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests.
A deadlock exists, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
When, and how often, to invoke deadlock-
detection algorithm depends on:
How often a deadlock is likely to occur?
How many processes will need to be rolled back?
If the detection algorithm is invoked frequently, it
incurs considerable overhead on system performance,
since most invocations will find no deadlock.
If it is invoked less frequently, there may be many
cycles in the resource-allocation graph by the time it
runs, and it will be difficult to tell which of the
processes "caused" the deadlock.
Recovery from Deadlock
There are two alternative to deal with deadlock:
 Inform the operator that the deadlock occurred
and let the operator deal with it.
 Let the system recover from the deadlock
automatically.
There are four options for breaking a deadlock:
 Abort all deadlocked processes
 Abort one process at a time until the deadlock
cycle is eliminated
 Preempt some resources from one or more
deadlocked processes
 The operating system can roll back to the previous
safe state using check pointing at every state.
Process Termination
 Aborting one or more processes will break the deadlock. There
are two options;
 Abort all deadlocked processes: It causes great expense
because these processes have computed for a long time.
 Abort one process at a time until the deadlock cycle is
eliminated: It causes overhead on system performance
because every time we abort a process a deadlock-detection
algorithm must be invoked to detect any further deadlock in
the system.
 In which order should we choose to abort processes?
1. Priority of the process
2. How long process has computed, and how much longer to
completion
3. Resources the process has used
4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?
Resource Preemption
Preempt some resources from processes and give
these resources to other processes until the
deadlock cycle is broken.
Selecting a victim – Which resources and which
processes are to be preempted? The order of
preemption should minimize the cost. Cost can be
computed by:
 No. of resources the process is holding
 Execution time so far a process has consumed
Rollback – When we preempt a resource from a
process, the process must be rolled back to some safe
state and restarted from that state.
Starvation – If the same process is always picked as
the victim, that process may starve.