
23CB401-OPERATING SYSTEMS

UNIT 2

PROCESS MANAGEMENT

CPU Scheduling – Basic Concepts, Scheduling criteria - Scheduling algorithms; Process


Synchronization - The Critical-Section problem, Synchronization hardware, Mutex Locks,
Semaphores, Monitors, Classical problems of synchronization; Deadlock – Deadlock
Characterization, Methods for handling deadlocks, Deadlock prevention, Deadlock avoidance,
Deadlock detection, Recovery from deadlock.

2.1 CPU SCHEDULING

 CPU scheduling is the basis of multi-programmed operating systems.


 The objective of multiprogramming is to have some process running at all times, in order
to maximize CPU utilization.
 Scheduling is a fundamental operating-system function.
 Almost all computer resources are scheduled before use.

CPU-I/O Burst Cycle


 Process execution consists of a cycle of CPU execution and I/O wait.
 Processes alternate between these two states.
 Process execution begins with a CPU burst.
 That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so
on.
 Eventually, the last CPU burst will end with a system request to terminate execution,
rather than with another I/O burst.

CPU Scheduler

 Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed.
 The selection process is carried out by the short-term scheduler (or CPU scheduler).
 The ready queue is not necessarily a first-in, first-out (FIFO) queue. It may be a FIFO
queue, a priority queue, a tree, or simply an unordered linked list.

Preemptive Scheduling

 CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
 Under circumstances 1 and 4, the scheduling scheme is non-preemptive.
 Otherwise, the scheduling scheme is preemptive.

Non-preemptive Scheduling

 In non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases it, either by terminating or by switching to the
waiting state.
 This scheduling method is used by the Microsoft Windows environment.

Dispatcher

 The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler.

 This function involves:


1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program

2.1.1 Scheduling Criteria

1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a
lightly loaded system) to 90 percent (for a heavily used system).

2. Throughput: It is the number of processes completed per time unit. For long processes,
this rate may be 1 process per hour; for short transactions, throughput might be 10
processes per second.

3. Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing
I/O.

4. Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.

5. Response time: It is the amount of time it takes to start responding, but not the time that
it takes to output that response.

2.1.2 CPU Scheduling Algorithms

1. First-Come, First-Served Scheduling


2. Shortest Job First Scheduling
3. Priority Scheduling
4. Round Robin Scheduling

First-Come, First-Served Scheduling


 The process that requests the CPU first is allocated the CPU first.
 It is a non-preemptive scheduling technique.
 The implementation of the FCFS policy is easily managed with a FIFO queue.

Example:
Process Burst Time
P1 24
P2 3
P3 3

 If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the
result shown in the following Gantt chart:
Gantt Chart: P1 (0–24), P2 (24–27), P3 (27–30)

Average waiting time = (0+24+27) / 3 = 17 ms


Average Turnaround time = (24+27+30) / 3 = 27 ms

 The FCFS algorithm is particularly troublesome for time-sharing systems, where it
is important that each user get a share of the CPU at regular intervals.
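
These FCFS averages are easy to check programmatically. Below is a minimal C sketch (the array contents and all names are illustrative, not part of the notes) that replays the example above; in FCFS with all arrivals at time 0, each process's waiting time is simply its start time.

#include <stdio.h>

/* FCFS sketch: processes arrive at time 0 in array order. */
int main(void) {
    int burst[] = {24, 3, 3};               /* P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int time = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += time;                 /* waiting time = start time */
        time += burst[i];                   /* run to completion */
        total_tat += time;                  /* turnaround = finish time */
    }
    printf("Average waiting time    = %.2f ms\n", (double)total_wait / n);  /* 17.00 */
    printf("Average turnaround time = %.2f ms\n", (double)total_tat / n);   /* 27.00 */
    return 0;
}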

Shortest Job First Scheduling

 It is a non-preemptive scheduling technique.


 The CPU is assigned to the process that has the smallest next CPU burst.
 If two processes have the same length next CPU burst, FCFS scheduling is used to break
the tie.
Example:

Process Burst Time


P1 6
P2 8
P3 7
P4 3
Gantt Chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)

Average waiting time is (3 + 16 + 9 + 0)/4 = 7 ms


Average turnaround time = ( 3+9+16+24) / 4 = 13 ms
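
Non-preemptive SJF amounts to FCFS applied to jobs sorted by burst length. A short C sketch of the example above (assuming, as the example does, that all four processes are available at time 0; names are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: ascending burst time. */
static int by_burst(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};                  /* P1, P2, P3, P4 */
    int n = sizeof burst / sizeof burst[0];
    qsort(burst, n, sizeof burst[0], by_burst);  /* shortest job first */

    int time = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        total_wait += time;
        time += burst[i];
        total_tat += time;
    }
    printf("Average waiting time    = %.2f ms\n", (double)total_wait / n);  /* 7.00  */
    printf("Average turnaround time = %.2f ms\n", (double)total_tat / n);   /* 13.00 */
    return 0;
}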

Shortest Remaining Time First Scheduling


 It is a preemptive scheduling technique.
 Preemptive SJF is known as shortest remaining time first
Example:

Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Waiting Time Calculation

The resulting schedule is P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26).

P1 : 10 – 1 = 9
P2 : 1 – 1 = 0
P3 : 17 – 2 = 15
P4 : 5 – 3 = 2
Average waiting time = (9+0+15+2) / 4 = 6.5 ms

Priority Scheduling
 A priority is associated with each process, and the CPU is allocated to the process with the
highest priority (smallest integer = highest priority).

Example :
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

The resulting schedule is P2 (0–1), P5 (1–6), P1 (6–16), P3 (16–18), P4 (18–19).
Average waiting time = (0+1+6+16+18) / 5 = 8.2 ms



Priority scheduling can be preemptive or non-preemptive.

Drawback: Starvation – low-priority processes may never execute.

Solution: Aging – a technique of gradually increasing the priority of processes that
wait in the system for a long time.
Round-Robin Scheduling
 The round-robin (RR) scheduling algorithm is designed especially for timesharing
systems.
 It is similar to FCFS scheduling, but preemption is added to switch between processes.
 A small unit of time, called a time quantum (or time slice), is defined.
 The ready queue is treated as a circular queue.
Example:
Process Burst Time
P1 24
P2 3
P3 3

Time Quantum = 4 ms.

Waiting time:
P1 = 10 – 4 = 6
P2 = 4
P3 = 7
Average waiting time = (6+4+7) / 3 = 17/3 = 5.66 ms
 The performance of the RR algorithm depends heavily on the size of the time quantum.
 If the time quantum is very large (infinite), the RR policy is the same as the FCFS policy.
 If the time quantum is very small, the RR approach is called processor sharing and appears to the
users as though each of n processes has its own processor running at 1/n the speed of the real
processor.
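
The RR example above can be reproduced with a small simulation. This C sketch (assuming all processes arrive at time 0; names are illustrative) cycles through the processes, granting each at most one quantum per pass:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* P1, P2, P3 */
    int n = 3, quantum = 4, time = 0, done = 0;
    int remaining[3], finish[3];
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {                       /* one pass = one trip around the circular queue */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = time; done++; }
        }
    }
    double total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - burst[i];  /* waiting = finish - burst (arrival is 0) */
    printf("Average waiting time = %.2f ms\n", total_wait / n);  /* 5.67 */
    return 0;
}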

Multilevel Queue Scheduling


 It partitions the ready queue into several separate queues.
 The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type.
 There must be scheduling between the queues, which is commonly implemented as a
fixed-priority preemptive scheduling.
 For example the foreground queue may have absolute priority over the background
queue.
 Example of a multilevel queue scheduling algorithm with five queues
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
 Each queue has absolute priority over lower-priority queue.

Multilevel Feedback Queue Scheduling


 It allows a process to move between queues.
 The idea is to separate processes with different CPU-burst characteristics.
 If a process uses too much CPU time, it will be moved to a lower-priority queue.
 This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
 Similarly, a process that waits too long in a lower priority queue may be moved to a
higher-priority queue.
 This form of aging prevents starvation.
Example:
 Consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
 The scheduler first executes all processes in queue 0.
 Only when queue 0 is empty will it execute processes in queue 1.
 Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
 A process that arrives for queue 1 will preempt a process in queue 2.
 A process that arrives for queue 0 will, in turn, preempt a process in queue 1.

 A multilevel feedback queue scheduler is defined by the following parameters:


1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher priority
queue
4. The method used to determine when to demote a process to a lower-priority
queue
5. The method used to determine which queue a process will enter when that
process needs service

2.2 PROCESS SYNCHRONIZATION

 Concurrent access to shared data may result in data inconsistency.


 Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes.
 The shared-memory solution to the bounded-buffer problem allows at most n – 1 items in the buffer
at the same time. A solution where all N buffers are used is not simple.
 Suppose that we modify the producer-consumer code by adding a variable counter,
initialized to 0, and increment it each time a new item is added to the buffer.
 Race condition: a situation where several processes access and manipulate shared
data concurrently, and the final value of the shared data depends upon which process finishes
last.
 To prevent race conditions, concurrent processes must be synchronized.
Example:
Consider the bounded-buffer problem, where an integer variable counter, initialized to 0, is added.
counter is incremented every time we add a new item to the buffer and is decremented every time we
remove one item from the buffer.
The code for the producer process can be modified as follows:
while (true)
{
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process can be modified as follows:

while (true)
{
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Let the current value of counter be 5. If the producer and consumer processes execute
the statements counter++ and counter-- concurrently, then the value of counter may be 4, 5,
or 6, which is incorrect. To explain this further, counter++ may be implemented in
machine language as follows:
register1 = counter
register1 = register1 + 1
counter = register1

and counter-- may be implemented as follows:


register2 = counter
register2 = register2 - 1
counter = register2

The concurrent execution of counter++ and counter-- is equivalent to a sequential
execution in which the lower-level statements are interleaved in some arbitrary order.
One such interleaving is given below:
T0: producer execute register1 = counter {register1 = 5}
T1: producer execute register1 = register1 + 1 {register1 = 6}
T2: consumer execute register2 = counter {register2 = 5}
T3: consumer execute register2 = register2 − 1 {register2 = 4}
T4: producer execute counter = register1 {counter = 6}
T5: consumer execute counter = register2 {counter = 4}

A situation like this, where several processes access and manipulate the same data concurrently
and the outcome of the execution depends on the particular order in which the access takes place,
is called a race condition.
To guard against the race condition above, we need to ensure that only one process at a time can
be manipulating the variable counter.

2.2.1 The Critical-Section Problem

 A critical section is the portion of a program that accesses shared data.


 Each process has a code segment, called critical section, in which the shared data is
accessed.
 To ensure synchronization, no two processes should be allowed to execute in their
critical sections at the same time.

Requirements to be satisfied for a Solution to the Critical-Section Problem:

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other


processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.

General structure of process Pi

do {
entry section
critical section

exit section
remainder section

} while (1);

 Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and non-preemptive kernels.
 A preemptive kernel allows a process to be preempted while it is running in kernel mode.
 A non-preemptive kernel does not allow a process running in kernel mode to be
preempted; a kernel-mode process will run until it exits kernel mode, blocks, or
voluntarily yields control of the CPU.
 Obviously, a non-preemptive kernel is essentially free from race conditions on kernel
data structures, as only one process is active in the kernel at a time.
 We cannot say the same about preemptive kernels, so they must be carefully designed to
ensure that shared kernel data are free from race conditions.
 Preemptive kernels are especially difficult to design for SMP architectures, since in these
environments it is possible for two kernel-mode processes to run simultaneously on
different processors.

2.2.2 Synchronization Hardware

Hardware-based primitive operations can be used directly as synchronization tools, or they can be
used to form the foundation of more abstract synchronization mechanisms.

 Memory Barriers

How a computer architecture determines what memory guarantees it will provide to an application
program is known as its memory model. In general, a memory model falls into one of two
categories:

1. Strongly ordered, where a memory modification on one processor is immediately visible to all
other processors.

2. Weakly ordered, where modifications to memory on one processor may not be immediately
visible to other processors.

Memory models vary by processor type, so kernel developers cannot make any assumptions
regarding the visibility of modifications to memory on a shared-memory multiprocessor. To address
this issue, computer architectures provide instructions that can force any changes in memory to be
propagated to all other processors, thereby ensuring that memory modifications are visible to
threads running on other processors. Such instructions are known as memory barriers or memory
fences. When a memory barrier instruction is performed, the system ensures that all loads and stores
are completed before any subsequent load or store operations are performed. Therefore, even if
instructions were reordered, the memory barrier ensures that the store operations are completed in
memory and visible to other processors before future load or store operations are performed.
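
As a concrete illustration, C11 exposes memory barriers through the fence operations in <stdatomic.h>. The sketch below (the variable names are ours) shows the flag-and-data handshake the paragraph describes: without the two fences, a weakly ordered processor could make the store to flag visible before the store to data.

#include <stdatomic.h>

int data = 0;
atomic_int flag = 0;

void producer(void) {
    data = 42;                                   /* ordinary store */
    atomic_thread_fence(memory_order_release);   /* barrier: complete prior stores first */
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
}

void consumer(void) {
    while (atomic_load_explicit(&flag, memory_order_relaxed) == 0)
        ;                                        /* wait until flag is set */
    atomic_thread_fence(memory_order_acquire);   /* barrier: order subsequent loads */
    /* data is now guaranteed to be 42 */
}

On a strongly ordered machine the fences may compile to nothing; on a weakly ordered one they emit the processor's barrier instruction.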

 Hardware Instructions

Many modern computer systems provide special hardware instructions that allow us either to test
and modify the content of a word or to swap the contents of two words atomically, that is, as one
uninterruptible unit. We can use these special instructions to solve the critical-section problem in a
relatively simple manner. Rather than discussing one specific instruction for one specific machine,
we abstract the main concepts behind these types of instructions by describing the test_and_set()
and compare_and_swap() instructions.
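
The notes name these instructions without defining them. Their usual abstract definitions, following the standard textbook presentation (each function is understood to execute atomically in hardware; the enter/exit helper names are ours), are:

/* test_and_set(): returns the old value of *target and sets it to true (1).
 * Executes as one uninterruptible hardware instruction. */
int test_and_set(int *target) {
    int rv = *target;
    *target = 1;
    return rv;
}

/* compare_and_swap(): sets *value to new_value only if *value == expected;
 * always returns the original value. Also executes atomically. */
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

/* Mutual exclusion built on test_and_set(); lock is initialized to 0 (false). */
int lock = 0;

void enter_section(void) {
    while (test_and_set(&lock))
        ; /* busy wait until the previous value was false */
}

void exit_section(void) {
    lock = 0;   /* release */
}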

 Atomic Variables

Typically, the compare_and_swap() instruction is not used directly to provide mutual exclusion.
Rather, it is used as a basic building block for constructing other tools that solve the critical-section
problem. One such tool is an atomic variable, which provides atomic operations on basic data types
such as integers and booleans. Incrementing or decrementing an integer value may produce a race
condition. Atomic variables can be used to ensure mutual exclusion in situations where there may
be a data race on a single variable while it is being updated, as when a counter is incremented.
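
For example, C11 offers atomic variables directly: the fetch-and-add below is one indivisible read-modify-write, so the counter++ / counter-- race described earlier cannot occur. (A brief sketch; the function names are illustrative.)

#include <stdatomic.h>

atomic_int counter = 0;

void producer_step(void) { atomic_fetch_add(&counter, 1); }  /* atomic counter++ */
void consumer_step(void) { atomic_fetch_sub(&counter, 1); }  /* atomic counter-- */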

2.2.3 Mutex Locks

 A mutex (mutual exclusion) lock is a simple software tool that solves the critical-section
problem.
 The mutex lock is used to protect critical regions and thus prevent race conditions.
 A process must acquire the lock before entering a critical section; it releases the lock
when it exits the critical section.
 The acquire() function acquires the lock, and the release() function releases the lock.
 A mutex lock has a boolean variable available whose value indicates if the lock is
available or not.
 If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable.
 A process that attempts to acquire an unavailable lock is blocked until the lock is
released.

The definition of acquire() is as follows:


acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

Solution to the critical-section problem using mutex locks.

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);

The definition of release() is as follows:


release() {
    available = true;
}
 Calls to either acquire() or release() must be performed atomically.
 The main disadvantage of the implementation given here is that it requires busy waiting.
 This type of mutex lock is also called a spinlock because the process “spins” while waiting for the lock to
become available.
 The advantage of spinlocks is that no context switch is required when a process must wait on a
lock.
 When locks are expected to be held for short times, spinlocks are useful.
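
In practice the acquire()/release() pair corresponds to a real locking API such as POSIX pthread mutexes. A minimal sketch (compile with -pthread; the worker logic is our illustration, not from the notes):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* acquire(): waits if unavailable */
        shared_counter++;               /* critical section */
        pthread_mutex_unlock(&lock);    /* release() */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);   /* always 200000 */
    return 0;
}

Note that pthread_mutex_lock blocks (sleeps) rather than spins; POSIX provides pthread_spinlock_t separately for cases where a true spinlock is preferred.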

2.2.4 Semaphores

 It is a synchronization tool that is used to generalize the solution to the critical-section
problem.
 A semaphore S is an integer variable that can only be accessed via two indivisible (atomic)
operations, namely
1. wait or P operation (to test)
2. signal or V operation (to increment)

wait(S)
{
    while (S <= 0)
        ; /* busy wait */
    S--;
}

signal(S)
{
    S++;
}
Mutual Exclusion Implementation using semaphore

do
{
    wait(mutex);

    critical section

    signal(mutex);

    remainder section
} while (1);
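
The same structure written against the real POSIX semaphore API, where sem_wait() plays the role of wait()/P and sem_post() the role of signal()/V (a sketch; thread creation is elided):

#include <semaphore.h>
#include <stddef.h>

sem_t mutex;   /* binary semaphore used as a lock */

void *process(void *arg) {
    (void)arg;
    sem_wait(&mutex);    /* wait(mutex) */
    /* critical section */
    sem_post(&mutex);    /* signal(mutex) */
    /* remainder section */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);   /* initial value 1 */
    /* ... create threads that run process() ... */
    sem_destroy(&mutex);
    return 0;
}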
Semaphore Implementation
 The semaphore discussed so far requires busy waiting. That is, if a process is in its critical
section, any other process that tries to enter its critical section must loop continuously in the
entry code.
 To overcome the busy waiting problem, the definition of the semaphore operations wait and
signal should be modified.
 When a process executes the wait operation and finds that the semaphore value is not
positive, the process can block itself. The block operation places the process into a waiting
queue associated with the semaphore.
 A process that is blocked waiting on a semaphore should be restarted when some other
process executes a signal operation. The blocked process is restarted by a wakeup
operation, which puts the process into the ready queue.
 To implement the semaphore, we define a semaphore as a record as follows:

typedef struct {
    int value;
    struct process *L;
} semaphore;

 Assume two simple operations:


1. block() suspends the process that invokes it.
2. wakeup(P) resumes the execution of a blocked process P.
 Semaphore operations are now defined as

wait(S)
{
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

signal(S)
{
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}

Deadlock & starvation:


Example: Consider a system of two processes, P0 & P1 each accessing two semaphores, S & Q,
set to the value 1.
P0 P1
Wait (S) Wait (Q)
Wait (Q) Wait (S)
. .
. .
. .
Signal(S) Signal(Q)
Signal(Q) Signal(S)
 Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q),
it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait
until P0 executes signal(S). Since these signal operations cannot be executed, P0 and P1
are deadlocked.
 Another problem related to deadlock is indefinite blocking or starvation, a situation
where a process waits indefinitely within the semaphore. Indefinite blocking may occur if
we add and remove processes from the list associated with a semaphore in LIFO order.

Types of Semaphores
 Counting semaphore – any positive integer value
 Binary semaphore – integer value can range only between 0 and 1

2.2.5 Monitors

 A monitor is a synchronization construct that supports mutual exclusion and the ability to
wait /block until a certain condition becomes true.
 A monitor is an abstract datatype that encapsulates data with a set of functions to operate
on the data.
Characteristics of Monitor

 The local variables of a monitor can be accessed only by the local functions.
 A function defined within a monitor can only access the local variables of the monitor and its
formal parameters.
 Only one process may be active within the monitor at a time.

Syntax of a Monitor

monitor monitor-name
{
    // shared variable declarations

    function P1 (…) { …. }
    …
    function Pn (…) { …. }

    initialization code (…) { …. }
}

Schematic view of a monitor

Monitor with condition variables

 Instead of lock-based protection, monitors use shared condition variables for
synchronization; only two operations, wait() and signal(), can be applied to a
condition variable.

condition x, y;
x.wait (); // a process that invokes the operation is suspended.
x.signal (); //resumes one of the suspended processes(if any)

2.2.6 Classic problems of synchronization

Bounded-Buffer Problem:

It is commonly used to illustrate the power of synchronization primitives. Assume that the pool
consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual
exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full
semaphores count the number of empty and full buffers, respectively. The semaphore empty is
initialized to the value n; the semaphore full is initialized to the value 0.
The code for the producer process is shown, the code for the consumer process is also shown.
Note the symmetry between the producer and the consumer. Interpret this code as the producer
producing full buffers for the consumer, or as the consumer producing empty buffers for the
producer.
In our problem, the producer and consumer processes share the following
data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
Bounded-Buffer Problem Producer Process:

do {

produce an item in nextp

wait(empty);
wait(mutex);

add nextp to buffer

signal(mutex);
signal(full);
} while (1);

Bounded-Buffer Problem Consumer Process:

do {
wait(full);
wait(mutex);

remove an item from buffer to nextc

signal(mutex);
signal(empty);

consume the item in nextc

} while (1);
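
For reference, here is a runnable translation of this pseudocode using POSIX threads and semaphores (the buffer size, item count, and printf are our illustrative choices; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define ITEMS 20

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t empty, full, mutex;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty);               /* wait(empty) */
        sem_wait(&mutex);               /* wait(mutex) */
        buffer[in] = item;              /* add nextp to buffer */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);               /* signal(mutex) */
        sem_post(&full);                /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                /* wait(full) */
        sem_wait(&mutex);               /* wait(mutex) */
        int item = buffer[out];         /* remove an item from buffer to nextc */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);               /* signal(mutex) */
        sem_post(&empty);               /* signal(empty) */
        printf("consumed %d\n", item);  /* consume the item in nextc */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, BUFFER_SIZE);   /* n empty slots */
    sem_init(&full, 0, 0);              /* 0 full slots  */
    sem_init(&mutex, 0, 1);             /* mutual exclusion */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}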

Readers-Writers Problem:

A data object (such as a file or record) is to be shared among several concurrent processes. Some of
these processes may want only to read the content of the shared object, whereas others may want to
update (that is, to read and write) the shared object. We distinguish between these two types of
processes by referring to those processes that are interested in only reading as readers, and to the
rest as writers. Obviously, if two readers access the shared data object simultaneously, no adverse
effects will result.
To ensure that these difficulties do not arise, we require that the writers have exclusive access to
the shared object. This synchronization problem is referred to as the readers-writers problem.

 Shared data

semaphore mutex, wrt;
int readcount;

Initially

mutex = 1, wrt = 1, readcount = 0

Readers-Writers Problem Writer Process:

wait(wrt);

writing is performed

signal(wrt);

Readers-Writers Problem Reader Process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);

reading is performed

wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
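
The reader and writer processes map directly onto POSIX semaphores. A compact sketch (the init helper is ours); note this is the classic readers-preference solution, so a steady stream of readers can starve writers:

#include <semaphore.h>

sem_t mutex, wrt;
int readcount = 0;

void rw_init(void) {
    sem_init(&mutex, 0, 1);   /* protects readcount */
    sem_init(&wrt, 0, 1);     /* exclusive access for writers */
}

void writer(void) {
    sem_wait(&wrt);
    /* writing is performed */
    sem_post(&wrt);
}

void reader(void) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);       /* first reader locks out writers */
    sem_post(&mutex);

    /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);       /* last reader readmits writers */
    sem_post(&mutex);
}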

Dining-Philosophers Problem:

Consider five philosophers who spend their lives thinking and eating. The philosophers share a
common circular table surrounded by five chairs, each belonging to one philosopher. In the center
of the table is a bowl of rice, and the table is laid with five single chopsticks (shown in Figure given
below) When a philosopher thinks, she does not interact with her colleagues. From time to time, a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the
chopsticks that are between her and her left and right neighbors). A philosopher may pick up only
one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a
neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without
releasing her chopsticks.
 Shared data
semaphore chopstick[5];
Initially all values are 1

The situation of dining philosophers


When she is finished eating, she puts down both of her chopsticks and starts thinking again.
The dining-philosophers problem is considered a classic synchronization problem, neither because of its
practical importance nor because computer scientists dislike philosophers, but because it is an example of a
large class of concurrency-control problems. It is a simple representation of the need to allocate several
resources among several processes in a deadlock and starvation-free manner.
One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab the
chopstick by executing a wait operation on that semaphore; she releases her chopsticks by executing the
signal operation on the appropriate semaphores. Thus, the shared data are
semaphore chopstick [5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown below.
 Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);

    eat

    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);

    think

} while (1);
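
In C with POSIX semaphores the philosopher loop looks as follows (thread setup is elided; the index-passing detail is illustrative). Note the standard caveat: if all five philosophers pick up their left chopstick at the same moment, this simple solution deadlocks, so practical variants break the symmetry, for example by having one philosopher pick up the right chopstick first.

#include <semaphore.h>
#include <stddef.h>

#define N 5
sem_t chopstick[N];   /* each initialized to 1 via sem_init(&chopstick[i], 0, 1) */

void *philosopher(void *arg) {
    int i = *(int *)arg;                    /* philosopher index 0..4 */
    for (;;) {
        sem_wait(&chopstick[i]);            /* wait(chopstick[i]) */
        sem_wait(&chopstick[(i + 1) % N]);  /* wait(chopstick[(i+1) % 5]) */
        /* eat */
        sem_post(&chopstick[i]);            /* signal(chopstick[i]) */
        sem_post(&chopstick[(i + 1) % N]);  /* signal(chopstick[(i+1) % 5]) */
        /* think */
    }
    return NULL;                            /* not reached */
}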

2.3 DEADLOCKS

Definition: A process requests resources. If the resources are not available at that time, the
process enters a wait state. Waiting processes may never change state again because the
resources they have requested are held by other waiting processes. This situation is called a
deadlock.

A process must request a resource before using it and must release the resource after using it.
1. Request: If the request cannot be granted immediately then the requesting process must
wait until it can acquire the resource.
2. Use: The process can operate on the resource
3. Release: The process releases the resource.

2.3.1 Deadlock Characterization

Four Necessary conditions for a deadlock


1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
 It is a directed graph with a set of vertices V and a set of edges E.
 V is partitioned into two types:
1. Processes P = {P1, P2, ..., Pn}
2. Resource types R = {R1, R2, ..., Rm}
 Pi -> Rj : a request edge
 Rj -> Pi : an assignment edge
 Pi is denoted as a circle and Rj as a square.
 Rj may have more than one instance, each represented as a dot within the square.

Example sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1->R1, P2->R3, R1->P2, R2->P1, R3->P3}
 Resource instances:
One instance of resource type R1, two instances of resource type R2, one instance
of resource type R3, and three instances of resource type R4.

Process states
Process P1 is holding an instance of resource type R2, and is waiting for an instance of resource
type R1.
Resource Allocation Graph with a deadlock

Process P2 is holding an instance of R1 and R2 and is waiting for an instance of resource type
R3.Process P3 is holding an instance of R3.
P1->R1->P2->R3->P3->R2->P1
P2->R3->P3->R2->P2

2.3.2 Methods for handling Deadlocks

1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery

Deadlock Prevention:
 This ensures that the system never enters the deadlock state.
 Deadlock prevention is a set of methods for ensuring that at least one of the necessary
conditions cannot hold.
 By ensuring that at least one of these conditions cannot hold, we can prevent the
occurrence of a deadlock.
1. Denying Mutual exclusion
 The mutual-exclusion condition must hold for non-sharable resources.
 For example, a printer cannot be simultaneously shared by several processes.
 Sharable resources, by contrast, do not require mutually exclusive access; read-only files are an example.
 If several processes attempt to open a read-only file at the same time, they can be granted
simultaneous access to the file.
 A process never needs to wait for a sharable resource.

2. Denying Hold and wait


 Whenever a process requests a resource, it does not hold any other resource.
 One technique that can be used requires each process to request and be allocated all its
resources before it begins execution.
 Another technique is before it can request any additional resources, it must release all the
resources that it is currently allocated.
 These techniques have two main disadvantages:
o First, resource utilization may be low, since many of the resources may be
allocated but unused for a long time.
o Second, starvation is possible: a process that needs several popular resources
may have to wait indefinitely.
3. Denying No preemption
 If a process is holding some resources and requests another resource that cannot be
immediately allocated to it (that is, the process must wait), then all resources the process is
currently holding are preempted. (ALLOW PREEMPTION)
 These resources are implicitly released.
 The process will be restarted only when it can regain its old resources.
4. Denying Circular wait
 Impose a total ordering of all resource types and allow each process to request for
resources in an increasing order of enumeration.
 Let R = {R1,R2,...Rm} be the set of resource types.
 Assign to each resource type a unique integer number.
 If the set of resource types R includes tape drives, disk drives, and printers, we might define:
F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12
 Each process can request resources only in an increasing order of enumeration.

Deadlock Avoidance:
 Deadlock avoidance request that the OS be given in advance additional information
concerning which resources a process will request and use during its life time. With this
information it can be decided for each request whether or not the process should wait.
 To decide whether the current request can be satisfied or must be delayed, a system must
consider the resources currently available, the resources currently allocated to each
process and future requests and releases of each process.
 Safe State
A state is safe if the system can allocate resources to each process in some order and still
avoid a deadlock.

 A deadlocked state is an unsafe state.
 Not all unsafe states are deadlocks.
 An unsafe state may lead to a deadlock.
 Two algorithms are used for deadlock avoidance namely;
1. Resource Allocation Graph Algorithm - single instance of a resource type.
2. Banker’s Algorithm – several instances of a resource type.
Resource allocation graph algorithm
 Claim edge: a claim edge Pi -> Rj indicates that process Pi may request resource Rj at
some time in the future; it is represented by a dashed directed edge.
 When process Pi requests resource Rj, the claim edge Pi -> Rj is converted to a request
edge.
 Similarly, when resource Rj is released by Pi, the assignment edge Rj -> Pi is reconverted
to a claim edge Pi -> Rj.
 The request can be granted only if converting the request edge Pi -> Rj to an assignment
edge Rj -> Pi does not form a cycle.

 If no cycle exists, then the allocation of the resource will leave the system in a safe state.
 If a cycle is found, then the allocation will put the system in an unsafe state.

Banker's algorithm
 Available: indicates the number of available resources of each type.
 Max: if Max[i, j] = k, then process Pi may request at most k instances of resource type Rj.
 Allocation: if Allocation[i, j] = k, then process Pi is currently allocated k instances of
resource type Rj.
 Need: if Need[i, j] = k, then process Pi may need k more instances of resource type Rj.

Need[i, j] = Max[i, j] - Allocation[i, j]


Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
   a. Finish[i] = false
   b. Needi <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi
   Finish[i] := true
   Go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
Resource Request Algorithm
Let Requesti be the request from process Pi for resources.
1. If Requesti <= Needi, go to step 2; otherwise, raise an error condition, since the process has
   exceeded its maximum claim.
2. If Requesti <= Available, go to step 3; otherwise, Pi must wait, since the resources are not
   available.
3. Available := Available - Requesti
   Allocationi := Allocationi + Requesti
   Needi := Needi - Requesti
 Now apply the safety algorithm to check whether this new state is safe or not.
 If it is safe then the request from process Pi can be granted.
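
The safety algorithm above translates almost line for line into C. A sketch (the matrix dimensions and function name are our choices):

#include <stdbool.h>

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

/* Returns true if the system state is safe. */
bool is_safe(int available[M], int max[N][M], int allocation[N][M]) {
    int need[N][M], work[M];
    bool finish[N] = {false};

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];   /* Need = Max - Allocation */
    for (int j = 0; j < M; j++)
        work[j] = available[j];                          /* Work := Available */

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;                            /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];         /* Work := Work + Allocation_i */
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;   /* step 4: some Finish[i] remains false, so unsafe */
    }
    return true;            /* Finish[i] = true for all i, so safe */
}

The resource-request algorithm then tentatively applies the request (step 3), calls is_safe(), and rolls the allocation back if the new state is unsafe.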

Deadlock detection
(i) Single instance of each resource type
 If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph called a wait-for graph.
Resource Allocation Graph

Wait for Graph


(ii) Several instances of a resource type
Available: the number of available resources of each type.
Allocation: the number of resources of each type currently allocated to each process.
Request: the current request of each process.
If Request[i, j] = k, then process Pi is requesting k more instances of resource type Rj.
1. Initialize Work := Available. For i = 1, 2, ..., n, if Allocationi != 0 then Finish[i] := false;
   otherwise, Finish[i] := true.
2. Find an index i such that both
   a. Finish[i] = false
   b. Requesti <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi
   Finish[i] := true
   Go to step 2.
4. If Finish[i] = false for some i, then process Pi is deadlocked.

Deadlock Recovery
1. Process Termination
   1. Abort all deadlocked processes.
   2. Abort one deadlocked process at a time until the deadlock cycle is eliminated.
      After each process is aborted, a deadlock-detection algorithm must be invoked to
      determine whether any processes are still deadlocked.
2. Resource Preemption
   Preempt some resources from processes and give these resources to other processes until the
   deadlock cycle is broken. Three issues must be addressed:
   i. Selecting a victim: which resources and which processes are to be preempted?
   ii. Rollback: if we preempt a resource from a process, it cannot continue with its normal
       execution, since it is missing some needed resource. We must roll back the process to some
       safe state and restart it from that state.
   iii. Starvation: how can we guarantee that resources will not always be preempted from
       the same process?
