
Process synchronization and deadlocks

Mr. S. C. Sagare
Introduction
 A cooperating process is one that can affect or be affected by other
processes executing in the system.

 Cooperating processes can either directly share a logical address space (that is,
both code and data) or be allowed to share data only through files or messages.

 Processes can execute concurrently or in parallel.

 One process may only partially complete execution before another process is
scheduled.

 In fact, a process may be interrupted at any point in its instruction stream, and
the processing core may be assigned to execute instructions of another process.
• The producer and consumer processes share an integer variable counter that tracks the number of full buffers; the producer increments it after writing an item, and the consumer decrements it after removing one.

• Although the producer and consumer routines are correct separately,
they may not function correctly when executed concurrently.
 Suppose that the value of the variable counter is currently 5 and that the producer and
consumer processes concurrently execute the statements “counter++” and
“counter--”.
 Following the execution of these two statements, the value of the variable counter may
be 4, 5, or 6!
 The only correct result, though, is counter == 5, which is generated correctly if the
producer and consumer execute separately.
 We can show that the value of counter may be incorrect as follows. Note that the
statement “counter++” may be implemented in machine language (on a typical
machine) as follows:
register1 = counter
register1 = register1 + 1
counter = register1
 where register1 is one of the local CPU registers. Similarly, the statement “counter--” is
implemented as follows:
register2 = counter
register2 = register2 − 1
counter = register2
 where again register2 is one of the local CPU registers. Even though register1 and
register2 may be the same physical register (an accumulator, say), remember that the
contents of this register will be saved and restored by the interrupt handler
 The concurrent execution of “counter++” and “counter--” is equivalent to a
sequential execution in which the lower-level statements presented previously
are interleaved in some arbitrary order (but the order within each
high-level statement is preserved). One such interleaving is the following:
T0: producer execute register1 = counter {register1 = 5}
T1: producer execute register1 = register1 + 1 {register1 = 6}
T2: consumer execute register2 = counter {register2 = 5}
T3: consumer execute register2 = register2 − 1 {register2 = 4}
T4: producer execute counter = register1 {counter = 6}
T5: consumer execute counter = register2 {counter = 4}

 Notice that we have arrived at the incorrect state “counter == 4”, indicating
that four buffers are full, when, in fact, five buffers are full.
 If we reversed the order of the statements at T4 and T5, we would arrive at
the incorrect state “counter == 6”.
 We would arrive at this incorrect state because we allowed both processes
to manipulate the variable counter concurrently.
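This lost-update interleaving can be reproduced deterministically. The sketch below (Python used purely for illustration) replays the six steps T0–T5 in order, rather than relying on real thread timing:

```python
# Replay the interleaving T0..T5 step by step to show how the
# concurrent "counter++" and "counter--" can lose an update.
def replay_interleaving():
    counter = 5                 # five buffers are full
    register1 = counter         # T0: producer loads counter (5)
    register1 = register1 + 1   # T1: producer increments its copy (6)
    register2 = counter         # T2: consumer loads counter (still 5!)
    register2 = register2 - 1   # T3: consumer decrements its copy (4)
    counter = register1         # T4: producer stores 6
    counter = register2         # T5: consumer stores 4, losing the increment
    return counter

print(replay_interleaving())    # ends at 4 instead of the correct 5
```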
 A situation like this, where several processes access and manipulate the same
data concurrently and the outcome of the execution depends on the
particular order in which the access takes place, is called a race condition.
 Process synchronization means sharing system resources among processes in such
a way that concurrent access to shared data is handled, thereby minimizing the
chance of inconsistent data.

 Process synchronization was introduced to handle problems that arise while
multiple processes execute.

 On the basis of synchronization, processes are categorized as one of the


following two types:

 Independent Process : Execution of one process does not affect the
execution of other processes.
 Cooperative Process : Execution of one process affects the execution of
other processes.
Critical Section Problem
 We begin our consideration of process synchronization by discussing the
so-called critical-section problem.

 Consider a system consisting of n processes {P0, P1, ..., Pn−1}.

 Each process has a segment of code, called a critical section, in which the process
may be changing common variables, updating a table, writing a file, and so on.

 The important feature of the system is that, when one process is executing in
its critical section, no other process is allowed to execute in its critical section.

 That is, no two processes are executing in their critical sections at the same
time.
 The critical-section problem is to design a protocol that the processes can
use to cooperate.

 Each process must request permission to enter its critical section.

 The section of code implementing this request is the entry section.

 The critical section may be followed by an exit section.

 The remaining code is the remainder section.


Use of CS in airline reservation
to avoid race conditions

• All processes have an identical form


• nextseatno is examined and incremented within a CS
• Hence a race condition does not exist
Fig. General structure of a typical process Pi .
 A solution to the critical-section problem must satisfy the following three
requirements:

 Mutual exclusion. If process Pi is executing in its critical section, then no


other processes can be executing in their critical sections.

 Progress. If no process is executing in its critical section and some


processes wish to enter their critical sections, then only those processes that
are not executing in their remainder sections can participate in deciding which
will enter its critical section next.

 Bounded waiting. After a process makes a request to enter its critical
section, there is a limit on how many other processes may enter their critical
sections before this process's request is granted. Once the limit is reached, the
system must grant the process permission to enter its critical section.
Peterson’s Solution
 A classic software-based solution to the critical-section problem known as
Peterson’s solution.

 Peterson’s solution is restricted to two processes that alternate execution


between their critical sections and remainder sections.

 The processes are numbered P0 and P1. For convenience, when presenting Pi , we
use Pj to denote the other process; that is, j equals 1 − i.

 In Peterson’s solution, we have two shared variables:

 boolean flag[2] : each entry initialized to FALSE; initially, no process is
interested in entering the critical section.

 int turn : indicates whose turn it is to enter the critical section.
 The variable turn indicates whose turn it is to enter its critical section.

 That is, if turn == i, then process Pi is allowed to execute in its critical section.

 The flag array is used to indicate if a process is ready to enter its critical section.

 For example, if flag[i] is true, this value indicates that Pi is ready to enter its
critical section.
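The entry and exit sections of Peterson's solution can be sketched as follows (Python for illustration; CPython's interpreter happens to give these loads and stores sequential consistency, whereas a C version would additionally need memory barriers):

```python
import threading

# Shared variables of Peterson's solution for two threads (i = 0, 1).
flag = [False, False]   # flag[i]: thread i is ready to enter its CS
turn = 0                # whose turn it is when both are interested
counter = 0             # shared data protected by the critical section
N = 10000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                  # entry section: announce interest
        turn = j                        # politely yield the tie-break
        while flag[j] and turn == j:
            pass                        # busy-wait while Pj has priority
        counter += 1                    # critical section
        flag[i] = False                 # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
```

Because mutual exclusion holds, no increment of counter is ever lost, so it ends at exactly 2 * N.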
 Peterson’s Solution preserves all three conditions :

 Mutual Exclusion is assured as only one process can access the critical
section at any time.

 Progress is also assured, as a process outside the critical section does not
block other processes from entering the critical section.

 Bounded Waiting is preserved as every process gets a fair chance.

 Disadvantages of Peterson’s Solution


 It involves Busy waiting
 It is limited to 2 processes.
 A mutual exclusion (mutex) is a program object that prevents simultaneous
access to a shared resource.

 To prove property 1, we note that each Pi enters its critical section only if either
flag[j] == false or turn == i.

 To prove properties 2 and 3,we note that a process Pi can be prevented from
entering the critical section only if it is stuck in the while loop with the
condition flag[j] == true and turn == j; this loop is the only one possible.

 If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter
its critical section.
Mutex and Mutex Locks
 Operating-system designers build software tools to solve the critical-section
problem. The simplest of these tools is the mutex lock.

 The term mutex is short for mutual exclusion.

 A mutex is a binary semaphore variable whose purpose is to provide locking


mechanism. It is used to provide mutual exclusion to a section of code, means
only one process can work on a particular code section at a time.

 Typically, when a program is started, it creates a mutex for a given resource at


the beginning by requesting it from the system and the system returns a unique
name or ID for it.

 After that, any thread needing the resource must use the mutex to lock the
resource from other threads while it is using the resource.

 If the mutex is already locked, a thread needing the resource is typically queued
by the system and then given control when the mutex becomes unlocked.
 We use the mutex lock to protect critical regions and thus prevent race
conditions.

 That is, a process must acquire the lock before entering a critical section,
and it releases the lock when it exits the critical section. The acquire() function
acquires the lock, and the release() function releases the lock.

 A mutex lock has a Boolean variable available whose value indicates if the lock is
available or not.

 If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable.

 Example ATM Machine


 The main disadvantage of the implementation given here is that it requires
busy waiting.

 While a process is in its critical section, any other process that tries
to enter its critical section must loop continuously in the call to acquire().
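In practice, library mutexes block the caller instead of spinning, which avoids the busy-waiting drawback just described. A minimal sketch using Python's threading.Lock, whose acquire()/release() correspond to the operations above:

```python
import threading

lock = threading.Lock()   # the mutex protecting the critical section
counter = 0
ITERS = 50000

def worker():
    global counter
    for _ in range(ITERS):
        lock.acquire()    # entry section: blocks (no busy wait) if held
        counter += 1      # critical section
        lock.release()    # exit section: lock becomes available again

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start()
t1.join(); t2.join()
```

With the lock in place no update is lost, so counter finishes at 2 * ITERS.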
Semaphores
 In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes
by using the value of a simple integer variable to synchronize the progress of interacting processes.
 Semaphore is nothing but a synchronization tool with the help of which we can ensure that the
critical section can be accessed by the processes in a mutually exclusive manner.

 A semaphore S is an integer variable that, apart from initialization, is accessed only through two
standard atomic operations: wait() and signal().
 The wait() operation was originally termed P (from the Dutch proberen, “to test”); signal() was
originally called V (from verhogen,“to increment”).
 The definition of wait() is as follows:

wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

 The definition of signal() is as follows:

signal(S) {
    S++;
}
 A semaphore is simply a variable. This variable is used to solve the critical-section
problem and to achieve process synchronization in a multiprocessing
environment.

 wait() : called when a process wants to access a resource; when the
semaphore value is not positive, the calling process is blocked.

 signal() : called when a process is done using a resource.


 Some points regarding the P and V operations:
 The P operation is also called the wait, sleep, or down operation, and the V
operation is also called the signal, wake-up, or up operation.

 Both operations are atomic, and for mutual exclusion the semaphore s is
initialized to 1.

 A critical section is surrounded by both operations to implement process
synchronization.

 The critical section of a process P lies between its P and V operations.



 Now, let us see how it implements mutual exclusion.

 Let there be two processes P1 and P2 and a semaphore s is initialized as 1.

 Now if suppose P1 enters in its critical section then the value of semaphore s
becomes 0.

 Now if P2 wants to enter its critical section then it will wait until s > 0, this can
only happen when P1 finishes its critical section and calls V operation on
semaphore s.

 This way mutual exclusion is achieved.
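The P1/P2 scenario above can be sketched with a binary semaphore (Python's threading.Semaphore initialized to 1; acquire() plays the role of wait/P and release() of signal/V):

```python
import threading

s = threading.Semaphore(1)   # semaphore s initialized to 1
counter = 0
ROUNDS = 30000

def process(name):
    global counter
    for _ in range(ROUNDS):
        s.acquire()          # wait(s): s becomes 0; other process blocks
        counter += 1         # critical section
        s.release()          # signal(s): s back to 1, waking a waiter

p1 = threading.Thread(target=process, args=("P1",))
p2 = threading.Thread(target=process, args=("P2",))
p1.start(); p2.start()
p1.join(); p2.join()
```

Mutual exclusion guarantees no lost updates, so counter ends at 2 * ROUNDS.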


Semaphore Implementation
 To overcome the need for busy waiting, we can modify the definition of the
wait() and signal() operations as follows:

 When a process executes the wait() operation and finds that the semaphore
value is not positive, it must wait.

 However, rather than engaging in busy waiting, the process can block itself.

 The block operation places a process into a waiting queue associated with the
semaphore, and the state of the process is switched to the waiting state.

 Then control is transferred to the CPU scheduler, which selects another
process to execute.
 A process that is blocked, waiting on a semaphore S, should be restarted when
some other process executes a signal() operation.

 The process is restarted by a wakeup() operation, which changes the process


from the waiting state to the ready state.

 The process is then placed in the ready queue
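A semaphore that blocks instead of busy-waiting can be sketched as follows. This is a simplification: a condition variable stands in for the kernel's block() and wakeup() operations and its per-semaphore waiting queue.

```python
import threading

class BlockingSemaphore:
    """A sketch of a semaphore without busy waiting: a process that
    must wait is blocked on a queue and restarted by signal()."""

    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()  # stands in for block()/wakeup()

    def wait(self):
        with self._cond:
            while self._value <= 0:
                self._cond.wait()   # block: join the semaphore's waiting queue
            self._value -= 1

    def signal(self):
        with self._cond:
            self._value += 1
            self._cond.notify()     # wakeup: move one waiter to the ready state

s = BlockingSemaphore(1)
s.wait()     # value drops to 0; a second wait() would now block
s.signal()   # value back to 1
```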


Deadlocks and Starvation
 The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event
that can be caused only by one of the waiting processes.
 The event in question is the execution of a signal() operation. When such a
state is reached, these processes are said to be deadlocked.
Deadlocks and Starvation
 Another problem related to deadlocks is indefinite blocking or starvation, a
situation in which processes wait indefinitely within the semaphore.

 Indefinite blocking may occur if we remove processes from the list associated
with a semaphore in LIFO (last-in, first-out) order.
Priority Inversion
 A scheduling challenge arises when a higher-priority process needs to read or
modify kernel data that are currently being accessed by a lower-priority process—
or a chain of lower-priority processes.This problem is called priority inversion
 Priority inversion is the condition where high priority process need to wait for low
priority process to release the resource.

 Since kernel data are typically protected with a lock, the higher-priority process will
have to wait for a lower-priority one to finish with the resource.

 As an example, assume we have three processes—L, M, and H—whose priorities


follow the order L < M < H.

 Assume that process H requires resource R, which is currently being accessed by process
L. Ordinarily, process H would wait for L to finish using resource R.

 However, now suppose that process M becomes runnable, thereby preempting process
L.

 Indirectly, a process with a lower priority—process M—has affected how long process
H must wait for L to relinquish resource R.
 This problem is known as priority inversion.

 Because a high-priority process must wait for a process with a low priority,
this situation is called priority inversion.
 Three scenarios of normal execution:
1.
--L is running but not in its CS.
--If H wishes to execute, it will preempt L and execute.
2.
--L is running in its CS.
--If H wishes to execute, but not in its CS, it will preempt L and execute.
3.
--L is running in its CS.
--If H wishes to execute in its CS, H has to wait for L and then execute.
Example: L, M, H
L is executing in its CS.
H wishes to execute in its CS.
M interrupts L and starts execution…

Sequence of execution: L-M-L-H
Classic Problems of Synchronization
 A solution to a process synchronization problem should meet three important
criteria.

 Correctness: Data access synchronization and control synchronization should be


performed in accordance with synchronization requirements of the problem.

 Maximum concurrency: A process should be able to operate freely except when it


needs to wait for other processes to perform synchronization actions.

 No busy waits: To avoid performance degradation, synchronization should be


performed through blocking rather than through busy waits.

 Critical sections and signaling are the key elements of process synchronization, so
a solution to a process synchronization problem should incorporate a suitable
combination of these elements.
The Bounded-Buffer Problem
 A producers–consumers system with bounded buffers consists of an unspecified
number of producer and consumer processes and a finite pool of buffers.

 Each buffer is capable of holding one item of information—it is said to


become full when a producer writes a new item into it, and become empty
when a consumer copies out an item contained in it;

 A producer process produces one item of information at a time and writes it


into an empty buffer. A consumer process consumes information one item at a
time from a full buffer.
 A producers–consumers system with bounded buffers is a useful abstraction for
many practical synchronization problems. A print service is a good example.

 A fixed-size queue of print requests is the bounded buffer. A process that


adds a print request to the queue is a producer process, and a print daemon is a
consumer process.
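The print-service scenario maps directly onto the classic semaphore solution: empty counts free buffer slots, full counts filled slots, and a mutex protects the buffer itself. A sketch (the 20-item workload is illustrative):

```python
import threading
from collections import deque

N = 5                            # number of buffer slots
buffer = deque()                 # the bounded buffer (holds at most N items)
mutex = threading.Semaphore(1)   # protects the buffer
empty = threading.Semaphore(N)   # counts empty slots, initially N
full = threading.Semaphore(0)    # counts full slots, initially 0

def producer(items):
    for item in items:
        empty.acquire()          # wait for an empty slot
        mutex.acquire()
        buffer.append(item)      # write the item into a buffer
        mutex.release()
        full.release()           # one more full buffer

def consumer(count, out):
    for _ in range(count):
        full.acquire()           # wait for a full buffer
        mutex.acquire()
        out.append(buffer.popleft())   # copy the item out
        mutex.release()
        empty.release()          # one more empty buffer

out = []
p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20, out))
p.start(); c.start()
p.join(); c.join()
```

With a single producer and consumer and a FIFO buffer, the consumer receives all 20 items in order and the buffer never overflows.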
The Readers–Writers Problem
 Suppose that a database is to be shared among several concurrent processes.
Some of these processes may want only to read the database, whereas others
may want to update (that is, to read and write) the database.

 We distinguish between these two types of processes by referring to the former


as readers and to the latter as writers.

 Obviously, if two readers access the shared data simultaneously, no adverse


effects will result.

 However, if a writer and some other process (either a reader or a writer) access
the database simultaneously, disorder may ensue.

 To ensure that these difficulties do not arise, we require that the writers have
exclusive access to the shared database while writing to the database. This
synchronization problem is referred to as the readers–writers problem
The Readers–Writers Problem
 The readers–writers problem has several variations, all involving priorities.
 The simplest one, referred to as the first readers–writers problem, requires that
no reader be kept waiting unless a writer has already obtained permission to use
the shared object.
 In other words, no reader should wait for other readers to finish simply because
a writer is waiting.

 The second readers–writers problem requires that, once a writer is ready, that
writer perform its write as soon as possible.
 In other words, if a writer is waiting to access the object, no new readers may
start reading.
 A solution to either problem may result in starvation.

 In the first case, writers may starve; in the second case, readers may starve.
For this reason, other variants of the problem have been proposed.
The Dining-Philosophers Problem
 Consider five philosophers who spend their lives thinking and eating.

 The philosophers share a circular table surrounded by five chairs, each belonging
to one philosopher.

 In the center of the table is a bowl of rice, and the table is laid with five single
chopsticks

 When a philosopher thinks, she does not interact with her colleagues.

 From time to time, a philosopher gets hungry and tries to pick up the two
chopsticks that are closest to her (the chopsticks that are between her left and
right neighbors).

 A philosopher may pick up only one chopstick at a time.


 Obviously, she cannot pick up a chopstick that is already in the hand of a
neighbor.

 When a hungry philosopher has both her chopsticks at the same time, she eats
without releasing the chopsticks.

 When she is finished eating, she puts down both chopsticks and starts thinking
again.
 One simple solution is to represent each chopstick with a semaphore.
 A philosopher tries to grab a chopstick by executing a wait() operation on that
semaphore.
 She releases her chopsticks by executing the signal() operation on the
appropriate semaphores. Thus, the shared data are
semaphore chopstick[5];
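This simple solution is deadlock-prone if every philosopher grabs her left chopstick first and then waits for the right one. The sketch below instead acquires the two chopsticks in a fixed global order (the resource-ordering idea that reappears under deadlock prevention), which keeps the run deadlock-free:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one per chopstick
meals = [0] * N   # how many times each philosopher has eaten

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Always pick up the lower-numbered chopstick first: a total order
    # on resources, so no circular wait can form.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All five philosophers finish their 50 meals; with the naive left-then-right order, the same run could hang forever.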
Deadlocks
 In a multiprogramming environment, several processes may compete for a finite
number of resources.

 A process requests resources; if the resources are not available at that time, the
process enters a waiting state.

 Sometimes, a waiting process is never again able to change state, because the
resources it has requested are held by other waiting processes.

 This situation is called a deadlock.

 A deadlock is a situation concerning a set of processes in which each process in the


set waits for an event that must be caused by another process in the set. Each
process is then waiting for an event that cannot occur
System Model
 A system consists of a finite number of resources to be
distributed among a number of competing processes.

 The resources may be partitioned into several types (or classes),


each consisting of some number of identical instances.

 CPU cycles, files, and I/O devices (such as printers and DVD
drives) are examples of resource types.
 A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.

 Example of a deadlock problem:


– System has 2 tape drives.
– Process P1 and process P2 each hold one tape drive and each needs another one.

• A system consists of a finite number of resources to be distributed among a


number of competing processes.

• Resource types R1, R2, . . ., Rm

• The resources may be either physical (I/O devices, tape drives, memory space,
and CPU cycles) or logical (files and semaphores).

• A process must request a resource before using it, and must release the
resource after using it
 Under the normal mode of operation, a process may utilize a resource in only
the following sequence:

1. Request. The process requests the resource from the OS.


 If the request cannot be granted immediately (for example, if the resource is
being used by another process), then the requesting process must wait until it
can acquire the resource.

2. Use.
 The process can operate on the resource (for example, if the resource is a
printer, the process can print on the printer).

3. Release.
 The process releases the resource.
Deadlock Characterization
 In a deadlock, processes never finish executing, and system resources
are tied up, preventing other jobs from starting.

 A deadlock can arise only if the following four conditions hold simultaneously:

1. Mutual exclusion. At least one resource must be held in a non-sharable
mode; that is, only one process at a time can use the resource. If another
process requests that resource, the requesting process must be delayed until the
resource has been released.

2. Hold and wait. A process must be holding at least one resource and waiting
to acquire additional resources that are currently being held by other processes.

3. No preemption. Resources cannot be preempted; that is, a resource can be


released only voluntarily by the process holding it, after that process has
completed its task.

4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0
is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
 Deadlocks can be described more precisely in terms of a directed graph called
a system resource-allocation graph.

 This graph consists of a set of vertices V and a set of edges E.

 The set of vertices V is partitioned into two different types of nodes:

 P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and
 R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

 A directed edge from process Pi to resource type Rj is denoted by Pi → Rj ;

 It signifies that process Pi has requested an instance of resource type Rj and is


currently waiting for that resource
 A directed edge from resource type Rj to process Pi is denoted by Rj → Pi;

 It signifies that an instance of resource type Rj has been allocated to process Pi.

 A directed edge Pi → Rj is called a request edge;

 A directed edge Rj → Pi is called an assignment edge.

 Pictorially, each process Pi is represented as a circle and each resource type Rj is


represented as a rectangle.

 Since resource type Rj may have more than one instance, each such instance is
represented as a dot within the rectangle.

 Note that a request edge points to only the rectangle Rj , whereas an assignment
edge must also designate one of the dots in the rectangle.
 When process Pi requests an instance of resource type Rj, a request edge is inserted
in the resource-allocation graph.

 When this request can be fulfilled, the request edge is instantaneously


transformed to an assignment edge.

 When the process no longer needs access to the resource, it releases the
resource.
Fig. Resource-allocation graph.
 Given the definition of a resource-allocation graph, it can be shown that, if the
graph contains no cycles, then no process in the system is deadlocked.
 If the graph does contain a cycle, then a deadlock may exist.
 If each resource type has exactly one instance, then a cycle implies that a
deadlock has occurred.
 If the cycle involves only a set of resource types, each of which has only a
single instance, then a deadlock has occurred.
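For single-instance resource types, deadlock detection therefore reduces to cycle detection in the graph, which is a plain depth-first search. A sketch (the graph is encoded as adjacency lists keyed by node name; process and resource names are illustrative):

```python
def has_cycle(graph):
    """Detect a cycle in a directed resource-allocation graph.
    GRAY marks nodes on the current DFS path; hitting one again
    means a back edge, i.e. a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GRAY:    # back edge: cycle found
                return True
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# P1 -> R1 -> P2 -> R2 -> P1: a request/assignment cycle, hence deadlock
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
# No cycle: P2 holds nothing that P1's chain waits on
no_deadlock = {"P1": ["R1"], "R1": ["P2"], "P2": []}
```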
Methods for Handling Deadlocks
 We can deal with the deadlock problem in one of three ways:
 •We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlocked state.

 • We can allow the system to enter a deadlocked state, detect it, and recover.

 • We can ignore the problem altogether and pretend that deadlocks never occur in
the system.
 To ensure that deadlocks never occur, the system can use either a deadlock
prevention or a deadlock-avoidance scheme.

 Deadlock prevention algorithms ensure that at least one of the necessary


conditions (Mutual exclusion, hold and wait, no preemption and circular wait)
does not hold true.
 Deadlock avoidance requires that the operating system be given
additional information in advance concerning which resources a process will
request and use during its lifetime
Deadlock Prevention
 Deadlocks can be prevented by preventing at least one of the four required
conditions:

 Mutual Exclusion
 We need to categorize all resources as sharable and non-sharable.
 The mutual-exclusion condition must hold for non-sharable resources.
 Sharable resources, such as read-only files, do not lead to deadlocks.
 Unfortunately, some resources, such as printers and tape drives, require exclusive
access by a single process.

 Hold and Wait


 To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that, whenever a process requests a resource, it does not hold any other
resources.
 Requires process to request and be allocated all its resources before it begins
execution
 Example :- a process copying data from a tape drive to a disk file, sorting the
file, and printing the results to a printer.
 No Preemption
 Preemption of process resource allocations can prevent this condition of
deadlocks, when it is possible.

 The third necessary condition for deadlocks is that there be no preemption of


resources that have already been allocated.

 To ensure that this condition does not hold, we can use the following protocol.

 If a process is holding some resources and requests another resource that


cannot be immediately allocated to it (that is, the process must wait), then all
resources the process is currently holding are preempted.

 In other words, these resources are implicitly released. The preempted


resources are added to the list of resources for which the process is waiting.
 if a process requests some resources, we first check whether they are available.

 If they are, we allocate them. If they are not, we check whether they are
allocated to some other process that is waiting for additional resources.

 If so, we preempt the desired resources from the waiting process and allocate
them to the requesting process.

 If the resources are neither available nor held by a waiting process, the
requesting process must wait.
 Circular Wait
 To ensure that the circular-wait condition never holds is to determine a total
ordering of all resource types, and to require that each process requests
resources in an increasing order of enumeration.
 – Example: Let R={R1, R2, …, Rm} be the set of resource types.

 Assign to each resource type a unique integer number to compare two


resources and to determine whether one proceeds another in ordering.

 Define a one-to-one function F: R→N, where N is the set of natural numbers.


For example: If the set of resource types R includes tape drives, disk drives, and
printers, then a function F might be defined as follows:

 F(tape drive)= 1;
 F(disk drive)= 5;
 F(Printer)= 12.
 A protocol to prevent deadlocks: Each process can request
resources only in an increasing order of enumeration.

 A process can initially request any number of instances of a


resource type Ri.
 1. That process can request instances of resource type Rj if and
only if F(Rj)>F(Ri).
 From the previous example; a process wants to use the tape drive and
printer at the same time, must first request the tape drive and then the
printer.

 2. When a process requests an instance of resource type Rj, it has


released any resources Ri such that F(Ri)>=F(Rj).

 By applying 1 and 2 then the circular-wait condition cannot hold.
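Rules 1 and 2 can be checked mechanically. A sketch using the F values from the example (the helper name is illustrative):

```python
# F assigns each resource type a unique number; a process may request
# Rj only if F(Rj) exceeds F(Ri) for every resource Ri it still holds.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}

def request_allowed(held, new):
    """True iff requesting `new` respects the increasing-order protocol
    given the resources currently `held`."""
    return all(F[r] < F[new] for r in held)

# Holding the tape drive, requesting the printer is legal (1 < 12);
# the reverse order is rejected, which breaks any circular wait.
```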


Deadlock Avoidance
 Deadlock-prevention algorithms prevent deadlocks by limiting how requests
can be made.

 The limits ensure that at least one of the necessary conditions for deadlock
cannot occur. Possible side effects of preventing deadlocks by this method,
however, are low device utilization and reduced system throughput.

 An alternative method for avoiding deadlocks is to require additional


information about how resources are to be requested.

 For example, in a system with one tape drive and one printer, the system might
need to know that process P will request first the tape drive and then the printer
before releasing both resources, whereas process Q will request first the printer
and then the tape drive.

 For this, the operating system must have advance information about the
availability of resources, the current occupancy of resources, and the needs of the processes.
Safe State
 A state is safe if the system can allocate resources to each process in some order
and still avoid a deadlock.

 More formally, a system is in a safe state only if there exists a safe sequence:
a sequence of processes <P1, P2, ..., Pn> such that, for each Pi, the resources
that Pi can still request can be satisfied by the currently available resources
plus the resources held by all Pj, with j < i.

 If Pi's resource needs are not immediately available, then Pi can wait until all Pj
have finished.
 When Pj is finished, Pi can obtain the needed resources, execute, return the
allocated resources, and terminate.
 When Pi terminates, Pi+1 can obtain its needed resources, and so on.

 A safe state is not a deadlocked state.

 Conversely, a deadlocked state is an unsafe state.

 Not all unsafe states are deadlocks, however; an unsafe state may lead
to a deadlock.
 As long as the state is safe, the operating system can avoid unsafe (and
deadlocked) states.

 In an unsafe state, the operating system cannot prevent processes from


requesting resources in such a way that a deadlock occurs.

 Basic facts:
 If a system is in a safe state ⇒ no deadlocks.

 If a system is in an unsafe state ⇒ possibility of deadlock.

 Avoidance ⇒ ensure that the system will never enter an unsafe state.


Safe, Unsafe, Deadlock State
Example
 we consider a system with twelve magnetic tape drives and three processes:
 P0, P1, and P2.
 Process P0 requires ten tape drives
 process P1 may need as many as four tape drives
 process P2 may need up to nine tape drives.

 Suppose that, at time t0, process P0 is holding five tape drives


 Process P1 is holding two tape drives
 process P2 is holding two tape drives
 At time t0, the system is in a safe state.
 The sequence <P1, P0, P2> satisfies the safety condition.
 Process P1 can immediately be allocated all its tape drives and then return them.
(the system will then have five available tape drives);
 Then process P0 can get all its tape drives and return them (the system will
then have ten available tape drives);
 Finally process P2 can get all its tape drives and return them (the system will then
have all twelve tape drives available).
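The arithmetic of this safe sequence can be checked with a short sketch (Python is used here purely for illustration; the numbers are those of the example above):

```python
# A minimal sketch of walking the safe sequence <P1, P0, P2> from the text:
# 12 drives total, with the maximum needs and time-t0 allocations given above.
total = 12
max_need = {"P0": 10, "P1": 4, "P2": 9}
alloc = {"P0": 5, "P1": 2, "P2": 2}

available = total - sum(alloc.values())   # 12 - 9 = 3 free drives at t0

for p in ["P1", "P0", "P2"]:              # the safe sequence <P1, P0, P2>
    need = max_need[p] - alloc[p]         # drives p may still request
    assert need <= available, f"{p} cannot finish - state is unsafe"
    available += alloc[p]                 # p finishes and returns its drives
    print(f"{p} finishes; {available} drives now available")
```

Running the same loop with P2 holding three drives (the time-t1 state) makes the assertion fail, which is exactly the unsafe state described next.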
 A system can go from a safe state to an unsafe state:

 Suppose that, at time t1, process P2 requests and is allocated one more tape drive.

 The system is no longer in a safe state. At this point, only process P1 can be
allocated all its tape drives.

 When it returns them, the system will have only four available tape drives.
 Since process P0 is allocated five tape drives but has a maximum of ten, it may
request five more tape drives.

 If it does so, it will have to wait, because they are unavailable.

 Similarly, process P2 may request six additional tape drives and have to wait,
resulting in a deadlock.
 Bank transaction example
 Bank initial account balance: 1,00,000
 Person A withdraws 50,000
 Person B withdraws 40,000
 Person C wants to withdraw 50,000 ............. deadlock (unsafe mode)

 Bank initial account balance: 1,00,000
 Person A withdraws 50,000
 Person B withdraws 40,000
 Person D deposits 70,000
 Person C wants to withdraw 50,000 ............. no deadlock (safe mode)
Resource-Allocation-Graph Algorithm
 If we have a resource-allocation system with only one instance of each resource type,
we can use a variant of the resource-allocation graph for deadlock avoidance.

 In addition to the request and assignment edges, we introduce a new type of edge,
called a claim edge.
 A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the
future.

 This edge resembles a request edge in direction but is represented in the graph by a
dashed line.

 When process Pi actually requests resource Rj, the claim edge Pi → Rj is converted to a
request edge.

 Similarly, when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to
a claim edge Pi → Rj.

 Note that the resources must be claimed a priori in the system. That is, before process
Pi starts executing, all its claim edges must already appear in the resource-allocation graph.
Resource-Allocation-Graph Algorithm
 Now suppose that process Pi requests resource Rj.
 The request can be granted only if converting the request edge
Pi → Rj to an assignment edge Rj → Pi does not result in the
formation of a cycle in the resource-allocation graph.
 We check for safety by using a cycle-detection algorithm.
 An algorithm for detecting a cycle in this graph requires an
order of n² operations, where n is the number of processes in the
system.
 If no cycle exists, then the allocation of the resource will leave
the system in a safe state.
 If a cycle is found, then the allocation will put the system in an
unsafe state. In that case, process Pi will have to wait for its
requests to be satisfied.
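The cycle check itself can be sketched as a depth-first search. The graph below is hypothetical, mixing request edges (Pi → Rj) and assignment edges (Rj → Pi); it is not an example from the text:

```python
# A hedged sketch of cycle detection for the resource-allocation-graph
# algorithm, using three-color depth-first search.
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY                          # n is on the current DFS path
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:      # back edge: a cycle exists
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK                         # n fully explored, no cycle via n
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Hypothetical state: granting R2 to P1 would add the edge R2 -> P1
# and close the cycle P1 -> R2 -> P2 -> R1 -> P1, so the request must wait.
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))   # True
```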
Banker’s Algorithm
 The resource-allocation-graph algorithm is not applicable to a resource allocation
system with multiple instances of each resource type.

 The banker’s algorithm is applicable to a resource allocation system with
multiple instances of each resource type.

 The name was chosen because the algorithm could be used in a banking system
to ensure that the bank never allocated its available cash in such a way that it
could no longer satisfy the needs of all its customers.

 When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need.
 This number may not exceed the total number of resources in the system

 When a user requests a set of resources, the system must determine whether
the allocation of these resources will leave the system in a safe state.

 If it will, the resources are allocated; otherwise, the process must wait until
some other process releases enough resources.
 The banker’s algorithm is a resource-allocation and deadlock-avoidance
algorithm that tests every resource request made by a process: it checks
whether granting the request leaves the system in a safe state. If it does,
the request is granted; if no safe state would result, the request is denied.

 Inputs to the banker’s algorithm:

1. Maximum need of resources by each process.
2. Resources currently allocated to each process.
3. Free resources currently available in the system.

 A request will be granted only under the following conditions:

1. The request made by the process is less than or equal to the maximum need
of that process.
2. The request made by the process is less than or equal to the resources
freely available in the system.
 Several data structures must be maintained to implement the banker’s
algorithm.

 These data structures encode the state of the resource-allocation system. We
need the following data structures, where n is the number of processes in the
system and m is the number of resource types:
 • Available. A vector of length m indicating the number of available
resources of each type.
 • Max. An n × m matrix defining the maximum demand of each process.
 • Allocation. An n × m matrix defining the number of resources of each type
currently allocated to each process.
 • Need. An n × m matrix indicating the remaining resource need of each
process: Need[i][j] = Max[i][j] - Allocation[i][j].
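As one illustration, the safety test these structures support might be sketched as follows. The sample matrices are hypothetical, not taken from the text:

```python
# A sketch of the banker's safety algorithm for n processes and m resource
# types. Available is a length-m vector; max_need and allocation are n x m
# matrices; Need[i][j] = Max[i][j] - Allocation[i][j] is derived here.
def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)        # Work starts as the available vector
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            # Pick any unfinished Pi whose remaining need fits in Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes, returns all
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None           # no process could proceed: state is unsafe
    return sequence               # a safe sequence of process indices

# Hypothetical 5-process, 3-resource state:
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))   # a safe sequence, e.g. [1, 3, 0, 2, 4]
```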
Deadlock Detection
 If a system does not employ either a deadlock-prevention or a deadlock
avoidance algorithm, then a deadlock situation may occur.

 In this environment, the system may provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock
Single Instance of Each Resource Type
 If all resources have only a single instance, then we can define a deadlock
detection algorithm that uses a variant of the resource-allocation graph, called
a wait-for graph.

 We obtain this graph from the resource-allocation graph by removing the


resource nodes and collapsing the appropriate edges.

 A deadlock exists in the system if and only if the wait-for graph contains a cycle.

 To detect deadlocks, the system needs to maintain the wait for graph and
periodically invoke an algorithm that searches for a cycle in the graph.
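Collapsing the resource-allocation graph into a wait-for graph can be sketched as follows; the request and assignment edges below are made up for illustration:

```python
# A sketch of building a wait-for graph: the edge Pi -> Pj exists whenever
# Pi is requesting a resource that is currently assigned to Pj.
def wait_for_graph(requests, assignments):
    """requests: {process: [resources it is waiting for]}
       assignments: {resource: process currently holding it}"""
    wfg = {p: [] for p in requests}
    for p, wanted in requests.items():
        for r in wanted:
            holder = assignments.get(r)
            if holder is not None and holder != p:
                wfg[p].append(holder)      # collapse P -> R -> holder into P -> holder
    return wfg

# Hypothetical state: P1 waits for R1 (held by P2), P2 waits for R2 (held by P1).
requests    = {"P1": ["R1"], "P2": ["R2"], "P3": []}
assignments = {"R1": "P2", "R2": "P1"}
print(wait_for_graph(requests, assignments))
# P1 waits for P2 and P2 waits for P1: a cycle, hence a deadlock.
```

A standard cycle-detection pass over the resulting graph then answers the deadlock question.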
Several Instances of a Resource Type
 The wait-for graph scheme is not applicable to a resource-allocation system with
multiple instances of each resource type.

 We turn now to a deadlock-detection algorithm that is applicable to such a
system.

 The algorithm employs several time-varying data structures, where n is the
number of processes in the system and m is the number of resource types:

 • Available. A vector of length m indicates the number of available resources
of each type.
 • Allocation. An n × m matrix defines the number of resources of each type
currently allocated to each process.
 • Request. An n × m matrix indicates the current request of each process.
If Request[i][j] equals k, then process Pi is requesting k more instances of
resource type Rj.
Algorithm
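A sketch of this detection algorithm follows, with hypothetical sample matrices. It differs from the banker's safety test in comparing the Request matrix (what each process asks for now), rather than a Need matrix, against Work, optimistically assuming that any satisfiable process will finish and release everything it holds:

```python
# A sketch of the multi-instance deadlock-detection algorithm.
def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock cycle.
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            # Optimistically assume Pi finishes if its current request fits.
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]   # the deadlocked processes

# Hypothetical 5-process, 3-resource state:
available  = [0, 0, 0]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock(available, allocation, request))   # [] means no deadlock
```

Raising P2's request to, say, [0, 0, 1] in this state would leave P1 through P4 unable to finish, and the function would report them as deadlocked.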
Detection-Algorithm Usage

 When should we invoke the detection algorithm?
 The answer depends on two factors:
 1. How often is a deadlock likely to occur?
 2. How many processes will be affected by deadlock when it
happens?
 If deadlocks occur frequently, then the detection algorithm should
be invoked frequently.
 Resources allocated to deadlocked processes will be idle until the
deadlock can be broken
 In the extreme, then, we can invoke the deadlock detection
algorithm every time a request for allocation cannot be granted
immediately.
 In this case, we can identify not only the deadlocked set of
processes but also the specific process that “caused” the deadlock.
Detection-Algorithm Usage

 Of course, invoking the deadlock-detection algorithm for
every resource request will incur considerable overhead in
computation time.
 A less expensive alternative is simply to invoke the algorithm
at defined intervals, for example once per hour, or whenever CPU
utilization drops below 40 percent.
 If the detection algorithm is invoked at arbitrary points in
time, the resource graph may contain many cycles.
 In this case, we generally cannot tell which of the many
deadlocked processes “caused” the deadlock.
Recovery from Deadlock
 When a detection algorithm determines that a deadlock exists,
several alternatives are available.
 One possibility is to inform the operator that a deadlock has
occurred and to let the operator deal with the deadlock manually.
 Another possibility is to let the system recover from the deadlock
automatically.

 There are two options for breaking a deadlock:
 One is simply to abort one or more processes to break the circular wait.
 The other is to preempt some resources from one or more of the
deadlocked processes.
Process Termination
 To eliminate deadlocks by aborting a process, we use one of two
methods:

 Abort all deadlocked processes. This method clearly
will break the deadlock cycle, but at great expense:
the deadlocked processes may have computed for a long time, and the
results of these partial computations are discarded.

 Abort one process at a time until the deadlock
cycle is eliminated.
 This method incurs considerable overhead, since after each
process is aborted, a deadlock-detection algorithm must be
invoked to determine whether any processes are still deadlocked.
Resource Pre-emption
 To eliminate deadlocks using resource pre-emption, we successively
preempt some resources from processes and give these resources to
other processes until the deadlock cycle is broken.
 If pre-emption is required to deal with deadlocks, then three issues
need to be addressed:

 1. Selecting a victim. Which resources and which processes are to be
pre-empted?
 As in process termination, we must determine the order of
pre-emption to minimize cost.
 Cost factors may include such parameters as the number of
resources a deadlocked process is holding and the amount of
time the process has thus far consumed.
 2. Rollback. If we preempt a resource from a process, what should be
done with that process?
 We must roll back the process to some safe state and restart it from
that state.
 3. Starvation. How do we ensure that starvation will not occur? That is,
how can we guarantee that resources will not always be pre-empted from
the same process?
 We must ensure that a process can be picked as a victim only a
(small) finite number of times.
HOMEWORK
