
UNIT-2

CPU SCHEDULING
Basic Concepts

The objective of multiprogramming is to have some process running at all times, in order to
maximize CPU utilization. The idea of multiprogramming is relatively simple. Several processes
are kept in memory at one time. When one process has to wait, the operating system takes the
CPU away from that process and gives the CPU to another process. This pattern continues.

Scheduling is a fundamental operating-system function. All computer resources are scheduled
before use.

CPU-I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait, and the durations of CPU
bursts follow a measurable CPU burst distribution.

[Figure: alternating sequence of CPU and I/O bursts]

CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates the CPU to
one of them

CPU scheduling decisions may take place when a process:

1) Switches from running to waiting state

2) Switches from running to ready state

3) Switches from waiting to ready

4) Terminates

Scheduling under 1 and 4 is non-preemptive. All other scheduling is preemptive.

Dispatcher

Dispatcher module gives control of the CPU to the process selected by the short-term scheduler;
this involves: switching context, switching to user mode, jumping to the proper location in the
user program to restart that program.

Dispatch latency – time it takes for the dispatcher to stop one process and start another running

Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible

 Throughput – # of processes that complete their execution per time unit

 Turnaround time – amount of time to execute a particular process

 Waiting time – amount of time a process has been waiting in the ready queue

 Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)

Optimization Criteria

 Max CPU utilization

 Max throughput

 Min turnaround time

 Min waiting time and Min response time.

Scheduling Algorithms

 First-come, first served scheduling


 Shortest-job-first scheduling

 Priority scheduling

 Round-robin Scheduling

 Multilevel queue scheduling

 Multilevel feedback queue scheduling

First-Come, First-Served (FCFS) Scheduling

 Simplest scheduling algorithm

 Easily managed by a FIFO queue

 Average waiting time under FCFS policy is generally not minimal and may vary
substantially if the process CPU burst times vary greatly.

 Non-preemptive scheduling

 May allow one process to keep the CPU for an extended period.

Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order: P1 , P2 , P3

The Gantt chart for the schedule is:

P1 P2 P3

0 24 27 30

Waiting time for P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17

Suppose that the processes arrive in the order P2 , P3 , P1 .


The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

Waiting time for P1 = 6; P2 = 0; P3 = 3

Average waiting time: (6 + 0 + 3)/3 = 3

Much better than previous case.

Convoy effect – short processes stuck waiting behind a long process.

Algorithm:

1. Get the number of jobs to be scheduled

2. Get the arrival time and service time of each job

3. Sort the jobs according to their arrival time

4. Initialize the waiting time of each job to zero

5. Compute the waiting time of each job and find the average waiting time by dividing the
total waiting time by the number of jobs (see the sketch below).
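To make step 5 concrete, here is a minimal C sketch of the FCFS waiting-time computation,
assuming all jobs arrive at time 0 (the burst values reproduce the P1, P2, P3 example above;
the array and variable names are illustrative, not part of the original notes):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* CPU bursts of P1, P2, P3 */
    int n = sizeof burst / sizeof *burst;
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                  /* later jobs also wait for this burst */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Run on the example above, this prints waiting times 0, 24, and 27 and an average of 17,
matching the hand computation.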

SHORTEST JOB SCHEDULING

Associate with each process the length of its next CPU burst. Use these lengths to schedule the
process with the shortest time. Two schemes:

1) Non-Preemptive – once CPU given to the process it cannot be preempted until completes its
CPU burst.

2) Preemptive – if a new process arrives with CPU burst length less than the remaining time of
the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First
(SRTF).

SJF is optimal – gives minimum average waiting time for a given set of processes.

Example of Non-Preemptive SJF

Process Arrival Time Burst Time

P1 0.0 7
P2 2.0 4

P3 4.0 1

P4 5.0 4

P1 P3 P2 P4

0 7 8 12 16

Average waiting time = (0 + 6 + 3 + 7)/4 = 16/4 = 4

Example of Preemptive SJF

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

Average waiting time = (9 + 1 + 0 + 2)/4 = 12/4 = 3

Algorithm:

1) Get the number of jobs to be scheduled

2) Get the arrival time and service time of each job

3) Sort the jobs according to their service time

4) Initialize the waiting time of each job to zero

5) Compute the waiting time of each job and find the average waiting time by dividing the
total waiting time by the number of jobs (see the sketch below).
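As an illustration of step 3, here is a minimal non-preemptive SJF sketch in C, assuming all
jobs arrive at time 0 (so the numbers differ from the arrival-time example above; names are
illustrative). Sorting by burst length reduces SJF to FCFS on the sorted order:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;   /* ascending burst time */
}

int main(void) {
    int burst[] = {7, 4, 1, 4};
    int n = sizeof burst / sizeof *burst;
    qsort(burst, n, sizeof *burst, cmp);        /* shortest job first */

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;
        wait += burst[i];
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}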

Priority Scheduling
A priority number (integer) is associated with each process.

The CPU is allocated to the process with the highest priority (smallest integer ≡ highest
priority).

– Preemptive

– nonpreemptive

SJF is a priority scheduling where priority is the predicted next CPU burst time.

Problem – Starvation: low-priority processes may never execute.

Solution – Aging: as time progresses, increase the priority of the process.

Algorithm:

1) Get the number of jobs to be scheduled

2) Get the arrival time, service time and priority of every job

3) Initialize the waiting time of each job to zero

4) While processing the jobs in priority order, add the start time and service time of the
currently processed job to the waiting time of each job still waiting

5) Compute the total waiting time by adding the individual waiting times of each job, and find
the average waiting time.

ROUND ROBIN SCHEDULING

• Designed especially for time sharing systems.

• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the OS preempts the process and moves it to the end of the
ready queue; the timer is set to interrupt after one quantum and a new process is dispatched.
• If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at a time. No process waits
more than (n-1)q time units.

• Performance

– If q is large ⇒ same as FCFS (FIFO)

– If q is small ⇒ called processor sharing, i.e., it appears to users as if each of the n
processes has its own processor running at 1/n the speed of the real processor.

RR with Time Quantum = 20

Process Burst Time

P1 53

P2 17

P3 68

P4 24

The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

Typically, higher average turnaround than SJF, but better response.

Algorithm

1) Get the number of jobs to be scheduled

2) Get the arrival time, service time and time slice of each job.

3) Sort the jobs according to their arrival time

4) Service each job for at most one time slice, moving unfinished jobs to the back of the
queue; repeat until all the jobs are completely serviced (see the sketch below).
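A minimal round-robin sketch in C, assuming all processes arrive at time 0 and using the
quantum-20 example above; with simultaneous arrivals, cycling over the array approximates the
FIFO ready queue (names are illustrative):

#include <stdio.h>

int main(void) {
    int burst[] = {53, 17, 68, 24};            /* remaining bursts of P1..P4 */
    int n = sizeof burst / sizeof *burst, q = 20;
    int t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;           /* already finished */
            int run = burst[i] < q ? burst[i] : q; /* run at most one quantum */
            printf("t=%3d: P%d runs %d\n", t, i + 1, run);
            t += run;
            burst[i] -= run;
            if (burst[i] == 0) done++;
        }
    }
    printf("All processes finish at t=%d\n", t);
    return 0;
}

This reproduces the Gantt chart boundaries above, ending at t = 162.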

Multilevel Queue

• Ready queue is partitioned into separate queues: foreground (interactive) and background
(batch).
• Each has different response-time requirements.

• Each queue has its own scheduling algorithm, foreground – RR and background – FCFS.

• Scheduling must be done between the queues.

– Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.

– Time slice – each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e., 80% to foreground in RR and 20% to background in
FCFS.

Example: multilevel queue scheduling with five queues, with scheduling within each queue and
scheduling between the queues.

Multilevel Feedback Queue

• The main idea is to separate processes with different CPU-burst characteristics.

• Allows a process to move between the various queues;

– aging can be implemented this way – if a process has been waiting too long it can
be moved to a higher priority queue.

– If a process uses too much CPU time it can be moved into a lower priority queue.

• Multilevel-feedback-queue scheduler is defined by the following parameters:

– number of queues
– scheduling algorithms for each queue

– method used to determine when to upgrade a process

– method used to determine when to demote a process

– method used to determine which queue a process will enter when that process

needs service.

Example of Multilevel Feedback Queue

 Three queues:

o Q0 – time quantum 8 milliseconds

o Q1 – time quantum 16 milliseconds

o Q2 – FCFS

 Scheduling

o A new job enters queue Q0 which is served FCFS. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to
queue Q1.

o At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still


does not complete, it is preempted and moved to queue Q2.

Multiple-Processor Scheduling

 CPU scheduling more complex when multiple CPUs are available


 Homogeneous processors within a multiprocessor

 Load sharing

 Asymmetric multiprocessing – only one processor accesses the system data structures,
alleviating the need for data sharing.

Real-Time Scheduling
Hard real-time systems – required to complete a critical task within a guaranteed amount of
time.
Soft real-time computing – requires that critical processes receive priority over less fortunate
ones.

Algorithm Evaluation

To select an algorithm, we must first define the criteria to be used, for example:

 Maximize CPU utilization under the constraint that the maximum response time is 1
second.

 Maximize throughput such that turnaround time is on average linearly proportional to


total execution time.

Critical Section Problem:


The critical section is a code segment where the shared variables can be accessed. An atomic
action is required in a critical section i.e. only one process can execute in its critical section at a
time.

Critical Section is the part of a program which tries to access shared resources

All the other processes have to wait to execute in their critical sections.

The critical section cannot be executed by more than one process at the same time; the
operating system faces difficulty in allowing and disallowing processes from entering the
critical section.

The general structure of a process around its critical section is: entry section, critical
section, exit section, remainder section.


Solution to the Critical Section Problem
The critical section problem needs a solution to synchronize the different processes. The solution
to the critical section problem must satisfy the following conditions −

 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any
time. If any other processes require the critical section, they must wait until it is free.
 Progress
Progress means that if a process is not using the critical section, then it should not stop
any other process from accessing it. In other words, any process can enter a critical
section if it is free.

 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not
wait endlessly to access the critical section.

Peterson’s Solution:


Peterson’s solution provides a good algorithmic description of solving the critical-section
problem and illustrates some of the complexities involved in designing software that addresses
the requirements of mutual exclusion, progress, and bounded waiting.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   /* busy wait */

    /* critical section */

    flag[i] = false;

    /* remainder section */
} while (true);


The structure of process Pi in Peterson’s solution. This solution is restricted to two processes
that alternate execution between their critical sections and remainder sections. The processes
are numbered P0 and P1. For convenience, when presenting Pi we use Pj to denote the other
process; that is, j equals 1 − i. Peterson’s solution requires the two processes to share two
data items −

int turn;
boolean flag[2];

The algorithm uses two variables, flag and turn . A flag[n] value of true indicates that the
process n wants to enter the critical section. Entrance to the critical section is granted for process
P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by
setting turn to 0

The algorithm satisfies the three essential criteria to solve the critical section problem, provided
that changes to the variables turn , flag[0] , and flag[1] propagate immediately and atomically.
The while condition works even with preemption. The three criteria are mutual exclusion,
progress, and bounded waiting.
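For illustration, here is a runnable C11 version of Peterson's algorithm using sequentially
consistent atomics, which provide the immediate-and-atomic propagation the algorithm assumes
on modern hardware (the shared counter and thread setup are illustrative additions):

#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

atomic_bool flag[2];
atomic_int turn;
int counter = 0;                      /* shared data protected by the algorithm */

void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true); /* I want to enter */
        atomic_store(&turn, j);       /* but yield if the other also wants to */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                         /* busy wait */
        counter++;                    /* critical section */
        atomic_store(&flag[i], false);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}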
Synchronization hardware:
In Synchronization hardware, we explore several more solutions to the critical-section problem
using techniques ranging from hardware to software based APIs available to application
programmers. These solutions are based on the premise of locking; however, the design of such
locks can be quite sophisticated.
There are three algorithms in the hardware approach of solving Process Synchronization
problem:

1. Test and Set


2. Swap
3. Unlock and Lock
Hardware instructions available on many machines help in effectively solving the critical
section problem.

1. Test and Set:


Here, the shared variable is lock, initialized to false. The TestAndSet(lock)
instruction works in this way – it returns the current value of lock and atomically sets
lock to true. The first process enters the critical section at once, since
TestAndSet(lock) returns false and it breaks out of the while loop. The other
processes cannot enter, as lock is now true and so their while condition remains true.
Test and Set Pseudocode –

//Shared variable lock initialized to false

boolean lock;

boolean TestAndSet (boolean &target){
    boolean rv = target;
    target = true;
    return rv;
}

while(1){
    while (TestAndSet(lock))
        ;   // busy wait
    // critical section
    lock = false;
    // remainder section
}
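In real C code the same pattern is available through C11's atomic_flag, whose test-and-set
the hardware guarantees to be atomic (a minimal sketch; atomic_flag_test_and_set and
atomic_flag_clear are standard C11 library calls):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void enter_critical(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* spin until we observe "was clear" */
}

void leave_critical(void) {
    atomic_flag_clear(&lock);          /* unlock */
}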

2. Swap:
The Swap algorithm is much like the TestAndSet algorithm. Instead of directly setting lock to
true in the swap function, key is set to true and then swapped with lock. So, again, when a
process is in the critical section, no other process can enter, as the value of lock is true.
Swap Pseudocode –
// Shared variable lock initialized to false
// and individual key initialized to false

boolean lock;
boolean key;

void swap(boolean &a, boolean &b){
    boolean temp = a;
    a = b;
    b = temp;
}

while (1){
    key = true;
    while(key)
        swap(lock, key);
    // critical section
    lock = false;
    // remainder section
}
3. Unlock and Lock :
Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it adds another
value, waiting[i], for each process which checks whether or not a process has been waiting. A
ready queue is maintained with respect to the process in the critical section. All the processes
coming in next are added to the ready queue with respect to their process number, not
necessarily sequentially.
Unlock and Lock Pseudocode –
// Shared variable lock initialized to false

boolean lock;
boolean key;
boolean waiting[n];

while(1){
    waiting[i] = true;
    key = true;
    while(waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;
    // critical section
    j = (i+1) % n;
    while(j != i && !waiting[j])
        j = (j+1) % n;
    if(j == i)
        lock = false;
    else
        waiting[j] = false;
    // remainder section
}
Semaphores:
Semaphores are integer variables that are used to solve the critical section problem by using two
atomic operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows −
 Wait
The wait operation loops as long as the value of its argument S is not positive; once S
becomes positive, it decrements S and proceeds.

wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

 Signal
The signal operation increments the value of its argument S.

signal(S) {
    S++;
}

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −
 Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to
coordinate resource access, where the semaphore count is the number of available resources.
If resources are added, the semaphore count is incremented; if resources are removed, the
count is decremented.

 Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The
wait operation proceeds only when the semaphore is 1, and the signal operation raises a
semaphore that is 0 back to 1. It is sometimes easier to implement binary semaphores than
counting semaphores.
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is
not wasted unnecessarily to check if a condition is fulfilled to allow a process to access
the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So
they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
 Semaphores are complicated so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout
for the system.
 Semaphores may lead to a priority inversion where low priority processes may access the
critical section first and high priority processes later.

Properties of Semaphores

1. Simple, and always has a non-negative integer value.


2. Works with many processes.
3. Can have many different critical sections with different semaphores.
4. Each critical section has unique access semaphores.
5. Can permit multiple processes into the critical section at once, if desirable.

Example of Use

Here is a simple step-wise implementation involving declaration and usage of semaphore.

Shared var mutex: semaphore = 1;

Process i
begin
    P(mutex);
    execute CS;
    V(mutex);
end;
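The same pattern with POSIX semaphores in C (sem_init, sem_wait, and sem_post are the POSIX
counterparts of P and V; the shared counter is an illustrative stand-in for the critical
section's work):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;
int shared = 0;

void *process(void *arg) {
    sem_wait(&mutex);      /* P(mutex): enter critical section */
    shared++;              /* execute CS */
    sem_post(&mutex);      /* V(mutex): leave critical section */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);            /* binary semaphore, initial value 1 */
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, process, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&mutex);
    return 0;
}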

Classical problems of Synchronization

These problems are used for testing nearly every new proposed synchronization scheme.

• The Bounded Buffer Problem (also called the Producer-Consumer Problem)

• The Readers-Writers Problem

• The Dining Philosophers Problem


The Bounded Buffer Problem(Producer Consumer Problem):


Consider,

• a buffer which can store n items

• a producer process which creates the items (1 at a time)

• a consumer process which processes them (1 at a time)

A producer cannot produce unless there is an empty buffer slot to fill.

A consumer cannot consume unless there is at least one produced item.

Semaphore empty = N, full = 0, mutex = 1;

process producer {
    while (true) {
        empty.acquire();    // wait for an empty slot
        mutex.acquire();
        // produce: add an item to the buffer
        mutex.release();
        full.release();     // signal one more full slot
    }
}

process consumer {
    while (true) {
        full.acquire();     // wait for a full slot
        mutex.acquire();
        // consume: remove an item from the buffer
        mutex.release();
        empty.release();    // signal one more empty slot
    }
}

The semaphore mutex provides mutual exclusion for access to the buffer.
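A runnable POSIX version with an actual circular buffer follows; the buffer size, item count,
and names are illustrative choices, not part of the classic problem statement:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8                            /* buffer capacity */

int buffer[N];
int in = 0, out = 0;                   /* next write slot / next read slot */
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty_slots);        /* wait for an empty slot */
        sem_wait(&mutex);
        buffer[in] = item;             /* produce */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);         /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int k = 0; k < 32; k++) {
        sem_wait(&full_slots);         /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];        /* consume */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);        /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}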

The Readers-Writers Problem


A data item such as a file is shared among several processes.

Each process is classified as either a reader or a writer. Multiple readers may access the file
simultaneously.

A writer must have exclusive access (i.e., cannot share with either a reader or another writer).
A solution gives priority to either readers or writers.

• readers' priority: no reader is kept waiting unless a writer has already obtained permission to
access the database

• writers' priority: if a writer is waiting to access the database, no new readers can start reading.

A solution to either version may cause starvation: in the readers' priority version, writers may
starve; in the writers' priority version, readers may starve.

A semaphore solution to the readers' priority version (without addressing starvation):

Semaphore mutex = 1;

Semaphore db = 1;

int readerCount = 0;

process writer {
    db.acquire();
    // write
    db.release();
}

process reader {
    // protecting readerCount
    mutex.acquire();
    ++readerCount;
    if (readerCount == 1)
        db.acquire();       // first reader locks out writers
    mutex.release();

    // read

    // protecting readerCount
    mutex.acquire();
    --readerCount;
    if (readerCount == 0)
        db.release();       // last reader lets writers back in
    mutex.release();
}

readerCount is itself a critical section over which we must maintain control, and we use mutex
to do so.

3. The Dining Philosophers Problem

n philosophers sit around a table thinking and eating. When a philosopher thinks, she does not
interact with her colleagues. Periodically, a philosopher gets hungry and tries to pick up the
chopsticks on her left and right. A philosopher may pick up only one chopstick at a time and,
obviously, cannot pick up a chopstick already in the hand of a neighbor.

The dining philosophers problem is an example of a large class of concurrency-control
problems; it is a simple representation of the need to allocate several resources among several
processes in a deadlock-free and starvation-free manner.

A semaphore solution:

// represent each chopstick with a semaphore

Semaphore chopstick[] = new Semaphore[5]; // all = 1 initially

process philosopher_i {
    while (true) {
        chopstick[i].acquire();          // pick up left chopstick
        chopstick[(i+1) % 5].acquire();  // pick up right chopstick

        // eat

        chopstick[i].release();          // put down left chopstick
        chopstick[(i+1) % 5].release();  // put down right chopstick

        // think
    }
}

This solution guarantees no two neighboring philosophers eat simultaneously, but has the
possibility of creating a deadlock.
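One standard remedy is to break the circular-wait condition by having one philosopher pick up
the chopsticks in the opposite order. A sketch in C with pthread mutexes as chopsticks follows
(asymmetric ordering is only one of several known fixes, and the setup code is illustrative):

#include <pthread.h>

#define N 5
pthread_mutex_t chopstick[N];

void init_chopsticks(void) {
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&chopstick[i], NULL);
}

void philosopher(int i) {
    int first = i, second = (i + 1) % N;
    if (i == N - 1) {                  /* last philosopher reverses the order, */
        first = (i + 1) % N;           /* so no cycle of waiting can form */
        second = i;
    }
    for (;;) {
        pthread_mutex_lock(&chopstick[first]);
        pthread_mutex_lock(&chopstick[second]);
        /* eat */
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
        /* think */
    }
}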

Atomic Transactions:

 System Model

 Log-based Recovery

 Checkpoints

 Concurrent Atomic Transactions


System Model:

 Assures that operations happen as a single logical unit of work, in its entirety, or not at all

 Related to field of database systems

 Challenge is assuring atomicity despite computer system failures

 Transaction – collection of instructions or operations that performs a single logical
function

 Here we are concerned with changes to stable storage – disk

 Transaction is a series of read and write operations

 Terminated by a commit (transaction successful) or abort (transaction failed) operation

 An aborted transaction must be rolled back to undo any changes it performed

Types of Storage Media:

 Volatile storage – information stored here does not survive system crashes

 Example: main memory, cache

 Nonvolatile storage – Information usually survives crashes


 Example: disk and tape

 Stable storage – Information never lost

 Not actually possible, so approximated via replication or RAID on devices with
independent failure modes

Log-Based Recovery:

 Record to stable storage information about all modifications by a transaction

 Most common is write-ahead logging

 Log on stable storage, each log record describes single transaction write
operation, including

 Transaction name

 Data item name

 Old value

 New value

 <Ti starts> written to log when transaction Ti starts

 <Ti commits> written when Ti commits

 Log entry must reach stable storage before operation on data occurs
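For example, a transaction T0 that changes a data item A from 950 to 1000 would produce log
records of the form listed above (the values here are illustrative):

<T0 starts>
<T0, A, 950, 1000>
<T0 commits>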

Log-Based Recovery Algorithm:

 Using the log, system can handle any volatile memory errors

 Undo(Ti) restores value of all data updated by Ti

 Redo(Ti) sets values of all data in transaction Ti to new values

Checkpoints:

 Log could become long, and recovery could take long

 Checkpoints shorten log and recovery time.

 Checkpoint scheme:
1. Output all log records currently in volatile storage to stable storage

2. Output all modified data from volatile to stable storage

3. Output a log record <checkpoint> to the log on stable storage

Serializability:

 Consider two data items A and B

 Consider Transactions T0 and T1

 Execute T0, T1 atomically

 Execution sequence called schedule

 Atomically executed transaction order called serial schedule

For N transactions, there are N! valid serial schedules

Nonserial Schedule:

 Nonserial schedule allows overlapped execution

 Resulting execution not necessarily incorrect

 Consider schedule S, operations Oi, Oj

 Conflict if access same data item, with at least one write


 If Oi and Oj are consecutive operations of different transactions, and Oi and Oj do not
conflict

 Then S’ with the swapped order Oj, Oi is equivalent to S

 If S can become S’ via swapping nonconflicting operations

 S is conflict serializable


Locking Protocol:

 Ensure serializability by associating lock with each data item

 Follow locking protocol for access control

 Locks

 Shared – Ti has shared-mode lock (S) on item Q, Ti can read Q but not write Q

 Exclusive – Ti has exclusive-mode lock (X) on Q, Ti can read and write Q

 Require every transaction on item Q acquire appropriate lock

 If lock already held, new request may have to wait

 Similar to readers-writers algorithm


Two-phase Locking Protocol:

 Generally ensures conflict serializability

 Each transaction issues lock and unlock requests in two phases

 Growing – obtaining locks

 Shrinking – releasing locks

 Does not prevent deadlock

Timestamp-based Protocols:

 Select order among transactions in advance – timestamp-ordering

 Transaction Ti associated with timestamp TS(Ti) before Ti starts

 TS(Ti) < TS(Tj) if Ti entered system before Tj

 TS can be generated from the system clock or as a logical counter incremented at each
new transaction

DEADLOCKS:
In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; if the resources are not available at that time, the
process enters a wait state. It may happen that waiting processes never again change state,
because the resources they have requested are held by other waiting processes. This is known
as a deadlock.

SYSTEM MODEL:
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources are partitioned into several types, each of which consists of
some number of identical instances.

Under the normal mode of operation, a process may utilize a resource in only the following
sequence:

Request: If the request cannot be granted immediately (for example, if the resource is being
used by another process), then the requesting process must wait until it can acquire the
resource.

Use: The process can operate on the resource (for, example, if the resource is a printer, the
process can print on the printer).

Release: The process releases the resource.


DEADLOCK CHARACTERIZATION

A deadlock situation can arise if the following four conditions hold simultaneously in
a system:

Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.

Hold and wait: There must exist a process that is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.

No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.

Circular wait: There must exist a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting
for a resource held by Pn, and Pn is waiting for a resource held by P0.
A resource-allocation graph consists of a set of vertices V and a set of edges E.

V is partitioned into two types:

P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

request edge – directed edge Pi → Rj

assignment edge – directed edge Rj → Pi

[Figure: resource-allocation graph notation – a circle represents a process Pi; a rectangle
represents a resource type Rj, with one dot per instance; a request edge Pi → Rj means Pi
requests an instance of Rj; an assignment edge Rj → Pi means Pi is holding an instance of Rj]

[Figure: resource-allocation graph with a deadlock]

If the graph contains no cycles ⇒ no deadlock.

If the graph contains a cycle ⇒
o if only one instance per resource type, then deadlock.
o if several instances per resource type, possibility of deadlock.
Methods for Handling Deadlocks
 We can use a protocol to ensure that the system will never enter a deadlock state.

 We can allow the system to enter a deadlock state and then recover

 We can ignore the problem all together, and pretend that deadlocks never occur in the
system. This solution is the one used by most operating systems, including UNIX.

 Deadlock prevention is a set of methods for ensuring that at least one of the necessary
conditions cannot hold.

 Deadlock avoidance, on the other hand, requires that the operating system be given in
advance additional information concerning which resources a process will request and
use during its lifetime.

Deadlock Prevention

Restrain the ways request can be made.

Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources.

Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any
other resources.
 Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none.

 Low resource utilization; starvation possible.

No Preemption –

 If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are released.

 Preempted resources are added to the list of resources for which the process is
waiting.

 Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting.

Circular Wait –impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration.

Deadlock Avoidance
The simplest and most useful model requires that each process declare the maximum number of
resources of each type that it may need.

A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure


that there can never be a circular-wait condition.

SAFE STATE

A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid deadlock.

More formally, a system is in a safe state only if there exists a safe sequence.

A sequence of processes <P1,P2,…,Pn> is a safe sequence for the current allocation state if, for
each Pi, the resources that Pi can still request can be satisfied by the currently available resources
plus the resources held by all the Pj, with j<i.
 If a system is in a safe state ⇒ no deadlocks.

 If a system is in an unsafe state ⇒ possibility of deadlock.

 Avoidance ⇒ ensure that a system will never enter an unsafe state.


RESOURCE-ALLOCATION GRAPH ALGORITHM

A claim edge indicates that process Pi may request resource Rj at some time in the future. The
edge resembles a request edge in direction, but is represented by a dashed line.
BANKER’S ALGORITHM

 Available: A vector of length m indicates the number of available resources of each


type.

 Max: An n×m matrix defines the maximum demand of each process.

 Allocation: An n×m matrix defines the number of resources of each type currently
allocated to each process.

 Need: An n×m matrix indicates the remaining resource need of each process.

Need[i,j] = Max[i,j] – Allocation[i,j].

Safety Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:

Work = Available

Finish[i] = false for i = 1, 2, …, n.

2. Find an i such that both:

(a) Finish[i] = false

(b) Needi ≤ Work

If no such i exists, go to step 4.

3. Work = Work + Allocationi

Finish[i] = true

go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state.
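A compact C sketch of the safety algorithm, pre-loaded with the 5-process, 3-resource snapshot
used in the Banker's example below (the matrices come from that example; the implementation
itself is illustrative):

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int main(void) {
    int avail[M]    = {3, 3, 2};
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    int work[M];
    bool finish[N] = {false};

    for (int j = 0; j < M; j++) work[j] = avail[j];       /* step 1 */

    int count = 0;
    bool progress = true;
    while (count < N && progress) {
        progress = false;
        for (int i = 0; i < N; i++) {                     /* step 2 */
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                                   /* step 3 */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                printf("P%d can finish\n", i);
                progress = true;
                count++;
            }
        }
    }
    printf(count == N ? "safe state\n" : "unsafe state\n");  /* step 4 */
    return 0;
}

This prints one valid safe sequence (P1, P3, P4, P0, P2 in scan order); the sequence
<P1, P3, P4, P2, P0> quoted in the example below is equally valid.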

Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances
of resource type Rj.

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process
has exceeded its maximum claim.

2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not
available.

3. Pretend to allocate the requested resources to Pi by modifying the state as follows:

Available = Available – Requesti;

Allocationi = Allocationi + Requesti;

Needi = Needi – Requesti;

• If safe ⇒ the resources are allocated to Pi.

• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.

Example of Banker’s Algorithm

5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7
instances). Snapshot at time T0:

        Allocation   Max      Available
        A B C        A B C    A B C

P0      0 1 0        7 5 3    3 3 2

P1      2 0 0        3 2 2

P2      3 0 2        9 0 2

P3      2 1 1        2 2 2

P4      0 0 2        4 3 3

The content of the matrix. Need is defined to be Max – Allocation.

Need

ABC

P0 7 4 3

P1 1 2 2

P2 6 0 0

P3 0 1 1

P4 4 3 1

The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety
criteria.

Suppose now P1 requests (1,0,2). Check that Request ≤ Available: (1,0,2) ≤ (3,3,2) ⇒ true.

        Allocation   Need     Available
        A B C        A B C    A B C

P0      0 1 0        7 4 3    2 3 0

P1      3 0 2        0 2 0

P2      3 0 2        6 0 0

P3      2 1 1        0 1 1

P4      0 0 2        4 3 1

Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2> satisfies safety
requirement.

Can request for (3,3,0) by P4 be granted?

Can request for (0,2,0) by P0 be granted?

Deadlock Detection
Allow system to enter deadlock state

Detection algorithm

Recovery scheme

Single Instance of Each Resource Type

Maintain wait-for graph

 Nodes are processes.

 Pi → Pj if Pi is waiting for Pj.

Periodically invoke an algorithm that searches for a cycle in the graph.

An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the
number of vertices in the graph.

[Figure: a resource-allocation graph and its corresponding wait-for graph]

Several Instances of a Resource Type

Available: A vector of length m indicates the number of available resources of each type.

Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process.

Request: An n x m matrix indicates the current request of each process. If Request[i,j] = k,
then process Pi is requesting k more instances of resource type Rj.

Detection Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:

(a) Work = Available

(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.

2. Find an index i such that both:

(a) Finish[i] == false

(b) Requesti ≤ Work

If no such i exists, go to step 4.

3. Work = Work + Allocationi

Finish[i] = true

go to step 2.

4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then Pi is deadlocked.

The algorithm requires an order of O(m × n²) operations to detect whether the system is in a
deadlocked state.

Example of Detection Algorithm

Five processes P0 through P4; three resource types


A (7 instances), B (2 instances), and C (6 instances).

Snapshot at time T0:

        Allocation   Request   Available
        A B C        A B C     A B C

P0      0 1 0        0 0 0     0 0 0

P1      2 0 0        2 0 2

P2      3 0 3        0 0 0

P3      2 1 1        1 0 0

P4      0 0 2        0 0 2

Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.

P2 requests an additional instance of type C.

Request

        A B C

P0      0 0 0

P1      2 0 1

P2      0 0 1

P3      1 0 0

P4      0 0 2

State of system

 Can reclaim resources held by process P0, but there are insufficient resources to fulfill
the other processes' requests.

 Deadlock exists, consisting of processes P1, P2, P3, and P4.

Detection-Algorithm Usage

When, and how often, to invoke the algorithm depends on:

How often is a deadlock likely to occur?

How many processes will need to be rolled back? (one for each disjoint cycle)

If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and
so we would not be able to tell which of the many deadlocked processes “caused” the deadlock.

Deadlock Recovery
Process Termination

Abort all deadlocked processes.

Abort one process at a time until the deadlock cycle is eliminated.

The order we choose to abort depends on:

Priority of the process.

How long process has computed, and how much longer to completion.

Resources the process has used.

Resources process needs to complete.

How many processes will need to be terminated.

Whether the process is interactive or batch.

Resource Preemption

 Selecting a victim – minimize cost.

 Rollback – return to some safe state, restart process for that state.
 Starvation – the same process may always be picked as victim; include the number of
rollbacks in the cost factor.

Combined Approach to Deadlock Handling

Combine the three basic approaches (prevention, avoidance, and detection), allowing the use of
the optimal approach for each class of resources in the system.

Partition resources into hierarchically ordered classes.

Use most appropriate technique for handling deadlocks within each class.
