UNIT II PROCESS MANAGEMENT
Processes - Process Concept - Process Scheduling - Operations on Processes - Inter-process Communication;
CPU Scheduling - Scheduling criteria - Scheduling algorithms;
Threads - Multithread Models - Threading issues;
Process Synchronization - The critical-section problem - Synchronization hardware - Semaphores - Mutex - Classical problems of synchronization - Monitors;
Deadlock - Methods for handling deadlocks, Deadlock prevention, Deadlock avoidance, Deadlock detection, Recovery from deadlock.
PROCESS
2.1 Process Concept
A process is a program in execution.
Process States:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
2.2 Process Scheduling
The objective of multiprogramming is to have some process running at all
times, so as to maximize CPU utilization.
Scheduling Queues
There are three types of scheduling queues. They are:
1. Job Queue
2. Ready Queue
3. Device Queue
As processes enter the system, they are put into a job queue.
The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue.
The list of processes waiting for an I/O device is kept in a device queue for
that particular device.
Fig: Queueing Diagram Representation of Process Scheduling
Schedulers
A process migrates between the various scheduling queues throughout its
lifetime.
The operating system must select, for scheduling purposes, processes from
these queues in some fashion.
The selection process is carried out by the appropriate scheduler.
There are three different types of schedulers. They are:
1. Long-term Scheduler or Job Scheduler
2. Short-term Scheduler or CPU Scheduler
3. Medium term Scheduler
The long-term scheduler, or job scheduler, selects processes from the pool of
processes on disk and loads them into memory for execution. It is invoked very
infrequently. It controls the degree of multiprogramming.
The short-term scheduler, or CPU scheduler, selects from among the processes
that are ready to execute and allocates the CPU to one of them. It is invoked
very frequently.
The medium-term scheduler swaps processes out of memory and later swaps them
back in, reducing the degree of multiprogramming.
It is important that the long-term scheduler select a good process mix of CPU-
bound and I/O-bound processes.
Context Switch
Switching the CPU to another process requires saving the state of the old
process and loading the saved state for the new process.
This task is known as a context switch.
Context-switch time is pure overhead, because the system does no useful work
while switching.
Its speed varies from machine to machine, depending on the memory speed, the
number of registers that must be copied, and the existence of special
instructions.
2.3 Operations on Processes
1. Process Creation
A process may create several new processes, during the course of execution.
The creating process is called a parent process, whereas the new processes are
called the children of that process.
When a process creates a new process, two possibilities exist in terms of
execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new
process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
In UNIX, each process is identified by its process identifier, which is a unique
integer. A new process is created by the fork system call.
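As an illustration, here is a minimal C sketch of fork(), exec() and wait() on UNIX (the program run by the child, /bin/ls, is chosen arbitrarily):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a child process */

    if (pid < 0) {               /* fork failed */
        perror("fork");
        exit(1);
    }
    else if (pid == 0) {         /* child: load a new program into it */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                       /* parent: wait until the child terminates */
        wait(NULL);
        printf("child complete\n");
    }
    return 0;
}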
2. Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit system call.
At that point, the process may return data (output) to its parent process (via the
wait system call).
A process can cause the termination of another process via an appropriate
system call.
A parent may terminate the execution of one of its children for a variety of
reasons, such as these:
1. The child has exceeded its usage of some of the resources that it has
been allocated.
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates. On such systems, if a process
terminates (either normally or abnormally), then all its children must
also be terminated. This phenomenon, referred to as cascading
termination, is normally initiated by the operating system.
Cooperating Processes
A process is cooperating if it can affect or be affected by the other processes executing in the system. Consider the producer-consumer problem, in which the producer and consumer share the following data:
#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The shared buffer is implemented as a circular array with two logical pointers: in and
out. The variable in points to the next free position in the buffer; out points to the
first full position in the buffer. The buffer is empty when in == out; the buffer is full
when ((in + 1) % BUFFER_SIZE) == out. Note that this scheme allows at most
BUFFER_SIZE - 1 items in the buffer at the same time.
Producer Process
while (1)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer process
while (1)
{
    while (in == out)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
Inter-process Communication: Message-Passing Systems
Message passing provides the operations send(message) and receive(message). If two processes want to communicate, a communication link must exist between them.
Basic Structure:
The physical implementation of the link is done through a hardware bus, a network, etc.
There are several methods for logically implementing a link and the operations:
1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages
Naming:
Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
1. Direct Communication
Each process that wants to communicate must explicitly name the
recipient or sender of the communication.
A communication link in this scheme has the following properties:
i. A link is established automatically between every pair of
processes that want to communicate. The processes need to know
only each other's identity to communicate.
ii. A link is associated with exactly two processes.
iii. Exactly one link exists between each pair of processes.
There are two ways of addressing, namely:
Symmetry in addressing
Asymmetry in addressing
In symmetry in addressing, the send and receive primitives are defined as:
send(P, message) - Send a message to process P.
receive(Q, message) - Receive a message from process Q.
In asymmetry in addressing, the send and receive primitives are defined as:
send(P, message) - Send a message to process P.
receive(id, message) - Receive a message from any process; id is set to the
name of the process with which communication has taken place.
2. Indirect Communication
With indirect communication, the messages are sent to and received from
mailboxes, or ports.
The send and receive primitives are defined as follows:
send (A, message) Send a message to mailbox A.
receive (A, message) Receive a message from mailbox A.
A communication link has the following properties:
i. A link is established between a pair of processes only if both
members of the pair have a shared mailbox.
ii. A link may be associated with more than two processes.
iii. A number of different links may exist between each pair of
communicating processes, with each link corresponding to one
mailbox
3. Buffering
A link has some capacity that determines the number of message that can reside
in it temporarily. This property can be viewed as a queue of messages attached
to the link.
There are three ways that such a queue can be implemented.
Zero capacity: The queue has maximum length 0, so no message can wait in it.
The sender must wait until the recipient receives the message (a message
system with no buffering).
Bounded capacity: The queue has finite length n. Thus at most n messages can
reside in it.
Unbounded capacity: The queue has potentially infinite length. Thus any
number of messages can wait in it. The sender is never delayed
4. Synchronization
Message passing may be either blocking (synchronous) or non-blocking (asynchronous):
Blocking send: The sending process is blocked until the message is received.
Non-blocking send: The sending process sends the message and resumes operation.
Blocking receive: The receiver blocks until a message is available.
Non-blocking receive: The receiver retrieves either a valid message or a null.
2.5 CPU Scheduling
CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states.
Process execution begins with a CPU burst.
That is followed by an I/O burst, then another CPU burst, then another I/O burst,
and so on.
Eventually, the last CPU burst will end with a system request to terminate
execution, rather than with another I/O burst.
Preemptive Scheduling
CPU scheduling decisions may take place under the following four
circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
Under 1 & 4 scheduling scheme is non
preemptive. Otherwise the scheduling scheme is
preemptive.
Non-preemptive Scheduling
In non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases it either by terminating or by
switching to the waiting state.
This scheduling method was used by the Microsoft Windows 3.x environment.
Dispatcher
The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler.
This function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that
program
Scheduling Criteria
1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization
may range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily used system).
2. Throughput: It is the number of processes completed per time unit. For long
processes, this rate may be one process per hour; for short transactions,
throughput might be ten processes per second.
3. Turnaround time: The interval from the time of submission of a process to the
time of completion is the turnaround time. Turnaround time is the sum of the
periods spent waiting to get into memory, waiting in the ready queue, executing
on the CPU, and doing I/O.
4. Waiting time: Waiting time is the sum of the periods spent waiting in the
ready queue.
5. Response time: It is the amount of time it takes to start responding, but not the
time that it takes to output that response.
2.5.2 CPU Scheduling Algorithms
First-Come, First-Served (FCFS) Scheduling
The process that requests the CPU first is allocated the CPU first. The
implementation is managed with a FIFO queue.
Example:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we
get the result shown in the following Gantt chart:
Gantt Chart
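From the Gantt chart, the waiting times are P1 = 0, P2 = 24 and P3 = 27 milliseconds, so the average waiting time is (0 + 24 + 27)/3 = 17 ms.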
Shortest-Job-First (SJF) Scheduling
The CPU is assigned to the process that has the smallest next CPU burst.
Example:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Gantt Chart
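With SJF the processes run in the order P4, P1, P3, P2, giving waiting times P1 = 3, P2 = 16, P3 = 9 and P4 = 0, so the average waiting time is (3 + 16 + 9 + 0)/4 = 7 ms.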
Preemptive SJF (Shortest-Remaining-Time-First) Scheduling
Example:
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
Average waiting time:
P1: 10 - 1 = 9
P2: 1 - 1 = 0
P3: 17 - 2 = 15
P4: 5 - 3 = 2
AWT = (9 + 0 + 15 + 2) / 4 = 6.5 ms
Non-preemptive Scheduling
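Assuming the same arrival and burst times as above, the non-preemptive (SJF) schedule runs P1, P2, P4, P3, giving waiting times P1 = 0, P2 = 8 - 1 = 7, P4 = 12 - 3 = 9 and P3 = 17 - 2 = 15, so AWT = (0 + 7 + 9 + 15)/4 = 7.75 ms.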
Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process
with the highest priority (smallest integer = highest priority).
Example:
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
AWT=8.2 ms
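The processes run in priority order P2, P5, P1, P3, P4, giving waiting times P2 = 0, P5 = 1, P1 = 6, P3 = 16 and P4 = 18, so AWT = (0 + 1 + 6 + 16 + 18)/5 = 8.2 ms.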
Round-Robin Scheduling
Each process gets a small unit of CPU time (a time quantum). After this time
has elapsed, the process is preempted and added to the end of the ready queue.
Example (time quantum = 4 ms):
Process Burst Time
P1 24
P2 3
P3 3
Waiting times:
P1 = 26 - 20 = 6
P2 = 4
P3 = 7
The average waiting time is (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds.
The performance of the RR algorithm depends heavily on the size of the time
quantum.
If the time quantum is very large (infinite), then the RR policy is the same as the
FCFS policy.
If the time quantum is very small, the RR approach is called processor sharing
and appears to the users as though each of n processes has its own processor
running at 1/n the speed of the real processor.
Multilevel Feedback Queue Scheduling
It allows a process to move between queues.
The idea is to separate processes with different CPU-burst characteristics.
If a process uses too much CPU time, it will be moved to a lower-priority
queue.
This scheme leaves I/O-bound and interactive processes in the higher-
priority queues.
Similarly, a process that waits too long in a lower priority queue may be
moved to a higher-priority queue.
This form of aging prevents starvation.
Example:
Consider a multilevel feedback queue scheduler with three queues, numbered
from 0 to 2.
The scheduler first executes all processes in queue 0.
Only when queue 0 is empty will it execute processes in queue 1.
Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
A process that arrives for queue 1 will preempt a process in queue 2.
A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher-priority queue
4. The method used to determine when to demote a process to a lower-priority queue
5. The method used to determine which queue a process will enter
when that process needs service
A high-priority process may need a resource that is currently held by a
lower-priority process, so the lower-priority process must run before the
high-priority process can start executing.
The high-priority process would be waiting for a lower-priority one to finish.
This situation is known as priority inversion.
Algorithm Evaluation
To select an algorithm, we must first define the relative importance of these
measures.
Maximize CPU utilization
Maximize throughput
Algorithm Evaluation can be done using
1. Deterministic Modeling
2. Queueing Models
3. Simulation
Deterministic Modeling
One major class of evaluation methods is called analytic evaluation.
One type of analytic evaluation is deterministic modeling.
This method takes a particular predetermined workload and defines the
performance of each algorithm for that workload.
Queueing Models
The computer system is described as a network of servers.
Each server has a queue of waiting processes.
The CPU is a server with its ready queue, as is the I/O system with its device
queues.
Knowing arrival rates and service rates, we can compute utilization, average
queue length, average wait time, and so on.
This area of study is called queueing-network analysis.
Let n be the average queue length, let W be the average waiting time in the
queue, and let λ be the average arrival rate for new processes in the queue.
In a steady state, Little's formula gives:
n = λ × W
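For example (illustrative figures): if λ = 7 processes arrive per second on average and a process typically waits W = 2 seconds in the queue, we can expect about n = 7 × 2 = 14 processes in the queue at any time.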
2.8 Threads
A thread is the basic unit of CPU utilization.
It is sometimes called a lightweight process.
It consists of a thread ID, a program counter, a register set and a stack.
It shares with other threads belonging to the same process its code section, data
section, and resources such as open files and signals.
User threads
Supported above the kernel and implemented by a thread library at
the user level.
Thread creation, management and scheduling are done in user space.
Fast to create and manage.
When a user thread performs a blocking system call, it will cause the
entire process to block, even if other threads are available to run within
the application.
Example: POSIX Pthreads, Mach C-threads and Solaris 2 UI-threads.
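As a brief illustration, a minimal Pthreads sketch (the runner function and the summation task are illustrative, not from the notes):

#include <pthread.h>
#include <stdio.h>

int sum; /* data shared by the threads */

/* thread start routine: sum the integers 1..(*param) */
void *runner(void *param)
{
    int upper = *(int *)param;
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(void)
{
    int n = 5;
    pthread_t tid;

    pthread_create(&tid, NULL, runner, &n); /* create a new thread */
    pthread_join(tid, NULL);                /* wait for the thread to exit */
    printf("sum = %d\n", sum);
    return 0;
}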
Kernel threads
Supported directly by the OS.
Thread creation, management and scheduling are done in kernel space.
Slow to create and manage.
When a kernel thread performs a blocking system call, the kernel
schedules another thread in the application for execution.
Example: Windows NT, Windows 2000, Solaris 2, BeOS and Tru64 UNIX
support kernel threads.
Multithreading Models
1. Many-to-One:
Many user-level threads mapped to a single kernel thread.
Used on systems that do not support kernel threads.
Many-to-One Model
2. One-to-One:
Each user-level thread maps to a kernel thread.
Examples
- Windows 95/98/NT/2000
- OS/2
One-to-one Model
3. Many-to-Many Model:
Allows many user level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient number of kernel threads.
Examples
- Solaris 2
- Windows NT/2000
Many-to-Many Model
Threading Issues
1. The fork() and exec() system calls: Some UNIX systems provide two versions of
fork(), one that duplicates all the threads of the process and one that duplicates
only the thread that invoked fork(). If exec() is called immediately after forking,
duplicating only the calling thread is sufficient.
2. Signal handling: A signal is generated by the occurrence of a particular event
and is then delivered to a process. In a multithreaded program, the signal may be
delivered in one of the following ways:
a. Deliver the signal to the thread to which the signal applies.
b. Deliver the signal to every thread in the process.
c. Deliver the signal to certain threads in the process.
d. Assign a specific thread to receive all signals for the process.
3. Once delivered the signal must be handled.
a. Signal is handled by
i. A default signal handler
ii. A user defined signal handler
4. Thread pools
Creation of unlimited threads exhausts system resources such as CPU time or
memory. Hence we use a thread pool. In a thread pool, a number of threads are
created at process startup and placed in the pool.
When there is a need for a thread the process will pick a thread from the pool and
assign it a task.
After completion of the task, the thread is returned to the pool.
5. Thread specific data
Threads belonging to a process share the data of the process. However each thread
might need its own copy of certain data known as thread-specific data.
Windows Threads:
Windows implements the Windows API, which is the primary API for the family
of Microsoft operating systems (Windows 98, NT, 2000, and XP, as well as
Windows 7).
A Windows application runs as a separate process, and each process may contain
one or more threads.
The general components of a thread include:
A thread ID uniquely identifying the thread
A register set representing the status of the processor
A user stack, used when the thread is running in user mode, and a kernel
stack, used when the thread is running in kernel mode
A private storage area used by various run-time libraries and dynamic link
libraries (DLLs)
The primary data structures of a Windows thread are the ETHREAD (executive
thread block), the KTHREAD (kernel thread block), and the TEB (thread
environment block).
The key components of the ETHREAD include a pointer to the process to which
the thread belongs and the address of the routine in which the thread starts control.
The ETHREAD also contains a pointer to the corresponding KTHREAD.
The KTHREAD includes scheduling and synchronization information for the
thread. In addition, the KTHREAD includes the kernel stack (used when the thread
is running in kernel mode) and a pointer to the TEB.
The ETHREAD and the KTHREAD exist entirely in kernel space; this means that
only the kernel can access them. The TEB is a user-space data structure that is
accessed when the thread is running in user mode. Among other fields, the TEB
contains the thread identifier, a user-mode stack, and an array for thread-local
storage.
Process Synchronization
Consider the bounded-buffer problem, where an integer variable counter, initialized
to 0, is added. counter is incremented every time we add a new item to the buffer
and is decremented every time we remove one item from the buffer.
The code for the producer process can be modified as follows:
while (true)
{
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process can be modified as follows:
while (true)
{
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Let the current value of counter be 5. If the producer and consumer processes
execute the statements counter++ and counter-- concurrently, then the value of
counter may be 4, 5 or 6, which is incorrect.
To explain this further, counter++ may be implemented in machine language as
follows:
register1 = counter
register1 = register1 + 1
counter = register1
and counter-- may be implemented as follows:
register2 = counter
register2 = register2 - 1
counter = register2
The concurrent execution of counter++ and counter-- is equivalent to a sequential
execution in which the lower-level statements above are interleaved in some
arbitrary order. One such interleaving is given below:
T0: producer execute register1 = counter {register1 = 5}
T1: producer execute register1 = register1 + 1 { register1 = 6}
T2: consumer execute register2 = counter { register2 = 5}
T3: consumer execute register2 = register2 − 1 { register2 = 4}
T4: producer execute counter = register1 {counter = 6}
T5: consumer execute counter = register2 {counter = 4}
A situation like this, where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in
which the access takes place, is called a race condition.
To guard against the race condition above, we need to ensure that only one process
at a time can be manipulating the variable counter.
The Critical-Section Problem
There are n processes that are competing to use some shared data.
Each process has a code segment, called the critical section, in which the
shared data is accessed.
Problem: ensure that when one process is executing in its critical section,
no other process is allowed to execute in its critical section.
Requirements to be satisfied for a Solution to the Critical-Section Problem:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes
wish to enter their critical sections, then the selection of the process that will
enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
General structure of a typical process:
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and non-preemptive kernels.
Preemptive kernels are especially difficult to design for SMP architectures,
since in these environments it is possible for two kernel-mode processes to run
simultaneously on different processors.
Algorithm 1:
do {
    while (turn != i)
        ;
    critical section
    turn = j;
    remainder section
} while (1);
CONCLUSION: Satisfies mutual exclusion, but not progress.
Algorithm 2:
do {
    flag[i] = true;
    while (flag[j])
        ;
    critical section
    flag[i] = false;
    remainder section
} while (1);
CONCLUSION: Satisfies mutual exclusion, but not progress and bounded waiting
Algorithm 3 (Peterson's solution):
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    critical section
    flag[i] = false;
    remainder section
} while (1);
CONCLUSION: Meets all three requirements; solves the critical-section problem for
two processes.
Multiple –process solution or n- process solution or Bakery Algorithm :
Before entering its critical section, process receives a number. Holder of the
smallest number enters the critical section.
If processes Pi and Pj receive the same number, if i < j, then Pi is served first;
else Pj is served first.
(a,b) < (c,d) if a < c or if a = c and b < d
Shared data:
boolean choosing[n];
int number[n];
Data structures are initialized to false and 0, respectively.
do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
            ;
    }
    critical section
    number[i] = 0;
    remainder section
} while (1);
2. Mutex Locks
A mutex lock has a boolean variable available whose value indicates if the lock
is available or not.
If the lock is available, a call to acquire() succeeds, and the lock is then
considered unavailable.
A process that attempts to acquire an unavailable lock is blocked until the lock
is released.
acquire()
{
    while (!available)
        ; /* busy wait */
    available = false;
}
release()
{
    available = true;
}
do
{
    acquire lock
    critical section
    release lock
    remainder section
} while (true);
3. Synchronization Hardware:
The two instructions that are used to provide synchronization to hardware are :
1. TestAndSet
2. Swap
TestAndSet instruction
do {
    while (TestAndSet(lock))
        ; /* do nothing */
    critical section
    lock = false;
    remainder section
} while (1);
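The TestAndSet instruction is executed atomically. Its standard definition is:
boolean TestAndSet(boolean &target)
{
    boolean rv = target; /* return the old value */
    target = true;       /* and set the lock */
    return rv;
}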
Swap instruction
do {
    key = true;
    while (key == true)
        Swap(lock, key);
    critical section
    lock = false;
    remainder section
} while (1);
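The Swap instruction, also executed atomically, exchanges the contents of two variables. Its standard definition is:
void Swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}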
4. Semaphores:
It is a synchronization tool that is used to generalize the solution to the critical
section problem in complex situations.
wait(s)
{
    while (s <= 0)
        ; /* busy wait */
    s--;
}
signal (s)
{
s++;
}
do
{
    wait(mutex);
    critical section
    signal(mutex);
    remainder section
} while (1);
Semaphore Implementation
To overcome the busy waiting problem, the definition of the semaphore
operations wait and signal should be modified.
When a process executes the wait operation and finds that the
semaphore value is not positive, the process can block itself. The block
operation places the process into a waiting queue associated with the
semaphore.
A process that is blocked, waiting on a semaphore, should be restarted
when some other process executes a signal operation. The blocked
process is restarted by a wakeup operation, which puts the process into
the ready queue.
To implement the semaphore with a waiting queue, we define a semaphore as a record:
typedef struct {
    int value;
    struct process *L;
} semaphore;
if (S.value < 0) {
add this process to
S.L; block;
}
signal(S)
{
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}
Deadlocks and Starvation
Consider two processes, P0 and P1, each accessing two semaphores, S and Q,
initialized to 1:
P0:            P1:
wait(S);       wait(Q);
wait(Q);       wait(S);
...            ...
signal(S);     signal(Q);
signal(Q);     signal(S);
Suppose P0 executes wait(S) and then P1 executes wait(Q). When P0 executes
wait(Q), it must wait until P1 executes signal(Q); similarly, P1 must wait until
P0 executes signal(S). Since neither signal operation can be executed, P0 and
P1 are deadlocked.
Types of Semaphores
Counting semaphore - the integer value can range over an unrestricted domain.
Binary semaphore - the integer value can range only between 0 and 1.
6. Monitors
A monitor is a synchronization construct that supports mutual exclusion and
the ability to wait /block until a certain condition becomes true.
A monitor is an abstract datatype that encapsulates data with a set of functions
to operate on the data.
Characteristics of Monitor
The local variables of a monitor can be accessed only by the local functions.
A function defined within a monitor can only access the local variables of a
monitor and its formal parameter.
Only one process may be active within the monitor at a time.
Syntax of a Monitor
monitor monitor-name
{
    // shared variable declarations
    procedure body P1 (...) { ... }
    ...
    procedure body Pn (...) { ... }
    {
        initialization code
    }
}
To allow a process to wait within the monitor, a condition variable must be
declared as:
condition x, y;
Two operations on a condition variable:
x.wait() - a process that invokes the operation is suspended.
x.signal() - resumes one of the suspended processes (if any).
Dining-Philosophers Problem
Five philosophers sit around a circular table, with a bowl of rice in the middle
and a single chopstick between each pair of neighbors.
When a philosopher thinks, she does not interact with her colleagues. From time to
time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest
to her (the chopsticks that are between her and her left and right neighbors). A
philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a
chopstick that is already in the hand of a neighbor. When a hungry philosopher has
both her chopsticks at the same time, she eats without releasing the chopsticks. When
she is finished eating, she puts down both chopsticks and starts thinking again.
Each philosopher, before starting to eat, must invoke the operation pickup()
followed by eating and finally invoke putdown().
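A sketch of the standard monitor solution that provides these operations (test() is the internal helper that lets philosopher i eat only when neither neighbor is eating):

monitor DiningPhilosophers
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();   /* block until both chopsticks are free */
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);    /* let the left neighbor eat if possible */
        test((i + 1) % 5);    /* let the right neighbor eat if possible */
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}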
This solution ensures that no two neighbors are eating simultaneously and
that no deadlocks will occur. However, with this solution it is possible for a
philosopher to starve to death.
Implementing a Monitor using a semaphore
For each condition variable x, we introduce a semaphore x_sem and an
integer variable x_count, both initialized to 0.
Each external procedure F of the monitor is replaced by:
wait(mutex);
…
body of F
...
if (next_count > 0)
signal(next);
else
signal(mutex);
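Here mutex is a semaphore initialized to 1, and next (with counter next_count, initialized to 0) is a semaphore on which a signaling process suspends itself. Under this standard construction, the operation x.wait() is implemented as:
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
and the operation x.signal() is implemented as:
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}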
2.11 Deadlock
Definition: A process requests resources; if the resources are not available at that
time, the process enters a wait state. Waiting processes may never change state
again, because the resources they have requested are held by other waiting
processes. This situation is called a deadlock.
A process must request a resource before using it, and must release the resource
after using it.
1. Request: If the request cannot be granted immediately then the requesting
process must wait until it can acquire the resource.
2. Use: The process can operate on the resource
3. Release: The process releases the resource.
Necessary Conditions for Deadlock
A deadlock can arise if the following four conditions hold simultaneously:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode;
that is, only one process at a time can use the resource. If another process
requests that resource, the requesting process must be delayed until the resource
has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that
P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2,
..., and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
It is a Directed Graph with a set of vertices V and set of edges E.
V is partitioned into two types:
1. P = {P1, P2, ..., Pn}, the set of all processes in the system.
2. R = {R1, R2, ..., Rm}, the set of all resource types in the system.
A request edge is a directed edge Pi -> Rj; an assignment edge is a directed
edge Rj -> Pi.
Resource instances
One instance of resource type R1, two instances of resource type R2, one
instance of resource type R3, three instances of resource type R4.
Process states
Process P1 is holding an instance of resource type R2, and is waiting for an
instance of resource type R1.
Fig: Resource-Allocation Graph with a deadlock
Methods for Handling Deadlocks
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
2.11.2 Deadlock Prevention
This ensures that the system never enters the deadlock state.
Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold.
By ensuring that at least one of these conditions cannot hold, we can prevent
the occurrence of a deadlock.
1. Denying Mutual Exclusion
The mutual-exclusion condition cannot be denied for non-sharable resources; it
can be denied only for a sharable resource - example: read-only files.
If several processes attempt to open a read-only file at the same time, they can
be granted simultaneous access to the file.
A process never needs to wait for a sharable resource.
2. Denying Hold and Wait
To deny hold and wait, we must guarantee that whenever a process requests a
resource, it does not hold any other resource.
One technique that can be used requires each process to request and be
allocated all its resources before it begins execution.
Another technique requires that, before a process can request any additional
resources, it must release all the resources that it is currently allocated.
These techniques have two main disadvantages:
First, resource utilization may be low, since many of the resources may
be allocated but unused for a long time (with both protocols, resources
must be requested before they are actually needed).
Second, starvation is possible: a process that needs several popular
resources may have to wait indefinitely.
3. Denying No Preemption
If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources it currently holds
are preempted (implicitly released).
4. Denying Circular Wait
Impose a total ordering of all resource types and allow each process to request
resources only in an increasing order of enumeration.
Let R = {R1, R2, ..., Rm} be the set of resource types.
Assign to each resource type a unique integer number via a function F. For
example, if the set of resource types R includes tape drives, disk drives and
printers:
F(tape drive) = 1,
F(disk drive) = 5,
F(printer) = 12.
2.11.3 Deadlock Avoidance:
Claim edge: A claim edge Pi -> Rj indicates that process Pi may request
resource Rj at some time; it is represented by a dashed directed edge.
When process Pi requests resource Rj, the claim edge Pi -> Rj is converted to a
request edge.
Similarly, when resource Rj is released by Pi, the assignment edge Rj -> Pi is
reconverted to a claim edge Pi -> Rj.
The request can be granted only if converting the request edge Pi -> Rj to an
assignment edge Rj -> Pi does not form a cycle.
If no cycle exists, then the allocation of the resource will leave the system in a
safe state.
If a cycle is found, then the allocation will put the system in an unsafe state.
Banker's algorithm
Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both:
   a. Finish[i] = false
   b. Needi <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi;
   Finish[i] := true;
   Go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
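As an illustration, a minimal C sketch of this safety algorithm (the process count N and resource count M are illustrative):

#include <stdbool.h>

#define N 5 /* number of processes (illustrative) */
#define M 3 /* number of resource types (illustrative) */

/* Returns true if the state described by available, allocation
   and need is safe according to the safety algorithm. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)          /* Step 1: Work := Available */
        work[j] = available[j];

    bool progress = true;
    while (progress) {                   /* Step 2: find Pi that can finish */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                  /* Step 3: reclaim Pi's resources */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)          /* Step 4: safe iff all can finish */
        if (!finish[i])
            return false;
    return true;
}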
Resource-Request Algorithm for process Pi
1. If Requesti <= Needi, go to step 2; otherwise, raise an error, since the
process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3; otherwise, Pi must wait, since the
resources are not available.
3. Pretend to allocate the requested resources by modifying the state:
Available := Available - Requesti;
Allocationi := Allocationi + Requesti;
Needi := Needi - Requesti;
Now apply the safety algorithm to check whether this new state is safe or not.
If it is safe then the request from process Pi can be granted.
2.11.4 Deadlock Detection
When resource types have several instances, the detection algorithm uses an
Allocation matrix and a Request matrix.
If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
2.11.5 Recovery from Deadlock
1. Process Termination
1. Abort all deadlocked processes.
2. Abort one deadlocked process at a time until the deadlock cycle is eliminated.
After each process is aborted, a deadlock-detection algorithm must be
invoked to determine whether any processes are still deadlocked.
2. Resource Preemption
Preempt some resources from processes and give these resources to other
processes until the deadlock cycle is broken.
Three issues must be addressed:
i. Selecting a victim: Which resources and which processes are to be preempted?
ii. Rollback: If we preempt a resource from a process, it cannot continue with
its normal execution; it is missing some needed resource. We must roll back the
process to some safe state, and restart it from that state.
iii. Starvation: How can we guarantee that resources will not always be
preempted from the same process?