Unit 3 Process Synchronization
A cooperative process is one which can affect the execution of other
processes or be affected by the execution of other processes. Such
processes need to be synchronized so that their order of execution can be
guaranteed.
The procedure involved in preserving the appropriate order of execution
of cooperative processes is known as Process Synchronization.
Race Condition: A race condition occurs when two or more threads read,
write and possibly make decisions based on memory that they are accessing
concurrently, so that the result depends on the order in which they happen
to run.
Critical Section Problem
The regions of a program that access shared resources, and may therefore
cause race conditions, are called critical sections.
A critical section is the part of a program that accesses a shared
resource. That resource may be any resource in the computer: a memory
location, a data structure, the CPU, or an I/O device.
The critical section must not be executed by more than one process at
the same time; the operating system faces difficulty in allowing and
disallowing processes to enter the critical section.
In order to synchronize the cooperative processes, it is necessary to
solve the critical section problem.
Requirements of Synchronization Mechanisms
1. Mutual Exclusion
By mutual exclusion, we mean that if one process is executing inside the
critical section, then no other process may enter the critical section.
2. Progress
Progress means that if one process does not need to execute the critical
section, it should not stop other processes from getting into the critical
section.
3. Bounded Waiting
The waiting time of every process to get into the critical section should
be bounded and predictable. No process must wait endlessly to get into
the critical section.
4. Architectural Neutrality
It means that if our solution works on one architecture, it should run on
other architectures as well.
LOCK VARIABLE
In this mechanism, a lock variable is used. Two values of lock are
possible, 0 or 1. Lock value 0 means that the critical section is
vacant (empty), while lock value 1 means that it is occupied.
A process which wants to get into the critical section first checks the
value of the lock variable. If it is 0, the process sets lock to 1 and
enters the critical section; otherwise it waits.
The pseudo code is as follows:
Entry Section →
    while (lock != 0);
    lock = 1;
// Critical Section
Exit Section →
    lock = 0;
Here we have three sections: Entry Section, Critical Section and the exit section.
Initially the value of lock variable is 0. The process which needs to get into
the critical section, enters into the entry section and checks the condition
provided in the while loop.
The process busy-waits as long as the value of lock is 1 (that is what
the while loop expresses). The very first time, the critical section is
vacant, so the process enters it after setting the lock variable to 1.
When the process exits from the critical section, then in the exit section, it
reassigns the value of lock as 0.
Every Synchronization mechanism is judged on the basis of four conditions.
Mutual Exclusion
Progress
Bounded Waiting
Portability
Mutual Exclusion
The lock variable mechanism does not provide mutual exclusion in some
cases. This is best seen by looking at the code from the operating
system's point of view, i.e. at the assembly code of the program.
Let's convert the code into assembly language:
Step 1: Load lock, R0    ; read the lock variable into register R0
Step 2: CMP R0, #0       ; compare it with 0
Step 3: JNZ Step 1       ; if it is not 0, go back and check again
Step 4: Store #1, lock   ; set lock = 1 and enter the critical section
Step 5: Store #0, lock   ; exit section: set lock = 0
Let us consider two processes, P1 and P2. P1 wants to execute its
critical section and gets into the entry section. Since the value of lock
is 0, P1 passes the check in Steps 1 to 3, but it is preempted just
before it can store 1 into lock (Step 4).
Now P2 is scheduled. The value of lock is still 0, so P2 also passes the
check, sets lock to 1 and enters the critical section.
Later, the CPU changes P1's state back to running. P1 has already checked
the lock variable and remembers that its value was 0 when it last checked,
so it too enters the critical section without re-reading the updated
value of lock.
Now we have two processes in the critical section at the same time.
According to the mutual exclusion condition, more than one process must
never be present in the critical section simultaneously. Hence the lock
variable mechanism doesn't guarantee mutual exclusion.
The problem with the lock variable mechanism is that, at the same
moment, more than one process can see the vacant tag, and then more than
one process can enter the critical section. Since the lock variable
doesn't provide mutual exclusion, it cannot be used in general. And since
this method fails at the most basic requirement, there is no need to
examine the other conditions.
TEST SET LOCK MECHANISM
With the lock variable there is a possibility of having more than one process in the
critical section. To address this problem, the hardware provides a special instruction
called Test and Set Lock (TSL), which loads the value of the lock variable into the
local register R0 and sets the lock to 1 in one single, indivisible step.
The process which executes the TSL first will enter into the critical section and no
other process after that can enter until the first process comes out. No process can
execute the critical section even in the case of preemption of the first process.
Strict Alternation uses a shared turn variable: process Pi busy-waits while turn == j.
When Pi finishes its critical section, it assigns j to the turn variable, and Pj gets
the chance to enter the critical section. The value of turn remains j until Pj finishes
its critical section.
Analysis of Strict Alternation approach
Mutual Exclusion
This approach provides mutual exclusion in every case. The procedure works only for
two processes, and the pseudo code differs for the two of them. A process enters
only when it sees that the turn variable is equal to its own process ID; otherwise
it waits. Hence no process can enter the critical section out of its turn.
Progress
Progress is not guaranteed in this mechanism. If Pi does not want to enter the
critical section on its turn, then Pj is blocked for an infinite time. Pj has to
wait that long because the turn variable will remain i until Pi executes and
assigns it to j.
Portability
The solution is portable. It is a pure software mechanism implemented in user
mode and doesn't need any special instruction from the hardware or operating system.
Peterson's Solution
It is a busy-waiting solution that can be implemented for only two processes.
It uses two shared variables: the turn variable and the interested array.
#define N 2
#define TRUE 1
#define FALSE 0

int interested[N] = {FALSE, FALSE};
int turn;

void Entry_Section(int process)
{
    int other;
    other = 1 - process;
    interested[process] = TRUE;
    turn = process;
    while (interested[other] == TRUE && turn == process);
}

void Exit_Section(int process)
{
    interested[process] = FALSE;
}
The Peterson solution provides all the necessary requirements such as Mutual Exclusion, Progress,
Bounded Waiting and Portability.
Analysis of Peterson Solution
void Entry_Section(int process)
{
1.  int other;
2.  other = 1 - process;
3.  interested[process] = TRUE;
4.  turn = process;
5.  while (interested[other] == TRUE && turn == process);
}
6.  // Exit_Section: interested[process] = FALSE;
Critical Section
P1 → 1 2 3 4 5 CS
Now, process P1 gets preempted and process P2 gets scheduled. P2 also wants to enter
the critical section and executes instructions 1, 2, 3 and 4 of the entry section. At
instruction 5 it gets stuck, since the while condition holds (the other process's
interested variable is still true). Therefore it goes into busy waiting.
P2 → 1 2 3 4 5
P1 is scheduled again and finishes its critical section, then executes
instruction 6 (the exit section, setting its interested variable to
false). Now when P2 re-checks the while condition, the condition fails,
since the other process's interested variable has become false. So P2
also gets to enter the critical section.
P1 → 6
P2 → 5 CS
Either process may enter the critical section any number of times.
The procedure then repeats in this cyclic order.
Mutual Exclusion
The method certainly provides mutual exclusion. In the entry section, the while
condition involves both variables, so a process cannot enter the critical section
as long as the other process is interested and this process was the last one to
update the turn variable.
Progress
An uninterested process will never stop the other interested process from entering in the critical
section. If the other process is also interested then the process will wait.
Bounded waiting
In the Peterson solution, a deadlock can never happen, because the process which
first set the turn variable will enter the critical section. Moreover, waiting is
bounded: if a process is preempted after executing line 4 of the entry section,
it will still get into the critical section on its next chance, so no process
waits longer than one critical-section entry of the other.
Portability
This is the complete software solution and therefore it is portable on every hardware.
SEMAPHORES
Semaphore is a Hardware Solution is written or given to critical section
problem.
The Semaphore is just a normal integer. The Semaphore cannot be negative. The least
value for a Semaphore is zero (0). The Maximum value of a Semaphore can be anything.
The Semaphores usually have two operations. The two operations have the capability to
decide the values of the semaphores.
The two Semaphore Operations are:
Wait ( )
Signal ( )
Wait Semaphore Operation
The wait operation decides whether a process may enter the critical
section or must wait for the process currently inside it to finish.
Here, the wait operation has many different names. The different names are:
Sleep Operation
Down Operation
Decrease Operation
P Function (most important alias name for wait operation)
The Wait Operation works on the basis of Semaphore or Mutex Value.
Here, if the semaphore value is greater than zero (positive), the process
can enter the critical section, and the wait operation decrements the
semaphore value by one as the process enters.
If the semaphore value is equal to zero, the process has to wait for some
process to exit the critical section.
The wait operation acts only until the process enters the critical
section; once the process is inside, the P function or wait operation has
no further job to do.
Basic Algorithm of P Function or Wait Operation
P(Semaphore S)
{
    if the value of S is greater than zero (positive), decrement S by one
    and allow the process to enter the critical section;
    if the value of S is zero, do not allow the process in: make it wait.
}
Signal Semaphore Operation
The Signal Semaphore Operation is used to update the value of Semaphore. The Semaphore
value is updated when the new processes are ready to enter the Critical Section.
The Signal Operation is also known as:
Wake up Operation
Up Operation
Increase Operation
V Function (most important alias name for signal operation)
The signal operation increments the semaphore value. This allows the
critical section to receive the next waiting process, if there is one.
The most important part is that this Signal Operation or V Function is executed only
when the process comes out of the critical section. The value of semaphore cannot be
incremented before the exit of process from the critical section
Basic Algorithm of V Function or Signal Operation
V(Semaphore S)
{
    when the process goes out of the critical section, add 1 to the
    semaphore value; a waiting process, if there is one, may then proceed.
}
Types of Semaphores
1. Binary Semaphore
Here, there are only two values of Semaphore in Binary Semaphore Concept. The two
values are 1 and 0.
If the Value of Binary Semaphore is 1, then the process has the capability to enter the
critical section area.
If the value of Binary Semaphore is 0 then the process does not have the capability to
enter the critical section area.
2. Counting Semaphore
Here, in the counting semaphore concept, the semaphore can take any
non-negative integer value (0, 1, 2, ...).
If the Value of Counter Semaphore is greater than or equal to 1, then the process has
the capability to enter the critical section area.
If the value of Counter Semaphore is 0 then the process does not have the capability to
enter the critical section area.
Problem on Counting Semaphore
Wait → Decrement → Down → P
Signal → Increment → Up → V
A counting semaphore was initialized to 12; then 10 P (wait) and
4 V (signal) operations were performed on this semaphore. What is
the result?
S = 12 (initial)
After 10 P (wait): S = 12 - 10 = 2
After 4 V (signal): S = 2 + 4 = 6
Hence, the final value of the counting semaphore is 6.
Binary Semaphore or Mutex
A counting semaphore initialized to a value greater than 1 does not
provide mutual exclusion, because it deliberately lets a set of
processes execute in the critical section simultaneously.
However, Binary Semaphore strictly provides mutual exclusion. Here,
instead of having more than 1 slots available in the critical section, we
can only have at most 1 process in the critical section. The semaphore
can have only two values, 0 or 1.
struct Bsemaphore
{
    enum {zero, one} value;  // value can only be 0 or 1
    Queue L;                 // L contains the PCBs of all processes blocked
                             // on an unsuccessful down operation
};

Down(Bsemaphore S)
{
    if (S.value == 1)  // the slot in the critical section is available
    {
        S.value = 0;   // take it, so that no other process reads it as 1
    }
    else
    {
        put the process (PCB) in S.L;  // no slot available: the process
        sleep();                       // waits in the blocked queue
    }
}
Up(Bsemaphore S)
{
    if (S.L is empty)  // no process is blocked waiting on the semaphore
    {
        S.value = 1;
    }
    else
    {
        select a process from S.L;
        wakeup();      // wake the first process of the blocked queue
    }
}
Advantages:
Semaphores are machine independent, since their implementation and code
live in the machine-independent code area of the microkernel.
They strictly enforce mutual exclusion and let processes enter the
critical section one at a time (in the case of binary semaphores).
With blocking semaphores, no processor time is wasted on busy waiting:
a process does not repeatedly burn CPU cycles checking a condition
before being allowed into the critical section.
Semaphores allow very good management of resources.
Disadvantages:
Due to the employment of semaphores, a low priority process may enter
the critical section first and keep a high priority process waiting
(priority inversion).
Semaphores are a little complex: the wait and signal operations must be
designed in a way that avoids deadlocks.
Programming with semaphores is challenging, and there is a danger that
mutual exclusion won't be achieved if they are used incorrectly.
The wait() and signal() operations must be carried out in the appropriate
order to prevent deadlocks.
READERS WRITERS PROBLEM
The readers-writers problem is a classical problem of process
synchronization. It relates to a data set, such as a file, that is shared
between more than one process at a time. The problem is about managing
synchronization among multiple reader and writer processes so that the
shared data is never corrupted.
Example - If two or more readers access the file at the same time, there
is no problem. However, if two writers, or one reader and one writer,
access the file at the same time, problems may occur. The code is
therefore designed so that: if one reader is reading, no writer is
allowed to update the file at the same time; if one writer is writing,
no reader is allowed to read the file; and if one writer is updating the
file, no other writer is allowed to update it at the same time. Multiple
readers, however, can access the object simultaneously.
Code for the Reader Process (exit section):
wait(mutex);
readcount--;        // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write);
}
signal(mutex);
In the above code of reader, mutex and write are semaphores that have
an initial value of 1, whereas the readcount variable has an initial value
as 0. Both mutex and write are common in reader and writer process
code, semaphore mutex ensures mutual exclusion and semaphore write
handles the writing mechanism.
The readcount variable denotes the number of readers accessing the file
concurrently. The moment readcount becomes 1 (the first reader enters),
the wait operation is performed on the write semaphore, decreasing its
value by one; this means a writer is no longer allowed to access the
file. On completion of its read operation, each reader decrements
readcount by one. When readcount becomes 0 (the last reader leaves), the
signal operation on the write semaphore permits a writer to access the
file again.
Code for Writer Process
wait(write);
WRITE INTO THE FILE
signal(write);
If a writer wishes to access the file, wait operation is performed
on write semaphore, which decrements write to 0 and no other writer
can access the file. On completion of the writing job by the writer who
was accessing the file, the signal operation is performed on write.
DINING PHILOSOPHERS PROBLEM
The dining philosophers problem is a classical problem of
synchronization. Five philosophers sit around a circular table; their
job is to think and eat alternately. A bowl of noodles is placed at the
center of the table, along with five chopsticks, one between each pair
of philosophers. To eat, a philosopher needs both their immediate left
and immediate right chopstick. A philosopher can eat only if both of
these chopsticks are available; if they are not, the philosopher puts
down whichever chopstick they may have picked up (left or right) and
goes back to thinking.
Dining Philosophers Problem- The five Philosophers are represented as P0, P1, P2, P3,
and P4 and five chopsticks by C0, C1, C2, C3, and C4.
DEADLOCKS
A deadlock is a situation where each of the computer processes waits
for a resource which is assigned to some other process. None of the
processes gets executed, since the resource it needs is held by
another process which is itself waiting for yet another resource to
be released.
Example:
Let us assume that there are three processes P1, P2 and P3. There are three
different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2
and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its
execution since it can't complete without R2. P2 demands R3, which is
being used by P3. P2 also stops its execution because it can't continue
without R3. P3 demands R1, which is being used by P1, therefore P3
also stops its execution.
In this scenario, a cycle is being formed among the three processes. None of
the process is progressing and they are all waiting. The computer becomes
unresponsive since all the processes got blocked.
Necessary conditions for Deadlocks:
Mutual Exclusion
A resource can only be shared in a mutually exclusive manner. It implies
that two processes cannot use the same resource at the same time.
Hold and Wait
A process waits for some resources while holding another resource at
the same time.
No pre-emption
The process which once scheduled will be executed till the completion.
No other process can be scheduled by the scheduler meanwhile.
Circular Wait
All the processes must be waiting for the resources in a cyclic manner
so that the last process is waiting for the resource which is being held by
the first process.
Strategies for handling Deadlock:
1. Deadlock Ignorance
2. Deadlock prevention
3. Deadlock avoidance
4. Deadlock detection and recovery
1. Deadlock Ignorance
In this approach, the Operating system assumes that deadlock never
occurs. It simply ignores deadlock. This approach is best suitable for a
single end user system where User uses the system only for browsing
and all other normal stuff. In these types of systems, the user has to
simply restart the computer in the case of deadlock. Windows and
Linux are mainly using this approach.
Deadlock Prevention
If we can violate one of the four necessary conditions, so that they
never hold together, then we can prevent the deadlock.
1. Mutual Exclusion
If a resource could be used by more than one process at the same time,
then no process would ever have to wait for it. So, if we can keep
resources from behaving in a mutually exclusive manner, the deadlock
can be prevented.
2. Hold and Wait
The hold and wait condition arises when a process holds a resource while
waiting for some other resource in order to complete its task. Deadlock
occurs because several processes may each hold one resource while
waiting for another, in a cyclic order.
A process must be assigned all the necessary resources before the
execution starts. A process must not wait for any resource once the
execution has been started. Then deadlock is prevented.
!(Hold and wait) = !hold or !wait (negation of hold and wait is, either
you don't hold or you don't wait)
3. No Pre-emption
Deadlock arises partly due to the fact that a resource can't be taken
away from a process once it has been allocated. However, if we take a
resource away from the process which is causing the deadlock, then we
can prevent the deadlock.
This is not a good approach in general: if we take away a resource which
is being used by a process, then all the work it has done up to that
point can become inconsistent.
4. Circular Wait
To violate circular wait, we can assign a number to each resource and
require that a process requests resources only in increasing order of
these numbers; a process holding a higher-numbered resource can't
request a lower-numbered one. This ensures that the processes' requests
can never close a cycle, so no circular wait will be formed.
Deadlock avoidance
In deadlock avoidance, the request for any resource will be granted
if the resulting state of the system doesn't cause deadlock in the
system.
The state of the system will continuously be checked for safe and
unsafe states.
In order to avoid deadlocks, each process must tell the OS in advance
the maximum number of instances of each resource it may request to
complete its execution.
Safe and Unsafe States
A state of the system is called safe if the system can allocate all the resources requested
by all the processes without entering into deadlock.
If the system cannot fulfil the request of all processes then the state of the system is
called unsafe.
The resource allocation state of a system can be defined by the instances of available and
allocated resources and the maximum instance of the resources demanded by the
processes.
Resources assigned
Table shows the instances of each resource assigned to each process.
Process Type 1 Type 2 Type 3 Type 4
A 3 0 2 2
B 0 0 1 1
C 1 1 1 0
D 2 1 4 0
Resources still needed
Table shows the instances of the resources each process still needs.
Process Type 1 Type 2 Type 3 Type 4
A 1 1 0 0
B 0 1 1 2
C 1 2 1 0
D 2 1 1 2
E = (7 6 8 4)
P = (6 2 8 3)
A = (1 4 0 1)
The tables above and the vectors E, P and A describe the resource
allocation state of the system.
There are 4 processes and 4 types of resources in the system.
Vector E represents the total instances of each resource in the system.
Vector P represents the instances of resources that have been assigned to
processes.
Vector A represents the number of resource instances that are not in use.
The key to the deadlock avoidance approach is that when a request is made
for resources, the request must be approved only if the resulting state
is also a safe state.
Resource Allocation Graph
The resource allocation graph (RAG) is a pictorial representation of the
state of a system. In other words, it carries complete information about
all the processes that are holding some resources or waiting for some
resources.
In a resource allocation graph, vertices are of two types: process and
resource. Each is drawn with a different shape: a circle represents a
process, while a rectangle represents a resource.
A resource can have more than one instance. Each instance is represented
by a dot inside the rectangle.
Edges in RAG are also of two types, one represents assignment and other represents
the wait of a process for a resource. The below image shows each of them.
A resource is shown as assigned to a process if the tail of the arrow is
attached to an instance of the resource and the head is attached to a process.
A process is shown as waiting for a resource if the tail of the arrow is
attached to the process while the head points towards the resource.
Example
Let's consider 3 processes P1, P2 and P3 and two types of resources R1 and R2.
Each resource has one instance.
According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1,
P3 is waiting for R1 as well as R2.
The graph is deadlock free since no cycle is being formed in the graph.
Deadlock Detection using RAG
If a cycle is formed in a resource allocation graph in which every
resource has a single instance, then the system is deadlocked.
In the case of a resource allocation graph with multi-instance resource
types, a cycle is a necessary condition for deadlock, but not a
sufficient one.
The following example contains three processes P1, P2, P3 and three
resources R1, R2, R3. All the resources have a single instance each.
If we analyze the graph, we find that a cycle is formed; since each resource has
only one instance, the system satisfies all four conditions of deadlock.
Allocation Matrix:
Allocation matrix can be formed by using the Resource allocation graph of a system. In
Allocation matrix, an entry will be made for each of the resource assigned.
For Example, in the following matrix, an entry is being made in front of P1 and below R3
since R3 is assigned to P1.
Process R1 R2 R3
P1 0 0 1
P2 1 0 0
P3 0 1 0
Request Matrix
In request matrix, an entry will be made for each of the resource requested.
In the following example, P1 needs R1 therefore an entry is being made in front of P1 and
below R1.
Process R1 R2 R3
P1 1 0 0
P2 0 1 0
P3 0 0 1
Avail = (0, 0, 0)
No resource instance is available in the system, and no process is about to release
one. Each of the processes needs at least one more resource to complete, so each
will keep holding the resource it already has.
We cannot fulfil the demand of even one process using the available resources;
since we also detected a cycle in the graph, the system is deadlocked.
BANKER’S ALGORITHM:
The Banker's Algorithm is used to avoid deadlock and to allocate
resources safely to each process in a computer system. Before granting a
request, it examines whether the resulting allocation would leave the
system in a safe state; only then is the allocation allowed. This helps
the operating system share resources successfully among all the processes.
The algorithm is named after the way a banker decides whether a loan can
be sanctioned safely: the bank never allocates its cash in a way that
could leave it unable to satisfy the needs of all its customers.
When a new process is created in a computer system, it must tell the
operating system in advance the maximum number of instances of each
resource type it may ever request. Based on this information, the
operating system decides which process sequence should be executed and
which processes should wait so that no deadlock occurs in the system.
Therefore, it is also known as a deadlock avoidance algorithm.
Advantages:
It helps the system decide whether the available resources can meet the
requirements of each process.
Each process provides the operating system with information about its
upcoming resource requests, the number of resources, and how long the
resources will be held.
It helps the operating system manage and control the processes' requests
for each type of resource in the computer system.
The algorithm's Max attribute records the maximum number of instances of
each resource that a process may hold in the system.
Disadvantages
It requires a fixed number of processes and no additional processes can be
started in the system while executing the process.
The algorithm does not allow a process to change its maximum needs
while processing its tasks.
Each process has to know and state their maximum resource requirement
in advance for the system.
Resource requests are guaranteed to be granted only within a finite,
possibly long, time.
The banker's algorithm needs to know three things:
How much each process can request for each resource in the system. It is
denoted by the [MAX] request.
How much each process is currently holding each resource in a system. It
is denoted by the [ALLOCATED] resource.
It represents the number of each resource currently available in the
system. It is denoted by the [AVAILABLE] resource.
Suppose n is the number of processes and m is the number of each type of resource used in a
computer system.
Available: It is an array of length 'm' that defines each type of resource available in the
system. When Available[j] = K, means that 'K' instances of Resources type R[j] are
available in the system.
Max: It is an n x m matrix that indicates the maximum demand of each process.
When Max[i][j] = k, process P[i] may request at most k instances of resource
type R[j] in the system.
Allocation: It is an n x m matrix that indicates the resources currently
allocated to each process in the system. When Allocation[i][j] = k, process
P[i] is currently allocated k instances of resource type R[j].
Need: It is an n x m matrix representing the remaining resource need of
each process. When Need[i][j] = k, process P[i] may require k more instances
of resource type R[j] to complete its assigned work.
Need[i][j] = Max[i][j] - Allocation[i][j].
Finish: It is a vector of length n. It holds a Boolean value (true/false)
indicating whether each process can be allocated its requested resources and
thus run to completion, releasing all its resources after finishing its task.
The Banker's Algorithm is the combination of the safety algorithm and the resource
request algorithm to control the processes and avoid deadlock in a system:
Safety Algorithm
The safety algorithm checks whether or not the system is in a safe
state, i.e. whether a safe sequence of processes exists:
1. There are two vectors, Work (length m) and Finish (length n), in the safety algorithm.
Initialize: Work = Available
Finish[i] = false, for i = 0, 1, 2, ..., n - 1.
2. Find an index i such that both:
Need[i] <= Work
Finish[i] == false
If no such i exists, go to step 4.
3. Work = Work + Allocation[i] // process i finishes and releases its resources
Finish[i] = true
Go to step 2 to look for the next such process.
4. If Finish[i] == true for all i, the system is in a safe state.
Resource Request Algorithm
A resource request algorithm checks how a system will behave when a process
makes each type of resource request in a system as a request matrix.
Let us create a resource request array Request[i] for each process P[i].
If Request[i][j] == k, then process P[i] requires k instances of resource
type R[j].
When the resulting resource-allocation state is safe, the requested
resources are allocated to process P[i].
If the new state is unsafe, process P[i] has to wait for its request
R[i], and the old resource-allocation state is restored.
Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and
three resource types A, B and C. Resource type A has 10 instances, B has 5
instances and C has 7 instances.

Process   Allocation (A B C)   Max (A B C)
P1        0 1 0                7 5 3
P2        2 0 0                3 2 2
P3        3 0 2                9 0 2
P4        2 1 1                2 2 2
P5        0 0 2                4 3 3

Available = (3 3 2)