OS_3_UNIT-process_synchronization
The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other, and to prevent the possibility of inconsistent data due
to concurrent access. To achieve this, various synchronization techniques such as semaphores,
monitors, and critical sections are used.
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of
other processes.
Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises with cooperative processes because they share resources.
Race Condition
When more than one process executes the same code or accesses the same memory or shared variable concurrently, there is a possibility that the resulting value of the shared variable is wrong; with every process racing to be the one whose result counts, this situation is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition may occur inside a critical section, where the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
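The unlucky interleaving described above can be written out deterministically as a small sketch. Plain Python statements stand in for the steps of two processes; the names p1_local and p2_local are illustrative:

```python
# A deterministic illustration of a lost update. Instead of real threads
# (whose interleaving varies run to run), the bad schedule is written out
# step by step: both "processes" read the shared counter before either
# writes its incremented copy back.

shared = 0

p1_local = shared       # P1 reads the shared variable: sees 0
p2_local = shared       # the scheduler switches; P2 also sees 0
shared = p1_local + 1   # P1 writes back 1
shared = p2_local + 1   # P2 writes back its stale copy: still 1

print(shared)  # 1, although two increments were performed
```

With proper synchronization the two read-increment-write sequences could not interleave, and the result would be 2.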
A critical section is a code segment that can be accessed by only one process at a time. It contains shared variables that must be synchronized to keep the data consistent. The critical section problem is therefore to design a protocol by which cooperative processes can access shared resources without creating data inconsistencies. Each process requests permission in its entry section before executing its critical section, then executes an exit section, and spends the rest of its time in the remainder section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are waiting outside it, then only those processes that are not executing in their remainder section can participate in deciding which will enter the critical section next, and this selection cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
Peterson’s Solution
Peterson's solution is based on two processes, P0 and P1, which alternate between their critical
sections and remainder sections. For convenience of discussion, "this" process is Pi, and the
"other" process is Pj. ( i.e. j = 1 - i )
int turn - Indicates whose turn it is to enter the critical section. If turn == i, then
process Pi is allowed into its critical section.
boolean flag[ 2 ] - Indicates when a process wants to enter its critical section.
When process Pi wants to enter its critical section, it sets flag[ i ] to true.
The entry and exit sections of Peterson's solution work as follows:
In the entry section, process i first raises a flag indicating a desire to enter the critical
section.
Then turn is set to j, to allow the other process to enter its critical section if process j
so desires.
The while loop is a busy-wait loop, which makes process i wait as long as process j
has the turn and wants to enter the critical section.
Process i lowers flag[ i ] in the exit section, allowing process j to continue if it has
been waiting.
To prove that the solution is correct, we must examine the three conditions listed above:
1. Mutual exclusion - If one process is executing its critical section when the other
wishes to do so, the second process will be blocked by the flag of the first
process. If both processes attempt to enter at the same time, the last process to execute
"turn = j" will be blocked.
2. Progress - Each process can only be blocked at the while loop if the other process wants to
use the critical section ( flag[ j ] == true ), AND it is the other process's turn to use the
critical section ( turn == j ). If both of those conditions are true, then the other process
( j ) will be allowed to enter the critical section, and upon exiting the critical section,
Smt. G.Sreevani , Lecturer at ASMC BALLARI 4
UNIT 3: PROCESS SYNCHRONIZATION
will set flag[ j ] to false, releasing process i. The shared variable turn assures that only
one process at a time can be blocked, and the flag variable allows one process to
release the other when exiting its critical section.
3. Bounded Waiting - As each process enters its entry section, it sets the turn
variable to the other process's turn. Since no process ever sets it back to its own
turn, this ensures that each process will have to let the other process go first at most
one time before it becomes its turn again.
Note that the instruction "turn = j" is atomic; that is, it is a single machine instruction
which cannot be interrupted.
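The algorithm above can be sketched with two Python threads. This is an illustrative model: CPython's global interpreter lock happens to give the sequentially consistent memory behaviour the algorithm assumes, which on real hardware would require memory barriers.

```python
# A sketch of Peterson's solution for two "processes" P0 and P1, each
# incrementing a shared counter N times inside the critical section.
import sys
import threading

sys.setswitchinterval(5e-5)   # switch threads often so busy waits stay short

N = 2000
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to defer to
counter = 0             # the shared variable being protected

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                   # entry section: raise my flag ...
        turn = j                         # ... and offer the turn to the other
        while flag[j] and turn == j:
            pass                         # busy wait
        counter += 1                     # critical section
        flag[i] = False                  # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 4000: no increments were lost
```

Without the entry and exit sections, some of the 2 * N read-increment-write sequences could interleave and increments would be lost.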
Synchronization Hardware
To generalize the solution(s) expressed above, each process when entering their critical
section must set some sort of lock, to prevent other processes from entering their critical
sections simultaneously, and must release the lock when exiting their critical section, to allow
other processes to proceed. Obviously it must be possible to attain the lock only when no
other process has already set a lock. Specific implementations of this general procedure can
get quite complicated, and may include hardware solutions as outlined in this section.
One simple solution to the critical section problem is to simply prevent a process from being
interrupted while in its critical section, which is the approach taken by non-preemptive
kernels. Unfortunately, this does not work well in multiprocessor environments, due to the
difficulties in disabling and re-enabling interrupts on all processors. There is also a
question of how this approach affects timing if the clock interrupt is disabled.
Another approach is for hardware to provide certain atomic operations. These operations are
guaranteed to operate as a single instruction, without interruption. One such operation is the
"Test and Set", which simultaneously sets a Boolean lock variable and returns its previous
value.
Another variation on test-and-set is an atomic swap of two Boolean variables, which acts
as a lock and key: it swaps the contents of two memory words and thereby maintains mutual
exclusion in accessing them.
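A spinlock built on test-and-set can be sketched as follows. Since this is software, not hardware, the uninterruptibility of the instruction is simulated with a small internal lock (_atomic); that internal lock is purely an artifact of the model, standing in for what the hardware guarantees in one instruction.

```python
# A sketch of mutual exclusion via a test-and-set spinlock.
import sys
import threading

sys.setswitchinterval(1e-4)

_atomic = threading.Lock()   # stands in for the hardware's uninterruptibility
lock = False                 # the Boolean lock word

def test_and_set():
    """Atomically set the lock word to True and return its previous value."""
    global lock
    with _atomic:
        old = lock
        lock = True
        return old

N = 10000
counter = 0

def worker():
    global counter, lock
    for _ in range(N):
        while test_and_set():   # spin until the previous value was False
            pass
        counter += 1            # critical section
        lock = False            # release the lock

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000
```

Only the thread that observes False from test_and_set enters the critical section; all others keep spinning until the holder stores False again.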
Semaphores
Semaphores are a more robust synchronization tool: a semaphore is just an integer variable for which only two atomic
operations are defined, the wait and signal operations. Note that not only must the
variable-changing steps ( S-- and S++ ) be indivisible, it is also necessary for the wait
operation that, when the test proves false, there be no interruptions before S gets
decremented. It is okay, however, for the busy loop to be interrupted when the test is true,
which prevents the system from hanging forever.
1) Binary semaphores can take on one of two values, 0 or 1, and can be used to provide
mutual exclusion in much the same way as a lock.
2) Counting semaphores can take on any integer value, and are usually used to count the
number remaining of some limited resource. The counter is initialized to the number of
such resources available in the system, and whenever the counting semaphore is greater
than zero, then a process can enter a critical section and use one of the resources. When
the counter gets to zero ( or negative in some implementations ), then the process blocks
until another process frees up a resource and increments the counting semaphore with a
signal call. ( The binary semaphore can be seen as just a special case where the number
of resources initially available is just one. )
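The wait and signal operations can be sketched as follows. A condition variable stands in for the atomicity a kernel would provide, and the busy loop becomes a blocking wait; the resource-sharing scenario at the bottom is an illustrative use, not part of the definition.

```python
# A sketch of a counting semaphore with the wait/signal interface.
import threading

class Semaphore:
    def __init__(self, value):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                     # also written P() or down()
        with self._cond:
            while self._value == 0:     # the "test"
                self._cond.wait()       # block rather than busy-wait
            self._value -= 1            # S--

    def signal(self):                   # also written V() or up()
        with self._cond:
            self._value += 1            # S++
            self._cond.notify()         # wake one blocked waiter

# Use the semaphore to share 2 instances of a resource among 5 threads.
sem = Semaphore(2)
in_use = 0
peak = 0
guard = threading.Lock()

def user():
    global in_use, peak
    sem.wait()                  # acquire one instance
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    with guard:
        in_use -= 1
    sem.signal()                # return the instance

threads = [threading.Thread(target=user) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 2: the counter never admits a third user
```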
The Bounded-Buffer Problem
In this classic producer-consumer solution, the two counting semaphores "full" and "empty" keep track of the current
number of full and empty buffers respectively ( initialized to 0 and N respectively ). The
binary semaphore mutex controls access to the critical section. The producer and consumer
processes are nearly identical: one can think of the producer as producing full buffers, and
the consumer as producing empty buffers.
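The scheme can be sketched with Python's built-in semaphores; the buffer size N and the item count are arbitrary choices for the demonstration.

```python
# A sketch of the bounded-buffer solution: "empty" starts at N, "full" at 0,
# and a binary "mutex" guards the buffer itself.
import threading

N = 5                                   # buffer capacity
buffer = []
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts full slots
mutex = threading.Lock()                # binary semaphore for the buffer
ITEMS = 100
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()                 # wait( empty )
        with mutex:
            buffer.append(item)
        full.release()                  # signal( full )

def consumer():
    for _ in range(ITEMS):
        full.acquire()                  # wait( full )
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()                 # signal( empty )

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: every item consumed, in order
```

The producer blocks on empty when the buffer is full, and the consumer blocks on full when it is empty, so neither ever over- or under-runs the buffer.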
The Readers-Writers Problem
In the readers-writers problem there are some processes ( termed readers ) who only read
the shared data, and never change it, and there are other processes ( termed writers ) who
may change the data in addition to or instead of reading it. There is no limit to how many
readers can access the data simultaneously, but when a writer accesses the data, it needs
exclusive access.
There are several variations to the readers-writers problem, most centered around relative
priorities of readers versus writers.
The first readers-writers problem gives priority to readers. In this problem, if a reader
wants access to the data, and there is not already a writer accessing it, then access is
granted to the reader. A solution to this problem can lead to starvation of the writers, as
there could always be more readers coming along to access the data.
The second readers-writers problem gives priority to the writers. In this problem, when a
writer wants access to the data it jumps to the head of the queue - all waiting readers are
blocked, and the writer gets access to the data as soon as it becomes available. In this
solution the readers may be starved by a steady stream of writers.
A classic solution to the first readers-writers problem involves an important
counter and two binary semaphores:
readcount is used by the reader processes to count the number of readers currently
accessing the data.
mutex is a semaphore used only by the readers for controlled access to readcount.
rw_mutex is a semaphore used to block and release the writers. The first reader to access
the data will set this lock and the last reader to exit will release it; the remaining readers
do not touch rw_mutex. ( The eighth edition called this variable wrt. )
Note that the first reader to come along will block on rw_mutex if there is currently a
writer accessing the data, and that all following readers will only block on mutex for their
turn to increment readcount.
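The scheme just described can be sketched as follows, with Python locks standing in for the two binary semaphores. As an illustrative check, the writer keeps two entries of a list equal; a reader that ever saw them differ would have caught a half-finished write.

```python
# A sketch of the first readers-writers solution: readcount plus the
# semaphores mutex (guarding readcount) and rw_mutex (guarding the data).
import threading

data = [0, 0]                 # writers always keep both entries equal
readcount = 0
mutex = threading.Lock()      # protects readcount
rw_mutex = threading.Lock()   # held by a writer, or by the group of readers
violations = 0

def reader():
    global readcount, violations
    for _ in range(200):
        with mutex:
            readcount += 1
            if readcount == 1:
                rw_mutex.acquire()   # first reader locks out the writers
        if data[0] != data[1]:       # read; must never see a half-write
            violations += 1
        with mutex:
            readcount -= 1
            if readcount == 0:
                rw_mutex.release()   # last reader lets the writers back in

def writer():
    for i in range(200):
        rw_mutex.acquire()           # exclusive access
        data[0] = i
        data[1] = i
        rw_mutex.release()

threads = [threading.Thread(target=f) for f in (reader, reader, writer)]
for t in threads: t.start()
for t in threads: t.join()
print(violations)  # 0: no reader ever observed a partial write
```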
The Dining Philosophers Problem
The dining philosophers problem is a classic synchronization problem involving the allocation
of limited resources amongst a group of processes in a deadlock-free and starvation-free manner:
Consider five philosophers sitting around a table, with five chopsticks evenly
distributed and an endless bowl of rice in the center.
( There is exactly one chopstick between each pair of dining philosophers. ) These
philosophers spend their lives alternating between two activities: eating and thinking.
When it is time for a philosopher to eat, it must first acquire two chopsticks - one from its left
and one from its right. When a philosopher thinks, it puts down both chopsticks in their
original locations.
One possible solution is to use a set of five semaphores ( chopsticks[ 5 ] ),
and to have each hungry philosopher first wait on their left chopstick ( chopsticks[ i ] ),
and then wait on their right chopstick ( chopsticks[ ( i + 1 ) % 5 ] ).
But suppose that all five philosophers get hungry at the same time, and each starts by
picking up their left chopstick. They then look for their right chopstick, but because it is
unavailable, they wait for it forever, and eventually all the philosophers starve due to the
resulting deadlock.
Some possible remedies to the deadlock:
Allow philosophers to pick up chopsticks only when both are available, in a critical section.
Use an asymmetric solution, in which odd philosophers pick up their left chopstick first and
even philosophers pick up their right chopstick first.
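The asymmetric remedy can be sketched as follows. Because neighbouring philosophers no longer all reach for the same side first, the circular wait cannot form, and every thread finishes; MEALS is an arbitrary demonstration count.

```python
# A sketch of the asymmetric dining-philosophers solution: odd-numbered
# philosophers pick up their left chopstick first, even-numbered ones
# their right, which breaks the circular wait.
import threading

N = 5
MEALS = 50
chopstick = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = chopstick[i], chopstick[(i + 1) % N]
    for _ in range(MEALS):
        if i % 2 == 1:                   # odd: left first
            first, second = left, right
        else:                            # even: right first
            first, second = right, left
        with first:
            with second:
                meals[i] += 1            # eat
        # think ( nothing to do in this sketch )

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher ate MEALS times, so no deadlock occurred
```

With the naive all-left-first ordering this program could hang forever; the asymmetric ordering guarantees it terminates.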
Deadlocks
A computer system consists of a finite number of resources which must be shared among the
competing processes. The resources can be CPU cycles, files, memory space, I/O devices, etc.
Some resources have a single copy and some have multiple; each copy of a resource of the
same type is called an instance. The resource types can be represented as R1, R2, ..., Rm, and each
resource type Ri can have Wi instances. Each process requests a resource, uses the resource, and releases the
resource.
A deadlock happens in an operating system when two or more processes need some resource to complete their
execution that is held by another process. Deadlock is a situation where a set of processes are blocked
because each process is holding a resource and waiting for another resource acquired by some other process.
Deadlock Characterization
The following are the necessary conditions for deadlock. A deadlock occurs only when all four
conditions hold simultaneously.
1) Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the diagram below,
there is a single instance of Resource 1 and it is held by Process 1 only.
2) Hold and Wait
A process can hold multiple resources and still request more resources that are held
by other processes. In the diagram given below, Process 2 holds Resource 2
and Resource 3 and is requesting Resource 1, which is held by Process 1.
3) No Preemption
A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will
only be released when Process 1 relinquishes it voluntarily after its execution is complete.
4) Circular Wait
A process is waiting for the resource held by a second process, which is waiting for the
resource held by a third process, and so on, until the last process is waiting for a resource held by
the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2
and is requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and is
requesting Resource 2. This forms a circular wait loop.
Methods for handling deadlocks
Generally speaking, there are three ways of handling deadlocks:
1) Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked state.
2) Deadlock detection and recovery - Abort a process or preempt some resources when
deadlocks are detected.
3) Ignore the problem all together - If deadlocks only occur once a year or so, it may be
better to simply let them happen and reboot as necessary than to incur the constant
overhead and system performance penalties associated with deadlock prevention or
detection. This is the approach that both Windows and UNIX take.
In order to avoid deadlocks, the system must have additional information about all processes. In
particular, the system must know what resources a process will or may request in the future.
( This ranges from a simple worst-case maximum to a complete resource request and release plan for
each process, depending on the particular algorithm. )
Deadlock detection is fairly straightforward, but deadlock recovery requires either aborting
processes or preempting resources, neither of which is an attractive alternative.
If deadlocks are neither prevented nor detected, then when a deadlock occurs the system will
gradually slow down, as more and more processes become stuck waiting for resources currently
held by the deadlock and by other waiting processes. Unfortunately this slowdown can be
indistinguishable from a general system slowdown when a real-time process has heavy computing
needs.
Deadlock prevention
Deadlocks can be prevented by ensuring that at least one of the four necessary conditions never holds.
Mutual Exclusion
Shared resources such as read-only files do not lead to deadlocks.
Unfortunately some resources, such as printers and tape drives, require exclusive access by a
single process.
Hold and Wait
To prevent this condition processes must be prevented from holding one or more resources
while simultaneously waiting for one or more others. There are several possibilities for this:
Require that all processes request all resources at one time. This can be wasteful of
system resources if a process needs one resource early in its execution and doesn't need
some other resource until much later.
Require that processes holding resources must release them before requesting new
resources, and then re-acquire the released resources along with the new ones in a single
new request. This can be a problem if a process has partially completed an operation
using a resource and then fails to get it re-allocated after releasing it.
Either of the methods described above can lead to starvation if a process requires one or
more popular resources.
No Preemption
Preemption of process resource allocations can prevent this condition of deadlock, when
preemption is possible.
One approach is that if a process is forced to wait when requesting a new resource, then
all other resources previously held by this process are implicitly released, ( preempted ),
forcing this process to re-acquire the old resources along with the new resources in a
single request, similar to the previous discussion.
Another approach is that when a resource is requested and not available, then the system
looks to see what other processes currently have those resources and are themselves
blocked waiting for some other resource. If such a process is found, then some of its
resources may be preempted and added to the list of resources for which the process is
waiting.
Either of these approaches may be applicable for resources whose states are easily saved
and restored, such as registers and memory, but are generally not applicable to other
devices such as printers etc.
Circular Wait
One way to avoid circular wait is to number all resources, and to require that processes
request resources only in strictly increasing ( or decreasing ) order.
In other words, in order to request resource Rj, a process must first release all Ri such
that i >= j.
One big challenge in this scheme is determining the relative ordering of the different
resources.
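Resource ordering can be sketched as follows: each lock gets a number, and a helper always acquires requested locks in ascending numeric order, regardless of the order the caller lists them in. The task names and resource numbers are illustrative.

```python
# A sketch of circular-wait prevention by total resource ordering.
import threading

# resource number -> lock
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*numbers):
    """Acquire the named resources in strictly increasing order."""
    held = []
    for n in sorted(numbers):
        resources[n].acquire()
        held.append(n)
    return held

def release(held):
    for n in reversed(held):
        resources[n].release()

log = []

def task_a():
    held = acquire_in_order(3, 1)   # wants R3 and R1; takes R1 then R3
    log.append(("A", held))
    release(held)

def task_b():
    held = acquire_in_order(1, 3)   # same pair, forced into the same order
    log.append(("B", held))
    release(held)

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(log)  # both tasks finish; the shared order makes a cycle impossible
```

If each task acquired its locks in the order it named them, the two threads could each grab one lock and wait forever for the other; the enforced ordering rules that cycle out.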
Deadlock Avoidance
The general idea behind deadlock avoidance is to prevent deadlocks from ever
happening, by preventing at least one of the aforementioned conditions. This requires
more information about each process, AND tends to lead to low device utilization. In some
algorithms the scheduler only needs to know the maximum number of each resource that
a process might potentially use. In more complex algorithms the scheduler can also take
advantage of the schedule of exactly what resources may be needed in what order. When
a scheduler sees that starting a process or granting resource requests may lead to future
deadlocks, then that process is just not started or the request is not granted.
A resource allocation state is defined by the number of available and allocated resources, and the
maximum requirements of all processes in the system.
Safe State
A state is safe if the system can allocate all resources requested by all processes without
entering a deadlock state.
More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ...,
PN } such that all of the resource requests for Pi can be granted using the resources currently
allocated to Pi and all processes Pj where j < i. ( I.e., if all the processes prior to Pi finish and
free up their resources, then Pi will be able to finish also, using the resources that they have freed
up. )
If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to
deadlock.
Deadlock can be avoided when each resource type has a single instance by using a resource-allocation graph.
In this case, unsafe states can be recognized and avoided by augmenting the resource-
allocation graph with claim edges, noted by dashed lines, which point from a process to a
resource that it may request in the future.
In order for this technique to work, all claim edges must be added to the graph for any
particular process before that process is allowed to request any resources. (
Alternatively, processes may only make requests for resources for which they have
already established claim edges, and claim edges cannot be added to any process that is
currently holding resources. )
When a process makes a request, the claim edge Pi->Rj is converted to a request edge.
Similarly, when a resource is released, the assignment edge reverts back to a claim edge.
This approach works by denying requests that would produce cycles in the resource-allocation
graph, taking claim edges into account. Consider, for example, what happens when process P2
requests resource R2:
The resulting resource-allocation graph would have a cycle in it, and so the request cannot be
granted.
Banker’s Algorithm
The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests
for safety by simulating the allocation of the predetermined maximum possible amounts of all
resources, then makes a “safe-state” check to test for possible activities, before deciding
whether the allocation should be allowed to continue.
The banker’s algorithm is named so because it could be used in a banking system to check whether a loan
can be sanctioned to a person or not. Suppose there are n account holders in a bank
and the total sum of their money is S. If a person applies for a loan, then the bank first
subtracts the loan amount from the total money it has, and only if the remaining amount is
greater than S is the loan sanctioned. This is done because if all the account holders
come to withdraw their money, the bank can easily do it.
In other words, the bank would never allocate its money in such a way that it can no
longer satisfy the needs of all its customers. The bank would try to be in safe state always.
The algorithm uses the following data structures, for n processes and m resource types:
Available:
A 1-d array of size ‘m’ indicating the number of available resources of each type.
Max:
A 2-d array of size ‘n*m’ that defines the maximum demand of each process in the
system.
Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation:
A 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
Need:
A 2-d array of size ‘n*m’ that indicates the remaining resource need of each
process.
Need[ i, j ] = k means process Pi currently needs ‘k’ instances of resource type Rj for its
execution. Note that Need[ i, j ] = Max[ i, j ] - Allocation[ i, j ].
Allocation i specifies the resources currently allocated to process Pi, and Need i specifies the
additional resources that process Pi may still request to complete its task.
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively. Initialize:
a) Work = Available
b) Finish[i] = false, for i = 0, 1, ..., n - 1
2) Find an i such that Finish[i] = false and Need i <= Work. If no such i exists, go to step 4.
3) Work = Work + Allocation i; Finish[i] = true; go to step 2.
4) If Finish[i] = true for all i, then the system is in a safe state.
Example:
Considering a system with five processes P0 through P4 and three resource types A, B, C.
Resource type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at time
t0 the following snapshot of the system has been taken:
Need Matrix
Safe Sequence
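The safety algorithm can be sketched as follows. The snapshot tables themselves did not survive in these notes, so the matrices below are the ones this example is classically presented with and should be treated as an assumed illustration, not the original figures.

```python
# A sketch of the Banker's safety algorithm. The snapshot data is assumed
# (classic 5-process, A/B/C example with 10, 5, 7 total instances).
def is_safe(available, max_demand, allocation):
    n, m = len(max_demand), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)           # step 1: Work = Available
    finish = [False] * n             #         Finish[i] = false
    sequence = []
    progressed = True
    while progressed:                # step 2: find i with Need_i <= Work
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):   # step 3: Work = Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence     # step 4: safe iff every Finish[i] is true

available  = [3, 3, 2]                                           # A, B, C
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

safe, seq = is_safe(available, max_demand, allocation)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

The returned list is one safe sequence; in general several safe sequences may exist for the same state.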
Deadlock Detection
If deadlocks are not avoided, then another approach is to detect when they have occurred and
recover somehow. The system must constantly check for deadlocks, a policy / algorithm must be in place for
recovering from them, and there is potential for lost work when processes must be aborted
or have their resources preempted.
If each resource category has a single instance, then we can use a variation of the
resource-allocation graph known as a wait-for graph.
This algorithm must maintain the wait-for graph, and periodically search it for cycles.
The detection algorithm outlined here is essentially the same as the Banker's algorithm, with two subtle differences:
In step 1, the Banker's Algorithm sets Finish[ i ] to false for all i. The algorithm presented
here sets Finish[ i ] to false only if Allocation[ i ] is not zero. If the currently allocated
resources for this process are zero, the algorithm sets Finish[ i ] to true. This is essentially
assuming that IF all of the other processes can finish, then this process can finish also.
Furthermore, this algorithm is specifically looking for which processes are involved in a
deadlock situation, and a process that does not have any resources allocated cannot be
involved in a deadlock, and so can be removed from any further consideration.
In step 4, the basic Banker's Algorithm says that if Finish[ i ] == true for all i, that there is
no deadlock. This algorithm is more specific, by stating that if Finish[ i ] == false for any
process Pi, then that process is specifically involved in the deadlock which has been
detected.
Consider, for example, the following state, and determine whether it is currently deadlocked:

        Allocation    Request    Available
        A  B  C       A  B  C    A  B  C
  P0    0  1  0       0  0  0    0  0  0
  P1    2  0  0       2  0  2
  P2    3  0  3       0  0  0
  P3    2  1  1       1  0  0
  P4    0  0  2       0  0  2
Now suppose that process P2 makes a request for an additional instance of type C, yielding the
state shown below. Is the system now deadlocked?

        Allocation    Request    Available
        A  B  C       A  B  C    A  B  C
  P0    0  1  0       0  0  0    0  0  0
  P1    2  0  0       2  0  2
  P2    3  0  3       0  0  1
  P3    2  1  1       1  0  0
  P4    0  0  2       0  0  2
A deadlock now exists: even after reclaiming the resources held by P0, the available resources
cannot satisfy the outstanding request of any remaining process, so none of them can release
what they hold, and processes P1, P2, P3 and P4 are deadlocked.
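The detection algorithm can be sketched and run on both snapshots above. P2's allocation of resource C is taken as 3 here, an assumption consistent with the narrative that the first snapshot is not deadlocked.

```python
# A sketch of the deadlock-detection algorithm. Unlike the safety
# algorithm, a process whose current Request fits in Work is optimistically
# assumed to finish and return its allocation.
def deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    # processes holding nothing can trivially "finish" (step 1 difference)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    # step 4 difference: unfinished processes are exactly the deadlocked ones
    return [i for i in range(n) if not finish[i]]

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
available  = [0, 0, 0]

req1 = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]  # first state
req2 = [[0, 0, 0], [2, 0, 2], [0, 0, 1], [1, 0, 0], [0, 0, 2]]  # P2 wants +1 C

print(deadlocked(available, allocation, req1))  # [] - no deadlock
print(deadlocked(available, allocation, req2))  # [1, 2, 3, 4]
```

The single extra unit of C requested by P2 is what tips the second state into deadlock.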
Deadlock Recovery
After detecting the deadlock, the system can use one of the following procedures to recover from
the deadlock.
1) Terminate processes
2) Preempt resources
Process Termination
Two basic approaches, both of which recover resources allocated to terminated processes:
Terminate all processes involved in the deadlock. This definitely solves the deadlock, but
at the expense of terminating more processes than would be absolutely necessary.
Terminate processes one by one until the deadlock is broken. This is more conservative,
but requires doing deadlock detection after each step.
In the latter case there are many factors that can go into deciding which processes to terminate
next:
1. Process priorities.
2. How long the process has been running, and how close it is to finishing.
Resource Preemption
When preempting resources to relieve deadlock, there are three important issues to be addressed:
Selecting a victim - Deciding which resources to preempt from which processes involves
many of the same decision criteria outlined above.
Rollback - Ideally one would like to roll back a preempted process to a safe state prior to
the point at which that resource was originally allocated to the process. Unfortunately, it
can be difficult or impossible to determine what such a safe state is, and so the only safe
rollback is to roll back all the way to the beginning.
Starvation - How do you guarantee that a process won't starve because its resources are
constantly being preempted? One option would be to use a priority system, and increase
the priority of a process every time its resources get preempted. Eventually it should get a
high enough priority that it won't get preempted any more.