Operating Systems Notes-Unit 3

The document discusses process synchronization, which coordinates processes using shared data to maintain consistency and prevent race conditions. It explains critical sections, deadlocks, and various synchronization mechanisms like semaphores and Peterson’s solution, highlighting their advantages and limitations. Additionally, it covers classic synchronization problems such as the Producer-Consumer problem and the Dining Philosophers problem.

Unit III

Chapter – I

Synchronization:

Process Synchronization is a way to coordinate processes that use shared data. It occurs in an
operating system among cooperating processes.

Cooperating processes are processes that share resources. While executing many concurrent
processes, process synchronization helps to maintain shared data consistency and cooperating process
execution.

Processes have to be scheduled to ensure that concurrent access to shared data does not create
inconsistencies. Data inconsistency can result in what is called a race condition.

A race condition occurs when two or more operations on shared data execute concurrently without being scheduled in the proper sequence, so the final result depends on the order in which they happen to run.

When two or more processes cooperate with each other, their order of execution must be preserved; otherwise there can be conflicts in their execution and incorrect outputs can be produced.

A cooperative process is one that can affect, or be affected by, the execution of other processes. Such processes need to be synchronized so that their order of execution can be guaranteed.

The procedure involved in preserving the appropriate order of execution of cooperative processes is
known as Process Synchronization.

There are various synchronization mechanisms that are used to synchronize the processes. On the
basis of synchronization, processes are categorized as one of the following two types:

Independent Process: Execution of one process does not affect the execution of other processes.

Cooperative Process: Execution of one process affects the execution of other processes.
The process synchronization problem arises with cooperative processes because resources are shared among them.

Race Condition

When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the resulting output or value of the shared variable is wrong. The processes are effectively racing to have their result take effect, which is why this situation is known as a race condition.

When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.

A race condition is a situation that may occur inside a critical section. This happens when the result of
multiple thread execution in the critical section differs according to the order in which the threads execute.

Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent race conditions.

Critical Section Problem

Critical section is a code segment that can be accessed by only one process at a time. Critical
section contains shared variables which need to be synchronized to maintain consistency of data variables.

A critical section is a segment of code that can be accessed by only one single process at a given instant in time. This section contains shared data resources that other processes also need to access.
The entry to the critical section is handled by the wait() function, represented as P().

The exit from a critical section is controlled by the signal() function, represented as V(). Only one
process can be executed inside the critical section at a time.

Other processes waiting to execute their critical sections have to wait until the current process
finishes executing its critical section.

In the entry section, the process requests for entry in the Critical Section. Any solution to the
critical section problem must satisfy three requirements:

Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed
to execute in the critical section.
Progress : If no process is executing in the critical section and other processes are waiting outside
the critical section, then only those processes that are not executing in their remainder section can
participate in deciding which will enter in the critical section next, and the selection cannot be postponed
indefinitely.

Bounded Waiting: A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before that
request is granted.

Peterson’s Solution

Peterson’s Solution is a classical software based solution to the critical section problem. In
Peterson’s solution, we have two shared variables:

boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical section
int turn: The process whose turn is to enter the critical section.

Peterson’s Solution preserves all three conditions:

 Mutual Exclusion is assured as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block other processes
from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


 It involves Busy waiting
 It is limited to 2 processes.
TestAndSet
TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we have a
shared lock variable which can take either of the two values, 0 or 1.
0 – Unlocked
1 – Locked
Before entering the critical section, a process checks the lock. If it is locked, the process keeps waiting until the lock becomes free; if it is not locked, the process takes the lock and executes its critical section. In TestAndSet, mutual exclusion and progress are preserved, but bounded waiting is not.
Semaphores
Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, that are used for process synchronization. The definitions of wait and signal are as follows:
Characteristic of Semaphore

Here are the characteristics of a semaphore:

 It is a mechanism that can be used to provide synchronization of tasks.


 It is a low-level synchronization mechanism.
 Semaphore will always hold a non-negative integer value.
 A semaphore can be implemented using atomic test operations, or by disabling interrupts during the wait and signal operations.

Wait
The wait operation decrements the value of its argument S. If S is not positive, the calling process must wait (here, by spinning) until S becomes positive before the decrement takes effect.
This type of semaphore operation helps you to control the entry of a task into the critical section. It is also called the P(S) operation.
In a blocking implementation, if the decrement drives the semaphore value negative, the calling process is suspended until the required condition is satisfied.

wait(S)
{
    while (S <= 0)
        ;        /* busy wait */
    S--;
}

Signal
The signal operation increments the value of its argument S. It is also called the V(S) operation.

signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores, counting semaphores and binary semaphores. Details about these are given as follows:

 Counting Semaphores
 Binary Semaphores
Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores
are used to coordinate the resource access, where the semaphore count is the number of available
resources.

If resources are added, the semaphore count is automatically incremented, and if resources are removed, the count is decremented.

Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal operation succeeds only when the semaphore is 0.

It is sometimes easier to implement binary semaphores than counting semaphores.

Advantages of Semaphores
Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is not wasted
unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.
Classic problems of Synchronization

The classical problems of synchronization are as follows:


o Bounded-Buffer problem
o Sleeping barber problem
o Dining Philosophers problem
o Readers and writers problem
Bounded-Buffer problem
Also known as the Producer-Consumer problem. In this problem, there is a buffer of n slots, and each slot is capable of storing one unit of data. There are two processes operating on the buffer – the Producer and the Consumer.

The producer tries to insert data and the consumer tries to remove data. If the processes run simultaneously, they will not yield the expected output. The solution to this problem is to create two semaphores, one full and the other empty, to keep track of the concurrent processes.

Sleeping Barber Problem

This problem is based on a hypothetical barbershop with one barber. When there are no customers, the barber sleeps in his chair.

When a customer enters, he wakes up the barber and sits in the customer chair. If no chairs are empty, customers wait in the waiting queue.

Dining Philosopher’s problem


This problem states that there are K philosophers seated around a circular table, with one chopstick placed between each pair of philosophers.

A philosopher is able to eat if he can pick up the two chopsticks adjacent to him. This problem deals with the allocation of limited resources.

Readers and Writers Problem


This problem occurs when many threads of execution try to access the same shared resource at a time.

Some threads may read and some may write. In this scenario, we may get faulty outputs.
Chapter - II

DeadLocks

Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.

Consider an example when two trains are coming toward each other on the same track and there is
only one track, none of the trains can move once they are in front of each other.

A similar situation occurs in operating systems when there are two or more processes that hold
some resources and wait for resources held by other(s). For example, in the below diagram, Process 1 is
holding Resource 1 and waiting for resource 2 which is acquired by process 2, and process 2 is waiting for
resource 1.

Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)

Mutual Exclusion: One or more resources are non-sharable (only one process can use a resource at a time).

Hold and Wait: A process is holding at least one resource and waiting for additional resources.

No Preemption: A resource cannot be taken from a process unless the process releases the resource.

Circular Wait: A set of processes are waiting for each other in circular form.

System models
 For the purposes of deadlock discussion, a system can be modeled as a collection of limited
resources that can be divided into different categories and allocated to a variety of processes, each
with different requirements.

 Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are examples of
resource categories.

 By definition, all resources within a category are equivalent, and any of the resources within that
category can equally satisfy a request from that category. If this is not the case (i.e. if there is some
difference between the resources within a category), then that category must be subdivided further.
For example, the term “printers” may need to be subdivided into “laser printers” and “color inkjet
printers.”

 Some categories may only have one resource.


 For all kernel-managed resources, the kernel keeps track of which resources are free and which are allocated, which process each is allocated to, and a queue of processes waiting for each resource to become available. Application-managed resources can be controlled using mutexes or wait() and signal() calls (i.e. binary or counting semaphores).

 When every process in a set is waiting for a resource that is currently assigned to another process in
the set, the set is said to be deadlocked.

Operations:

In normal operation, a process must request a resource before using it and release it when finished,
as shown below.

Request – If the request cannot be granted immediately, the process must wait until the required resource(s) become available. The system, for example, uses functions such as open(), malloc(), new(), and request().

Use – The process makes use of the resource, such as printing to a printer or reading from a file.

Release – The process relinquishes the resource, allowing it to be used by other processes.

Deadlock Characterization

Deadlock characterization describes the distinctive features that are the cause of deadlock
occurrence.

Deadlock is a condition in the multiprogramming environment where the executing processes get
stuck in the middle of execution waiting for the resources that have been held by the other waiting processes
thereby preventing the execution of the processes.

The four conditions that must hold at the same time for a deadlock to occur are: mutual exclusion, hold and wait, no preemption, and circular wait.

Deadlock Characterization

 Deadlock Prerequisites
 Systems Resource Allocation Graph

Deadlock Prerequisites

i) Mutual Exclusion

In a multiprogramming environment, there may be several processes requesting the same resource at a time. The mutual exclusion condition allows only a single process to access the resource at a time.

The other processes requesting the same resource must wait and delay their execution until it has been released.

The mutual exclusion condition thus prevents two or more processes from accessing the same resource at a time.
ii) Hold and wait

The hold and wait condition simply means that the process must be holding access to one resource
and must be waiting to get hold of other resources that have been acquired by the other processes.

iii) No Preemption

A process that has acquired a resource cannot be preempted in between to release that resource. Instead, the process must voluntarily release the resource it has acquired when its task has been completed.

iv) Circular Wait

The processes must be waiting in a circular pattern to acquire the resources. This is similar to hold and wait; the only difference is that the processes are waiting in a circular pattern.

Let us discuss this with an example. Suppose there are five processes P0, P1, P2, P3, P4. Process P0 is waiting to acquire the resource held by P1, process P1 is waiting to acquire the resource held by P2, …, and process P4 is waiting to acquire the resource held by P0.

System Resource Allocation Graph

The system resource-allocation graph is a directed graph that describes the deadlock state more precisely. Like every graph, it has a set of vertices and a set of edges. The set of vertices can be classified into two types of nodes, P and R, where P is the set of vertices representing the active processes and R is the set of vertices representing all types of resources in the system.

When a process requests a resource, this is denoted by a request edge in the resource-allocation graph. The request edge is a directed edge from the requesting process Pi to the requested resource Rj, i.e. Pi -> Rj.

When a resource is allotted to some process, this is denoted by an assignment edge. The assignment edge is a directed edge from an instance of resource Rj to the process Pi, i.e. Rj -> Pi.

In the graph, resources are denoted by rectangles and processes are denoted by circles. If a resource has multiple instances, they are denoted by dots inside the rectangle.

When a process requests an instance of a resource, it directs a request edge to the resource. If the resource can allocate an instance to the requesting process, the request edge is immediately converted into an assignment edge.

The request edge always points to the resource rectangle in the graph, not to the dots (instances) inside the rectangle, whereas the assignment edge originates from a specific dot (instance) and points to a process.

To understand deadlock, let us take an example. Consider the following set of nodes and edges.

1. There are three active processes: P = {P1, P2, P3}
2. There are four resources: R = {R1, R2, R3, R4}
3. The set of request edges and assignment edges is
   E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}
Observe the figure above that the resource R1 has only one instance, resource R2 has two instances,
resource R3 has one instance, and resource R4 has three instances. Let us check the status of the processes.

The figure below shows that process P1 has requested an instance of resource R1 and is already holding an instance of resource R2. Process P2 has requested an instance of resource R3 and is already holding instances of resources R1 and R2. Process P3 has not requested any resource instance but is holding an instance of resource R3.

Remember, if the resource-allocation graph has a cycle and every resource has a single instance, then a deadlock has occurred. If the resources have multiple instances, however, a cycle in the graph does not necessarily indicate a deadlock.

Consider that process P3 requests an instance of resource R2, whose instances are already held by processes P1 and P2. In this case, you will observe that there are two cycles in the resource-allocation graph:

P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1

P2 -> R3 -> P3 -> R2 -> P2

Processes P1, P2, and P3 are now in deadlock, as each process in the cycle is waiting for a resource held by another process. But not every cycle in the resource-allocation graph indicates a deadlock; you have to examine the cycle carefully when dealing with a deadlock problem. This is how deadlock in a system can be characterized.
Methods for handling deadlock

There are three ways to handle deadlock

1) Deadlock prevention or avoidance: The idea is to not let the system enter a deadlock state. Prevention is done by negating one of the above-mentioned necessary conditions for deadlock.

Avoidance is forward-looking in nature. To use the strategy of avoidance, we must ensure that all information about the resources a process will need is known to us prior to the execution of the process. We use the Banker's algorithm (which is in turn a gift from Dijkstra) in order to avoid deadlock.

2) Deadlock detection and recovery: Let deadlock occur, then use preemption to handle it once it has occurred.

3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take.

Deadlock Prevention

Let us take the example of a chair: a chair always stands on its four legs. Likewise, the deadlock problem requires all four of the above conditions. If any one leg of the chair breaks, the chair will certainly fall.

The situation is the same with deadlock: if we are able to violate any one of the four conditions and never let them occur together, then deadlock can be prevented.

We will elaborate deadlock prevention approach by examining each of the four necessary conditions
separately.

Mutual Exclusion
This condition must hold for non-sharable resources. For example, a printer cannot be simultaneously shared by several processes. In contrast, sharable resources do not require mutually exclusive access and thus cannot be involved in a deadlock.
A good example of a sharable resource is a read-only file: if several processes attempt to open a read-only file at the same time, they can all be granted simultaneous access to the file.
A process need not wait for a sharable resource. In general, however, deadlocks cannot be prevented by denying the mutual exclusion condition, because some resources are intrinsically non-sharable.
Hold and Wait
Hold and wait condition occurs when a process holds a resource and is also waiting for some other
resource in order to complete its execution. Thus if we did not want the occurrence of this condition then we
must guarantee that when a process requests a resource, it does not hold any other resource.

There are some protocols that can be used in order to ensure that the Hold and Wait condition never occurs:

 According to the first protocol, each process must request and be granted all its resources before the beginning of its execution.
 The second protocol allows a process to request resources only when it does not hold any resource.
Let us illustrate the difference between these two protocols:

We will consider a process that mainly copies data from a DVD drive to a file on disk, sorts the file,
and then prints the results to a printer. If all the resources must be requested at the beginning of the process
according to the first protocol, then the process requests the DVD drive, disk file, and printer initially. It will
hold the printer during its entire execution, even though the printer is needed only at the end.

While the second method allows the process to request initially only the DVD drive and disk file. It
copies the data from the DVD drive to the disk and then releases both the DVD drive and the disk file. The
process must then again request the disk file and printer. After copying the disk file to the printer, the process
releases these two resources as well and then terminates.

Disadvantages of Both Protocols

 Utilization of resources may be low, since resources may be allocated but unused for a long period. In
the above-given example, for instance, we can release the DVD drive and disk file and again request
the disk file and printer only if we can be sure that our data will remain on the disk file. Otherwise,
we must request all the resources at the beginning of both protocols.
 There is a possibility of starvation. A process that needs several popular resources may have to wait
indefinitely because at least one of the resources that it needs is always allocated to some other
process.

No Preemption
The third necessary condition for deadlocks is that there should be no preemption of resources that
have already been allocated. In order to ensure that this condition does not hold the following protocols can
be used :

 According to the First Protocol: "If a process that is already holding some resources requests another
resource and if the requested resources cannot be allocated to it, then it must release all the resources
currently allocated to it."
 According to the Second Protocol: "When a process requests some resources, if they are available, then allocate them. If the requested resource is not available, then we check whether it is allocated to some other process that is itself waiting for further resources. If so, the operating system preempts the resource from that waiting process and allocates it to the requesting process. If the resource is being actively used, then the requesting process must wait."

The second protocol can be applied to those resources whose state can be easily saved and restored later for
example CPU registers and memory space, and cannot be applied to resources like printers and tape drivers.

Circular Wait
The Fourth necessary condition to cause deadlock is circular wait, In order to ensure violate this
condition we can do the following:

Assign a priority number to each resource. There will be a condition that any process cannot request
for a lesser priority resource. This method ensures that not a single process can request a resource that is
being utilized by any other process and due to which no cycle will be formed.

Example: Assume that resource R5 is allocated to P1. If P1 later asks for R4 or R3, which are lower than R5, such requests will not be granted; only requests for resources higher than R5 will be granted.
S.No   Necessary Condition   Approach to Violate It                                           Practical Implementation
1      Mutual Exclusion      The approach used to violate this condition is spooling.         Not possible
2      Hold and Wait         Request all the resources for a process initially.               Not possible
3      No Preemption         Snatch all the resources from the process.                       Not possible
4      Circular Wait         Assign a priority to each resource and order them numerically.   Possible

Deadlock Avoidance

The deadlock avoidance method is used by the operating system to check whether the system is in a safe state or an unsafe state. To avoid deadlocks, each process must tell the operating system the maximum number of resources it may request in order to complete its execution.

How does Deadlock Avoidance work?

In this method, the request for any resource will be granted only if the resulting state of the system
doesn't cause any deadlock in the system. This method checks every step performed by the operating system.
Any process continues its execution until the system is in a safe state. Once the system enters into an unsafe
state, the operating system has to take a step back.
With the help of a deadlock-avoidance algorithm, you can dynamically assess the resource-allocation
state so that there can never be a circular-wait situation.

According to the simplest and most useful approach, each process declares the maximum number of resources of each type it will need. The deadlock-avoidance algorithms then examine resource allocations so that a circular wait condition can never occur.

Deadlock avoidance can mainly be done with the help of Banker's Algorithm.

Safe State and Unsafe State

A state is safe if the system can allocate resources to each process (up to its maximum requirement) in some order and still avoid a deadlock. Formally, a system is in a safe state only if there exists a safe sequence. So a safe state is not a deadlocked state and, conversely, a deadlocked state is an unsafe state.

In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs. Not all unsafe states are deadlocks; an unsafe state may merely lead to a deadlock.

The figure above shows the safe, unsafe, and deadlocked state spaces.

Deadlock Avoidance Example

Let us consider a system having 12 magnetic tapes and three processes P1, P2, and P3. Process P1 requires 10 magnetic tapes in total, process P2 may need as many as 4 tapes, and process P3 may need up to 9 tapes. Suppose that at time t0, process P1 is holding 5 tapes, process P2 is holding 2 tapes, and process P3 is holding 2 tapes. (There are 3 free magnetic tapes.)

Process   Maximum Needs   Currently Allocated
P1        10              5
P2        4               2
P3        9               2
So at time t0, the system is in a safe state: the sequence <P2, P1, P3> satisfies the safety condition. Process P2 can immediately be allocated all its tape drives and then return them.

After the return, the system will have 5 available tapes; then process P1 can get all its tapes and return them (the system will then have 10 available tapes).

Finally, process P3 can get all its tapes and return them (the system will then have 12 available tapes).

A system can go from a safe state to an unsafe state. Suppose at time t1, process P3 requests and is
allocated one more tape.

The system is no longer in a safe state. At this point, only process P2 can be allocated all its tapes.
When it returns them the system will then have only 4 available tapes.

Since P1 is allocated five tapes but has a maximum of ten so it may request 5 more tapes. If it does
so, it will have to wait because they are unavailable.

Similarly, process P3 may request its additional 6 tapes and have to wait which then results in a
deadlock.

The mistake was granting the request from P3 for one more tape. If we made P3 wait until either of
the other processes had finished and released its resources, then we could have avoided the deadlock

Note: In a case, if the system is unable to fulfill the request of all processes then the state of the system is
called unsafe.

The key to the deadlock avoidance method is that whenever a request is made for resources, the request is approved only if the resulting state is a safe state.

Deadlock Detection:

1. If resources have a single instance –
In this case, for deadlock detection we can run an algorithm to check for a cycle in the resource-allocation graph. The presence of a cycle in the graph is a sufficient condition for deadlock. In the diagram above, resource R1 and resource R2 have single instances, and there is a cycle R1 → P1 → R2 → P2, so deadlock is confirmed.

2. If there are multiple instances of resources –
Detection of a cycle is a necessary but not sufficient condition for deadlock. In this case, the system may or may not be in deadlock, depending on the situation.

Deadlock Recovery

A traditional operating system such as Windows doesn't deal with deadlock recovery, as it is a time- and space-consuming process. Real-time operating systems use deadlock recovery.

1. Killing the process

Either kill all the processes involved in the deadlock, or kill the processes one by one: after killing each process, check for deadlock again, and keep repeating until the system recovers from deadlock.

Killing the processes breaks the circular wait condition.

2. Resource Preemption

Resources are preempted from the processes involved in the deadlock, and the preempted resources are allocated to other processes, so that there is a possibility of recovering the system from deadlock. In this case, however, a process may go into starvation.

For Resource

Preempt the resource

We can take one of the resources from its owner (a process) and give it to another process, with the expectation that the latter will complete its execution and release the resource sooner. Choosing which resource to preempt, however, can be difficult.


Rollback to a safe state

A system passes through various states before it gets into the deadlock state. The operating system can roll the system back to the previous safe state. For this purpose, the OS needs to implement checkpointing at every state.

The moment we get into deadlock, we roll back all the allocations to get back to the previous safe state.

For Process

Kill a process

Killing a process can solve our problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process that has done the least amount of work so far.

Kill all processes

This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all the processes leads to inefficiency in the system because all the processes will have to execute again from the start.
