OS U-III Process Coordination


Artificial Intelligence and Data Science
Operating System
by
Prof. A.A.Salunke
M.Tech. Computer

Course Code 217521
SEM III
Credit - 03
Unit III

Process Coordination
08 Hrs.
1. Synchronization:
Principles of Concurrency
Requirements for Mutual Exclusion
Mutual Exclusion: Hardware Support, Operating System Support (Semaphores and Mutex), Programming
Language Support (Monitors).
2. Classical synchronization problems:
Readers/Writers Problem,
Producer and Consumer problem,
Inter-process communication (Pipes, shared memory: system V)

3. Deadlock:
Deadlock Characterization,
Methods for Handling Deadlocks,
Deadlock Prevention,
Deadlock Avoidance,
Deadlock Detection,
Recovery from Deadlock
Principles of Concurrency

1. Concurrency is the execution of multiple instruction sequences at the same time. It occurs when several process threads run in parallel.
2. These threads communicate with other threads/processes through shared memory or through message passing.
3. Because concurrency results in the sharing of system resources - instructions, memory, files - problems such as deadlocks and resource starvation can occur.
Principles of Concurrency:

With current technology, such as multi-core processors and parallel processing, which allow multiple processes/threads to be executed concurrently - that is, at the same time - it is possible to have more than a single process/thread accessing the same space in memory, the same declared variable in the code, or even attempting to read/write to the same file.
The amount of time it takes for a process to execute is not easily calculated, so we are unable to predict which process will complete first; we must therefore implement algorithms to deal with the issues that concurrency creates.

The amount of time a process takes to complete depends on the following:

1. The activities of other processes
2. The way the operating system handles interrupts
3. The scheduling policies of the operating system
Advantages of Concurrency :
Running of multiple applications
Having concurrency allows the operating system to run multiple applications at the same
time.
Better resource utilization
Concurrency allows resources that are not being used by one application to be used by other applications.
Better average response time
Without concurrency, each application has to be run to completion before the next one
can be run.
Better performance
Concurrency can provide better performance. When one application uses only the processor and another application uses only the disk drive, the time to run both applications concurrently to completion is shorter than the time to run each application consecutively.
Drawbacks of Concurrency :
When concurrency is used, it is generally necessary to protect multiple processes/threads from one another.

Concurrency requires the coordination of multiple processes/threads through additional sequences of operations within the operating system.

Additional performance enhancements are necessary within the operating system to provide for switching among applications.

Sometimes running too many applications concurrently leads to severely degraded performance.
Issues of Concurrency :

Non-atomic
Operations that are non-atomic but interruptible by multiple processes can cause problems. (An atomic operation is one that runs to completion without interruption; an operation that can be interrupted partway through by another process/thread is non-atomic.)

Race conditions
A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.
Blocking
A process that is blocked is one that is waiting for some event, such as a resource becoming available or the completion of an I/O operation.

Processes can block waiting for resources.

A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.
Issues of Concurrency :
Starvation
A problem encountered in concurrent computing where a process is perpetually denied the resources it needs to do its work. Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks.

Deadlock
In concurrent computing, a deadlock is a state in which each member of a group waits for another member, including itself, to take an action, such as sending a message or, more commonly, releasing a lock.
Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, where software and hardware locks are used to arbitrate shared resources and implement process synchronization.
Requirements for Mutual Exclusion

Mutual exclusion is a concurrency control property which is introduced to prevent race conditions.

It is the requirement that a process cannot enter its critical section while another concurrent process is currently present or executing in its critical section, i.e., only one process is allowed to execute in the critical section at any given instant of time.
REQUIREMENTS FOR MUTUAL EXCLUSION:

1) Only one process at a time is allowed in the critical section for a resource.

2) A process that halts in its non-critical section must do so without interfering with other processes.

3) No deadlock and no starvation.

4) A process must not be delayed access to a critical section when there is no other process using it.

5) No assumptions are made about relative process speeds or the number of processes.

6) A process remains inside its critical section for a finite time only.
Critical Section Problem
The part of the process where the code for accessing the shared resources is written is called the critical section (CS) of that process.
Critical Section Problem

We know that there are multiple processes in the system and these processes access
shared resources.

When these processes access the shared resources simultaneously, the results obtained may be inconsistent.

To avoid such inconsistency in the result the processes must cooperate while accessing
the shared resources.

Therefore, the code to access shared resources is written under the critical section of the
process.

Let us understand the inconsistency in the result while accessing the shared resource
simultaneously with the help of an example.
Look at the figure above: suppose there are two processes P0 and P1.
Both share a common variable A=0.
While accessing A, both processes increment the value of A by 1.
First case

The order of execution of the processes is P0, P1 respectively.

Process P0 reads the value of A=0, increments it by 1 (A=1) and writes the incremented value to A.

Now, process P1 reads the value of A=1, increments it by 1 (A=2) and writes the incremented value to A.

So, after both processes P0 & P1 finish accessing the variable A, the value of A is 2.
Second case

1. Consider that process P0 has read the variable A=0. Suddenly a context switch happens and P1 takes charge and starts executing. P1 increments the value of A (A=1).

2. After executing, P1 gives charge back to P0. The value of A that P0 read earlier is still 0, so when it resumes executing it increments that stale value of A from 0 to 1.

So here, when both processes P0 & P1 end up accessing the variable A, the value of A is 1, which is different from the value of A=2 in the first case.

This type of condition, where the sequence of execution of the processes affects the result, is called a race condition.
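The two cases above can be reproduced deterministically with a small sketch (Python is used here purely for illustration; the function `run` and the schedule lists are our own illustrative names, not part of the slides):

```python
# Deterministic simulation of the two interleavings described above.
# Each "process" performs: read A, compute A + 1, write A back.

def run(schedule):
    """Run two incrementing processes under a fixed interleaving.

    schedule is a list of (process_id, step) pairs, where step is
    'read' or 'write'. Returns the final value of the shared variable A.
    """
    A = 0                          # shared variable
    local = {0: None, 1: None}     # each process's private copy of A
    for pid, step in schedule:
        if step == 'read':
            local[pid] = A         # read shared A into a "register"
        else:                      # 'write'
            A = local[pid] + 1     # write back the incremented value
    return A

# First case: P0 runs to completion, then P1 (no interleaving).
first = run([(0, 'read'), (0, 'write'), (1, 'read'), (1, 'write')])

# Second case: context switch after P0's read, so P0 later overwrites
# P1's update with its stale value.
second = run([(0, 'read'), (1, 'read'), (1, 'write'), (0, 'write')])

print(first, second)   # 2 1
```

The lost update in the second case is exactly the race condition: P1's increment is overwritten by P0's write of a stale value.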


The critical section problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}.

Each process has a segment of code, called a critical section, in which the process
may be changing common variables, updating a table, writing a file, and so on.

The important feature of the system is that, when one process is executing in its
critical section, no other process is to be allowed to execute in its critical section.

Thus, the execution of critical sections by the processes is mutually exclusive in time.

The critical-section problem is to design a protocol that the processes can use to
cooperate.
Each process must request permission to enter its critical section.

The section of code implementing this request is the entry section.

The critical section may be followed by an exit section.

The remaining code is the remainder section.


1. Mutual Exclusion: The solution must ensure that, at any point in time, only one process can be in its critical section.

2. Bounded Waiting: Processes interested in executing their critical sections must not wait indefinitely to enter them; the waiting time must be bounded.

3. Progress: A process not interested in entering its critical section must not block other processes from entering their critical sections.
Mutual Exclusion : Hardware Support

Synchronization Hardware
Synchronization hardware is a hardware-based solution to the critical section problem: multiple processes sharing common resources are synchronized to avoid inconsistent results.

Hardware instructions can be used to solve the critical section problem effectively. Hardware solutions are often easier, and they also improve the efficiency of the system.

The hardware-based solution to the critical section problem is based on a simple tool, i.e., a lock.

The solution implies that before entering its critical section a process must acquire a lock, and must release the lock when it exits its critical section. Using a lock also prevents race conditions.
Hardware Solution to Critical-Section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

TestAndSet and Swap are Hardware Instructions

Hardware synchronization provides two kinds of hardware instructions: TestAndSet and Swap.

Mutual exclusion using the TestAndSet() hardware instruction

The TestAndSet() instruction can be implemented to achieve mutual exclusion, bounded waiting, and progress.

Example
1. Globally declare a Boolean variable lock and initialize it to false.

2. Consider two processes P0 and P1 that are interested in entering their critical sections.

3. The structure for achieving mutual exclusion is as follows:


Let's say process P0 wants to enter the critical section. It executes:
1. Using the TestAndSet() instruction, P0 sets the lock value to true to acquire the lock and enters the critical section.

2. Now, while P0 is in its critical section, process P1 also wants to enter its critical section.

3. P1 executes the do-while loop and invokes the TestAndSet() instruction, only to see that the lock is already set to true, which means some process is in the critical section.

4. This makes P1 repeat the while loop until P0 turns the lock to false.

5. Once process P0 completes executing its critical section, it sets the lock variable to false.

6. Then P1 can set the lock variable to true using the TestAndSet() instruction and enter its critical section.

This is how you can achieve mutual exclusion with the do-while structure above, i.e., it lets only one process execute its critical section at a time.
Swap Hardware Instruction
1. Like the TestAndSet() instruction, the swap() hardware instruction is also an atomic instruction.
2. The difference is that it operates on the two variables provided as its parameters.
3. The structure of the swap() instruction is :
To achieve mutual exclusion using the swap() instruction, the structure we use is as follows:

1. The structure above operates on one global shared Boolean variable lock and another local Boolean variable key.

2. Both are initially set to false.

3. A process P0 interested in executing its critical section executes the code above, sets lock to true, and enters its critical section.

4. This refrains (blocks) other processes from executing their critical sections, satisfying mutual exclusion.
Mutual Exclusion : Operating System Support (Semaphores and Mutex)

Semaphores
1. Semaphores serve an important purpose: mutual exclusion and condition synchronization. A semaphore is a type of signaling mechanism.
2. Semaphores are integer variables that are used to solve the critical section problem.
3. Semaphores use two atomic operations, wait and signal, for process synchronization.
4. The wait and signal operations can modify a semaphore.
5. A semaphore S is an integer variable that, apart from initialization, is accessed only through the two standard atomic operations wait and signal.
Semaphores
Three kinds of operations are performed on semaphores;
1. To initialize the semaphore
2. To increment the semaphore value
3. To decrement the semaphore value
Binary Semaphores
Binary Semaphore strictly provides mutual exclusion.
The semaphore can have only two values, 0 or 1.
It is used to implement the solution of critical section problems with multiple
processes.

Let P1, P2, P3, ..., PN be the processes that want to enter the critical section.
Initially S=1 (semaphore), with P (wait) and V (signal) operations.
Some points regarding the P and V operations:

P operation is also called wait, sleep, or down operation, and V operation is also
called signal, wake-up, or up operation.

Both operations are atomic, and the semaphore S is always initialized to one. Here atomic means that the read, modify, and update of the variable happen together with no preemption, i.e., in between the read, modify, and update, no other operation is performed that may change the variable.

A critical section is surrounded by both operations to implement process synchronization. See the image below: the critical section of process P is in between the P and V operations.
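A minimal sketch of a critical section surrounded by the P and V operations, using Python's threading.Semaphore as the binary semaphore (the names S and shared are illustrative):

```python
import threading

S = threading.Semaphore(1)   # binary semaphore, initially 1
shared = 0                   # shared data protected by S

def process(n_iters):
    global shared
    for _ in range(n_iters):
        S.acquire()      # P / wait / down: S goes 1 -> 0, or the caller blocks
        shared += 1      # critical section
        S.release()      # V / signal / up: S goes 0 -> 1, waking a waiter

threads = [threading.Thread(target=process, args=(5000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)   # 10000 - every increment happened under mutual exclusion
```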
Semaphores
Counting semaphores
1. A counting semaphore has an integer value that can range over an unrestricted domain.
2. A counting semaphore has two components: an integer value and an associated waiting list (usually a queue).
3. The value of a counting semaphore may be positive or negative.

A positive value indicates the number of processes that can be present in the critical section at the same time.

A negative value indicates the number of processes that are blocked in the waiting list (queue).
Counting semaphores
The wait operation is executed when a process tries to enter the critical section.
Wait operation decrements the value of counting semaphore by 1.

Case-01: Counting Semaphore Value >= 0

If the resulting value of counting semaphore is greater than or equal to 0, process is allowed
to enter the critical section.

Case-02: Counting Semaphore Value < 0

If the resulting value of counting semaphore is less than 0, process is not allowed to enter
the critical section. In this case, process is put to sleep in the waiting list (queue) .
Counting semaphores

The signal operation is executed when a process takes exit from the critical
section.
Signal operation increments the value of counting semaphore by 1.

Then, following two cases are possible-

Case-01: Counting Semaphore <= 0

If the resulting value of counting semaphore is less than or equal to 0, a process is chosen
from the waiting list and wake up to execute.

Case-02: Counting Semaphore > 0

If the resulting value of counting semaphore is greater than 0, no action is taken.


struct semaphore
{
    int value;
    Queue type L;
}

Wait (semaphore s)
{
    s.value = s.value - 1;
    if (s.value < 0)
    {
        put process (PCB) in L;
        sleep();
    }
    else
        return;
}

Signal (semaphore s)
{
    s.value = s.value + 1;
    if (s.value <= 0)
    {
        select a process (PCB) from L;
        wake up();
    }
}
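The pseudocode above can be sketched as a runnable class. This Python rendering is illustrative (the class name, and the use of a Condition to stand in for sleep()/wake up(), are our assumptions); it mirrors the integer value and the queue L of the struct:

```python
import threading
from collections import deque

class CountingSemaphore:
    """Mirror of the Wait/Signal pseudocode above, built on a Condition."""
    def __init__(self, value):
        self.value = value
        self.L = deque()                    # queue of blocked "processes"
        self._cond = threading.Condition()  # stands in for sleep()/wake up()

    def wait(self):
        with self._cond:
            self.value -= 1
            if self.value < 0:
                me = threading.current_thread()
                self.L.append(me)           # put process in L
                while me in self.L:         # sleep() until selected
                    self._cond.wait()

    def signal(self):
        with self._cond:
            self.value += 1
            if self.value <= 0 and self.L:
                self.L.popleft()            # select a process from L
                self._cond.notify_all()     # wake up()

s = CountingSemaphore(2)   # at most 2 processes in the critical section
s.wait()
s.wait()
print(s.value)   # 0 - two processes inside, none blocked
s.signal()
print(s.value)   # 1 - one slot free again
```

Note how the value tracks the cases in the slides: it stays >= 0 while entries succeed, and would go negative (with the caller queued in L) if a third wait() arrived before a signal().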
Mutex
Mutex and Semaphore both provide synchronization services but they are not the
same
1. Mutex is a mutual exclusion object that synchronizes access to a resource.

2. It is created with a unique name at the start of a program.

3. The Mutex is a locking mechanism that makes sure only one thread can acquire the Mutex at a time and enter the critical section.

4. This thread only releases the Mutex when it exits the critical section.
Mutex
1. A Mutex is different from a semaphore: it is a locking mechanism, while a semaphore is a signaling mechanism.

2. A binary semaphore can be used as a Mutex, but a Mutex can never be used as a semaphore.
Programming Language Support (Monitors)

1. Monitors are used for process synchronization.

2. With the help of programming languages, we can use a monitor to achieve mutual exclusion among processes.

3. Example of monitors: Java synchronized methods; Java also offers the notify() and wait() constructs.

4. Monitors are defined as a construct of the programming language which helps in controlling shared data access.

5. A monitor is a module or package which encapsulates a shared data structure, procedures, and the synchronization between concurrent procedure invocations.
Characteristics of Monitors.
1. Inside the monitors, we can only execute one process at a time.

2. Monitors are the group of procedures, and condition variables that are merged

together in a special type of module.

3. If a process is running outside the monitor, then it cannot access the monitor's internal variables. But a process can call the procedures of the monitor.

4. Monitors offer a high level of synchronization.

5. Monitors were derived to simplify the complexity of synchronization problems.

6. There is only one process that can be active at a time inside the monitor.
Components of Monitor
There are four main components of the monitor:
1. Initialization
2. Private data
3. Monitor procedure
4. Monitor entry queue

Initialization: – Initialization comprises the code, and when the monitors are created, we use
this code exactly once.

Private Data: – Private data is another component of the monitor. It comprises all the private
data, and the private data contains private procedures that can only be used within the
monitor. So, outside the monitor, private data is not visible.

Monitor Procedure: – Monitors Procedures are those procedures that can be called from
outside the monitor.

Monitor Entry Queue: – The monitor entry queue is another essential component of the monitor; it holds all the threads that are waiting to call a monitor procedure.
Condition Variables

There are two types of operations that we can perform on the condition variables of
the monitor:

Wait
Signal
Let's say we have 2 condition variables:

condition x, y; // Declaring variables

Wait operation
x.wait(): A process performing the wait operation on a condition variable is suspended.
The suspended processes are placed in the block queue of that condition variable.

Note: Each condition variable has its own unique block queue.

Signal operation
x.signal(): When a process performs the signal operation on a condition variable, one of the blocked processes is given a chance to run.
Disadvantage of Monitors:
Monitors have to be implemented as part of the programming language.
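A monitor can be sketched in a language without a built-in monitor construct by pairing one lock (so only one process is active inside at a time) with a condition variable for the x.wait()/x.signal() operations. The class and method names in this Python sketch are our own illustration:

```python
import threading

class Monitor:
    """A tiny monitor-style object: one lock guards every procedure,
    and a condition variable provides wait/signal."""
    def __init__(self):
        self._lock = threading.Lock()                 # one process active inside
        self.ready = threading.Condition(self._lock)  # condition variable
        self.item = None                              # shared (private) data

    def put(self, value):            # monitor procedure
        with self._lock:
            self.item = value
            self.ready.notify()      # x.signal(): wake one blocked process

    def get(self):                   # monitor procedure
        with self._lock:
            while self.item is None:
                self.ready.wait()    # x.wait(): suspend in the block queue
            value, self.item = self.item, None
            return value

m = Monitor()
out = []
t = threading.Thread(target=lambda: out.append(m.get()))
t.start()          # the getter blocks inside the monitor until data arrives
m.put(42)
t.join()
print(out)   # [42]
```

Holding the same lock inside both procedures is what gives the monitor property that only one process can be active inside it at a time.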
2. Classical synchronization problems:

1. Readers/Writers Problem,
2. Producer and Consumer problem,
3. Inter-process communication (Pipes, shared memory: system V)
Classical Synchronization Problems: Readers/Writers Problem
1. The readers-writers problem relates to an object, such as a file, that is shared between multiple processes.

2. Some of these processes are readers, i.e., they only want to read the data from the object, and some of the processes are writers, i.e., they want to write into the object.

3. The readers-writers problem is used to manage synchronization so that there are no problems with the object's data.

4. For example, if two readers access the object at the same time there is no problem. However, if two writers or a reader and a writer access the object at the same time, there may be problems.

5. To solve this situation, a writer should get exclusive access to the object, i.e., when a writer is accessing the object, no other reader or writer may access it.

6. However, multiple readers can access the object at the same time.

7. This can be implemented using semaphores. The code for the reader and writer processes in the readers-writers problem is given as follows −
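A common semaphore-based sketch of the reader and writer processes (the "first" readers-writers solution, using a wrt semaphore for writer exclusion and a mutex protecting the reader count; all names here are illustrative):

```python
import threading

wrt = threading.Semaphore(1)     # writer exclusion
mutex = threading.Semaphore(1)   # protects read_count
read_count = 0
data = 0                         # the shared object
log = []                         # values observed by readers

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:          # first reader locks writers out
        wrt.acquire()
    mutex.release()

    log.append(('read', data))   # reading section (readers may overlap)

    mutex.acquire()
    read_count -= 1
    if read_count == 0:          # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(value):
    global data
    wrt.acquire()                # exclusive access
    data = value                 # writing section
    wrt.release()

writer(10)
threads = [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data, log)
```

Because only the first reader acquires wrt and only the last releases it, any number of readers overlap while writers wait; a writer holding wrt excludes everyone.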
Producer and Consumer problem
The producer consumer problem is a synchronization problem

There is a fixed size buffer and the producer produces items and enters them into
the buffer.

The consumer removes the items from the buffer and consumes them.

A producer should not produce items into the buffer when the consumer is
consuming an item from the buffer and vice versa. So the buffer should only be
accessed by the producer or consumer at a time

The producer consumer problem can be resolved using semaphores

The codes for the producer and consumer process are given as follows −
What are the Problems in the Producer-Consumer Problem?

There are various types of problems in the Producer-Consumer problem:

1. The producer and consumer cannot access the buffer at the same time.

2. The producer cannot produce data if the memory buffer is full; only when the memory buffer is not full can the producer produce data.

3. The consumer can only consume data if the memory buffer is not empty. When the memory buffer is empty, the consumer is not allowed to take data from it.
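The producer and consumer processes can be sketched with the three semaphores mutex, empty and full, initialized to 1, n and 0 respectively. This Python rendering is illustrative (buffer size and item values are our own):

```python
import threading
from collections import deque

n = 5                            # buffer capacity (illustrative)
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(n)   # counts empty slots, initially n
full = threading.Semaphore(0)    # counts full slots, initially 0

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): one fewer empty slot
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # put the item in the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full): one more full slot

consumed = []

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full): one fewer full slot
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())   # remove an item
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty): one more empty slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start()
c.start()
p.join()
c.join()
print(consumed)   # items arrive in order, none lost or duplicated
```

The empty semaphore blocks the producer when the buffer is full, full blocks the consumer when it is empty, and mutex keeps their buffer accesses from interleaving.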


In the above code, mutex, empty and full are semaphores. Here mutex is initialized to 1, empty is
initialized to n (maximum size of the buffer) and full is initialized to 0.
The mutex semaphore ensures mutual exclusion. The empty and full semaphores count the number of
empty and full spaces in the buffer.
After the item is produced, wait operation is carried out on empty. This indicates that the empty space in
the buffer has decreased by 1. Then wait operation is carried out on mutex so that consumer process
cannot interfere.
After the item is put in the buffer, signal operations are carried out on mutex and full. The former indicates that the consumer process can now act, and the latter shows that the number of full slots in the buffer has increased by 1.
The wait operation is carried out on full. This indicates that items in the buffer have
decreased by 1. Then wait operation is carried out on mutex so that producer process
cannot interfere.
Then the item is removed from the buffer. After that, signal operations are carried out on mutex and empty. The former indicates that the producer process can now act, and the latter shows that the empty space in the buffer has increased by 1.
3. Deadlock:
Deadlock Characterization,
Methods for Handling Deadlocks,
Deadlock Prevention,
Deadlock Avoidance,
Deadlock Detection,
Recovery from Deadlock
What is Deadlock
A deadlock happens in an operating system when two or more processes need some resource to complete their execution that is held by another process.
What is Deadlock

A deadlock occurs when there are two or more processes that hold some resources and wait for resources held by the other(s).

For example, in the above diagram, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.
Deadlock Characterization - conditions under which deadlock can arise

Mutual Exclusion: One or more resources are non-shareable (only one process can use a resource at a time).

Hold and Wait: A process is holding at least one resource and waiting for other
resources also.
No Preemption: A resource cannot be taken from a process unless the process
releases the resource.
1. A resource cannot be preempted from a process by force.
2. A process can only release a resource voluntarily.
3. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will
only be released when Process 1 relinquishes it voluntarily after its execution is
complete.

Circular Wait: A set of processes are waiting for each other in circular form.
Methods for Handling Deadlocks
1. Deadlock Ignorance
1. Deadlock ignorance is the most widely used approach among all the mechanisms.
2. In this approach, the operating system assumes that deadlock never occurs.
3. It simply ignores deadlock.
4. This approach is best suited to a single end-user system where the user uses the system only for browsing and other normal tasks.
5. Operating systems like Windows and Linux mainly focus on performance.
6. The performance of the system decreases if it uses a deadlock handling mechanism all the time; if deadlock happens 1 time out of 100, then it is completely unnecessary to use the deadlock handling mechanism all the time.
7. In these types of systems, the user simply has to restart the computer in the case of a deadlock. Windows and Linux mainly use this approach.
Methods for Handling Deadlocks

2. Deadlock prevention
1. Deadlock happens only when Mutual Exclusion, Hold and Wait, No Preemption and Circular Wait hold simultaneously.

2. If it is possible to violate (destroy) one of the four conditions at all times, then deadlock can never occur in the system.

3. The idea behind the approach is very simple - we have to defeat one of the four conditions - but there can be a big argument about its physical implementation in the system.
Methods for Handling Deadlocks

3. Deadlock avoidance

1. In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs.

2. The process continues as long as the system is in a safe state. Once the system moves to an unsafe state, the OS has to backtrack one step.

3. In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.
Methods for Handling Deadlocks

4. Deadlock detection and recovery


This approach let the processes fall in deadlock and then periodically check whether deadlock

occur in the system or not.

If it occurs then it applies some of the recovery methods to the system to get rid of deadlock.
Deadlock Prevention
1. Mutual Exclusion

1. A mutually exclusive resource can never be used by more than one process simultaneously, which is fair enough, but this is the main reason behind deadlock.

2. If a resource could be used by more than one process at the same time, then processes would never have to wait for any resource.

3. However, if we are able to stop resources behaving in a mutually exclusive manner, then deadlock can be prevented.
Deadlock Prevention
1. Mutual Exclusion Deadlock Prevention using Spooling
Spooling
1. For a device like a printer, spooling can work.
2. There is memory associated with the printer which stores jobs from each process.
3. Later, the printer collects all the jobs and prints each one of them according to FCFS. By using this mechanism, a process doesn't have to wait for the printer and can continue with whatever it was doing.
4. Later, it collects the output when it is produced.
Deadlock Prevention
1. Mutual Exclusion Deadlock Prevention using Spooling

Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds of problems.

1. It cannot be applied to every resource.

2. After some point in time, a race condition may arise between the processes to get space in the spool.

We cannot force a resource to be used by more than one process at the same time, since it would not be fair and some serious performance problems may arise. Therefore, we cannot violate mutual exclusion for a process in practice.
Deadlock Prevention
2. Hold and Wait Deadlock Prevention
1. Deadlock occurs because there can be more than one process holding one resource and waiting for another in a cyclic order.

2. We therefore have to find some mechanism by which a process either doesn't hold any resource or doesn't wait.

3. That means a process must be assigned all the necessary resources before its execution starts.

4. A process must not wait for any resource once its execution has started.

5. This could be implemented if a process declared all its resources initially.

6. However, this can't be done in a computer system in practice, because a process can't determine its necessary resources initially.
Deadlock Prevention
2. Hold and Wait Deadlock Prevention

A process is the set of instructions which are executed by the CPU. Each instruction may demand multiple resources at multiple times. The need cannot be fixed by the OS in advance.

The problems with this approach are:

1. It is practically not possible.
2. The possibility of starvation increases, due to the fact that some process may hold a resource for a very long time.
Deadlock Prevention
3. No Preemption
1. Deadlock arises due to the fact that a resource can't be taken away from a process once the process has acquired it. However, if we take the resource away from the process which is causing the deadlock, then we can prevent deadlock.

2. This is not a good approach, since if we take away a resource which is being used by a process, then all the work it has done till now can become inconsistent.
Deadlock Prevention
4. Circular Wait
1. To violate circular wait, we can assign a priority number to each resource.

2. A process can't request a resource with a lower priority number than one it already holds, i.e., resources must be requested in increasing order of priority number.

3. This ensures that no cycle of waiting processes can be formed.

4. Among all the methods, violating circular wait is the only approach that can be implemented practically.
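The resource-numbering idea can be sketched as lock ordering: every process acquires the locks it needs in increasing resource number, so a circular wait cannot form. The resource numbers and process names in this Python sketch are illustrative:

```python
import threading

# Hypothetical numbered resources; every process must request
# them in increasing order, so no cycle of waiters can form.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire the numbered resources in ascending order."""
    for rid in sorted(needed):
        resources[rid].acquire()

def release_all(needed):
    for rid in sorted(needed, reverse=True):
        resources[rid].release()

done = []

def process(name, needed):
    acquire_in_order(needed)     # both processes lock 1 before 2
    done.append(name)            # work using both resources
    release_all(needed)

# Without ordering, P1 taking 1 then 2 while P2 takes 2 then 1 can
# deadlock; sorting the requests makes that interleaving impossible.
t1 = threading.Thread(target=process, args=('P1', {1, 2}))
t2 = threading.Thread(target=process, args=('P2', {2, 1}))
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(done))   # ['P1', 'P2'] - both finished, no deadlock
```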
Deadlock Avoidance
Deadlock avoidance is the simplest and most useful model, in which each process declares the maximum number of resources of each type that it may need.
Banker's Algorithm
The banker's algorithm is a resource allocation and deadlock avoidance algorithm.
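The safety check at the heart of the banker's algorithm can be sketched as follows. The function name is ours, and the allocation figures are a classic textbook example used purely for illustration:

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: return True if some ordering lets every
    process finish with the currently available resources."""
    n = len(allocation)                    # number of processes
    # need[i] = maximum[i] - allocation[i], per resource type
    need = [[m - a for m, a in zip(maximum[i], allocation[i])]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Process i can run to completion, then releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

# Illustrative numbers: 5 processes, 3 resource types.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxm = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]

safe = is_safe([3, 3, 2], alloc, maxm)     # a safe sequence exists
unsafe = is_safe([0, 0, 0], alloc, maxm)   # nothing can finish
print(safe, unsafe)   # True False
```

The OS would run this check before granting each request: if granting would leave the system in an unsafe state, the request is deferred.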