Unit 2 Notes Srmcem
Introduction to Process
Synchronization
On the basis of synchronization, processes are categorized as one of the following
two types:
Independent Process: The execution of one process does not affect the
execution of other processes.
Cooperative Process: A process that can affect or be affected by other
processes executing in the system.
The process synchronization problem arises with cooperative processes as well,
because resources are shared among cooperative processes.
Race Condition:
When more than one process executes the same code or accesses the same
memory or shared variable, the output or the value of the shared variable may be
wrong. All the processes race, each claiming that its output is correct; this
condition is known as a race condition.
When several processes access and manipulate the same data concurrently, the
outcome depends on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section.
This happens when the result of multiple thread execution in the critical section
differs according to the order in which the threads execute. Race conditions in
critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can
prevent race conditions.
A critical section is a code segment that can be accessed by only one process at a
time. The critical section contains shared variables that need to be synchronized to
maintain the consistency of data variables. So the critical section problem means
designing a way for cooperative processes to access shared resources without
creating data inconsistencies.
In the entry section, the process requests for entry in the Critical Section.
Any solution to the critical section problem must satisfy the following requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes
are waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
in the critical section next, and the selection can not be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted. A process
gets bounded, rather than unbounded, access to the critical section; otherwise
other processes could starve.
No assumption about hardware or speed: the solution should not depend on the
hardware configuration, such as a 32-bit or 64-bit processor, or on the
processing speed of the processor.
Example 1: Understanding the need for synchronization

int shared = 5;

Process 1: x = shared;  x++;  sleep(1);  shared = x;
Process 2: y = shared;  y--;  sleep(1);  shared = y;

In the above example, if Process 1 is preempted at sleep(1) and control is
transferred to Process 2, and Process 2 is then also preempted at sleep(1), the
final value of the shared variable comes out as 4 rather than 5. Process 1
increases it by 1 and Process 2 decreases it by 1, so the two changes should
cancel and the value should remain 5.
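The lost update above can be reproduced and repaired in a short sketch. Here is a minimal illustration in Python (the language choice and the lock are illustrative; the lock plays the role of the missing synchronization):

```python
import threading

shared = 5  # initial value, as in the example above
lock = threading.Lock()

def process1():
    global shared
    with lock:          # entry section: only one thread may proceed
        x = shared      # read the shared variable into a local copy
        x += 1          # modify the local copy
        shared = x      # write it back

def process2():
    global shared
    with lock:
        y = shared
        y -= 1
        shared = y

t1 = threading.Thread(target=process1)
t2 = threading.Thread(target=process2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # 5: with the lock, the increment and decrement cancel out
```

Without the lock, either thread can be preempted between its read and its write, and one update is lost, exactly as in the sleep(1) scenario above.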
Producer-Consumer problem
The Producer-Consumer problem is a classical multi-process synchronization
problem; that is, we are trying to achieve synchronization between more than one
process.
The task of the Producer is to produce the item, put it into the memory buffer,
and again start producing items. Whereas the task of the Consumer is to consume
the item from the memory buffer.
Let's understand the problem.
The following points describe the constraints that give rise to problems in the
Producer-Consumer setup:
o The producer should produce data only when the buffer is not full. In case it
is found that the buffer is full, the producer is not allowed to store any data
into the memory buffer.
o Data can only be consumed by the consumer if and only if the memory
buffer is not empty. In case it is found that the buffer is empty, the
consumer is not allowed to use any data from the memory buffer.
o Accessing memory buffer should not be allowed to producer and consumer
at the same time.
Producer Code:

int in = 0, out = 0, count = 0;
while(1){
    produce an item itemp;
    while(count == n);          // buffer is full, wait
    buffer[in] = itemp;
    in = (in + 1) % n;
    count++;
}

Consumer Code:

while(1){
    while(count == 0);          // buffer is empty, wait
    itemc = buffer[out];
    out = (out + 1) % n;
    count--;
    consume the item itemc;
}

Process synchronization is not achieved if the producer or consumer code is
preempted in the middle of count++ or count--, which generates a race condition.
1. TestAndSet :

A plain lock variable does not guarantee mutual exclusion. Consider the entry and
exit code built from a lock variable:

1. while(lock == 1);    // entry section: wait until the lock is free
2. lock = 1;            // take the lock
3. critical section
4. lock = 0;            // exit code: release the lock

Case 1:
P1 executes steps 1,2,3,4 and then P2 executes steps 1,2,3,4.
Mutual exclusion and progress are satisfied.

Case 2:
P1 executes step 1 and is preempted; P2 executes steps 1,2,3; P1 then resumes
with steps 2,3. No mutual exclusion: more than one process can enter the
critical section at the same time, because the test (step 1) and the set (step 2)
are not atomic.

TestAndSet makes the test and the set a single atomic instruction:

boolean TestAndSet (boolean &target){
    boolean rv = target;
    target = true;
    return rv;
}

while(1){
    while (TestAndSet(lock));   // entry section
    critical section
    lock = false;               // exit section
    remainder section
}
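A rough simulation of a TestAndSet spinlock is sketched below in Python. Note the hedge: real hardware provides test-and-set as a single indivisible instruction; the internal Lock here merely imitates that atomicity for illustration.

```python
import threading

class TestAndSetLock:
    """Spinlock built on a simulated atomic test_and_set.
    On real hardware test_and_set is one indivisible instruction;
    here an internal Lock simulates that indivisibility."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()

    def test_and_set(self):
        with self._atomic:          # make read + write indivisible
            old = self._flag        # rv = target
            self._flag = True       # target = true
            return old

    def acquire(self):
        while self.test_and_set():  # spin while the lock was already held
            pass

    def release(self):
        self._flag = False          # lock = false

counter = 0
lock = TestAndSetLock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1                # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: no increments are lost
```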
2. Swap :
Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting
lock to true in the swap function, key is set to true and then swapped with lock. So,
again, when a process is in the critical section, no other process gets to enter it as
the value of lock is true. Mutual exclusion is ensured. Again, out of the critical
section, lock is changed to false, so any process finding it false gets to enter the
critical section. Progress is ensured. However, bounded waiting is again not
ensured, for the very same reason.
Swap Pseudocode –
// Shared variable lock initialized to false
// and individual (per-process) key initialized to false
boolean lock;
boolean key;
while (1){
key = true;
while(key)
swap(lock,key);
critical section
lock = false;
remainder section
}
3. Unlock and Lock :
Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it
adds another value, waiting[i], for each process which checks whether or not a
process has been waiting. A ready queue is maintained with respect to the process in
the critical section. All the processes coming in next are added to the ready queue
with respect to their process number, not necessarily sequentially. Once the ith
process gets out of the critical section, it does not turn lock to false so that any
process can avail the critical section now, which was the problem with the previous
algorithms. Instead, it checks if there is any process waiting in the queue. The queue
is taken to be a circular queue. j is considered to be the next process in line and the
while loop checks from jth process to the last process and again from 0 to (i-1)th
process if there is any process waiting to access the critical section. If there is no
process waiting then the lock value is changed to false and any process which
comes next can enter the critical section. If there is, then that process’ waiting value
is turned to false, so that the first while loop becomes false and it can enter the
critical section. This ensures bounded waiting. So the problem of process
synchronization can be solved through this algorithm.
Unlock and Lock Pseudocode –
// Shared variable lock initialized to false
// and individual key initialized to false
boolean lock;
boolean key;
boolean waiting[n];   // one waiting flag per process
while(1){
waiting[i] = true;
key = true;
while(waiting[i] && key)
key = TestAndSet(lock);
critical section
j = (i+1) % n;
while(j != i && !waiting[j])
j = (j+1) % n;
if(j == i)
lock = false;
else
waiting[j] = false;
remainder section
}
Turn variable approach:

int turn = 0;

Process 1: while(turn != 0);  critical section;  turn = 1;
Process 2: while(turn != 1);  critical section;  turn = 0;

1. Mutual exclusion is satisfied.
2. Progress is not satisfied: since turn is initialized to 0, Process 2 cannot
enter the critical section until Process 1 has entered it first, even if Process 1
does not want to enter.
3. Bounded waiting is satisfied: once P1 has executed its critical section, it
cannot enter again until P2 has taken its turn.
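The strict alternation above can be sketched as follows (Python for illustration; the busy-waiting loops correspond to the while(turn != ...) entry sections, and the iteration count is arbitrary):

```python
import threading

turn = 0
order = []   # records the order in which the critical section is entered

def p1():
    global turn
    for _ in range(3):
        while turn != 0:     # entry section: busy-wait for our turn
            pass
        order.append("P1")   # critical section
        turn = 1             # exit section: hand the turn to P2

def p2():
    global turn
    for _ in range(3):
        while turn != 1:
            pass
        order.append("P2")
        turn = 0

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(order)  # ['P1', 'P2', 'P1', 'P2', 'P1', 'P2']: strict alternation
```

The output shows both why mutual exclusion holds and why progress fails: the processes can only ever enter in alternating order, regardless of who actually wants the critical section.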
Semaphores in Process
Synchronization
To get rid of the problem of wasted wake-up signals, Dijkstra proposed an
approach that involves storing all the wake-up calls. Instead of giving the wake-up
calls directly to the consumer, the producer can store each wake-up call in a
variable, and any of the consumers can read it whenever it needs to do so.
A semaphore is the variable that stores all the wake-up calls being transferred
from producer to consumer. It is a variable on which read, modify and update
happen atomically in kernel mode.
A semaphore cannot be implemented in user mode because a race condition may
always arise when two or more processes try to access the variable
simultaneously. It always needs support from the operating system to be
implemented.
According to the demand of the situation, Semaphore can be divided into two
categories.
1. Counting Semaphore
2. Binary Semaphore or Mutex
Counting Semaphore
There are scenarios in which more than one process needs to execute in the
critical section simultaneously. A counting semaphore is used when we need to
have more than one process in the critical section at the same time.
A process that wants to enter the critical section first decreases the semaphore
value by 1 and then checks whether the value has become negative. If it is
negative, the process is pushed into the list of blocked processes (i.e. the queue);
otherwise it enters the critical section.
When a process exits the critical section, it increases the counting semaphore by
1 and then checks whether the value is negative or zero. If it is, at least one
process is waiting in the blocked state; hence, to ensure bounded waiting, the
first process in the list of blocked processes is woken up and enters the critical
section.
The processes in the blocked list are woken in the order in which they slept. If
the value of the counting semaphore is negative, its magnitude gives the number
of processes in the blocked state, while if it is positive it gives the number of
slots available in the critical section.
Binary Semaphore or Mutex

struct Bsemaphore
{
    enum value {0, 1};  // value is an enumerated data type which can
                        // only have two values, 0 or 1
    Queue L;            // queue of blocked processes
};

/* L contains the PCBs of all processes that were blocked
   while performing an unsuccessful down operation. */
Down (Bsemaphore S)
{
    if (S.value == 1)   // if a slot is available in the critical
                        // section, then let the process enter
    {
        S.value = 0;    // set the value to 0 so that no other process can read it as 1
    }
    else
    {
        put the process (PCB) in S.L;   // if no slot is available, then let
                                        // the process wait in the blocked queue
        sleep();
    }
}
Up (Bsemaphore S)
{
    if (S.L is empty)   // an empty blocked list implies that no process
                        // is waiting to enter the critical section
    {
        S.value = 1;
    }
    else
    {
        select a process from S.L;
        wakeup();       // otherwise wake the first process of the blocked queue
    }
}
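Most thread libraries expose semaphores directly; for instance, Python's threading.Semaphore, where acquire() plays the role of Down/wait and release() of Up/signal. A minimal sketch using a binary semaphore as a mutex (thread and iteration counts are arbitrary):

```python
import threading

# A binary semaphore (mutex): initial value 1.
s = threading.Semaphore(1)
balance = 0

def deposit():
    global balance
    for _ in range(1000):
        s.acquire()        # Down / wait: blocks if the value is 0
        balance += 1       # critical section
        s.release()        # Up / signal: wakes one blocked thread, if any

threads = [threading.Thread(target=deposit) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 3000: every increment is protected
```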
Producer Consumer Problem using
Semaphores
Problem Statement – We have a buffer of fixed size. A producer can
produce an item and place it in the buffer. A consumer can pick items
and consume them. We need to ensure that while the producer is
placing an item in the buffer, the consumer is not consuming an item at
the same time. In this problem, the buffer is the critical section.
To solve this problem, we need two counting semaphores, Full and
Empty, along with a binary semaphore mutex for mutual exclusion on the
buffer. "Full" keeps track of the number of items in the buffer at any given
time and "Empty" keeps track of the number of unoccupied slots.
Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer –
do{
//produce an item
wait(empty);
wait(mutex);
//place in buffer
signal(mutex);
signal(full);
}while(true)
Solution for Consumer –
do{
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
// consumes item
}while(true)
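The producer and consumer pseudocode above can be sketched with counting semaphores, e.g. in Python (the buffer size and item count are arbitrary choices for illustration):

```python
import threading

N = 5                                   # buffer size (arbitrary)
buffer = [None] * N
mutex = threading.Semaphore(1)          # mutex = 1
empty = threading.Semaphore(N)          # Empty = n: all slots empty initially
full = threading.Semaphore(0)           # Full = 0: no items initially
in_, out, consumed = 0, 0, []

def producer():
    global in_
    for item in range(10):              # produce ten items
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer[in_] = item              # place item in buffer
        in_ = (in_ + 1) % N
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer():
    global out
    for _ in range(10):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        consumed.append(buffer[out])    # remove item from buffer
        out = (out + 1) % N
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # items come out in FIFO order: [0, 1, 2, ..., 9]
```

The producer blocks on empty when the buffer is full, the consumer blocks on full when it is empty, and mutex keeps the two out of the buffer at the same time, matching the three constraints stated above.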
Readers-Writers Problem
Suppose a file is shared between several people. If one of the people tries editing
the file, no other person should be reading or writing it at the same time;
otherwise the changes will not be visible to him/her. However, if some person is
reading the file, then others may read it at the same time.
Precisely, in OS we call this situation the readers-writers problem.
Problem parameters:
One set of data is shared among a number of processes.
Once a writer is ready, it performs its write; only one writer may write at a time.
If a process is writing, no other process can read or write.
If at least one reader is reading, no other process can write.
Writer process:
do {
    // writer requests entry to the critical section
    wait(wrt);
    // performs the write
    signal(wrt);      // leaves the critical section
} while(true);

Reader process:
do {
    wait(mutex);      // reader requests entry; mutex protects readcnt
    readcnt++;        // one more reader inside
    if (readcnt == 1)
        wait(wrt);    // the first reader locks out writers
    signal(mutex);
    // current reading is performed
    wait(mutex);
    readcnt--;        // a reader leaves
    if (readcnt == 0)
        signal(wrt);  // the last reader lets writers in
    signal(mutex);
} while(true);
Thus, the semaphore ‘wrt‘ is queued on both readers and writers in a manner such
that preference is given to readers if writers are also there. Thus, no reader is
waiting simply because a writer has requested to enter the critical section.
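A sketch of this reader/writer logic using Python's threading.Semaphore (the thread names and counts are arbitrary; sorting the log just makes the result order-independent for checking):

```python
import threading

wrt = threading.Semaphore(1)    # held by a writer, or by the group of readers
mutex = threading.Semaphore(1)  # protects readcnt
readcnt = 0
log = []

def reader(name):
    global readcnt
    mutex.acquire()
    readcnt += 1
    if readcnt == 1:            # first reader locks out writers
        wrt.acquire()
    mutex.release()
    log.append(name + " reading")   # reading section (shared among readers)
    mutex.acquire()
    readcnt -= 1
    if readcnt == 0:            # last reader lets writers in
        wrt.release()
    mutex.release()

def writer(name):
    wrt.acquire()
    log.append(name + " writing")   # writing section (exclusive)
    wrt.release()

threads = [threading.Thread(target=reader, args=("R1",)),
           threading.Thread(target=reader, args=("R2",)),
           threading.Thread(target=writer, args=("W1",))]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))
```

Readers overlap with each other freely; the writer runs only while no reader holds wrt.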
Dining Philosophers Problem (DPP)
The dining philosophers problem states that there are 5 philosophers sharing a
circular table, and they eat and think alternately. There is a bowl of rice for each of
the philosophers and 5 chopsticks. A philosopher needs both their right and left
chopsticks to eat; a hungry philosopher may eat only if both chopsticks are
available. Otherwise, a philosopher puts down the chopstick and begins thinking
again.
The dining philosopher is a classic synchronization problem as it demonstrates a
large class of concurrency control problems.
Solution of Dining Philosophers Problem
A solution of the Dining Philosophers Problem is to use a semaphore to represent a
chopstick. A chopstick can be picked up by executing a wait operation on the
semaphore and released by executing a signal semaphore.
The structure of the chopstick is shown below −
semaphore chopstick [5];
Initially the elements of the chopstick are initialized to 1 as the chopsticks are on
the table and not picked up by a philosopher.
The structure of an arbitrary philosopher i is given as follows −
do {
wait( chopstick[i] );
wait( chopstick[ (i+1) % 5] );
. .
. EATING THE RICE
.
signal( chopstick[i] );
signal( chopstick[ (i+1) % 5] );
.
. THINKING
.
} while(1);
In the above structure, first wait operation is performed on chopstick[i] and
chopstick[ (i+1) % 5]. This means that the philosopher i has picked up the
chopsticks on his sides. Then the eating function is performed.
After that, signal operation is performed on chopstick[i] and chopstick[ (i+1) % 5].
This means that the philosopher i has eaten and put down the chopsticks on his
sides. Then the philosopher goes back to thinking.
Difficulty with the solution
The above solution makes sure that no two neighboring philosophers can eat at the
same time. But this solution can lead to a deadlock. This may happen if all the
philosophers pick their left chopstick simultaneously. Then none of them can eat
and deadlock occurs.
Some of the ways to avoid deadlock are as follows −
Allow at most four philosophers at the table at a time, so that at least one
philosopher can always pick up both chopsticks.
An even philosopher should pick the right chopstick and then the left chopstick
while an odd philosopher should pick the left chopstick and then the right
chopstick.
A philosopher should only be allowed to pick their chopstick if both are
available at the same time.
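The second remedy (an even philosopher picks up the right chopstick first, an odd philosopher the left one first) can be sketched as follows; the meal count is an arbitrary choice for illustration:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all start on the table
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # The asymmetry breaks the circular wait: even philosophers pick up
    # the right chopstick first, odd philosophers the left one first.
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(3):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # eating the rice
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # thinking

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [3, 3, 3, 3, 3]: everyone eats, no deadlock
```

With the symmetric version (everyone picks the left chopstick first), all five philosophers can each grab one chopstick and wait forever; the asymmetric ordering makes that circular wait impossible.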
Deadlock System model
Overview :
A deadlock occurs when a set of processes is stalled because each process is
holding a resource while waiting for a resource held by another process. For
example, Process 1 may hold Resource 1 while waiting for Resource 2, while
Process 2 holds Resource 2 and waits for Resource 1; neither can proceed.
System Model :
For the purposes of deadlock discussion, a system can be modeled as a
collection of limited resources that can be divided into different categories and
allocated to a variety of processes, each with different requirements.
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources
are examples of resource categories.
By definition, all resources within a category are equivalent, and any of the
resources within that category can equally satisfy a request from that category. If
this is not the case (i.e. if there is some difference between the resources within a
category), then that category must be subdivided further. For example, the term
“printers” may need to be subdivided into “laser printers” and “color inkjet
printers.”
Some categories may only have one resource.
The kernel keeps track of which resources are free and which are allocated, to
which process they are allocated, and a queue of processes waiting for this
resource to become available, for all kernel-managed resources. Mutexes or
wait() and signal() calls (i.e. binary or counting semaphores) can be used to
control application-managed resources.
When every process in a set is waiting for a resource that is currently assigned to
another process in the set, the set is said to be deadlocked.
Operations :
In normal operation, a process must request a resource before using it and release it
when finished, as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the
required resource(s) become available. Requests are made through calls such
as open(), malloc(), new(), and request().
2. Use –
The process makes use of the resource, such as printing to a printer or reading
from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by other processes.
Necessary Conditions :
Four conditions must hold simultaneously for a deadlock to occur.
1. Mutual Exclusion –
At least one resource must be held in a non-sharable mode; only one process at
a time can use it.
2. Hold and Wait –
A process must be holding at least one resource while waiting to acquire
additional resources held by other processes.
3. No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource
cannot be taken away from that process until the process voluntarily releases it.
4. Circular Wait –
There must be a set of processes P0, P1, P2, …, PN such that every P[i] is
waiting for P[(i + 1) % (N + 1)]. (It is important to note that this condition
implies the hold-and-wait condition, but dealing with the four conditions is
easier if they are considered separately.)
Deadlock Prevention And Avoidance
Deadlock Characteristics
As discussed above, a deadlock has the following characteristics.
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait
Deadlock Prevention
We can prevent deadlock by eliminating any of the above four conditions.
Eliminate Mutual Exclusion
Mutual exclusion cannot realistically be eliminated, because some resources
(such as printers) are inherently non-sharable.
Eliminate Hold and Wait
1. Allocate all required resources to the process before the start of its execution.
This can lead to low resource utilization.
2. The process will make a new request for resources only after releasing the
current set of resources. This solution may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when those resources are required by other,
higher-priority processes.
Eliminate Circular Wait
Each resource is assigned a numerical number. A process can request resources
only in increasing order of numbering.
For example, if process P1 has been allocated resource R5, then a later request
by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests
for resources numbered higher than R5 will be granted.
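The resource-ordering idea can be sketched as a small helper that always acquires locks in increasing resource number (the resource numbering and helper names here are illustrative, not a real API):

```python
import threading

# Resources numbered R1..R5, each guarded by a lock.
resources = {i: threading.Lock() for i in range(1, 6)}

def acquire_in_order(needed):
    """Acquire the requested resources in increasing numerical order.
    Because every process acquires in the same global order, a circular
    wait (P1 holds R5 and wants R3 while P2 holds R3 and wants R5)
    cannot arise."""
    granted = sorted(needed)
    for r in granted:
        resources[r].acquire()
    return granted

def release_all(held):
    for r in reversed(held):
        resources[r].release()

held = acquire_in_order({5, 3, 4})   # requested out of order...
print(held)  # [3, 4, 5] ...but granted in increasing order
release_all(held)
```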
Deadlock Avoidance
Deadlock avoidance can be done with Banker’s Algorithm.
Banker’s Algorithm
Bankers’s Algorithm is resource allocation and deadlock avoidance algorithm
which test all the request made by processes for resources, it checks for the safe
state, if after granting request system remains in the safe state it allows the request
and if there is no safe state it doesn’t allow the request made by the process.
Inputs to Banker’s Algorithm:
1. The maximum need of resources of each process.
2. The resources currently allocated to each process.
3. The resources currently available in the system.
Deadlock Detection :
If every resource type has only a single instance, a cycle in the resource-allocation
graph confirms a deadlock. For example, if resource 1 and resource 2 each have a
single instance and there is a cycle R1 → P1 → R2 → P2 → R1, deadlock is
confirmed.
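The safety check at the heart of the Banker's Algorithm can be sketched as follows (the matrices are the classic textbook instance, used purely for illustration):

```python
def is_safe(available, max_need, allocation):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n = len(allocation)                   # number of processes
    m = len(available)                    # number of resource types
    # Need = Max - Allocation
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finish = available[:], [False] * n
    safe_sequence = []
    while len(safe_sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                safe_sequence.append(i)
                progressed = True
        if not progressed:
            return None                   # no process can finish: unsafe state
    return safe_sequence

# Five processes, three resource types (illustrative values):
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 4, 0, 2]: a safe sequence
```

To decide on a request, the algorithm tentatively grants it (updating available and allocation) and runs this check; the request is allowed only if a safe sequence still exists.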
Deadlock Recovery :
A general-purpose operating system such as Windows doesn't deal with deadlock
recovery, as it is a time- and space-consuming process. Real-time operating
systems use deadlock recovery.
1. Killing the process –
Kill all the processes involved in the deadlock, or kill the processes one by one:
after killing each process, check for deadlock again, and keep repeating until the
system recovers. Killing the processes one by one helps the system break the
circular wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock, and the
preempted resources are allocated to other processes, so that there is a possibility
of recovering the system from deadlock. In this case, the preempted processes
may go into starvation.