Synchronization
BY : Parveen Kaur
● Co-operating Processes
● Concurrent Processes
● Need for process synchronization
● Critical Section Problem
● Semaphores
● Monitors
● Hierarchy of processes
● Dining Philosophers Problem
● Reader-Writer Problem
● Producer-Consumer Problem
● Classical two process and n-process solution
● Hardware primitives for Synchronization
Processes
●An independent process cannot affect or be affected
by the execution of another process.
●A cooperating process can affect or be affected by
the execution of another process.
●Interprocess communication is needed.
●Cooperating processes share data with one another by:
●Message Passing or Shared Memory
Processes contd..
●Concurrent processes are executed (run) by the system at
the same time.
●Cooperating processes can be concurrent.
●If they are using the same data, the need for
synchronization between them arises.
●Process Synchronization
●The main objective of process synchronization is to
ensure that multiple processes access shared
resources without interfering with each other, and to
prevent the possibility of inconsistent data due to
concurrent access.
●Synchronization is necessary to ensure data
consistency and integrity, and to avoid the risk of
deadlocks and other synchronization problems such as
race conditions.
Race Condition
● When more than one process executes the same code or
accesses the same memory or shared variable, there is a
possibility that the output or the value of the shared
variable is wrong.
● All the processes are racing to have their result be the
final one; this condition is known as a race condition.
● Example: two concurrent processes (P1: C=A+B, P2: C=A-B),
each update being a critical section.
● What is the value of C? (There is a race between the two
processes to update the value of C.)
● When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order
in which the accesses take place.
● Another example: P1: C=A+B   P2: A=A-B
Race Condition contd..
●A race condition is a situation that may occur inside a
critical section. This happens when the result of
multiple thread execution in the critical section differs
according to the order in which the threads execute.
● Example: A+B*C+B/D-A evaluates differently from
●(A+B)*(C+B)/(D-A): the grouping and order of evaluation change the result.
●Race conditions in critical sections can be avoided if
the critical section is treated as an atomic instruction.
●Also, proper thread synchronization using locks or
atomic variables can prevent race conditions.
●Read/write locks are discussed later; a small demonstration of a race and its fix follows below.
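● A minimal sketch (hypothetical example, assuming POSIX threads): two threads each increment a shared counter one million times. Without the mutex the final total is usually wrong, because counter++ is not atomic; with the lock it is always 2000000. Compile with gcc -pthread; comment out the lock/unlock pair to observe the race.
#include <pthread.h>
#include <stdio.h>

long counter = 0;                         /* shared variable */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&m);           /* remove this pair to observe the race */
        counter++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 2000000 when counter++ is locked */
    return 0;
}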
Critical Section
●Critical Section: It is a code segment (part of a program) that tries to
access shared resources. The resource may be any resource in the
computer, like a memory location, a data structure, the CPU, or any I/O device.
●The general structure of a critical section is as follows:
do {
    Entry Section
        Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
●The entry section handles the entry into the critical section. It acquires the
resources needed for execution by the process. The exit section
handles the exit from the critical section. It releases the resources and
also informs the other processes that the critical section is free.
●Atomic action is required in a critical section i.e. only one process can
execute in its critical section at a time. All the other processes have to
wait to execute in their critical sections.
Critical Section Contd..
● The critical section problem needs a solution to synchronize the
different processes.
●The following conditions must be satisfied:
● Primary conditions:
●Mutual Exclusion: Mutual exclusion implies that only one process
can be inside the critical section at any time. If any other processes
require the critical section, they must wait until it is free.
●Progress: Progress means that if a process is not using the critical
section, then it should not stop any other process from accessing it.
In other words, any process can enter a critical section if it is free.
● Secondary conditions:
●Bounded Waiting: Bounded waiting means that each process must
have a limited waiting time. It should not wait endlessly to access the
critical section.
●Architectural Neutrality: Our mechanism must be architecture
neutral. It means that if our solution works fine on one
architecture, then it should also run on the other ones as well.
Solutions To The Critical Section
●Peterson Solution
●Peterson’s solution is a widely used two-process,
software-based solution.
●While one process is executing in its critical section, the other
process executes only the rest of its code, and vice versa
(this makes sure that only a single process runs in the
critical section at a specific time).
● The two processes share two variables:
– int turn;
– Boolean flag[2]
●The variable turn indicates whose turn it is to enter the critical
section.
●The flag array is used to indicate if a process is ready to enter
the critical section. flag[i]=true implies that process Pi is ready!
Critical Section Solution Contd..
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;                       /* Pi busy-waits while Pj has priority */
    CRITICAL SECTION            /* Pi executes its critical section    */
    flag[i] = FALSE;
    REMAINDER SECTION
} while (TRUE);
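● A minimal two-thread sketch of Peterson's solution in C (a hypothetical example, not from the slides). C11 sequentially consistent atomics are assumed for flag and turn so the algorithm also behaves correctly on hardware that reorders ordinary loads and stores; compile with -pthread.
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];      /* flag[i] = true: Pi wants to enter              */
atomic_int  turn;         /* whose turn it is when both want to enter       */
int shared_counter = 0;   /* protected by the Peterson entry/exit protocol  */

void *process(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);                       /* entry section    */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                               /* busy wait        */
        shared_counter++;                                   /* critical section */
        atomic_store(&flag[i], false);                      /* exit section     */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, process, &id0);
    pthread_create(&t1, NULL, process, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_counter = %d (expected 200000)\n", shared_counter);
    return 0;
}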
Synchronization Hardware
●Many systems provide hardware support for critical section code
●Uni-processors – could disable interrupts (interrupt disabling).
● Currently running code would execute without preemption.
1.A process loses control of the CPU when it is interrupted.
2.Solution:
    1.Have each process disable all interrupts just before
    entering its critical section.
    2.Re-enable interrupts after leaving the critical section.
Interrupt Disabling
Repeat
Disable interrupts
C.S
Enable interrupts
Remainder section
• Not feasible in a multiprocessor environment: disabling interrupts on every
processor is time consuming (a message must be passed to all processors), delays
entry into the critical section, and decreases system efficiency.
Synchronization Hardware
●Modern machines provide special atomic
hardware instructions
●Atomic = non-interruptible
●Two common hardware instructions that
execute atomically
– Test-and-Set: test a memory word and
set (modify) its value in a single step
– Swap: swap the contents of two memory words
●These atomic instructions can be efficiently
used to implement critical-section code.
●Hardware solutions are implemented by the
kernel.
Synchronization Hardware
● Definition of Test and Set :
● Return the current value of the flag and set it to true.
boolean TestAndSet(boolean *target)
{
boolean rv = *target;
*target = true;
return rv;
}
● This is atomic -> either completely executed or
nothing (no preemption)
while (True)
{
    while (TestAndSet(&Lock));
    Critical Section
    Lock = False;
}
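● A hypothetical sketch of the same lock in portable C: C11's atomic_flag_test_and_set() plays the role of the TestAndSet instruction above, since it atomically sets the flag and returns its previous value.
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = critical section is free */

void enter_cs(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy wait until the old value was false */
}

void leave_cs(void) {
    atomic_flag_clear(&lock);          /* Lock = False */
}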
Synchronization Hardware
● Consider processes P1 and P2, with Lock initially False.
● For P1 (Lock: False —> True):
boolean TestAndSet(boolean *target /* False */) {
    boolean rv = False;          /* old value of Lock */
    *target = True;
    return rv;                   /* returns False */
}
while (True) {
    while (TestAndSet(&Lock));   /* condition False, so P1 falls through */
    Critical Section             /* P1 enters the critical section       */
    Lock = False;
}
● For P2 (Lock is now True):
boolean TestAndSet(boolean *target /* True */) {
    boolean rv = True;           /* old value of Lock */
    *target = True;
    return rv;                   /* returns True */
}
while (True) {
    while (TestAndSet(&Lock));   /* condition True: P2 is stuck in this loop,
                                    busy waiting, until P1 leaves its critical
                                    section and sets Lock = False */
    Critical Section
    Lock = False;
}
Synchronization Hardware
● Definition of Swap: Swap the two values, no return value
● Key and Lock are initialized.
● void Swap(boolean *a, boolean *b) {
boolean temp = *a;
*a = *b;
*b = temp;
}
• Shared data (initialized to false):
    boolean lock;            /* global variable */
• Process Pi:
do {
    key = true;
    while (key == true)
        Swap(&lock, &key);
    critical section
    lock = false;
    remainder section
} while (TRUE);
Synchronization Hardware
● Consider processes P1 and P2.
● For P1: Key = True and Lock = False.
● Swap(&Lock, &Key) exchanges the values:
● Key = False and Lock = True.
while (true) {
    key = true;
    while (key == true) { Swap(&lock, &key); }   /* P1: lock = true, key = false   */
    critical section                             /* P1 enters the critical section */
    lock = false;
    remainder section
}
● For P2: Key = True but Lock is already True.
● Swap(&Lock, &Key) leaves Key = True and Lock = True.
while (true) {
    key = true;
    while (key == true) { Swap(&lock, &key); }   /* P2: lock = true, key = true (stuck) */
    /* P2 cannot enter the critical section until P1 completes its critical
       section and sets lock = false */
    critical section
    lock = false;
    remainder section
}
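● A hypothetical C sketch of the same idea: C11's atomic_exchange() behaves like the Swap instruction, writing a new value and returning the old one in a single atomic step.
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock = false;              /* false = critical section is free */

void enter_cs(void) {
    bool key = true;
    while (key == true)
        key = atomic_exchange(&lock, key);   /* swap lock and key atomically */
}

void leave_cs(void) {
    atomic_store(&lock, false);
}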
Mutex Locks
● Synchronization hardware is not a simple method, so a strict software method
known as Mutex Locks was also introduced.
● In the entry section of code, a LOCK is obtained over the critical
resources used inside the critical section. In the exit section that lock is
released.
do {
    acquire lock
        CRITICAL SECTION
    release lock
        REMAINDER SECTION
} while (true);
● acquire() {
    while (!available);      /* busy waiting (also called a spin lock) */
    available = false;
}
● release() {
    available = true;
}
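● A hypothetical sketch of acquire()/release() in C: a compare-and-swap is used so that testing available and setting it to false happen as one atomic step (with a plain variable, two processes could both see available == true and both "acquire" the lock).
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool available = true;

void acquire(void) {
    bool expected = true;
    /* spin until we atomically change available from true to false */
    while (!atomic_compare_exchange_weak(&available, &expected, false))
        expected = true;               /* busy waiting: spin lock */
}

void release(void) {
    atomic_store(&available, true);
}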
Semaphore
●It is a synchronization tool that is used to solve critical section problem.
●Behaves similarly to a mutex lock, but is a more sophisticated tool for process synchronization.
●An integer variable semaphore (S) is used/accessed using the following functions only:
●wait(), originally called P() (Dutch proberen), also called Degrade()
●signal(), originally called V() (Dutch verhogen), also called Upgrade()
●Can only be accessed via two indivisible (atomic) operations:
wait (S) {
while (S <= 0); // no-op. /* busy waiting*/
S--;
}
signal (S) {
S++;
}
●The operations wait() and signal() are atomic which means that if a process P is executing
either wait() or signal() then no other process can preempt P until it finishes wait()/signal().
●Working of wait():The initial value of the Semaphore variable is ‘1’. 1 means that the
resource is free and the process can enter the critical section. If the value of S is ‘0’ this
means that some other process is in its critical section and thus, the current process
should wait.
Semaphore Solution contd..
●wait(): it is called before the critical section. When a process calls
wait(), it checks the value of S. If the value is less than or equal to ‘0’,
then the process performs no operations. Hence, the process gets stuck
in the while loop and is not able to come out of the wait() function. So,
it is not able to enter critical section. But if the value of S is ‘1’, then it
comes out of the while loop, decrements the value of S to 0 and enters
the critical section.
●Working of signal(): Once the process has finished the critical
section part, it calls signal(). Within the signal() function, the process
increments the value of S. Finally, giving a signal to a waiting process
to enter in its critical section.
S = 1;                    /* semaphore initialized to 1 */
while (true) {
    wait(S);              /* while (S <= 0); — busy waiting, then S = S - 1 */
    Critical section
    signal(S);            /* S = S + 1 */
}
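● A minimal sketch using POSIX semaphores (an assumption; the slides use abstract wait()/signal()): sem_wait() and sem_post() correspond to wait(S) and signal(S), and S is initialized to 1 so it acts as a binary semaphore protecting the critical section. Compile with -pthread.
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t S;
int shared = 0;

void *proc(void *arg) {
    sem_wait(&S);        /* wait(S): enters when S > 0, otherwise blocks */
    shared++;            /* critical section                             */
    sem_post(&S);        /* signal(S): S++ and wake a waiting process    */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&S, 0, 1);  /* S = 1: resource free */
    pthread_create(&t1, NULL, proc, NULL);
    pthread_create(&t2, NULL, proc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&S);
    return 0;
}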
Semaphore Solution contd..
●Binary Semaphores
●Binary Semaphores can only have one of two values: 0 or 1. Because of
their capacity to ensure mutual exclusion, they are also known as mutex
locks.
●A single binary semaphore is shared between multiple processes.
●Used to provide Mutual Exclusion.
●Counting Semaphores
●Counting Semaphores can take any non-negative value and are not limited
to 0 and 1. They can be used to restrict access to a resource that has a
limit on concurrent access.
●Initially, the counting semaphores are set to the maximum amount of
processes that can access the resource at a time. Hence, the counting
semaphore indicates that a process can access the resource if it has a value
greater than 0. If it is set to 0, no other process can access the resource.
●Hence,When a process wants to use that resource, it first checks to see if
the value of the counting semaphore is more than zero.
●It is used to control access to a resource that has multiple instances.
●For example, there are 5 printers. This means there are 5 instances of the
resource Printer, so 5 processes can print simultaneously. The initial
value of the semaphore variable should therefore be 5, i.e., equal to the number
of instances available (see the sketch below).
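● A hypothetical sketch of the five-printer example with a POSIX counting semaphore: the semaphore is initialized to 5, so at most five processes can hold a printer at a time.
#include <semaphore.h>

sem_t printers;

void init(void) {
    sem_init(&printers, 0, 5);   /* 5 instances of the resource available */
}

void use_printer(void) {
    sem_wait(&printers);         /* blocks if all 5 printers are in use */
    /* ... print ... */
    sem_post(&printers);         /* return the printer to the pool */
}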
Semaphore contd..
●Advantages of Semaphore:
●Semaphores allow only one process into the critical section. They follow
the mutual exclusion principle strictly and are much more
efficient than some other methods of synchronization.
●Semaphores are machine independent.
●No resource wastage due to busy waiting: processor time is
not wasted unnecessarily checking whether a condition is
fulfilled before allowing a process to access the critical
section.
●Disadvantages of Semaphore:
●Semaphores are complicated, and the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
●Semaphores may lead to priority inversion, where low-priority
processes access the critical section first and high-priority
processes later.
●They can also lead to starvation.
Semaphore Implementation
●Using Busy Waiting :
●When one process is in the critical section,
others have to wait in entry section
●Busy waiting - process cycles through the wait() loop
waiting for semaphore to be released
●Also called Spinlock
● Disadvantages of Busy Waiting:
●When a process is in C.S. and any other process
that wants to enter C.S. loops continuously in
entry section.
● Waste CPU cycle
●Better is to put the waiting process in a queue
and make a context switch to a process that is
ready to run.
Semaphore Implementation
● Using Waiting Queue:
● Instead of busy waiting, a process blocks itself.
● Two operations:
● block – place the process invoking the operation on the
waiting queue. (Entry section during Wait()).
● wakeup – remove one of processes in the waiting queue
and place it in the ready queue. (Exit section during
Signal()).
● Each semaphore has a value and a waiting queue:
typedef struct {
    int value;
    struct process *list;
} Semaphore;
● value (the semaphore value of type integer)
● list of processes waiting on the semaphore (e.g. PCBs Pointer list)
Semaphore Implementation
● Two operations:
●block – place the process invoking the wait() operation on the
appropriate waiting queue.
●wakeup – upon signal() remove one of processes in the waiting
queue and place it in the ready queue
● Implementation of wait:
wait(Semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
• Implementation of signal:
signal(Semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlock and Starvation
● Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
● Let S and Q be two semaphores initialized to 1
        P0                    P1
    wait(S);              wait(Q);
    wait(Q);              wait(S);
      ...                   ...
    signal(S);            signal(Q);
    signal(Q);            signal(S);
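● A hypothetical C sketch of this S/Q example (assuming POSIX semaphores and threads): if P0 acquires S while P1 acquires Q, each then blocks forever waiting for the semaphore the other holds.
#include <semaphore.h>
#include <pthread.h>

sem_t S, Q;

void *P0(void *arg) {
    sem_wait(&S);
    sem_wait(&Q);               /* blocks forever if P1 already holds Q */
    /* ... */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *P1(void *arg) {
    sem_wait(&Q);
    sem_wait(&S);               /* blocks forever if P0 already holds S */
    /* ... */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, P0, NULL);
    pthread_create(&t1, NULL, P1, NULL);
    pthread_join(t0, NULL);     /* with an unlucky interleaving, never returns */
    pthread_join(t1, NULL);
    return 0;
}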
● Starvation – indefinite blocking
● A process may never be removed from the semaphore
queue in which it is suspended,
● e.g., when waiting queues are implemented in LIFO
order.
Classical Problems of Synchronization
2. Bounded Buffer (limited buffer size)
Consumer
while (true) {
    while (count == 0)
        ;                               /* buffer empty: do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;      /* advance the out index */
    count--;
    /* consume the item in nextConsumed */
}
Producer Consumer Problem
●Suppose we have a buffer of N slots, each can hold
one item
●Use three semaphores to synchronize the producer
and consumer
●mutex: Binary Semaphore is to take lock on the
buffer.
●full: Counting Semaphore to denote the number
of occupied slots in buffer.
●empty: Counting Semaphore to denote the
number of empty slots in buffer.
● Initialization:
● mutex initialized to the value 1
● full initialized to 0
● empty initialized to N
Producer Consumer Problem
● The producer process:
while (true) {
wait (empty);
// produce an item
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
}
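● The matching consumer process (a sketch following the standard bounded-buffer pattern; the slide shows only the producer's semaphore-based code):
while (true) {
    wait (full);          /* wait until at least one slot is occupied */
    wait (mutex);         /* lock the buffer                          */
    // remove an item from the buffer
    signal (mutex);       /* unlock the buffer                        */
    signal (empty);       /* one more empty slot                      */
    // consume the removed item
}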
● A monitor provides condition variables, declared as, e.g., condition x, y;
● Dining Philosophers with a monitor dp: each philosopher i invokes
    dp.pickup(i);
    EAT
    dp.putdown(i);
Solution to Dining Philosophers (cont)
monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
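● The slide shows only the state array, the condition variables, and the initialization code. A sketch of the remaining monitor operations, following the usual textbook solution (pickup, putdown, and the helper test), would be:
void pickup(int i) {
    state[i] = HUNGRY;
    test(i);                      /* try to pick up both chopsticks      */
    if (state[i] != EATING)
        self[i].wait();           /* block until a neighbour signals us  */
}

void putdown(int i) {
    state[i] = THINKING;
    test((i + 4) % 5);            /* left neighbour may now be able to eat  */
    test((i + 1) % 5);            /* right neighbour may now be able to eat */
}

void test(int i) {
    /* philosopher i may eat only if neither neighbour is eating */
    if (state[(i + 4) % 5] != EATING &&
        state[i] == HUNGRY &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        self[i].signal();
    }
}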
Process Hierarchy
Precedence Graph
●Precedence Graph is a directed acyclic
graph which is used to show the execution
level of several processes in operating
system. It consists of nodes and edges. Nodes
represent the processes and the edges
represent the flow of execution.
● Properties of Precedence Graph :
● It is a directed graph.
● It is an acyclic graph.
● Nodes of graph correspond to individual statements of
program code.
● Edge between two nodes represents the execution
order.
● A directed edge from node A to node B shows that
statement A executes first and then Statement B
executes.
Precedence Graph
● S1 : a = x + y;
● S2 : b = z + 1;
● S3 : c = a - b;
● S4 : w = c + 1;
● Here S1 and S2 are independent and may execute concurrently; S3 must
wait for both S1 and S2 (it reads a and b), and S4 must wait for S3 (it
reads c). A threaded sketch follows below.
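● A hypothetical sketch of this precedence graph with POSIX threads: S1 and S2 run concurrently, S3 runs after both have been joined, and S4 runs after S3 (the initial values of x, y, z are made up for illustration).
#include <pthread.h>

int a, b, c, w;
int x = 1, y = 2, z = 3;              /* hypothetical input values */

void *S1(void *arg) { a = x + y; return NULL; }
void *S2(void *arg) { b = z + 1; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, S1, NULL);   /* S1 and S2 run in parallel */
    pthread_create(&t2, NULL, S2, NULL);
    pthread_join(t1, NULL);                /* S3 must wait for S1 and S2 */
    pthread_join(t2, NULL);
    c = a - b;                             /* S3 */
    w = c + 1;                             /* S4 runs after S3 */
    return 0;
}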
Assignment
1. Consider a semaphore S, initialized with value 10. What should be the value
of S after executing the P() function 6 times and the V() function 8 times on S?
2. Consider a semaphore S, initialized with value 27. Which of the following
options gives the final value of S=12?
a. Execution of 12 P() and 15 V()
b. Execution of 15 P()
c. Execution of 23 P() and 8 V()
d. Execution of 21 P() and 6 V()
3. Consider a semaphore S, initialized with value 1. Consider 10 processes
P1, P2, …, P10. All processes have the same code, given below, but one
process, P10, has signal(S) in place of wait(S). If each process executes
only once, what is the maximum number of processes that can be in the
critical section together?
while(True){
wait(S)
C.S
signal(S)
}