
Process Synchronization
BY: Parveen Kaur
● Co-operating Processes
● Concurrent Processes
● Need for process synchronization
● Critical Section Problem
● Semaphores
● Monitors
● Hierarchy of processes
● Dining Philosophers Problem
● Reader-Writer Problem
● Producer-Consumer Problem
● Classical two process and n-process solution
● Hardware primitives for Synchronization
Processes
●An independent process cannot affect or be affected by the execution of another process.
●A cooperating process can affect or be affected by the execution of another process.
●Cooperating processes need interprocess communication.
●They share data with one another by either:
● Message Passing or Shared Memory
Processes contd..
●Concurrent processes are executed (run) by the system at the same time.
●Cooperating processes can be concurrent.
●If they use the same data, the need for synchronization between them arises.
●Process Synchronization
●The main objective of process synchronization is to
ensure that multiple processes access shared
resources without interfering with each other, and to
prevent the possibility of inconsistent data due to
concurrent access.
●Synchronization is necessary to ensure data consistency and integrity, and to avoid deadlocks and other synchronization problems such as race conditions.
Race Condition
● When more than one process executes the same code or accesses the same memory or a shared variable, there is a possibility that the output or the value of the shared variable is wrong.
● All the processes "race" to have their own output accepted as correct; this condition is known as a race condition.
● Example: two concurrent processes update C (P1: C = A + B and P2: C = A - B); the statements that update C form the critical section.
● What is the final value of C? (There is a race between the two processes to update the value of C.)
● When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.
● Another example: P1: C = A + B and P2: A = A - B (P2 changes an operand that P1 reads).
Race Condition contd..
●A race condition is a situation that may occur inside a
critical section. This happens when the result of
multiple thread execution in the critical section differs
according to the order in which the threads execute.
● A+B*C+B/D-A
●(A+B)*(C+B)/(D-A)
●Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction.
●Proper thread synchronization using locks or atomic variables can also prevent race conditions, as the sketch below illustrates.
●Read/write locks are discussed later.
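● As an illustrative sketch (not from the slides, and assuming POSIX threads), the following C program shows a race condition: two threads increment a shared counter without any synchronization, so the final value is usually less than expected.

/* Hypothetical sketch: two threads race on a shared counter.
   Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long counter = 0;                /* shared variable */

void *increment(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;               /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the interleaved updates usually lose counts. */
    printf("counter = %ld\n", counter);
    return 0;
}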
Critical Section
●Critical Section: a code segment (part of a program) that accesses shared resources. The resource may be any resource in the computer, such as a memory location, a data structure, the CPU, or an I/O device.
●The critical section is given as follows:
do{
Entry Section
Critical Section
Exit Section
Remainder Section
} while (TRUE);

●The entry section handles entry into the critical section; it acquires the resources needed by the process. The exit section handles the exit from the critical section; it releases the resources and also informs the other processes that the critical section is free.
●Atomic action is required in a critical section, i.e., only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.
Critical Section Contd..
● The critical section problem needs a solution to synchronize the
different processes.
●Following conditions must be satisfied−
● Primary conditions:
●Mutual Exclusion: Mutual exclusion implies that only one process
can be inside the critical section at any time. If any other processes
require the critical section, they must wait until it is free.
●Progress: Progress means that if a process is not using the critical
section, then it should not stop any other process from accessing it.
In other words, any process can enter a critical section if it is free.
● Secondary conditions:
●Bounded Waiting: Bounded waiting means that each process must have a limited waiting time; it should not wait endlessly to access the critical section.
●Architectural Neutrality: The mechanism must be architecture neutral. If the solution works on one architecture, it should also run on other architectures.
Solutions To The Critical Section
●Peterson Solution
●Peterson’s solution is a widely used two-process, software-based solution.
●While one process is executing in its critical section, the other process executes only the rest of its code, and vice versa (this makes sure that only a single process runs in the critical section at a specific time).
● The two processes share two variables:
– int turn;
– Boolean flag[2]
●The variable turn indicates whose turn it is to enter the critical
section.
●The flag array is used to indicate if a process is ready to enter
the critical section. flag[i]=true implies that process Pi is ready!
Critical Section Solution Contd..
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;                      /* busy wait */
    /* CRITICAL SECTION (process Pi) */
    flag[i] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);
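● For reference, here is a minimal runnable sketch of Peterson's algorithm (not part of the slides). It uses C11 atomics with the default sequentially consistent ordering, because with plain variables the compiler or CPU may reorder the stores and break the algorithm on modern hardware; the thread ids and the shared counter are illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int  turn;
long counter = 0;                 /* shared data protected by Peterson's algorithm */

void enter_region(int i) {
    int j = 1 - i;                /* the other process */
    atomic_store(&flag[i], true); /* Pi is ready */
    atomic_store(&turn, j);       /* give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                         /* busy wait */
}

void leave_region(int i) {
    atomic_store(&flag[i], false);
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter_region(i);
        counter++;                /* critical section */
        leave_region(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);   /* expected 200000 */
    return 0;
}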
Synchronization Hardware
●Many systems provide hardware support for critical section code
●Uni-processors – could disable interrupts (interrupt disabling).
● Currently running code would execute without preemption.
1. A process loses control of the CPU when it is interrupted.
2. The solution is:
   1. Have each process disable all interrupts just before entering its critical section.
   2. Re-enable interrupts after leaving the critical section.
Interrupt Disabling
Repeat
Disable interrupts
C.S
Enable interrupts
Remainder section
• Not feasible in a multiprocessor environment: it is time consuming (a message must be passed to all processors), delays entry into the critical section, and decreases system efficiency.
Synchronization Hardware
●Modern machines provide special atomic
hardware instructions
●Atomic = non-interruptable
●Two common hardware instructions that execute atomically:
– Test-and-Set: test a memory word and set (modify) its value
– Swap: swap the contents of two memory words
●These atomic instructions can be used efficiently to implement critical section code.
●Hardware solutions are implemented by
kernel.
Synchronization Hardware
● Definition of Test and Set :
● Returns the current value of the target flag and sets it to true.
boolean TestAndSet(boolean *target)
{
boolean rv = *target;
*target = true;
return rv;
}
● This is atomic -> either completely executed or
nothing (no preemption)
while (true) {
    while (TestAndSet(&Lock))
        ;                   /* busy wait while Lock is true */
    /* Critical Section */
    Lock = false;
}
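● In standard C, the closest analogue of Test-and-Set is atomic_flag_test_and_set from <stdatomic.h>. A minimal spinlock sketch (assumed, not from the slides):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;  /* clear == unlocked */
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock_flag))
            ;                              /* spin: busy waiting while the flag is set */
        counter++;                         /* critical section */
        atomic_flag_clear(&lock_flag);     /* Lock = false */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);    /* expected 200000 */
    return 0;
}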
Synchronization Hardware
● Consider two processes P1 and P2.
● For P1, initially Lock = false; TestAndSet returns false and sets Lock to true:
boolean TestAndSet(boolean *target /* false */) {
    boolean rv = false;          /* old value */
    *target = true;
    return rv;                   /* returns false */
}
while (true) {
    while (TestAndSet(&Lock));   /* condition false, so P1 falls through */
    /* Critical Section: P1 enters */
    Lock = false;
}
● For P2, Lock is now true; TestAndSet returns true and leaves Lock true:
boolean TestAndSet(boolean *target /* true */) {
    boolean rv = true;           /* old value */
    *target = true;
    return rv;                   /* returns true */
}
while (true) {
    while (TestAndSet(&Lock));   /* condition true, so P2 stays in the loop (busy waiting) */
    /* P2 enters the Critical Section only after P1 finishes and sets Lock = false */
    Lock = false;
}
Synchronization Hardware
● Definition of Swap: swaps two values; it has no return value.
● The variables key and lock are initialized.
● void Swap(boolean *a, boolean *b) {
boolean temp = *a;
*a = *b;
*b = temp;
}
• Shared data (initialized to false):
boolean lock;                 /* global variable */

• Process Pi (key is a local variable):
do {
    key = true;
    while (key == true)
        Swap(&lock, &key);
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
Synchronization Hardware
● Consider processes P1 and P2.
● For P1: key = true and lock = false.
● After Swap(&lock, &key): key = false and lock = true.
while (true) {
    key = true;
    while (key == true) { Swap(&lock, &key); }   /* P1: lock = true and key = false */
    /* critical section: P1 enters */
    lock = false;
    /* remainder section */
}
● For P2: key = true, but lock is already true.
● After Swap(&lock, &key): key = true and lock = true.
while (true) {
    key = true;
    while (key == true) { Swap(&lock, &key); }   /* P2: lock = true and key = true, so P2 is stuck */
    /* P2 cannot enter the critical section until P1 completes its critical section and sets lock = false */
    /* critical section */
    lock = false;
    /* remainder section */
}
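● The standard C counterpart of this Swap-based lock is atomic_exchange, which atomically stores a new value and returns the old one. A short sketch (assumed, not from the slides); acquire() and release() slot into the do/while skeleton shown above:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock = false;              /* shared; false == unlocked */

void acquire(void) {
    /* atomically store true and examine the old value */
    while (atomic_exchange(&lock, true))
        ;                              /* old value was true: someone holds the lock, keep spinning */
}

void release(void) {
    atomic_store(&lock, false);
}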
Mutex Locks
● Synchronization hardware is not a simple method for programmers to use, so a strict software method known as mutex locks was also introduced.
● In the entry section of the code, a LOCK is acquired over the critical resources used inside the critical section. In the exit section, that lock is released.
do {
    acquire lock
        CRITICAL SECTION
    release lock
        REMAINDER SECTION
} while (true);
● acquire() {
      while (!available)
          ;                    /* busy waiting (also called a spin lock) */
      available = false;
  }
● release() {
      available = true;
  }
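● In practice, a POSIX mutex provides this acquire/release pattern without the programmer writing the busy-wait loop. A small sketch (assumed, not from the slides):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);      /* acquire lock: entry section */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&m);    /* release lock: exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 */
    return 0;
}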
Semaphore
●It is a synchronization tool that is used to solve the critical section problem.
●It behaves similarly to a mutex lock but provides more sophisticated ways to synchronize processes.
●An integer variable, the semaphore (S), is used/accessed through the following functions only:
●wait() – originally called P() (Dutch proberen); also called Degrade()
●signal() – originally called V() (Dutch verhogen); also called Upgrade()
●Can only be accessed via two indivisible (atomic) operations:
wait (S) {
while (S <= 0); // no-op. /* busy waiting*/
S--;
}

signal (S) {
S++;
}
●The operations wait() and signal() are atomic which means that if a process P is executing
either wait() or signal() then no other process can preempt P until it finishes wait()/signal().
●Working of wait():The initial value of the Semaphore variable is ‘1’. 1 means that the
resource is free and the process can enter the critical section. If the value of S is ‘0’ this
means that some other process is in its critical section and thus, the current process
should wait.
Semaphore Solution contd..
●wait(): it is called before the critical section. When a process calls
wait(), it checks the value of S. If the value is less than or equal to ‘0’,
then the process performs no operations. Hence, the process gets stuck
in the while loop and is not able to come out of the wait() function. So,
it is not able to enter critical section. But if the value of S is ‘1’, then it
comes out of the while loop, decrements the value of S to 0 and enters
the critical section.
●Working of signal(): Once the process has finished the critical
section part, it calls signal(). Within the signal() function, the process
increments the value of S. Finally, giving a signal to a waiting process
to enter in its critical section.
S = 1;
while (true) {
    wait(S) {                 /* entry section */
        while (S <= 0);       /* busy waiting */
        S = S - 1;
    }
    /* critical section */
    signal(S) {               /* exit section */
        S = S + 1;
    }
}
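● For reference (an assumption, not stated in the slides): POSIX exposes counting semaphores through <semaphore.h>, where sem_wait() plays the role of wait()/P() and sem_post() plays the role of signal()/V().

#include <semaphore.h>

sem_t s;                                     /* POSIX counting semaphore */

void setup(void) { sem_init(&s, 0, 1); }     /* 0 = shared between threads, initial value 1 */

void use_resource(void) {
    sem_wait(&s);                            /* wait(S) / P(S): decrement, block if the value is 0 */
    /* critical section */
    sem_post(&s);                            /* signal(S) / V(S): increment and wake a waiting process */
}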
Semaphore Solution contd..
●Binary Semaphores
●Binary Semaphores can only have one of two values: 0 or 1. Because of
their capacity to ensure mutual exclusion, they are also known as mutex
locks.
●A single binary semaphore is shared between multiple processes.
●Used to provide Mutual Exclusion.
●Counting Semaphores
●Counting semaphores can take any non-negative value and are not restricted to 0 and 1. They can be used to control access to a resource that has a limit on concurrent access.
●Initially, the counting semaphores are set to the maximum amount of
processes that can access the resource at a time. Hence, the counting
semaphore indicates that a process can access the resource if it has a value
greater than 0. If it is set to 0, no other process can access the resource.
●Hence,When a process wants to use that resource, it first checks to see if
the value of the counting semaphore is more than zero.
●It is used to control access to a resource that has multiple instances.
●For example, suppose there are 5 printers, i.e., 5 instances of the resource "printer". Then 5 processes can print simultaneously, so the initial value of the semaphore variable should be 5, equal to the number of instances available.
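● A hedged sketch of the printer example (not from the slides; the names NUM_PRINTERS and print_job are illustrative), using a POSIX counting semaphore initialized to 5 so that at most five jobs print at once:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_PRINTERS 5        /* 5 instances of the resource */
#define NUM_JOBS     12

sem_t printers;               /* counting semaphore, initialized to NUM_PRINTERS */

void *print_job(void *arg) {
    int id = *(int *)arg;
    sem_wait(&printers);              /* acquire one printer (requires value > 0) */
    printf("job %d printing\n", id);
    sleep(1);                         /* use the printer */
    sem_post(&printers);              /* release the printer */
    return NULL;
}

int main(void) {
    pthread_t t[NUM_JOBS];
    int id[NUM_JOBS];
    sem_init(&printers, 0, NUM_PRINTERS);
    for (int i = 0; i < NUM_JOBS; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, print_job, &id[i]);
    }
    for (int i = 0; i < NUM_JOBS; i++)
        pthread_join(t[i], NULL);
    return 0;
}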
Semaphore contd..
●Advantages of Semaphore:
●Allows only one process into the critical section. They follow
the mutual exclusion principle strictly and are much more
efficient than some other methods of synchronization.
●Semaphores are machine independent.
●No resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a
condition is fulfilled to allow a process to access the critical
section.
●Disadvantages of Semaphore:
●Semaphores are complicated: the wait and signal operations must be used in the correct order to prevent deadlocks.
●Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes later.
●They can also lead to starvation.
Semaphore Implementation
●Using Busy Waiting :
●When one process is in the critical section,
others have to wait in entry section
●Busy waiting - process cycles through the wait() loop
waiting for semaphore to be released
●Also called Spinlock
● Disadvantages of Busy Waiting:
●When a process is in C.S. and any other process
that wants to enter C.S. loops continuously in
entry section.
● Waste CPU cycle
●A better approach is to put the waiting process in a queue and make a context switch to a process that is ready to run.
Semaphore Implementation
● Using Waiting Queue:
● Instead of waiting, a process blocks itself.
● Two operations:
● block – place the process invoking the operation on the
waiting queue. (Entry section during Wait()).
● wakeup – remove one of processes in the waiting queue
and place it in the ready queue. (Exit section during
Signal()).
● Each semaphore has a value and a waiting queue:
typedef struct {
    int value;
    struct process *list;
} Semaphore;
● value – the semaphore value (an integer)
● list – the processes waiting on the semaphore (e.g., a linked list of PCB pointers)
Semaphore Implementation
● Two operations:
●block – place the process invoking the wait() operation on the
appropriate waiting queue.
●wakeup – upon signal() remove one of processes in the waiting
queue and place it in the ready queue
● Implementation of wait:
wait(Semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();
    }
}
• Implementation of signal:
signal(Semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);
    }
}
Deadlock and Starvation
● Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
● Let S and Q be two semaphores initialized to 1
        P0                P1
    wait(S);          wait(Q);
    wait(Q);          wait(S);
      ...               ...
    signal(S);        signal(Q);
    signal(Q);        signal(S);
● Starvation – indefinite blocking
● A process may never be removed from the semaphore
queue in which it is suspended .
● e.g., when the waiting queue is implemented in LIFO order.
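● An illustrative sketch (not from the slides) of the deadlock above, assuming POSIX threads and semaphores; the sleep() calls just make the unlucky interleaving very likely:

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t S, Q;                     /* both initialized to 1 */

void *p0(void *arg) {
    sem_wait(&S);
    sleep(1);                   /* let P1 grab Q in the meantime */
    sem_wait(&Q);               /* blocks forever: P1 holds Q */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&Q);
    sleep(1);
    sem_wait(&S);               /* blocks forever: P0 holds S */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);     /* never returns: the two threads are deadlocked */
    pthread_join(t1, NULL);
    return 0;
}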
Classical Problems of Synchronization

● These problems represent a large class of concurrency-control problems and are used to test each new synchronization solution. However, actual implementations of these solutions often use mutex locks in place of binary semaphores.
● Bounded-Buffer Problem
● Readers and Writers Problem
● Dining-Philosophers Problem
Bounded-Buffer Problem
● Producer Consumer Problem
● Two situations can arise:
1. The producer produces items at a faster rate than the consumer consumes them (then some items may be lost).
   E.g., Computer → Producer, Printer → Consumer
2. The producer produces items at a slower rate than the consumer consumes them.
Solution:
To avoid a mismatch between items produced and consumed → use a buffer.
The idea: instead of sending items from the producer to the consumer directly → store the items in a buffer.
Producer Consumer Problem
● The buffer can be:
1. Un-bounded buffer:
   1. No limit on buffer size
   2. Any number of items can be stored
   3. The producer can produce at any rate; there will always be space in the buffer
2. Bounded buffer:
   1. Limited buffer size
   If rate of production > rate of consumption: some items remain unconsumed in the buffer
   If rate of production < rate of consumption: at some point the buffer will be empty
● The bounded buffer is the solution used for the producer-consumer problem.
Producer Consumer Problem
Producer:
while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;                              // do nothing: buffer full (limited buffer size)
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer:
while (true) {
    while (count == 0)
        ;                              // do nothing: buffer empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Producer Consumer Problem
●Suppose we have a buffer of N slots, each can hold
one item
●Use three semaphores to synchronize the producer
and consumer
●mutex: binary semaphore used to take a lock on the buffer.
●full: counting semaphore that denotes the number of occupied slots in the buffer.
●empty: counting semaphore that denotes the number of empty slots in the buffer.
● Initialization:
● mutex initialized to the value 1
● full initialized to 0
● empty initialized to N
Producer Consumer Problem
● The producer process:
while (true) {
    wait(empty);
    // produce an item
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
}

● The consumer process:
while (true) {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the removed item
}
● Three semaphores are used to synchronize the producer and consumer.
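● A runnable sketch of the same scheme (not from the slides), assuming POSIX semaphores for full/empty and a pthread mutex in the role of the binary semaphore; buffer size and item counts are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 10                         /* buffer of N slots */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;                   /* counts empty slots, initialized to N */
sem_t full_slots;                    /* counts occupied slots, initialized to 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 50; item++) {
        sem_wait(&empty_slots);       /* wait(empty) */
        pthread_mutex_lock(&mutex);   /* wait(mutex) */
        buffer[in] = item;            /* add the item to the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex); /* signal(mutex) */
        sem_post(&full_slots);        /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 50; i++) {
        sem_wait(&full_slots);        /* wait(full) */
        pthread_mutex_lock(&mutex);   /* wait(mutex) */
        int item = buffer[out];       /* remove an item from the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex); /* signal(mutex) */
        sem_post(&empty_slots);       /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}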
Readers- Writers Problem
●A data set is shared among a number of concurrent
processes
●Readers – only read the data set; they do not
perform any updates
●Writers – can both read and write
●Problem – allow multiple readers to read at the same
time
●Constraint: only one single writer can access the
shared data at the same time
●Solution:
●If the writer is accessing the data, then all the
other writers and readers will be blocked.
●If any reader is reading, then other readers
can read but writers will be blocked
Readers- Writers Problem
●Variable declaration for solution:
●mutex: Binary Semaphore to provide Mutual
Exclusion
● wrt: Binary Semaphore to restrict readers and
writers if writing is going on.
●readcount: Integer variable, denotes number
of active readers.(Shared variable).
●Initialization:
● mutex initialized to 1
● wrt initialized to 1
● readcount initialized to 0
Readers- Writers Problem
● The writer process (semaphore wrt initialized to 1):
while (true) {
    wait(wrt);
    // writing is performed
    signal(wrt);
}
● The reader process:
while (true) {
    wait(mutex);              // protect the shared variable readcount
    readcount++;
    if (readcount == 1)
        wait(wrt);            // the first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);          // the last reader lets writers back in
    signal(mutex);
}
Dining-Philosophers Problem
● Five philosophers, each either thinking or eating
● To eat, a philosopher needs two chopsticks
● A philosopher picks up one chopstick at a time
● Shared data
● Solution:
    semaphore chopstick[5];   /* five binary semaphores, all initialized to 1 */
Dining-Philosophers Problem
● The structure of Philosopher i:
while (true) {
    wait(chopstick[i]);              // acquire left chopstick
    wait(chopstick[(i + 1) % 5]);    // acquire right chopstick
    // eat
    signal(chopstick[i]);            // release left chopstick
    signal(chopstick[(i + 1) % 5]);  // release right chopstick
    // think
}
● Deadlock can occur (e.g., if all five philosophers pick up their left chopstick at the same time).
● Possible solutions to the deadlock problem
● Allow at most four philosophers to be sitting simultaneously at
the table.
● Allow a philosopher to pick up her chopsticks only if both
chopsticks are available (note that she must pick them up in
a critical section).
● Use an asymmetric solution; that is,
●odd philosopher: left first, and then right
●an even philosopher: right first, and then left
● Besides deadlock, any satisfactory solution to the DPP problem
must avoid the problem of starvation.
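● A hedged sketch (not from the slides) of the asymmetric fix using POSIX semaphores: with 0-based indices, odd philosophers pick up the left chopstick first and even philosophers the right one first, which breaks the circular wait.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
sem_t chopstick[N];                  /* all initialized to 1 */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    /* asymmetric pickup order prevents the circular wait that causes deadlock */
    int first  = (i % 2) ? left  : right;
    int second = (i % 2) ? right : left;
    for (int round = 0; round < 3; round++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        printf("philosopher %d eats\n", i);   /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}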
Monitors
●Synchronization Tool:
●Definition: Abstract Data Type for
handling/defining shared resources
● Comprises:
● Shared Private Data –
●The resource – Cannot be accessed from
outside
● Procedures that operate on the data –
●Gateway to the resource – Can only act on
data local to the monitor
● Synchronization primitives –
●Among threads/processes that access the
procedures.
Structure of a Monitor
Monitor monitor_name
{
    // shared variable declarations
    procedure P1(. . . .) {
        . . . . }
    procedure P2(. . . .) {
        . . . . }
    . . .
    procedure PN(. . . .) {
        . . . . }
    initialization_code(. . . .) {
        . . . . }
}

For example:
Monitor stack
{
    int top;
    void push(any_t *) {
        . . . . }
    any_t *pop() {
        . . . . }
    initialization_code() {
        . . . . }
}
Monitor Semantics
●Monitors guarantee mutual exclusion
● Only one thread can execute a
monitor procedure at any time.
● – “in the monitor”
●If second thread invokes a monitor
procedure at that time
● – It will block and wait for entry to
the monitor
●— Need for a wait queue
Schematic view of a Monitor
Monitor Synchronization
●The monitor construct is not sufficiently powerful for modeling
some synchronization schemes.
●So, we need to define additional synchronization mechanisms.
●These mechanisms are provided by the condition construct.
●A programmer can define one or more variables of type
condition:

    condition x, y;
●The only operations that can be invoked on a condition variable are wait() and signal().
●Two operations on a condition variable:
●x.wait() – the process that invokes the operation is suspended until another process invokes x.signal()
●x.signal() – resumes one of the processes (if any) that invoked x.wait()
●There can be different conditions for which a process could be waiting.
Monitor with Condition Variables
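● Monitors with condition variables are not built into C, but the pattern can be approximated with a pthread mutex plus a pthread condition variable. A sketch under that assumption (not from the slides; the mailbox_t type and function names are illustrative):

#include <pthread.h>
#include <stdbool.h>

/* A tiny "monitor" guarding a single-slot mailbox. */
typedef struct {
    pthread_mutex_t lock;        /* the monitor's mutual exclusion */
    pthread_cond_t  not_empty;   /* condition variable, like x */
    bool            has_item;
    int             item;
} mailbox_t;

void mailbox_init(mailbox_t *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_empty, NULL);
    m->has_item = false;
}

void mailbox_put(mailbox_t *m, int v) {      /* monitor procedure */
    pthread_mutex_lock(&m->lock);            /* enter the monitor */
    m->item = v;
    m->has_item = true;
    pthread_cond_signal(&m->not_empty);      /* x.signal() */
    pthread_mutex_unlock(&m->lock);          /* leave the monitor */
}

int mailbox_get(mailbox_t *m) {              /* monitor procedure */
    pthread_mutex_lock(&m->lock);
    while (!m->has_item)
        pthread_cond_wait(&m->not_empty, &m->lock);  /* x.wait(): releases the lock while suspended */
    m->has_item = false;
    int v = m->item;
    pthread_mutex_unlock(&m->lock);
    return v;
}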
Solution to Dining Philosophers (cont)
● The distribution of the chopsticks is controlled by the
monitor DiningPhilosophers
● Each philosopher ‘ i ’ invokes the operations pickup() and
putdown() in the following sequence:

dp.pickup (i)

EAT

dp.putdown (i)
Solution to Dining Philosophers (cont)
monitor DP
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
Solution to Dining Philosophers (cont)

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
Process Hierarchy
Precedence Graph
●Precedence Graph is a directed acyclic
graph which is used to show the execution
level of several processes in operating
system. It consists of nodes and edges. Nodes
represent the processes and the edges
represent the flow of execution.
● Properties of Precedence Graph :
● It is a directed graph.
● It is an acyclic graph.
● Nodes of graph correspond to individual statements of
program code.
● Edge between two nodes represents the execution
order.
● A directed edge from node A to node B shows that
statement A executes first and then Statement B
executes.
Precedence Graph
● S1 : a = x + y;
● S2 : b = z + 1;
● S3 : c = a - b;
● S4 : w = c + 1;
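● From the data dependencies, S3 must wait for both S1 and S2 (it uses a and b), and S4 must wait for S3 (it uses c). A hedged sketch (not from the slides) that enforces this precedence with POSIX semaphores; the initial values of x, y, z and the semaphore names are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

int x = 1, y = 2, z = 3, a, b, c, w;
sem_t s1_done, s2_done, s3_done;     /* signalled when S1, S2, S3 finish */

void *S1(void *arg) { a = x + y; sem_post(&s1_done); return NULL; }
void *S2(void *arg) { b = z + 1; sem_post(&s2_done); return NULL; }
void *S3(void *arg) {
    sem_wait(&s1_done);              /* wait for S1 */
    sem_wait(&s2_done);              /* wait for S2 */
    c = a - b;
    sem_post(&s3_done);
    return NULL;
}
void *S4(void *arg) {
    sem_wait(&s3_done);              /* wait for S3 */
    w = c + 1;
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&s1_done, 0, 0);
    sem_init(&s2_done, 0, 0);
    sem_init(&s3_done, 0, 0);
    pthread_create(&t[0], NULL, S1, NULL);
    pthread_create(&t[1], NULL, S2, NULL);
    pthread_create(&t[2], NULL, S3, NULL);
    pthread_create(&t[3], NULL, S4, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("w = %d\n", w);           /* (x+y) - (z+1) + 1 = 0 with the values above */
    return 0;
}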
Assignment
1. Consider a semaphore S, initialized with value 10. What will the value of S be after executing the P() operation 6 times and the V() operation 8 times on S?
2. Consider a semaphore S, initialized with value 27. Which of the following
options gives the final value of S=12?
a. Execution of 12 P() and 15 V()
b. Execution of 15 P()
c. Execution of 23 P() and 8 V()
d. Execution of 21 P() and 6 V()
3. Consider a semaphore S, initialized with value 1. Consider 10 processes P1, P2, …, P10. All processes have the same code as given below, but one process, P10, has signal(S) in place of wait(S). If every process executes its code only once, what is the maximum number of processes that can be in the critical section together?
while(True){
wait(S)
C.S
signal(S)
}
