
Process Synchronization

This deck covers common synchronization problems in concurrent programming, including race conditions, deadlocks, and starvation. It explains the Producer-Consumer problem, shows how semaphores can be used for process synchronization, and presents a solution to the Dining Philosopher problem that uses semaphores to avoid deadlock and ensure mutual exclusion. Throughout, it emphasizes that semaphores must be implemented carefully to manage resource access among processes.
Common Synchronization Problems

Race Condition: A race condition occurs when two or more processes (or threads) access shared
data concurrently and the outcome depends on the order of execution.

Deadlock: Deadlock occurs when two or more processes are each waiting for the other to release
a resource, causing the system to freeze (a short C sketch illustrating this follows this list).

Starvation: Starvation happens when a process is perpetually denied access to a resource because
other processes are continuously favored.
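To make the deadlock definition concrete, here is a minimal C sketch (not taken from the slides; the pthreads API and the sleep calls are assumptions added for illustration). Two threads take two locks in opposite order; if each acquires its first lock before the other takes its second, both block forever waiting on the other.

/* Hedged sketch: two threads acquire two locks in opposite order, which can
 * deadlock if each grabs its first lock before the other grabs its second.
 * Compile with: gcc deadlock.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

static void *worker1(void *arg)
{
    pthread_mutex_lock(&lockA);
    sleep(1);                      /* widen the window for the deadlock      */
    pthread_mutex_lock(&lockB);    /* waits for worker2 to release lockB     */
    puts("worker1 holds both locks");
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

static void *worker2(void *arg)
{
    pthread_mutex_lock(&lockB);
    sleep(1);
    pthread_mutex_lock(&lockA);    /* waits for worker1 to release lockA     */
    puts("worker2 holds both locks");
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);        /* with the sleeps, this join never returns */
    pthread_join(t2, NULL);
    return 0;
}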
Producer Consumer Problem

Before starting the explanation of the code, first understand a few terms used on the next slides:

1. "in", used in the producer code, represents the next empty buffer slot.

2. "out", used in the consumer code, represents the first filled buffer slot.

3. count keeps track of the number of elements currently in the buffer.

4. The update of count is further broken down into three lines of code, shown in the block in both
the producer and consumer code.
Producer Consumer Problem Cont.

Producer Code

int count = 0;
void producer(void)
{
    int itemP;
    while (1)
    {
        produce_item(itemP);
        while (count == n);        // busy-wait while the buffer is full
        buffer[in] = itemP;
        in = (in + 1) mod n;
        count = count + 1;
    }
}

If we talk about the producer code first, count = count + 1 is carried out as three machine-level instructions:

Load Rp, m[count]    -- Rp is a register which receives the value of m[count]
Increment Rp         -- Rp is incremented (an element has been added to the buffer)
Store m[count], Rp   -- the incremented value of Rp is stored back to m[count]
Producer Consumer Problem Cont.

Consumer Code

int count;   // the same shared count used by the producer
void consumer(void)
{
    int itemC;
    while (1)
    {
        while (count == 0);        // busy-wait while the buffer is empty
        itemC = buffer[out];
        out = (out + 1) mod n;
        count = count - 1;
    }
}

Similarly, in the consumer code, count = count - 1 is carried out as three machine-level instructions:

Load Rc, m[count]    -- Rc is a register which receives the value of m[count]
Decrement Rc         -- Rc is decremented (an element has been removed from the buffer)
Store m[count], Rc   -- the decremented value of Rc is stored back to m[count]
Let's start with the producer, which wants to produce an element "F". At this point the buffer already holds five items (so count = 5), in = 5, and out = 0.

Buffer[in] = itemP → Buffer[5] = F (F is inserted now).

in = (in + 1) mod n → (5 + 1) mod 8 → 6, therefore in = 6 (next empty slot).

count = count + 1 then starts: Load Rp, m[count] copies 5 into Rp, and Increment Rp makes Rp = 6.

Suppose that just after the increment, and before the third line (Store m[count], Rp) executes, a context switch occurs and execution jumps to the consumer code.
Now the consumer starts and wants to consume the first element "A".

itemC = Buffer[out] → itemC = A (since out is 0).

out = (out + 1) mod n → (0 + 1) mod 8 → 1, therefore out = 1 (first filled position).

After the removal of A, the buffer looks like this: out = 1 and in = 6.
Since count = count - 1; is divided into three parts:

Load Rc, m[count] → copies the count value, which is 5, into register Rc.

Decrement Rc → decrements Rc to 4.

Store m[count], Rc → count = 4.

Now the current value of count is 4.

Suppose that after this, a context switch occurs back to the leftover part of the producer code, and the pending Store m[count], Rp executes with Rp = 6.

Now the current value of count is 6, which is wrong, as the buffer actually holds only 5 elements (F was added and A was removed). This condition is known as a Race Condition, and the overall scenario is the Producer-Consumer Problem.

Race Condition: A race condition occurs when multiple processes (or threads) access shared data in an unpredictable order, causing the system to reach an incorrect or inconsistent state. In this case, the producer had loaded and incremented count in its register but had not yet stored it back when the context switch happened; when it later resumes and performs the store, it overwrites the consumer's update, so count ends up as 6 even though the buffer holds only 5 items.
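The same lost-update race can be reproduced with a short C program (a sketch added here for illustration; the slides only show pseudocode). Two threads increment a shared counter without synchronization, and because count = count + 1 is carried out as a load, an increment, and a store, interleavings like the one traced above lose updates.

/* Hedged sketch (not the slide's code): two unsynchronized threads increment
 * a shared counter and updates get lost. Compile with: gcc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long count = 0;             /* shared, unprotected */

static void *increment(void *arg)
{
    for (long i = 0; i < ITERATIONS; i++)
        count = count + 1;         /* load, increment, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but lost updates usually make the total smaller. */
    printf("count = %ld\n", count);
    return 0;
}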
Semaphores…
A semaphore is an integer variable, shared among multiple processes. The main aim of using a
semaphore is process synchronization and access control for a common resource in a concurrent
environment.

function wait(S):
    // This function represents the 'wait' operation for a semaphore.
    if S > 0:
        // Do not block the process
        S <- S - 1
        return
    else:
        // Block the calling process
        sleep

function signal(S):
    // This function represents the 'signal' or 'release' operation for a semaphore.
    if there are processes sleeping on S:
        // Wake up a blocked process
        select a process to wake up
        wake up the selected process
        return
    else:
        // No process is waiting on S
        S <- S + 1
        return
There are two types of semaphores:
● Binary semaphore
● Counting Semaphore

A binary semaphore can have only two integer values: 0 or 1. It’s simpler to implement and
provides mutual exclusion. We can use a binary semaphore to solve the critical section problem.
Some readers might confuse binary semaphores with a mutex. There's a common misconception that they can be used interchangeably. But in fact, a semaphore is a signaling mechanism, whereas a mutex is a locking mechanism. So we need to remember that a binary semaphore is not a mutex.
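As a hedged illustration of using a binary semaphore for the critical section problem (the POSIX semaphore API is an assumption; the slides only use abstract wait and signal), the sketch below wraps the shared counter update from the earlier race example in wait/signal on a semaphore initialized to 1.

/* Hedged sketch using POSIX semaphores. A binary semaphore initialized to 1
 * gives mutual exclusion around the count update.
 * Compile with: gcc binsem.c -lpthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                 /* binary semaphore: value 0 or 1 */
static long count = 0;

static void *increment(void *arg)
{
    for (long i = 0; i < 1000000; i++) {
        sem_wait(&mutex);           /* wait(S): S > 0 ? decrement : block   */
        count = count + 1;          /* critical section                     */
        sem_post(&mutex);           /* signal(S): wake a waiter or S <- S+1 */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);         /* initial value 1 => binary semaphore  */
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld\n", count); /* with the semaphore, always 2000000   */
    sem_destroy(&mutex);
    return 0;
}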
A deadlock occurs when a group of processes is blocked, each waiting for some other member's action.

To avoid possible deadlocks, we need to be careful about how we implement the semaphores.
In the example above, we guarantee mutual exclusion for critical-section access.

Instead of busy waiting, a waiting process sleeps while it waits for its turn in the critical section. When the signal operation is performed, the kernel adds a sleeping process to the ready queue. If the kernel decides to run that process, it continues into the critical section.

For the producer-consumer problem, we can use a counting semaphore that tracks the number of items in the buffer. Initially it is 0, since the buffer starts out empty. As the producer puts an item into the buffer, it increases the semaphore with a signal operation. Conversely, when the consumer consumes an item, the semaphore is decreased by a wait operation. When a consumer has used the last item in the buffer, the next wait operation puts it to sleep until the producer signals again.
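To make this concrete, here is a minimal C sketch of the bounded-buffer producer-consumer using POSIX semaphores (the three-semaphore structure and all names are assumptions added for illustration, not taken from the slides): full counts filled slots and starts at 0, empty counts free slots and starts at the buffer size, and a binary mutex protects the buffer indices.

/* Hedged sketch of the bounded-buffer producer/consumer with semaphores.
 * Compile with: gcc pc.c -lpthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                         /* buffer size, as in the walkthrough */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;

static void *producer(void *arg)
{
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);           /* wait for a free slot                */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);            /* one more filled slot                */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);            /* wait for a filled slot              */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);           /* one more free slot                  */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);         /* all slots free initially            */
    sem_init(&full, 0, 0);          /* no items initially                  */
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Compared with the single counting semaphore described above, the extra empty semaphore also makes the producer sleep when the buffer is full, and the mutex replaces the busy-wait on count from the earlier slides.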
Dining Philosopher Problem Using Semaphores
Each Process is represented by:

process P[i]:
while true do
{
    THINK;
    PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    EAT;
    PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
}
var successful: boolean;
repeat
    successful := false;
    while (not successful)
        if both forks are available then
            lift the forks one at a time;
            successful := true;
        if successful = false
        then
            block(Pi);
    {eat}
    put down both forks;
    if left neighbor is waiting for his right fork
    then
        activate(left neighbor);
    if right neighbor is waiting for his left fork
    then
        activate(right neighbor);
    {think}
forever
In this solution, when a philosopher decides to eat, it first waits on the semaphore for its left chopstick, CHOPSTICK[i]. Then, after acquiring it, the philosopher waits on the semaphore for its right chopstick, CHOPSTICK[(i+1) mod 5]. When both chopsticks are acquired, it can eat. When the philosopher is full, it releases the chopsticks in the same order, by calling the signal operation on the respective semaphores.
One common solution to the Dining Philosopher Problem uses semaphores, a synchronization mechanism that can be used to control access to shared resources.

In this solution, each fork is represented by a semaphore, and a philosopher must acquire both the semaphore for the fork to their left and the semaphore for the fork to their right before they can begin eating.

If a philosopher cannot acquire both semaphores, they must wait until they become available.
semaphore forks[n] = {1, 1, 1, ..., 1};   // All forks are available initially
mutex := Semaphore(1);                     // Mutex for the fork pick-up critical section

procedure philosopher(i: integer):
    repeat
        think();                    // Philosopher is thinking
        wait(mutex);                // Enter critical section to pick up forks
        wait(forks[i]);             // Pick up left fork
        wait(forks[(i+1) % n]);     // Pick up right fork
        signal(mutex);              // Exit critical section

        eat();                      // Philosopher eats

        signal(forks[i]);           // Put down left fork
        signal(forks[(i+1) % n]);   // Put down right fork
    until false;
The steps for the Dining Philosopher Problem solution using semaphores are as follows:
1. Initialize the semaphores for each fork to 1 (indicating that they are available).
2. Initialize a binary semaphore (mutex) to 1 to ensure that only one philosopher can attempt to pick up a fork
at a time.
3. For each philosopher process, create a separate thread that executes the following code:
● While true:
○ Think for a random amount of time.
○ Acquire the mutex semaphore to ensure that only one philosopher can attempt to pick up a fork
at a time.
○ Attempt to acquire the semaphore for the fork to the left.
○ If successful, attempt to acquire the semaphore for the fork to the right.
○ If both forks are acquired successfully, eat for a random amount of time and then release both
semaphores.
○ If not successful in acquiring both forks, release the semaphore for the fork to the left (if it was acquired) and go back to waiting before trying again.
4. Run the philosopher threads concurrently.
By using semaphores to control access to the forks, the Dining
Philosopher Problem can be solved in a way that avoids deadlock
and starvation. The use of the mutex semaphore ensures that only
one philosopher can attempt to pick up a fork at a time, while the
use of the fork semaphores ensures that a philosopher can only eat if
both forks are available.
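As a closing illustration, here is a minimal runnable C translation of the pseudocode above (the POSIX semaphore API, the bounded meal count, and the helper names are assumptions added here): a mutex semaphore serializes fork pick-up, and one semaphore per fork tracks availability.

/* Hedged C sketch of the semaphore-based dining philosophers shown above.
 * Compile with: gcc philosophers.c -lpthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
#define MEALS 3

static sem_t forks[N];              /* one semaphore per fork, initially 1 */
static sem_t mutex;                 /* serializes the pick-up step          */

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    for (int m = 0; m < MEALS; m++) {
        /* think() would go here */
        sem_wait(&mutex);           /* enter critical section to pick up forks */
        sem_wait(&forks[i]);              /* pick up left fork  */
        sem_wait(&forks[(i + 1) % N]);    /* pick up right fork */
        sem_post(&mutex);           /* exit critical section  */
        printf("philosopher %d eats (meal %d)\n", i, m + 1);
        sem_post(&forks[i]);              /* put down left fork  */
        sem_post(&forks[(i + 1) % N]);    /* put down right fork */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++)
        sem_init(&forks[i], 0, 1);  /* every fork available initially */
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Because a philosopher blocked on a fork still holds the mutex, fork pick-up is effectively serialized; the fork holder can still put its forks down without the mutex, so the program does not deadlock.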
