
MODULE-3

Process Synchronization

Department of CSE- Data Science


Contents

 Introduction

 The critical section problem

 Peterson’s solution

 Synchronization hardware

 Semaphores

 Classical problems of synchronization



Introduction

 A cooperating process is one that can affect or be affected by other processes executing in
the system.
Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages.

 Concurrent access to shared data may result in data inconsistency.


 Various mechanisms are required to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.



Critical section problem

 Consider a system consisting of n processes {P0, P1, ..., Pn-1}.


 Each process has a segment of code, called a critical section
– The process may be changing common variables, updating a table, writing a file, etc.

– When one process is executing in its critical section, no other process is allowed to execute in its critical section.
 The critical-section problem is to design a protocol that the processes can use to cooperate.
 The general structure of a typical process Pi consists of an entry section, a critical section, an exit section, and a remainder section, described below.



 Each process must request permission to enter its critical section.
 The section of code implementing this request is the entry section.
 The critical section may be followed by an exit section.
 The remaining code is the remainder section.
 A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion

2. Progress

3. Bounded Waiting



1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes
that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting - There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request
to enter its critical section and before that request is granted.



Example: Data Inconsistency on a Shared Variable

 If the processes access the shared variable x one after the other, we are in a good position.
 If Process 1 alone is executed, the value of x becomes x = 30.
 If Process 2 alone is executed, the value of x becomes x = -10.
 If both processes execute at the same time, it is unpredictable which value x will end up holding, i.e. -10 or 30; for example, the shared variable x may change from 30 to -10.
 This state faced by the variable x is called data inconsistency. Such problems can also be solved by a hardware lock.
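As an illustration of how such inconsistency can arise (not part of the original slides), here is a minimal sketch using two POSIX threads that update x without any synchronization; the initial value 10 and the +20/-20 updates are assumptions chosen so that Process 1 alone yields 30 and Process 2 alone yields -10:

#include <stdio.h>
#include <pthread.h>

int x = 10;                                   /* shared variable (assumed initial value) */

void *process1(void *arg) { (void)arg; x = x + 20; return NULL; }   /* alone: x = 30  */
void *process2(void *arg) { (void)arg; x = x - 20; return NULL; }   /* alone: x = -10 */

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With no synchronization, the two read-modify-write sequences may interleave,
       so the final value of x depends on timing: data inconsistency. */
    printf("x = %d\n", x);
    return 0;
}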
Peterson’s Solution
 This is a classic software-based solution to the critical-section problem. There are no
guarantees that Peterson's solution will work correctly on modern computer architectures
 Peterson's solution provides a good algorithmic description of solving the critical-section
problem.
 It illustrates some of the complexities involved in designing software that addresses the
requirements of mutual exclusion, progress, and bounded waiting.
 Peterson's solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.
 The processes are numbered P0 and P1 or Pi and Pj, where j = 1-i



 Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
 turn: The variable turn indicates whose turn it is to enter its critical section.
 Ex: if turn == i, then process Pi is allowed to execute in its critical section
 flag: The flag array is used to indicate if a process is ready to enter its critical section.
 Ex: if flag [i] is true, this value indicates that Pi is ready to enter its critical section.
 To enter the critical section, process Pi first sets flag [i] to be true and then sets turn to the
value j, thereby asserting that if the other process wishes to enter the critical section, it can
do so.
 If both processes try to enter at the same time, turn will be set to both i and j at roughly the
same time.
 Only one of these assignments will last; the other will occur but will be overwritten
immediately.



Structure of process Pi and Pj

Structure of process Pi:

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j);   // busy wait

    // critical section

    flag[i] = FALSE;

    // remainder section
} while (TRUE);

Structure of process Pj:

do {
    flag[j] = TRUE;
    turn = i;
    while (flag[i] && turn == i);   // busy wait

    // critical section

    flag[j] = FALSE;

    // remainder section
} while (TRUE);
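As a concrete illustration (not from the original slides), here is a minimal sketch of the same protocol written for two POSIX threads in C. The shared counter is an assumed example resource, and because the compiler and a modern CPU may reorder the plain loads and stores of flag and turn, this sketch mirrors the algorithm rather than providing a guaranteed lock on modern architectures:

#include <stdio.h>
#include <pthread.h>

/* Shared data for Peterson's solution (two processes: 0 and 1). */
volatile int flag[2] = {0, 0};   /* flag[i] == 1 means process i wants to enter   */
volatile int turn = 0;           /* whose turn it is to enter                      */
int counter = 0;                 /* assumed shared resource guarded by the protocol */

void *worker(void *arg)
{
    int i = *(int *)arg;         /* this process's index */
    int j = 1 - i;               /* the other process    */

    for (int k = 0; k < 100000; k++) {
        flag[i] = 1;             /* entry section */
        turn = j;
        while (flag[j] && turn == j)
            ;                    /* busy wait */
        counter++;               /* critical section */
        flag[i] = 0;             /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}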



 To prove that this solution is correct, we need to show that:
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
 To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes can be executing in their critical sections at the same time, then flag[0] == flag[1] == true.
 These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both.



 Hence, one of the processes - say, Pj - must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j").

 However, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.

 To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible.
 If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section.
 If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section.



 If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section.
 If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).



Synchronization Hardware

 Software-based solutions such as Peterson's are not guaranteed to work on modern computer architectures.
 Hardware features can make any programming task easier and improve system efficiency.
 To generalize the solution(s) expressed here, each process, when entering its critical section, must set some sort of lock to prevent other processes from entering their critical sections simultaneously, and must release the lock when exiting its critical section, to allow other processes to proceed.
 Obviously, it must be possible to acquire the lock only when no other process has already set it.



 The critical-section problem could be solved simply in a uniprocessor environment if we
could prevent interrupts from occurring while a shared variable was being modified.
 In this manner, we could be sure that the current sequence of instructions would be
allowed to execute in order without preemption.
 No other instructions would be run, so no unexpected modifications could be made to the
shared variable. This is often the approach taken by nonpreemptive kernels.



 Unfortunately, this solution is not as feasible in a multiprocessor environment.
 Disabling interrupts on a multiprocessor can be time consuming, as the message is
passed to all the processors. This message passing delays entry into each critical
section, and system efficiency decreases.
 Many modern computer systems therefore provide special hardware instructions that allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit.
 These atomic operations are guaranteed to execute as a single instruction, without interruption. One such operation is TestAndSet(), which atomically sets a boolean lock variable and returns its previous value.



 The definition of the TestAndSet() instruction:

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

 Mutual-exclusion implementation with TestAndSet():

do {
    while (TestAndSet(&lock))
        ;        // do nothing

    // critical section

    lock = FALSE;

    // remainder section
} while (TRUE);
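On real systems this pattern is typically written with an atomic primitive supplied by the language. A minimal sketch using C11's atomic_flag, whose atomic_flag_test_and_set() plays the role of TestAndSet() above (the enter_critical/exit_critical names are illustrative, not from the slides):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;     /* initially clear, i.e. FALSE */

void enter_critical(void)
{
    /* Atomically sets the flag and returns its previous value. */
    while (atomic_flag_test_and_set(&lock))
        ;                                /* spin: another process holds the lock */
}

void exit_critical(void)
{
    atomic_flag_clear(&lock);            /* lock = FALSE */
}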



 Definition of Swap():

void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
 Like the TestAndSet () instruction, it is executed atomically. If the machine supports
the Swap() instruction, then mutual exclusion can be provided as follows.
 A global Boolean variable lock is declared and is initialized to false. In addition, each
process has a local Boolean variable key.
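The slide refers to this structure but does not reproduce it; the following is a sketch in the usual textbook form, assuming the global lock and per-process key described above:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);      // spins until lock was FALSE

    // critical section

    lock = FALSE;

    // remainder section
} while (TRUE);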



 Although these algorithms satisfy the mutual-exclusion requirement, they unfortunately do not guarantee bounded waiting.
 If there are multiple processes trying to get into their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section.
 We now present another algorithm using the TestAndSet() instruction that satisfies all the critical-section requirements; a sketch is given below. The common data structures, boolean waiting[n] and boolean lock, are initialized to false.
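A sketch of that algorithm in the usual textbook form, assuming the shared waiting[n] and lock above plus a local boolean key and integer j for each process Pi:

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);    // spin until the lock is obtained or we are released
    waiting[i] = FALSE;

    // critical section

    j = (i + 1) % n;                // scan for the next waiting process in cyclic order
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;               // no one is waiting: release the lock
    else
        waiting[j] = FALSE;         // hand the critical section directly to Pj

    // remainder section
} while (TRUE);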



 To prove that the mutual-exclusion requirement is met, we note that process Pi can enter its critical section only if either waiting[i] == false or key == false.
 The value of key can become false only if the TestAndSet() is executed.
 The first process to execute the TestAndSet() will find key == false; all others must wait.
 The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to false, maintaining the mutual-exclusion requirement.



 A process exiting the critical section either sets lock
to false or sets waiting[j] to false. Both allow a
process that is waiting to enter its critical section to
proceed.
 To prove that the bounded-waiting requirement is
met, we note that, when a process leaves its critical
section, it scans the array waiting in the cyclic
ordering (i + 1, i + 2, ... , n -1, 0, ... , i -1).
 It designates the first process in this ordering that is
in the entry section (waiting[j] ==true) as the next
one to enter the critical section.
 Any process waiting to enter its critical section will
thus do so within n - 1 turns.
Semaphore

 A semaphore is a technique for managing concurrent processes by means of a simple integer value.
 A semaphore is simply a variable that is non-negative and shared between threads. The variable is used to solve the critical-section problem and to achieve process synchronization in a multiprogramming environment.
 A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations:
wait() and signal().



 Definition of wait():
 wait( ) -> P [from the Dutch word proberen, which means "to test"]

wait(S) {
    while (S <= 0)
        ;        // no-op
    S--;
}

 Definition of signal():
 signal( ) -> V [from the Dutch word verhogen, which means "to increment"]

signal(S) {
    S++;
}

 All modifications to the integer value of the semaphore in the wait () and signal()
operations must be executed indivisibly.
When one process modifies the semaphore value, no other process can simultaneously
modify that same semaphore value.



Binary Semaphore
 The value of a binary semaphore can range only between 0 and 1.
 In some systems, binary semaphores are known as mutex locks, as they are locks that
provide mutual exclusion
 Binary semaphores can be used to deal with the critical-section problem for multiple processes.
Counting semaphore
 Counting semaphores can be used to control access to a given resource consisting of a
finite number of instances.
 The semaphore is initialized to the number of resources available.
 Each process that wishes to use a resource performs a wait() operation on the semaphore.
 When a process releases a resource, it performs a signal() operation.
 When the count for the semaphore goes to 0, all resources are being used. After that,
processes that wish to use a resource will block until the count becomes greater than 0.
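As an illustration (not from the slides), here is a minimal sketch of a counting semaphore guarding a pool of three identical resource instances, using POSIX unnamed semaphores; the resource itself is left abstract and the thread counts are arbitrary assumptions:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define NUM_INSTANCES 3
#define NUM_CLIENTS   5

sem_t resources;                          /* counting semaphore for the resource pool */

void *client(void *arg)
{
    long id = (long)arg;
    sem_wait(&resources);                 /* wait(): acquire an instance, blocks at 0 */
    printf("client %ld is using a resource\n", id);
    sleep(1);                             /* pretend to use the resource */
    sem_post(&resources);                 /* signal(): release the instance */
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_CLIENTS];
    sem_init(&resources, 0, NUM_INSTANCES);   /* initialized to the number of instances */
    for (long i = 0; i < NUM_CLIENTS; i++)
        pthread_create(&t[i], NULL, client, (void *)i);
    for (int i = 0; i < NUM_CLIENTS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resources);
    return 0;
}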



Semaphore as General Synchronization Tool
 Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);



Semaphore Implementation

 Must guarantee that no two processes can execute wait () and signal () on the same

semaphore at the same time


Semaphore Implementation with no Busy waiting
 With each semaphore there is an associated waiting queue.
 Each semaphore has two data items:
1. value (of type integer)
2. a pointer to a list (queue) of processes waiting on the semaphore
 Two operations:
1. block: place the process invoking the operation on the appropriate waiting queue.
2. wakeup: remove one of the processes in the waiting queue and place it in the ready queue.
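A sketch of the corresponding data structure, in the usual textbook form (the struct and field names are illustrative):

typedef struct {
    int value;                 // semaphore value
    struct process *list;      // queue of processes blocked on this semaphore
} semaphore;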



Analogy for Binary Semaphore: Wait()  (figure not reproduced)

Analogy for Binary Semaphore: Signal()  (figure not reproduced)


 Implementation of wait():

wait(S)
{
    value--;
    if (value < 0) {
        // add this process to the waiting queue
        block();
    }
}

 When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait.
 However, rather than engaging in busy waiting, the process can block itself.
 The block() operation places the process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state.
 Then control is transferred to the CPU scheduler, which selects another process to execute.



 Implementation of signal():

signal(S)
{
    value++;
    if (value <= 0) {
        // remove a process P from the waiting queue
        wakeup(P);
    }
}

 A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal() operation.
 The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state.
 The process is then placed in the ready queue.
 The CPU may or may not be switched from the running process to the newly ready process, depending on the CPU-scheduling algorithm.



Disadvantages of Semaphores
 The main disadvantage of the semaphore definition given above is that it requires busy waiting.
 While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code.
 Busy waiting wastes CPU cycles that some other process might be able to use
productively.
 This type of semaphore is also called a spinlock because the process "spins" while
waiting for the lock.



Deadlock and Starvation
 Deadlock –two or more processes are waiting indefinitely for an event that can be caused
by only one of the waiting processes
 Consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1:

        P0                  P1
    wait(S);            wait(Q);
    wait(Q);            wait(S);
      ...                 ...
    signal(S);          signal(Q);
    signal(Q);          signal(S);

 Suppose that P0 executes wait(S) and then P1 executes wait(Q).
 When P0 executes wait(Q), it must wait until P1 executes signal(Q).
 Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.



 Another problem related to deadlocks is indefinite blocking or starvation, a situation in
which processes wait indefinitely within the semaphore.
 Indefinite blocking may occur if we remove processes from the list associated with a
semaphore in LIFO (last-in, first-out) order.

Priority Inversion
 Scheduling problem when lower-priority process holds a lock needed by higher-priority
process.
 Solution: priority-inheritance protocol
‣ all processes that are accessing resources needed by a higher-priority process inherit the
higher priority until they are finished with the resources in question.
‣ When they are finished, their priorities revert to their original values.



Classical Problems of Synchronization

 Classical problems used to test newly-proposed synchronization schemes

1. Bounded-Buffer Problem

2. Readers and Writers Problem

3. Dining-Philosophers Problem



1. Bounded-Buffer Problem

 This problem is generalized in terms of the Producer-Consumer problem, where a finite buffer pool is used to exchange messages between producer and consumer processes.
 We assume that the pool consists of n buffers, each
capable of holding one item.
 The mutex semaphore provides mutual exclusion for
accesses to the buffer pool and is initialized to the value 1.
 The empty and full semaphores count the number of
empty and full buffers.
 The semaphore empty is initialized to the value n; the
semaphore full is initialized to the value 0.



The structure of the producer process:

do {
    // produce an item in nextp

    wait(empty);
    wait(mutex);

    // add the item to the buffer

    signal(mutex);
    signal(full);
} while (TRUE);

The structure of the consumer process:

do {
    wait(full);
    wait(mutex);

    // remove an item from buffer to nextc

    signal(mutex);
    signal(empty);

    // consume the item in nextc
} while (TRUE);
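For reference (not part of the original slides), a compact runnable sketch of the same scheme using POSIX threads and semaphores; the buffer size, item count, and the in/out index names are assumptions:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                               /* number of buffer slots */

int buffer[N];
int in = 0, out = 0;                      /* insert/remove positions */

sem_t empty_slots;                        /* counts empty buffers, initialized to N */
sem_t full_slots;                         /* counts full buffers,  initialized to 0 */
sem_t mutex;                              /* protects the buffer,  initialized to 1 */

void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);           /* wait(empty)   */
        sem_wait(&mutex);                 /* wait(mutex)   */
        buffer[in] = item;                /* add item to the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                 /* signal(mutex) */
        sem_post(&full_slots);            /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int k = 0; k < 20; k++) {
        sem_wait(&full_slots);            /* wait(full)    */
        sem_wait(&mutex);                 /* wait(mutex)   */
        int item = buffer[out];           /* remove item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);                 /* signal(mutex) */
        sem_post(&empty_slots);           /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}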



2. Readers-Writers Problem

 A database is to be shared among several concurrent processes.


 Some of these processes may want only to read the database, whereas others may want
to update (that is, to read and write) the database.
 We distinguish between these two types of processes by referring to the former as
Readers and to the latter as Writers.
 If two readers access the shared data simultaneously, no adverse effects will result.
 If a writer and some other thread (either a reader or a writer) access the database
simultaneously, problems may ensue.
 To ensure that these difficulties do not arise, we require that the writers have exclusive
access to the shared database.
 This synchronization problem is referred to as the readers-writers problem.



Solution to the Readers-Writers Problem using Semaphores
 We will make use of two semaphores and an integer variable:
1. mutex, a semaphore (initialized to 1) which is used to ensure mutual exclusion when readcount is updated, i.e. when any reader enters or exits the critical section.
2. wrt, a semaphore (initialized to 1) common to both reader and writer processes.
3. readcount, an integer variable (initialized to 0) that keeps track of how many
processes are currently reading the object.



 The structure of a writer process:

do {
    wait(wrt);

    // writing is performed

    signal(wrt);
} while (TRUE);

 The structure of a reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);

    // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);
3. Dining-Philosophers Problem

 The Dining Philosophers Problem is a classic synchronization and concurrency problem


that was formulated by E.W. Dijkstra in 1965.
 It illustrates challenges in resource allocation and the need for synchronization in a multi-
process or multi-threaded environment.
 The scenario involves a set of philosophers sitting around a dining table. Each philosopher
alternates between thinking and eating.
 To eat, a philosopher must use two adjacent forks (one on the left and one on the right).
The challenge is to design a solution that avoids deadlock (where no philosopher can
finish eating) and ensures that philosophers can safely share the forks without conflicts.



Key aspects of the problem:
• Philosophers and Forks:
– There are N philosophers sitting around a circular
dining table.
– There are N forks, one placed between each pair of
adjacent philosophers.
• Actions:
– A philosopher can either think or eat.
– To eat, a philosopher needs both the fork on their left
and the fork on their right.
• Constraints:
– Philosophers must not starve (i.e., wait indefinitely for
a fork).
– Deadlock should be avoided (no philosopher should be
prevented from eating by a circular waiting scenario).



One simple solution is to represent each fork/chopstick with a semaphore.
 A philosopher tries to grab a fork/chopstick by executing a wait () operation on that
semaphore.
 He releases his fork/chopsticks by executing the signal () operation on the
appropriate semaphores.
 Thus, the shared data are :
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1.



 The structure of philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think
} while (TRUE);
 Although this solution guarantees that no two neighbors are eating simultaneously, it could still create a

deadlock.
 Suppose that all five philosophers become hungry simultaneously and each grabs their left chopstick.

All the elements of chopstick will now be equal to 0.


 When each philosopher tries to grab his right chopstick, he will be delayed forever.
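One common remedy, not shown on the slides, is to break the circular wait by making the pickup order asymmetric. A sketch in the same pseudocode style, where philosopher 4 takes the right chopstick first so that every philosopher acquires the lower-numbered chopstick first and a cycle of waits cannot form:

do {
    if (i == 4) {
        wait(chopstick[(i + 1) % 5]);   // right chopstick first (chopstick[0])
        wait(chopstick[i]);             // then left (chopstick[4])
    } else {
        wait(chopstick[i]);             // left chopstick first
        wait(chopstick[(i + 1) % 5]);   // then right
    }

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think
} while (TRUE);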