Unit 3 Os

The document discusses problems that can arise from concurrent process execution and sharing of resources without coordination. It covers race conditions and describes the critical section problem, where processes need exclusive access to shared data. Solutions discussed include semaphores, which can be used to synchronize processes and avoid race conditions.

PROCESS COORDINATION

Chapter 3
PROBLEMS WITH CONCURRENT
EXECUTION
Concurrent processes (or threads) often need to share
data (maintained either in shared memory or files) and
resources
If there is no controlled access to shared data, some
processes will obtain an inconsistent view of this data
The action performed by concurrent processes will then
depend on the order in which their execution is
interleaved
AN EXAMPLE

Processes P1 and P2 are both running this procedure and have access to
the same variable "a". Processes can be interrupted anywhere. If P1 is
interrupted just after the user input and P2 then executes entirely,
the character echoed by P1 will be the one read by P2!

    static char a;

    void echo()
    {
        cin >> a;
        cout << a;
    }
RACE CONDITIONS
Situations like this, where processes access the same data
concurrently and the outcome of execution depends on the particular
order in which the accesses take place, are called race conditions
How must the processes coordinate (or synchronize) in
order to guard against race conditions?
THE CRITICAL SECTION PROBLEM
When a process executes code that manipulates shared data
(or a resource), we say that the process is in its critical
section (CS) (for that shared data)
The execution of critical sections must be mutually
exclusive: at any time, only one process is allowed to
execute in its critical section (even with multiple CPUs)
Then each process must request permission to enter
its critical section (CS)
THE CRITICAL SECTION PROBLEM
The section of code implementing this request is called the entry
section
The critical section (CS) might be followed by an exit section
The remaining code is the remainder section
The critical section problem is to design a protocol that the
processes can use so that their action will not depend on the order
in which their execution is interleaved (possibly on many
processors)
FRAMEWORK FOR ANALYSIS OF SOLUTIONS

Each process executes at nonzero speed, but no assumption is made
about the relative speed of the n processes
Many CPUs may be present, but memory hardware prevents simultaneous
access to the same memory location
No assumption about the order of interleaved execution
For solutions: we need to specify the entry and exit sections

General structure of a process:

    repeat
        entry section
        critical section
        exit section
        remainder section
    forever
REQUIREMENTS FOR A VALID SOLUTION
TO THE CRITICAL SECTION PROBLEM
Mutual Exclusion
⚫ At any time, at most one process can be in its critical
section (CS)
Progress
⚫ Only processes that are not executing in their remainder section (RS)
can participate in the decision of which process will enter its CS next
⚫ This selection cannot be postponed indefinitely
Hence, we must have no deadlock
REQUIREMENTS FOR A VALID SOLUTION TO
THE CRITICAL SECTION PROBLEM (CONT.)
Bounded Waiting
⚫ After a process has made a request to enter its CS, there is a
bound on the number of times that the other processes are
allowed to enter their CS
otherwise the process will suffer from starvation
WHAT ABOUT PROCESS FAILURES?
If all 3 criteria (ME, progress, bounded waiting) are satisfied,
then a valid solution will provide robustness against failure of a
process in its remainder section (RS)
⚫ since failure in RS is just like having an infinitely long RS
However, no valid solution can provide robustness against a
process failing in its critical section (CS)
⚫ A process Pi that fails in its CS does not signal that fact to other
processes: for them Pi is still in its CS
HARDWARE SOLUTIONS: INTERRUPT DISABLING

On a uniprocessor: mutual exclusion is preserved, but execution
efficiency is degraded: while in the CS, we cannot interleave
execution with other processes that are in their RS
On a multiprocessor: mutual exclusion is not preserved
⚫ the CS is now atomic on one CPU but not mutually exclusive across CPUs
⚫ Generally not an acceptable solution

    Process Pi:
    repeat
        disable interrupts
        critical section
        enable interrupts
        remainder section
    forever
HARDWARE SOLUTIONS: SPECIAL
MACHINE INSTRUCTIONS
Normally, access to a memory location excludes other
access to that same location
Extension: designers have proposed machine
instructions that perform 2 actions atomically
(indivisibly) on the same memory location (e.g., reading
and writing)
The execution of such an instruction is also mutually
exclusive (even with multiple CPUs)
They can be used to provide mutual exclusion but need
to be complemented by other mechanisms to satisfy the
other 2 requirements of the CS problem (and avoid
starvation and deadlock)
MUTUAL EXCLUSION MACHINE
INSTRUCTIONS
Advantages
⚫ Applicable to any number of processes on either a single
processor or multiple processors sharing main memory
⚫ It is simple and therefore easy to verify
⚫ It can be used to support multiple critical sections
MUTUAL EXCLUSION MACHINE
INSTRUCTIONS
Disadvantages
⚫ Busy-waiting consumes processor time
⚫ Starvation is possible when a process leaves a critical section
and more than one process is waiting.
⚫ Deadlock
If a low-priority process holds the critical region and a higher-priority
process needs it, the higher-priority process will obtain the processor
only to busy-wait for the critical region
OS SOLUTIONS: SEMAPHORES

A synchronization tool (provided by the OS) that does not require
busy waiting
A semaphore S is an integer variable that, apart from
initialization, can only be accessed through 2 atomic and
mutually exclusive operations:
⚫ wait(S)
⚫ signal(S)
To avoid busy waiting: when a process has to wait, it is put
in a blocked queue of processes waiting for the same event
SEMAPHORE’S OPERATIONS

    wait(S):
        S.count--;
        if (S.count < 0) {
            block this process
            place this process in S.queue
        }

    signal(S):
        S.count++;
        if (S.count <= 0) {
            remove a process P from S.queue
            place this process P on the ready list
        }

S.count must be initialized to a nonnegative value
(depending on the application)
SEMAPHORES
When a process must wait for a semaphore S, it is blocked
and put on the semaphore’s queue
The signal operation removes (according to a fair policy like
FIFO) one process from the queue and puts it in the list of
ready processes
SEMAPHORES: OBSERVATIONS

When S.count >= 0: the number of processes that can
execute wait(S) without being blocked = S.count
When S.count < 0: the number of processes waiting on S is
|S.count|
Atomicity and mutual exclusion: no 2 processes can be in
wait(S) and signal(S) (on the same S) at the same time (even
with multiple CPUs)
Hence the blocks of code defining wait(S) and signal(S) are,
in fact, critical sections
SEMAPHORES CONTINUED…
Types of Semaphores
⚫ Counting semaphore: a semaphore with an unrestricted
value range. Used to control access to a resource; the
semaphore count is the number of available instances of
the resource.
⚫ Binary semaphore: a semaphore whose value is restricted
to 0 and 1. Simple to implement.
BINARY SEMAPHORES
The semaphores we have studied are called counting (or
integer) semaphores
We also have binary semaphores
⚫ similar to counting semaphores except that “count” is
Boolean valued
⚫ counting semaphores can be implemented by binary
semaphores...
⚫ generally more difficult to use than counting semaphores (e.g.,
they cannot be initialized to an integer k > 1)
BINARY SEMAPHORES

    waitB(S):
        if (S.value = 1) {
            S.value := 0;
        } else {
            block this process
            place this process in S.queue
        }

    signalB(S):
        if (S.queue is empty) {
            S.value := 1;
        } else {
            remove a process P from S.queue
            place this process P on the ready list
        }
PROBLEMS WITH SEMAPHORES
Semaphores provide a powerful tool for enforcing mutual
exclusion and coordinating processes
But wait(S) and signal(S) are scattered among several
processes, making their combined effect hard to understand
Usage must be correct in all the processes
One bad (or malicious) process can make the entire
collection of processes fail
USING SEMAPHORES FOR SOLVING CRITICAL SECTION PROBLEMS

For n processes: initialize S.count to 1; then only 1 process is
allowed into the CS (mutual exclusion). To allow k processes into
the CS, initialize S.count to k.

    Process Pi:
    repeat
        wait(S);
        CS
        signal(S);
        RS
    forever
MUTEX LOCKS

● Same as a binary semaphore
● OS designers build software tools to solve the critical
section problem
● The simplest is the mutex lock
● Protect a critical section by first calling acquire() on a lock,
then release() on the lock
● A Boolean variable indicates whether the lock is available or not
● Calls to acquire() and release() must be atomic
● Usually implemented via hardware atomic instructions
● But this solution requires busy waiting
● This lock is therefore called a spinlock
ACQUIRE() AND RELEASE()

    acquire() {
        while (!available)
            ;  /* busy wait */
        available = false;
    }

    release() {
        available = true;
    }

    do {
        acquire lock
        critical section
        release lock
        remainder section
    } while (true);
MONITORS

Monitors are high-level language constructs that provide
functionality equivalent to that of semaphores but are easier
to control
Found in many concurrent programming languages:
Concurrent Pascal, Modula-3, uC++, Java...
Can be implemented with semaphores...
MONITOR

A monitor is a software module containing:
⚫ one or more procedures
⚫ an initialization sequence
⚫ local data variables
Characteristics:
⚫ local variables are accessible only by the monitor’s procedures
⚫ a process enters the monitor by invoking one of its procedures
⚫ only one process can be in the monitor at any one time
MONITOR

The monitor ensures mutual exclusion: no need to program this
constraint explicitly
Hence, shared data are protected by placing them in the
monitor
⚫ The monitor locks the shared data on process entry
Process synchronization is done by the programmer by using
condition variables that represent conditions a process may
need to wait for before executing in the monitor
CONDITION VARIABLES
are local to the monitor (accessible only within the monitor)
can be accessed and changed only by two functions:
⚫ cwait(a): blocks execution of the calling process on condition (variable) a
the process can resume execution only if another process executes csignal(a)
⚫ csignal(a): resumes execution of some process blocked on condition
(variable) a
If several such processes exist: choose any one
If no such process exists: do nothing
CLASSICAL PROBLEMS OF SYNCHRONIZATION

Classical problems used to test newly proposed
synchronization schemes:
⚫ Bounded-Buffer Problem
⚫ Readers and Writers Problem
⚫ Dining-Philosophers Problem
THE PRODUCER/CONSUMER PROBLEM
A producer process produces information that is consumed
by a consumer process
⚫ Ex: a print program produces characters that are consumed by a
printer
We need a buffer to hold items that are produced and
eventually consumed
A common paradigm for cooperating processes
P/C: UNBOUNDED BUFFER

We first assume an unbounded buffer consisting of a linear
array of elements
in points to the next item to be produced
out points to the next item to be consumed
P/C: UNBOUNDED BUFFER
We need a semaphore S to perform mutual exclusion
on the buffer: only 1 process at a time can access the
buffer
We need another semaphore N to synchronize producer
and consumer on the number N (= in - out) of items in
the buffer
⚫ an item can be consumed only after it has been created
P/C: UNBOUNDED BUFFER
The producer is free to add an item into the buffer at any
time: it performs wait(S) before appending and signal(S)
afterwards to prevent consumer access
It also performs signal(N) after each append to increment
N
The consumer must first do wait(N) to see if there is an
item to consume and use wait(S)/signal(S) to access the
buffer
SOLUTION OF P/C: UNBOUNDED BUFFER

    Initialization:
        S.count := 1;
        N.count := 0;
        in := out := 0;

    append(v):
        b[in] := v;
        in++;

    take():
        w := b[out];
        out++;
        return w;

    Producer:
    repeat
        produce v;
        wait(S);
        append(v);      // critical section
        signal(S);
        signal(N);
    forever

    Consumer:
    repeat
        wait(N);
        wait(S);
        w := take();    // critical section
        signal(S);
        consume(w);
    forever
P/C: UNBOUNDED BUFFER
Remarks:
⚫ Putting signal(N) inside the CS of the producer (instead of outside)
has no effect since the consumer must always wait for both
semaphores before proceeding
⚫ The consumer must perform wait(N) before wait(S); otherwise
deadlock occurs if the consumer enters the CS while the buffer is empty
Using semaphores is a difficult art...
P/C: FINITE CIRCULAR BUFFER OF SIZE K

The consumer can consume only when the number N of (consumable)
items is at least 1 (note: because of wraparound, N is no longer
simply in - out)
The producer can produce only when the number E of empty spaces
is at least 1
P/C: FINITE CIRCULAR BUFFER OF SIZE
K
As before:
⚫ we need a semaphore S to have mutual exclusion on buffer
access
⚫ we need a semaphore N to synchronize producer and
consumer on the number of consumable items
In addition:
⚫ we need a semaphore E to synchronize producer and
consumer on the number of empty spaces
BOUNDED-BUFFER PROBLEM

n buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value n
BOUNDED BUFFER PROBLEM (CONT.)

The structure of the producer process:

    do {
        ...
        /* produce an item in next_produced */
        ...
        wait(empty);
        wait(mutex);
        ...
        /* add next_produced to the buffer */
        ...
        signal(mutex);
        signal(full);
    } while (true);
BOUNDED BUFFER PROBLEM (CONT.)

The structure of the consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        /* remove an item from buffer to next_consumed */
        ...
        signal(mutex);
        signal(empty);
        ...
        /* consume the item in next_consumed */
        ...
    } while (true);
READERS-WRITERS PROBLEM
A data set is shared among a number of concurrent
processes
⚫ Readers – only read the data set; they do not perform any updates
⚫ Writers – can both read and write
Problem – allow multiple readers to read at the same time
⚫ Only one single writer can access the shared data at the same time
Several variations of how readers and writers are considered
– all involve some form of priorities
Shared Data
⚫ Data set
⚫ Semaphore rw_mutex initialized to 1
⚫ Semaphore mutex initialized to 1
⚫ Integer read_count initialized to 0
READERS-WRITERS PROBLEM (CONT.)

The structure of a writer process:

    do {
        wait(rw_mutex);
        ...
        /* writing is performed */
        ...
        signal(rw_mutex);
    } while (true);
READERS-WRITERS PROBLEM (CONT.)
The structure of a reader process:

    do {
        wait(mutex);
        read_count++;
        if (read_count == 1)
            wait(rw_mutex);     /* first reader locks out writers */
        signal(mutex);
        ...
        /* reading is performed */
        ...
        wait(mutex);
        read_count--;
        if (read_count == 0)
            signal(rw_mutex);   /* last reader lets writers in */
        signal(mutex);
    } while (true);
READERS-WRITERS PROBLEM VARIATIONS

First variation – no reader is kept waiting unless a
writer has permission to use the shared object
Second variation – once a writer is ready, it
performs the write as soon as possible
Both may have starvation, leading to even more
variations
The problem is solved on some systems by the kernel
providing reader-writer locks
THE DINING PHILOSOPHERS PROBLEM
5 philosophers who only eat and think
each needs to use 2 forks for eating
we have only 5 forks
A classical synchronization problem
Illustrates the difficulty of allocating resources
among processes without deadlock and starvation
THE DINING PHILOSOPHERS PROBLEM

Each philosopher is a process, with one semaphore per fork:
⚫ fork: array[0..4] of semaphores
⚫ Initialization: fork[i].count := 1 for i := 0..4
A first attempt: deadlock if each philosopher starts by
picking up his left fork!

    Process Pi:
    repeat
        think;
        wait(fork[i]);
        wait(fork[i+1 mod 5]);
        eat;
        signal(fork[i+1 mod 5]);
        signal(fork[i]);
    forever
THE DINING PHILOSOPHERS PROBLEM

A solution: admit only 4 philosophers at a time to try to eat.
Then 1 philosopher can always eat while the other 3 are each
holding 1 fork. Hence, we can use another semaphore T that
limits to 4 the number of philosophers "sitting at the table".
Initialize: T.count := 4

    Process Pi:
    repeat
        think;
        wait(T);
        wait(fork[i]);
        wait(fork[i+1 mod 5]);
        eat;
        signal(fork[i+1 mod 5]);
        signal(fork[i]);
        signal(T);
    forever