6 Process Synchronization
Race Condition
When several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the
particular order in which the accesses take place, the situation
is called a race condition.
To guard against race conditions, we need to ensure that
only one process at a time can manipulate the shared
data.
To make such a guarantee, we require that the processes be
synchronized in some way.
Suppose that two processes A and B have access to a
shared variable "Balance":
PROCESS A:                  PROCESS B:
Balance = Balance - 100     Balance = Balance - 200
Peterson's Solution
A software solution for two processes Pi and Pj (j == 1 - i):
flag[i] indicates that Pi is ready to enter, and turn indicates
whose turn it is to enter the critical section.
while (true) {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // busy-wait
    CRITICAL SECTION
    flag[i] = FALSE;
    REMAINDER SECTION
}
Synchronization Hardware
we can state that any solution to the critical-section problem
requires a simple tool—a lock. Race conditions are prevented
by requiring that critical regions be protected by locks.
That is, a process must acquire a lock before entering a
critical section; it releases the lock when it exits the critical
section.
Many systems provide hardware support for critical section
code
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
This solution is not as feasible in a multiprocessor
environment: disabling interrupts on a multiprocessor can be
time consuming, as the message is passed to all the
processors. This message passing delays entry into each
critical section, and system efficiency decreases.
Instead, modern machines provide special atomic
hardware instructions.
Atomic = non-interruptible
Either test memory word and set value
Or swap contents of two memory words
TestAndSet Instruction
Solution using TestAndSet, with a shared Boolean variable
lock initialized to FALSE:
while (true) {
    while (TestAndSet(&lock))
        ; /* do nothing */
    // critical section
    lock = FALSE;
    // remainder section
}
Swap Instruction
Solution using Swap, with a shared Boolean variable lock
initialized to FALSE and a local Boolean variable key:
while (true) {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
}
Semaphore
A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic
operations: wait () and signal ().
Originally called P() and V()
The value of the semaphore S is the number of units of the
resource that are currently available. The P operation wastes
time or sleeps until a resource protected by the semaphore
becomes available. The V operation is the inverse: it makes a
resource available again after the process has finished using it.
wait(): Decrements the value of semaphore variable by 1. If the
value becomes negative, the process executing wait() is
blocked, i.e., added to the semaphore's queue.
signal(): Increments the value of semaphore variable by 1. After
the increment, if the value is negative, it transfers a blocked
process from the semaphore's queue to the ready queue.
Busy-waiting definitions:
wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}
signal (S) {
    S++;
}
Semaphore as General Synchronization Tool
Implementation of wait:
wait (S) {
    value--;
    if (value < 0) {
        add this process to waiting queue
        block();
    }
}
Implementation of signal:
signal (S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue
        wakeup(P);
    }
}
Classical Problems of Synchronization
Readers-Writers Problem
In computer science, the readers-writers problems are
examples of a common computing problem in concurrency.
The problem deals with situations in which many threads must
access the same shared memory at one time, some reading
and some writing, with the natural constraint that no process
may access the data for reading or writing while another
process is in the act of writing to it.
In particular, two or more readers are allowed to access the
shared data at the same time.
This is useful in many systems in order to optimize
performance–if many threads want to read a data structure
there is no need to limit them to accessing the data one at a
time.
However, synchronization must still be ensured in the case of a
writer: only one writer should be able to write at a time, and no
reader should run concurrently with a writer.
Clearly, using a simple set of locks as described
previously would be inefficient.
Three types of solutions exist, differing in which side is given
priority: readers-preference, writers-preference, and solutions
that favor neither (avoiding starvation of both sides).
Operating System Concepts – 7th Edition, Feb 8, 2005 – Silberschatz, Galvin and Gagne
Dining-Philosophers Problem
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
The Dining Philosophers Problem is an illustrative
example of a common computing problem in
concurrency.
The dining philosophers problem describes a group of
philosophers sitting at a table doing one of two things
Eating or thinking. While eating, they are not thinking,
and while thinking, they are not eating. The
Philosophers sit at a circular table with a large bowl of
spaghetti in the center.
A chopstick is placed in between each philosopher,
thus each philosopher has one chopstick to his or her
left and one chopstick to his or her right.
As spaghetti is difficult to serve and eat with a single
chopstick, it is assumed that a philosopher must eat
with two chopsticks. The philosopher can only use the
chopstick on his or her immediate left or right.
The philosophers never speak to each other, which
creates a dangerous possibility of deadlock.
Deadlock could occur if every philosopher holds a left
chopstick and waits perpetually for a right chopstick
(or vice versa). Originally used as a means of
illustrating the problem of deadlock, this system
reaches deadlock when there is a 'cycle of ungranted
requests'. In this case philosopher P1 waits for the
chopstick held by philosopher P2, who is waiting for the
chopstick of philosopher P3, and so forth, forming a
circular chain.
Solution to the Dining Philosophers problem
Each chopstick is a semaphore in the array S[N], initialized to 1;
take_fork(i) performs wait(S[i]) and put_fork(i) performs
signal(S[i]):
void philosopher(int i)
{
    while (true) {
        think();
        take_fork(i);             // take left fork
        take_fork((i + 1) % N);   // take right fork; N = number of philosophers
        eat();
        put_fork(i);              // put left fork
        put_fork((i + 1) % N);    // put right fork
    }
}
Note that this simple protocol can still deadlock if every
philosopher picks up the left fork at the same time.
Monitors
Monitors: A high-level data abstraction tool that automatically
generates atomic operations on a given data structure. A
monitor has:
Shared data.
A set of atomic operations on that data.
A set of condition variables.
A monitor is a language construct. Compare this with
semaphores, which are usually an OS construct
A monitor can be viewed as a class that encapsulates a set of
shared data as well as the operations on that data (e.g. the
critical sections).
A monitor is a collection of procedures, variables, and data
structures grouped together. Processes can call the monitor
procedures but cannot access the internal data structures.
Only one process at a time may be active in a monitor.
Active in a monitor means runnable or running with the
program counter somewhere in a monitor method.
A monitor contains a lock and a set of condition variables.
The lock is used to enforce mutual exclusion.
The condition variables are used as wait queues so that
other threads can sleep while the lock is held.
Thus condition variables make it possible for a thread to
safely go to sleep and be guaranteed that when it wakes
up it will have control of the lock again.
The representation of a monitor type cannot be used
directly by the various processes. Thus, a procedure
defined within a monitor can access only those variables
declared locally within the monitor and its formal
parameters.
Similarly, the local variables of a monitor can be accessed
by only the local procedures.
monitor class Account {
private int balance := 0
invariant balance >= 0