
Chapter 5

Process Synchronization
Outline

• Background
• The Critical-Section Problem
• Synchronization Hardware
• Semaphores
• Classical Problems of Synchronization
• Critical Regions
• Monitors
• Atomic Transactions
Background
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
• Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0, incremented each time a new item is added to the buffer, and decremented each time an item is removed.
• The new scheme is illustrated by the following:
Cont’d

• Shared data
typedef .... item;
item buffer[N];
int in = 0, out = 0, counter = 0;
Cont’d
• Producer process
while(true) {
...
produce an item in nextp
...
while(counter == N)
no-op;
buffer[in] = nextp;
in = (in+1)%N;
counter = counter + 1;
}
Cont’d
• Consumer process
while(true) {
while(counter == 0)
no-op;
nextc = buffer[out];
out = (out+1)%N;
counter = counter - 1;
...
consume the item in nextc
...
}
• The statements:
counter = counter + 1;
counter = counter - 1;
• must be executed atomically.
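• To see why, note that counter = counter + 1 is typically compiled into several machine instructions (load, add, store), and likewise for the decrement. Assuming counter == 5 initially, one possible interleaving of the producer and consumer (register1 and register2 are hypothetical CPU registers) is:
register1 = counter        /* producer: register1 = 5 */
register1 = register1 + 1  /* producer: register1 = 6 */
register2 = counter        /* consumer: register2 = 5 */
register2 = register2 - 1  /* consumer: register2 = 4 */
counter = register1        /* counter = 6 */
counter = register2        /* counter = 4 */
• The final value is 4 (or 6 under another interleaving), although the correct value is 5; this is a race condition.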
The Critical-Section Problem
• n processes all competing to use some shared data
• Each process has a segment of code, called its critical section, in
which the shared data is accessed.
• Problem - ensure that when one process is executing in its critical
section, no other process is allowed to execute in its critical section.
• Structure of process Pi
while(true) {
entry section
critical section
exit section
remainder section
}
Cont’d
• A solution to the critical-section problem must satisfy the
following three requirements:
1. Mutual Exclusion. If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections.
2. Progress. If no process is executing in its critical section and
there are some processes that wish to enter their critical
section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its critical
section and before that request is granted.
• Assumption that each process is executing at a nonzero speed.
• No assumption concerning relative speed of the n processes.
Cont’d
• Initial attempts to solve the problem.
• Only 2 processes, P0 and P1
• General structure of process Pi (other process Pj )
while(true) {
entry section
critical section
exit section
remainder section
}
• Processes may share some common variables to synchronize their
actions
Algorithm 1
• Shared variables:
int turn;
initially turn = 0
turn = i -> Pi can enter its critical section
• Process Pi
while(true) {
while(turn!=i) no-op;
critical section
turn = j;
remainder section
}
• Satisfies mutual exclusion, but not progress: if turn == j and Pj does not wish to enter its critical section, Pi cannot enter even though the critical section is free (the processes are forced to alternate strictly).
Algorithm 2
• Shared variables
bool flag[2];
initially flag[0] = flag[1] = false.
flag[i] = true -> Pi ready to enter its critical section
• Process Pi
while(true) {
flag[i] = true;
while(flag[j]) no-op;
critical section
flag[i] = false;
remainder section
}
• Does not satisfy progress because:
• If the two processes set their flags to true at the same time, then they
will both wait forever.
Algorithm 3

• Combined shared variables of algorithms 1 and 2.


• Process Pi
while(true) {
flag[i] = true;
turn = j;
while (flag[j] && turn==j) no-op;
critical section
flag[i] = false;
remainder section
}
• Meets all three requirements; solves the critical-section problem for two processes (this is Peterson's algorithm).
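• A minimal runnable sketch of this algorithm, assuming C11 sequentially consistent atomics stand in for the algorithm's implicit requirement that loads and stores are not reordered; the thread function and the shared counter are illustrative additions:
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static atomic_bool flag[2];            /* flag[i] == true: Pi wants to enter */
static atomic_int  turn;               /* index of the process asked to wait */
static long counter;                   /* shared data protected by the CS    */

static void *cs_worker(void *arg) {
    int i = (int)(intptr_t)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);                     /* entry section    */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                             /* no-op busy wait  */
        counter++;                                        /* critical section */
        atomic_store(&flag[i], false);                    /* exit section     */
    }
    return NULL;
}

int main(void) {
    pthread_t p0, p1;
    pthread_create(&p0, NULL, cs_worker, (void *)(intptr_t)0);
    pthread_create(&p1, NULL, cs_worker, (void *)(intptr_t)1);
    pthread_join(p0, NULL);
    pthread_join(p1, NULL);
    printf("counter = %ld\n", counter);                   /* expected: 200000 */
    return 0;
}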
Bakery Algorithm - Critical section for n
processes
• Before entering its critical section, process receives a number. Holder of
the smallest number enters the critical section.
• If processes Pi and Pj receive the same number and i < j, then Pi is served first; otherwise Pj is served first.
• The numbering scheme always generates numbers in increasing order of
enumeration.
• Example: 1,2,3,3,3,3,4,5...
Bakery Algorithm
• Shared data –
bool choosing[n];
int number[n];
• initially:
for(i=0;i<n;i++) {
choosing[i] = false;
number[i] = 0;
}
while(true) {
choosing[i] = true;
max=0;
for(j=0;j<n;j++)
if(max<number[j]) max = number[j];
number[i] = max + 1;
choosing[i] = false;
for (j = 0; j < n; j++) {
while (choosing[j]);
while (number[j] != 0 &&
(number[j] < number[i] ||
(number[j] == number[i] && j < i)));
}
critical section
number[i] = 0;
remainder section
}
Synchronization Hardware
• Test and modify the content of a word atomically.
bool TestandSet (bool *target) {
bool t=*target; /* all this is */
*target=true; /* done by one */
return t; /* machine instruction */
}
void Exchg(bool *a,bool *b) {
bool temp=*a; /* all this is */
*a=*b; /* done by one */
*b=temp; /* machine instruction */
}
Cont’d

• Mutual exclusion algorithm


• Shared data:
bool lock=false;
Process Pi
while(true) {
while (TestandSet(&lock)) no-op;
critical section
lock = false;
remainder section
}
or

• Shared data:
bool lock=false;
Process Pi
bool key;
while(true) {
key = true;
do {
Exchg(&lock, &key);
}while(key);
critical section
lock = false;
remainder section
}
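• On real hardware the same idea is exposed through atomic read-modify-write instructions; a hedged sketch using C11's atomic_flag, whose test-and-set behaves like the TestandSet routine above (spin_lock and spin_unlock are illustrative names, not part of the original):
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

static void spin_lock(void) {
    /* atomically set the flag and return its previous value,
       just like TestandSet(&lock); spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* no-op busy wait */
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);                 /* lock = false */
}
• Usage mirrors the algorithm above: spin_lock(); critical section; spin_unlock();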
Semaphore - synchronization tool that does not
require busy waiting
• Semaphore S
• integer variable introduced by Dijkstra can only be accessed via two
indivisible (atomic) operations
wait(S): S = S - 1; if S < 0 then block(S)
signal(S): S = S + 1; if S <= 0 then wakeup(S)
• sometimes wait and signal are called down and up or P and V
• block(S) - results in suspension of the process invoking it (sometimes
called sleep).
• wakeup(S) - results in resumption of exactly one process that has
invoked block(S)
Cont’d
• Example: critical section for n processes
• Shared variables
semaphore mutex=1;
Process Pi
while(true) {
wait(mutex);
critical section
signal(mutex);
remainder section }
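• As an aside, the same pattern maps directly onto POSIX semaphores, where sem_wait and sem_post play the roles of wait and signal; a minimal sketch (the worker function and shared counter are illustrative):
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;                    /* counting semaphore used as a mutex */
static int shared_data;                /* data protected by the semaphore    */

static void *worker(void *arg) {
    sem_wait(&mutex);                  /* wait(mutex)      */
    shared_data++;                     /* critical section */
    sem_post(&mutex);                  /* signal(mutex)    */
    return NULL;
}

/* before starting the threads: sem_init(&mutex, 0, 1); */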
• The wait and signal operations must themselves be implemented so that they
execute atomically.
• Uniprocessor:
• Disable interrupts around the code segment implementing the wait and signal
operations.
• Multiprocessor:
• If no special hardware provided, use a correct software solution to the critical-
section problem, where the critical sections consist of the wait and signal
operations.
• Use special hardware if available, e.g., TestandSet.
Implementation of the wait(S) operation with the TestandSet instruction:

• Shared variables
bool lock = false;
• Code for wait(S)
while (TestandSet(&lock));
S = S - 1;
if (S < 0) {
lock = false;
block(S);
} else
lock = false;
• Code for signal(S)
while (TestandSet(&lock));
S = S + 1;
if (S <= 0)
wakeup(S);
lock = false;
• Race condition exists!
Cont’d
• Better Code for wait(S)
while (TestandSet(lock1));
while (TestandSet(lock));
S = S - 1;
if (S < 0) {
lock = false;
block(S);
} else
lock = false;
lock1 = false;
lock1 serialises the waits.
• Semaphore can be used as general synchronization tool:
• Execute B in Pj only after A executed in Pi
• Use semaphore flag initialized to 0
• Code:
Pi Pj
-- --
. .
. .
. .
A wait(flag)
signal(flag) B
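• A minimal runnable sketch of this ordering constraint with a POSIX semaphore initialized to 0; the printf calls stand in for statements A and B:
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t flag;                          /* initialized to 0 in main */

static void *Pi(void *arg) {
    printf("A\n");                          /* statement A */
    sem_post(&flag);                        /* signal(flag) */
    return NULL;
}

static void *Pj(void *arg) {
    sem_wait(&flag);                        /* wait(flag): blocks until A is done */
    printf("B\n");                          /* statement B */
    return NULL;
}

int main(void) {
    pthread_t ti, tj;
    sem_init(&flag, 0, 0);
    pthread_create(&tj, NULL, Pj, NULL);
    pthread_create(&ti, NULL, Pi, NULL);
    pthread_join(ti, NULL);
    pthread_join(tj, NULL);
    return 0;
}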
Cont’d

• Deadlock - two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1
P0 P1
wait(S) wait(Q)
wait(Q) wait(S)
. .
. .
. .
signal(S) signal(Q)
signal(Q) signal(S)
• Starvation - indefinite blocking
• A process is never removed from the semaphore queue in which it is suspended.
Two types of semaphores:

• Counting semaphore - integer value can range over an unrestricted domain.
• Binary semaphore - integer value can range only between 0 and 1;
can be simpler to implement.
• Classical Problems of Synchronization
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• Shared data
typedef .... item;
item buffer[n];
semaphore full=0, empty=n, mutex=1;
item nextp, nextc;
• Producer process
while(true) {
...
produce an item in nextp
...
wait(empty); /* wait while buffer is full */
wait(mutex);
...
add nextp to buffer
...
signal(mutex);
signal(full); /* one more in buffer */
}
Cont’d
• Consumer process
while(true) {
wait(full); /* wait while no data */
wait(mutex);
...
remove an item from buffer to nextc
...
signal(mutex);
signal(empty); /* one less in buffer */
...
consume the item in nextc
...
}
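• For reference, a compact runnable sketch of this scheme with POSIX semaphores and threads; the buffer size, integer items, and iteration counts are arbitrary illustrative choices:
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                                   /* buffer size */

static int buffer[N], in = 0, out = 0;
static sem_t full, empty, mutex;              /* full=0, empty=N, mutex=1 */

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);                     /* wait while buffer is full */
        sem_wait(&mutex);
        buffer[in] = item;                    /* add item to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);                      /* one more in buffer */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int k = 0; k < 100; k++) {
        sem_wait(&full);                      /* wait while no data */
        sem_wait(&mutex);
        int item = buffer[out];               /* remove item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);                     /* one less in buffer */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}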
Readers-Writers Problem
• A number of processes share data; some read it and some write it. Any number of
readers may access the data at the same time, but while a writer is writing, no
other process may access the data.
• Shared data
semaphore mutex=1, wrt=1;
int readcount=0;
• Writer process
wait(wrt);
...
writing is performed
...
signal(wrt);
Cont’d
• Reader process
wait(mutex);
readcount = readcount + 1;
if (readcount == 1) wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount = readcount - 1;
if (readcount == 0) signal(wrt);
signal(mutex);
Dining-Philosophers Problem
A problem posed by Dijkstra in 1965.
Possible solution to the problem:

• void philosopher(int no) {
while(1) {
...think....
take_fork(no); /* get the left fork */
take_fork((no+1) % N); /* get the right fork */
....eat.....
put_fork(no); /* put left fork down */
put_fork((no+1) % N); /* put down right fork */
}
}
• "take_fork" waits until the specified fork is available and then grabs it.
• Unfortunately this solution will not work: what happens if all the philosophers grab their left fork at the same time? Each then waits forever for its right fork, and the system deadlocks.
Better solution

• Shared data
int p[N]; /* status of the philosophers */
semaphore s[N]=0; /* semaphore for each philosopher */
semaphore mutex=1; /* semaphore for mutual exclusion */
• Code
#define LEFT(n) (n+N-1)%N /* Macros to give left */
#define RIGHT(n) (n+1)%N /* and right around the table */
void test(int no) { /* can philosopher 'no' eat */
if ((p[no] == HUNGRY) &&
(p[LEFT(no)] != EATING) &&
(p[RIGHT(no)] != EATING) ) {
p[no]=EATING;
signal(s[no]); /* if so then eat */
}
}
Cont’d
void take_forks(int no) { /* get both forks */
wait(mutex); /* only one at a time here please */
p[no]=HUNGRY; /* I'm Hungry */
test(no); /* can I eat? */
signal(mutex);
wait(s[no]); /* wait until I can */ }
void put_forks(int no) { /* put the forks down */
wait(mutex); /* only one at a time here */
p[no]=THINKING; /* let me think */
test(LEFT(no)); /* see if my neighbours can now eat */
test(RIGHT(no));
signal(mutex); }
void philosopher(int no) {
while(1) {
...think....
take_forks(no); /* get the forks */
....eat.....
put_forks(no); /* put forks down */
}
}
High-level synchronization constructs
Monitors
• High-level synchronization construct that allows the safe sharing of an abstract
data type among concurrent processes. (Hoare and Brinch Hansen 1974)
• A collection of procedures, variables and data structures. Only one process can
be active in a monitor at any instant.
• monitor example
integer i;
condition c;
procedure producer(x);
begin
.
end
procedure consumer(x);
begin
.
end
end monitor;
Cont’d

• To allow a process to wait within the monitor, a condition variable must be declared, as:
condition x;
• Condition variables can only be used with the operations wait and
signal.
• The operation
wait(x);
• means that the process invoking this operation is suspended until
another process invokes
signal(x);
• The signal(x) operation resumes exactly one suspended process. If no process
is suspended, then the signal operation has no effect.
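• POSIX threads expose essentially the same primitives: pthread_cond_wait releases the associated mutex while the caller sleeps and reacquires it before returning, and pthread_cond_signal resumes at most one waiter. A minimal sketch (the ready flag and function names are illustrative):
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* plays the monitor lock */
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;   /* condition variable x   */
static bool ready = false;                              /* illustrative condition */

static void waiting_process(void) {
    pthread_mutex_lock(&m);
    while (!ready)                        /* wait(x): sleep, releasing m while asleep */
        pthread_cond_wait(&x, &m);
    /* ... continue inside the monitor with ready == true ... */
    pthread_mutex_unlock(&m);
}

static void signaling_process(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&x);              /* signal(x): resume at most one waiter */
    pthread_mutex_unlock(&m);
}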
The producer consumer problem can be solved as follows using monitors:

• monitor ProducerConsumer
condition full, empty;
integer count;
procedure enter;
begin
if count = N then wait(full);
...enter item...
count := count + 1;
if count = 1 then signal(empty)
end;
procedure remove;
begin
if count = 0 then wait(empty);
...remove item...
count := count - 1;
if count = N - 1 then signal(full)
end;
count := 0;
end monitor;
Cont’d
• procedure producer;
begin
while true do
begin
...produce item...
ProducerConsumer.enter
end
end;
procedure consumer;
begin
while true do
begin
ProducerConsumer.remove;
...consume item...
end
end;
The dining philosophers problem can also be solved easily

• monitor dining-philosophers
status state[n];
condition self[n];
procedure pickup (i:integer);
begin
state[i] := hungry;
test (i);
if state[i] <> eating then wait(self[i]);
end;
procedure putdown (i:integer);
begin
state[i] := thinking;
test ((i+4) mod 5);
test ((i+1) mod 5);
end;
Cont’d
• procedure test (k:integer);
begin
if state[(k+4) mod 5] <> eating
and state[k] = hungry
and state[(k+1) mod 5] <> eating
then begin
state[k] := eating;
signal(self[k]);
end;
end;
begin
for i := 0 to 4
do state[i] := thinking;
end
end monitor
Cont’d
• procedure philosopher(no:integer);
begin
while true do
begin
...think....
pickup(no);
....eat.....
putdown(no)
end
end
• There are very few languages that support constructs such as monitors...
expect this to change. One language that does is Java. Here is a Java
class that can be used to solve the producer consumer problem.
Cont’d
class CubbyHole {
private int seq;
private boolean available = false;
public synchronized int get() {
while (available == false) {
try {
wait();
} catch (InterruptedException e) {} }
available = false;
notify();
return seq; }
public synchronized void put(int value) {
while (available == true) {
try {
wait();
} catch (InterruptedException e) {} }
seq = value;
available = true;
notify(); } }
Monitor implementation using semaphores

• What happens when a monitor signals a condition variable?


• A process waiting on the variable cannot be active at the same time as the
signaling process, so there are two choices:
1. Signaling process waits until the waiting process either leaves the
monitor or waits for another condition.
2. Waiting process waits until the signaling process either leaves the
monitor or waits for another condition.
• Variables
semaphore mutex=1, next=0;
int next-count=0;
• 'mutex' provides mutual exclusion inside the monitor.
• 'next' is used to suspend signaling processes.
• 'next-count' gives the number of processes suspended on 'next'.
Cont’d
• Each external procedure F will be replaced by
sem_wait(mutex);
...
body of F;
...
if (next-count > 0)
sem_signal(next);
else sem_signal(mutex);
• Mutual exclusion within a monitor is ensured by 'mutex'. For each condition variable x,
we have:
semaphore x-sem=0;
int x-count=0;
• The operation wait(x) can be implemented as:
x-count = x-count + 1;
if (next-count > 0)
sem_signal(next);
else sem_signal(mutex);
sem_wait(x-sem);
x-count = x-count - 1;
Cont’d
• The operation signal(x) can be implemented as:
if (x-count > 0) {
next-count = next-count + 1;
sem_signal(x-sem);
sem_wait(next);
next-count = next-count - 1; }
• Conditional-wait construct
cond_wait(x,c);
• 'c' is an integer expression evaluated when the wait operation is
executed.
• The value of c (priority number) is stored with the name of the process
that is suspended. When signal(x) is executed, the process with smallest
associated priority number is resumed next.
• Must check two conditions to establish the correctness of this system:
• User processes must always make their calls on the monitor in a correct
sequence.
• Must ensure that an uncooperative process does not ignore the mutual-exclusion
gateway provided by the monitor and try to access the shared resource directly,
without using the access protocols.
Atomic Transactions

• Transaction - program unit that must be executed atomically; that is,


either all the operations associated with it are executed to
completion, or none are performed.
• Must preserve atomicity despite possibility of failure.
• We are concerned here with ensuring transaction atomicity in an
environment where failures result in the loss of information on
volatile storage.
Log-Based Recovery
• Write-ahead log - all updates are recorded on the log, which is kept in
stable storage; log has following fields:
• transaction name
• data item name, old value, new value
• The log has a record of <Ti starts>, and either <Ti commits> if the
transaction commits, or <Ti aborts> if the transaction aborts.
• Recovery algorithm uses two procedures:
• undo(Ti) - restores value of all data updated by transaction Ti to the old values. It
is invoked if the log contains record <Ti starts>, but not <Ti commits>.
• redo(Ti ) - sets value of all data updated by transaction Ti to the new values. It is
invoked if the log contains both <Ti starts> and <Ti commits>.
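• A simplified in-memory sketch of how undo and redo can be driven from such a log; the record layout, MAX_TID bound, and data[] array are illustrative assumptions, and a real log would reside on stable storage:
#include <stdbool.h>

#define MAX_TID 16                 /* illustrative upper bound on transaction ids */

enum rec_type { T_START, T_COMMIT, T_UPDATE };

struct log_rec {
    enum rec_type type;
    int tid;                       /* transaction name Ti                 */
    int item;                      /* data item name (for update records) */
    int old_value, new_value;      /* old and new values of the item      */
};

/* After a crash: redo Ti if the log holds both <Ti starts> and <Ti commits>;
   otherwise undo it.  data[] holds the recovered values of the data items. */
static void recover(const struct log_rec *log, int nrec, int *data) {
    for (int t = 0; t < MAX_TID; t++) {
        bool started = false, committed = false;
        for (int i = 0; i < nrec; i++) {
            if (log[i].tid != t) continue;
            if (log[i].type == T_START)  started = true;
            if (log[i].type == T_COMMIT) committed = true;
        }
        if (!started) continue;
        if (committed) {                     /* redo(Ti): apply new values forward    */
            for (int i = 0; i < nrec; i++)
                if (log[i].tid == t && log[i].type == T_UPDATE)
                    data[log[i].item] = log[i].new_value;
        } else {                             /* undo(Ti): restore old values backward */
            for (int i = nrec - 1; i >= 0; i--)
                if (log[i].tid == t && log[i].type == T_UPDATE)
                    data[log[i].item] = log[i].old_value;
        }
    }
}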
Checkpoints - reduce recovery overhead

1. Output all log records currently residing in volatile storage onto stable
storage.
2. Output all modified data residing in volatile storage to stable storage.
3. Output log record <checkpoint> onto stable storage.
• Recovery routine examines log to determine the most recent transaction
Ti that started executing before the most recent checkpoint took place.
• Search log backward for first <checkpoint> record.
• Find subsequent <Ti starts> record.
• redo and undo operations need to be applied to only transaction Ti and
all transactions Tj that started executing after transaction Ti .
Concurrent Atomic Transactions

• Serial schedule - the transactions are executed sequentially in some order.


• Example of a serial schedule in which T0 is followed by T1 :
T0 | T1
---------------|----------------
read(A) |
write(A) |
read(B) |
write(B) |
| read(A)
| write(A)
| read(B)
| write(B)
• Conflicting operations - Oi and Oj conflict if they access the same data item,
and at least one of these operations is a write operation.
Cont’d
• Conflict serialisable schedule - schedule that can be transformed into a
serial schedule by a series of swaps of non-conflicting operations.
• Example of a concurrent serialisable schedule:
T0 | T1
---------------|----------------
read(A) |
write(A) |
| read(A)
| write(A)
read(B) |
write(B) |
| read(B)
| write(B)
Cont’d
• Locking protocol governs how locks are acquired and released; data item
can be locked in following modes:
• Shared: If Ti has obtained a shared-mode lock on data item Q, then Ti can read
this item, but it cannot write Q.
• Exclusive: If Ti has obtained an exclusive mode lock on data item Q, then Ti can
both read and write Q.
• Two-phase locking protocol
• Growing phase: A transaction may obtain locks, but may not release any lock.
• Shrinking phase: A transaction may release locks, but may not obtain any new
locks.
• The two-phase locking protocol ensures conflict serializability, but does
not ensure freedom from deadlock.
Cont’d

• Timestamp-ordering scheme - transaction ordering protocol for determining serialisability order.
• With each transaction Ti in the system, associate a unique fixed timestamp, denoted by TS(Ti ).
• If Ti has been assigned timestamp TS(Ti ), and a new transaction Tj enters the system, then TS(Ti ) <
TS(Tj ).
• Implement by assigning two timestamp values to each data item Q .
• W-timestamp(Q) - denotes largest timestamp of any transaction that executed write(Q)
successfully.
• R-timestamp(Q) - denotes largest timestamp of any transaction that executed read(Q) successfully.
• Example of a schedule possible under the timestamp protocol:
T0 | T1
---------|----------
read(B) |
| read(B)
| write(B)
read(A) |
| read(A)
| write(A)
• There are schedules that are possible under the two-phase locking protocol but are not
possible under the timestamp protocol, and vice versa.
• The timestamp-ordering protocol ensures conflict serializability; conflicting operations
are processed in timestamp order.
