
Process Synchronization

Outline

• Cooperating Processes
• Inter-process Communication
• The Bounded Buffer Producer-Consumer
Problem
• The Critical Section Problem
• Synchronization Hardware
• Semaphores
• Classical Problems of Synchronization
• Monitors
Cooperating Processes
• Concurrent Processes can be
- Independent process:
Any process that does not share any
data with any other process.
- Cooperating process:
Any process that shares data with other
processes.
Contd..
• Advantages of process cooperation:
1. Information Sharing
- Several users may be interested in the
same piece of information,
e.g., shared files.
- Provide an environment that allows
concurrent access.
Contd..
2. Computation speedup
- If we want a particular task to run faster, break it
into subtasks.
- Each subtask executes in parallel with the others.
3. Modularity
- Construct the system in a modular fashion.
- Divide the system functions into separate processes.
Contd..
4. Convenience
- An individual user may have many tasks
to work on at one time.
- For instance, a user may be editing and
printing at the same time.
Producer-Consumer Problem
• Paradigm for cooperating processes:
– a producer process produces information that is
consumed by a consumer process.
• We need a buffer of items that can be filled
by the producer and emptied by the consumer.
– Unbounded-buffer places no practical limit on the size of
the buffer. Consumer may wait, producer never waits.
– Bounded-buffer assumes that there is a fixed buffer size.
Consumer waits for new item, producer waits if buffer is
full.
– Producer and Consumer must synchronize.
Producer-Consumer Problem
Bounded-buffer - Shared
Memory Solution
• Shared data
var n;
type item = ….;
var buffer: array[0..n-1] of item;
in, out: 0..n-1;
in :=0; out:= 0; /* shared buffer = circular array */
/* Buffer empty if in == out */
/* Buffer full if (in+1) mod n == out */
/* no-op means ‘do nothing’ */
Bounded Buffer - Shared
Memory Solution
• Producer process - creates filled buffers
repeat

produce an item in nextp

while (in+1) mod n = out do no-op;
buffer[in] := nextp;
in := (in+1) mod n;
until false;
Bounded Buffer - Shared
Memory Solution
• Consumer process - Empties filled buffers
repeat
while in = out do no-op;
nextc := buffer[out] ;
out := (out+1) mod n;

consume the next item in nextc

until false
Bounded Buffer
• A solution that uses all N buffers is not that simple.
• Modify the producer-consumer code by adding a variable
counter, initialized to 0, incremented each time a new
item is added to the buffer & decremented each time an
item is removed from the buffer.
• Shared data
type item = ….;
var buffer: array[0..n-1] of item;
in, out: 0..n-1;
counter: 0..n;
in, out, counter := 0;
Bounded Buffer
• Producer process - creates filled buffers
repeat

produce an item in nextp

while counter = n do no-op;
buffer[in] := nextp;
in := (in+1) mod n;
counter := counter+1;
until false;
Bounded Buffer
• Consumer process - Empties filled buffers
repeat
while counter = 0 do no-op;
nextc := buffer[out] ;
out := (out+1) mod n;
counter := counter - 1;

consume the next item in nextc

until false;

• The statements
counter := counter + 1;
counter := counter - 1;
must be executed atomically.
Contd..
• Suppose the value of the variable counter is 5, and the producer & consumer
execute counter := counter + 1 and counter := counter - 1 concurrently:
• T0: producer executes register1 := counter {register1 = 5}
• T1: producer executes register1 := register1 + 1 {register1 = 6}
• T2: consumer executes register2 := counter {register2 = 5}
• T3: consumer executes register2 := register2 - 1 {register2 = 4}
• T4: producer executes counter := register1 {counter = 6}
• T5: consumer executes counter := register2 {counter = 4}
• Depending on the interleaving, counter may end up as 4, 5 or 6, but the only
correct result is counter = 5, which is produced only if the producer & consumer
execute their updates separately.
• We arrive at these incorrect outcomes because we
allowed both processes to manipulate the variable counter
concurrently, as the sketch below demonstrates.
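As a hedged illustration (not part of the original slides), the minimal C/pthreads sketch below runs the two unsynchronized updates concurrently; the names producer/consumer, the iteration count, and compiling without optimization (so the load/add/store sequence stays visible) are assumptions. The final value of counter is usually not 0, showing the lost update.

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                 /* shared, deliberately unprotected */

static void *producer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter = counter + 1;          /* not atomic: load, add, store */
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter = counter - 1;          /* interleaves with the increment */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (0 expected if there were no race)\n", counter);
    return 0;
}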
Race Condition
• Race condition: The situation where
several processes access and manipulate
shared data concurrently. The final value
of the shared data depends upon which
process finishes last.
How to avoid Race Condition?
• To avoid race condition, we need to ensure that
only one process at a time can manipulate the
variable.
• For this we require some form of
synchronization of the processes.
• The problem of process synchronization arises
from the need to share resources. This sharing
requires coordination and cooperation to
ensure correct operation.
Interprocess Communication
(IPC)
• Mechanism for processes to communicate and
synchronize their actions.
– Via shared memory
– Via Messaging system - processes communicate without
resorting to shared variables.

• Messaging system and shared memory are not
mutually exclusive:
» they can be used simultaneously within a single OS or a single
process.
• IPC facility provides two operations (sketched below):
» send(message) - message size can be fixed or variable
» receive(message)
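As a hedged sketch (not from the slides), the send/receive pair can be seen with a POSIX pipe between a parent (sender) and a child (receiver); the pipe, message text, and buffer size are illustrative assumptions.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                                    /* child acts as the receiver */
        char buf[64];
        close(fd[1]);                                  /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* receive(message) */
        if (n > 0) {
            buf[n] = '\0';
            printf("received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                                      /* parent acts as the sender */
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));                    /* send(message) */
    close(fd[1]);
    wait(NULL);
    return 0;
}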
Cooperating Processes via
Message Passing
• If processes P and Q wish to
communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive

• Fixed vs. variable size messages:
– Fixed message size - straightforward physical
implementation, but programming is harder because long
messages must be split into fragments.
– Variable message size - simpler programming, more
complex physical implementation.
Direct Communication
• Sender and Receiver processes must
name each other explicitly:
– send(P, message) - send a message to process P
– receive(Q, message) - receive a message from process
Q

• Properties of communication link:
– Links are established automatically.
– A link is associated with exactly one pair of
communicating processes.
– Exactly one link between each pair.
– Link may be unidirectional, usually bidirectional.
Producer-Consumer using IPC
• Producer
type item = …..;
var nextp, nextc: item;
begin
repeat

produce an item in nextp;

send(consumer, nextp);
until false;
Cont..
• Consumer
repeat
receive(producer, nextc);

consume item from nextc;

until false;
Indirect Communication
• Messages are directed to and received from
mailboxes (also called ports)
» Unique ID for every mailbox.
» Processes can communicate only if they share a mailbox.
Send(A, message) /* send message to mailbox A */
Receive(A, message) /* receive message from mailbox A
*/

• Properties of communication link
» Link established only if processes share a common
mailbox.
» Link can be associated with more than two processes.
» Pair of processes may share several communication links
» Links may be unidirectional or bidirectional
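A hedged sketch of mailbox-style indirect communication using a POSIX message queue; the mailbox name "/mbox_A", the queue attributes, and the message text are assumptions (on Linux, link with -lrt).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };
    mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);  /* create mailbox A */
    if (mbox == (mqd_t)-1)
        return 1;

    mq_send(mbox, "ping", 5, 0);                          /* Send(A, message)    */

    char buf[64];
    ssize_t n = mq_receive(mbox, buf, sizeof buf, NULL);  /* Receive(A, message) */
    if (n >= 0)
        printf("got: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/mbox_A");                                 /* destroy mailbox A   */
    return 0;
}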
Indirect Communication using
mailboxes
Mailboxes (cont.)
• Operations
» create a new mailbox
» send/receive messages through mailbox
» destroy a mailbox
• Issue: Mailbox sharing
» P1, P2 and P3 all share mailbox A.
» P1 sends a message to A, while both P2 and P3 execute
receive() from A. Which process will receive the message
sent by P1?
• Possible Solutions
» disallow links between more than 2 processes
» allow only one process at a time to execute receive
operation
» allow system to arbitrarily select receiver and then notify
sender.
Message Buffering
• A link has some capacity, which determines the number
of messages that can reside temporarily in it.
• Queue of messages attached to link
– Zero-capacity : The queue has a maximum length of
zero; the link cannot have any messages waiting in
it.
» sender must block until the recipient receives
the message.
– Bounded capacity :The queue has finite length of n
messages
» sender waits if link is full
– Unbounded capacity : Infinite queue length
» sender never waits
Message Problems - Exception
Conditions
– Process Termination
• Problem: P(sender) terminates, Q(receiver) blocks
forever.
– Solutions:
» System terminates Q.
» System notifies Q that P has terminated.
» Q has an internal mechanism(timer) that determines how long to
wait for a message from P.
• Problem: P(sender) sends messages, Q(receiver)
terminates. With automatic buffering, P keeps sending
until the buffer is full (or forever, if the buffer is unbounded).
With no buffering, P blocks forever.
– Solutions:
» System notifies P
» System terminates P
» P and Q use acknowledgement with timeout
Message Problems - Exception
Conditions
• Lost Messages
» OS guarantees retransmission
» sender is responsible for detecting it using timeouts
» sender gets an exception

• Scrambled Messages
– Message arrives from sender P to receiver Q, but
information in message is corrupted due to noise in
communication channel.
– Solution
» need error detection mechanism, e.g. CHECKSUM
» need error correction mechanism, e.g.
retransmission
Background
• Concurrent access to shared data may
result in data inconsistency.
• Maintaining data consistency requires
mechanisms to ensure the orderly
execution of cooperating processes.
• Shared memory solution to the bounded-
buffer problem allows at most (n-1) items
in the buffer at the same time.
The Critical-Section Problem
• Consider a system consisting of n processes
{P0, P1, ..., Pn-1}
– Structure of process Pi ---- Each process has a code segment,
called the critical section, in which the shared data is accessed.
repeat
entry section /* enter critical section */
critical section /* access shared variables */
exit section /* leave critical section */
remainder section /* do other work */
until false

• Problem
– Ensure that when one process is executing in its critical section,
no other process is allowed to execute in its critical section.
Types of Solution
• Software solutions
- Algorithms whose correctness does not rely on any other
assumptions (no special hardware or OS support).
• Hardware solutions
- Rely on some special machine instructions.
• Operating system solutions
- Provide some functions and data structures to the
programmer.
Mutual Exclusion: Software approach

• Software approaches can be implemented for
concurrent processes that execute on a single-
processor or a multiprocessor machine with
shared main memory.
• Assume elementary mutual exclusion at the
memory access level.
• Simultaneous accesses to the same location in
main memory are serialized in some order.
Solution: Critical Section Problem -
Requirements
– Mutual Exclusion
– If process Pi is executing in its critical section,
then no other processes can be executing in
their critical sections.
– Progress
– If no process is executing in its critical section
and there exists some processes that wish to
enter their critical section, then only those
processes that are not executing in their
remainder section can participate in the
decision of which will enter its critical section
next , and this selection cannot be postponed
indefinitely.
Solution: Critical Section Problem -
Requirements
– Bounded Waiting
– A bound must exist on the number of times that
other processes are allowed to enter their
critical sections after a process has made a
request to enter its critical section and before
that request is granted
• Assume that each process executes at a
nonzero speed.
• No assumption concerning relative speed of the
n processes.
Dekker’s Algorithm
• Dijkstra reported an algorithm for mutual
exclusion for two processes that was
designed by the Dutch mathematician
Dekker.
Solution: Critical Section Problem --
Initial Attempt
• Only 2 processes, P0 and P1
• General structure of process Pi (Pj)
repeat
entry section
critical section
exit section
remainder section
until false
• Processes may share some common
variables to synchronize their actions.
Dekker’s Algorithm
• First attempt

// process 0
...
while (turn != 0)
    /* do nothing */ ;
// critical section
turn = 1;

// process 1
...
while (turn != 1)
    /* do nothing */ ;
// critical section
turn = 0;
Contd..
• Busy waiting.
• Shared memory location (turn).
• Guarantees mutual exclusion, but has 2 problems:
- the processes must strictly alternate in their use of their critical
sections;
- if one process fails, the other process is permanently blocked,
whether the failure occurs inside or outside the critical section.
• We need state information about both
processes.
Contd..
• Second Attempt

// process 0
...
while (flag[1])
    /* do nothing */ ;
flag[0] = true;
// critical section
flag[0] = false;
...

// process 1
...
while (flag[0])
    /* do nothing */ ;
flag[1] = true;
// critical section
flag[1] = false;
...
Contd..
• Each process may examine the other’s flag but may
not alter it:
flag[0] --- for process 0
flag[1] --- for process 1
• If a process fails inside its critical section, or after
setting its flag to true just before entering its critical
section, then the other process is permanently blocked.
• Mutual exclusion is not guaranteed.
Contd..
• E.g., consider the following sequence:
1. P0 executes the while statement & finds flag[1] set to
false.
2. P1 executes the while statement & finds flag[0] set to
false.
3. P0 sets flag[0] true & enters its critical section.
4. P1 sets flag[1] true & enters its critical section.
• Incorrect: both processes are in their critical sections.
Contd..
• Third Attempt

// process 0
...
flag[0] = true;
while (flag[1])
    /* do nothing */ ;
// critical section
flag[0] = false;

// process 1
...
flag[1] = true;
while (flag[0])
    /* do nothing */ ;
// critical section
flag[1] = false;
Contd..
• Obtained by interchanging the two statements of the second attempt.
• If one process fails inside its critical section, then the
other is blocked.
• If one process fails outside its critical section, then the
other is not blocked.
• Mutual exclusion is guaranteed.
• E.g., once P0 sets flag[0] true, P1 cannot enter its
critical section.
Contd..
• If P1 is already in its critical section then P0
blocked by while statement until P1 has left its
critical section.
• Deadlock is possible:
- if both processes set their flags true before
executing the while statement, each will think the
other is in its critical section, causing
deadlock.
Contd..
• Fourth Attempt

// process 0
...
flag[0] = true;
while (flag[1])
{
    flag[0] = false;
    // delay
    flag[0] = true;
}
// critical section
flag[0] = false;

// process 1
...
flag[1] = true;
while (flag[0])
{
    flag[1] = false;
    // delay
    flag[1] = true;
}
// critical section
flag[1] = false;
Contd..
• In the previous attempt, deadlock occurs because each process
can insist on its right to enter its critical section.
• Here, each process sets its flag to indicate its desire to enter
its critical section but is prepared to reset it.
• Mutual exclusion is guaranteed, but a livelock problem arises.
• Consider the following sequence:
1. P0 sets flag[0] to true
2. P1 sets flag[1] to true
Contd..
3. P0 checks flag[1]
4. P1 checks flag[0]
5. P0 sets flag[0] to false
6. P1 sets flag[1] to false
7. P0 sets flag[0] to true
8. P1 sets flag[1] to true
• This sequence could be extended indefinitely and neither process
would enter its critical section.
• This is not deadlock but livelock.
• Any alteration in the relative speed of the two processes will break this
cycle.
• Still not satisfactory.
Correct Solution
• Must impose an order on the activities of
the two processes.
• Use the variable turn from the first attempt.
• When turn = 0 and flag[1] is false, P0 can
enter its critical section.
Contd..
boolean flag[2];
int turn;

void p0()
{
    while (true)
    {
        flag[0] = true;
        while (flag[1])
            if (turn == 1)
            {
                flag[0] = false;
                while (turn == 1)
                    // do nothing
                    ;
                flag[0] = true;
            }
        // critical section
        turn = 1;
        flag[0] = false;
        // remainder
    }
}

void p1()
{
    while (true)
    {
        flag[1] = true;
        while (flag[0])
            if (turn == 0)
            {
                flag[1] = false;
                while (turn == 0)
                    // do nothing
                    ;
                flag[1] = true;
            }
        // critical section
        turn = 0;
        flag[1] = false;
        // remainder
    }
}

void main()
{
    flag[0] = false;
    flag[1] = false;
    turn = 1;
    parbegin(p0, p1);
}
Peterson’s Algorithm
• Dekker's approach is difficult to follow & its correctness is tricky to
prove.
• The global variable flag indicates the position of each process
with respect to mutual exclusion, & turn resolves conflicts.
• In P0, if flag[0] is true then P1 cannot enter its critical section.
• If P1 is already in its critical section then P0 is blocked from
entering its critical section.
• If P0 is blocked in its while loop, then flag[1] is true and
turn = 1.
• P0 can enter when either flag[1] becomes false or turn becomes 0.
Contd..
• To see that P0 cannot be blocked indefinitely, consider 3 cases:
1. P1 has no interest in its critical section: impossible, because it
would imply flag[1] = false, which lets P0 enter.
2. P1 is waiting for its critical section: also impossible, because if
turn = 1, P1 is able to enter its critical section (and clears flag[1] on exit).
3. P1 is monopolizing its critical section: impossible, because P1 sets
turn = 0 before each attempt to re-enter.
Contd..
boolean flag[2];
int turn;

void p0()
{
    while (true)
    {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
            // do nothing
            ;
        // critical section
        flag[0] = false;
        // remainder
    }
}

void p1()
{
    while (true)
    {
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0)
            // do nothing
            ;
        // critical section
        flag[1] = false;
        // remainder
    }
}

void main()
{
    flag[0] = false;
    flag[1] = false;
    parbegin(p0, p1);
}
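As a hedged, runnable rendering of Peterson's algorithm (not part of the original slides), the sketch below uses C11 sequentially consistent atomics for flag and turn so the compiler and CPU cannot reorder the entry-protocol accesses; the thread IDs, loop count, and shared counter are assumptions.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int turn;
static int shared_count = 0;                  /* protected by the algorithm */

static void *worker(void *arg) {
    int me = *(int *)arg, other = 1 - me;
    for (int i = 0; i < 100000; i++) {
        atomic_store(&flag[me], true);        /* I want to enter        */
        atomic_store(&turn, other);           /* but you may go first   */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                                 /* busy wait              */
        shared_count++;                       /* critical section       */
        atomic_store(&flag[me], false);       /* exit section           */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("count = %d (expected 200000)\n", shared_count);
    return 0;
}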
Synchronization Mechanisms
• Disable Interrupts
• Test-and-Set instruction
• Exchange Instruction
• In a uni-processor environment, we could
disallow interrupts to occur while a shared
variable is being modified.
• This way, we could be sure that the current
sequence of instructions would execute without
any preemption.
Synchronization hardware
Mutual exclusion: hardware solution
In a uni-processor system, it is sufficient to prevent a
process from being interrupted:
while (true) {
    /* disable interrupts */
    /* critical section */
    /* enable interrupts */
    /* remainder */
}
Since the CS cannot be interrupted, mutual exclusion is guaranteed.
But efficiency decreases.
It cannot work in multi-processor environments, where
more than one process is executing at a time.
Hardware Solutions for
Synchronization
• Mutual exclusion solutions presented depend on
memory hardware having read/write cycle.
– If multiple reads/writes could occur to the same memory
location at the same time, this would not work.
– Processors with caches but no cache coherency cannot
use the solutions
• In general, it is impossible to build mutual
exclusion without a primitive that provides some
form of mutual exclusion.
• How can this be done in the hardware???
Synchronization Hardware
• Test and set Instruction
• Definition:

boolean testset(int i)
{
    if (i == 0)
    {
        i = 1;
        return true;
    }
    else
        return false;
}
Contd..

• The instruction tests the value of i:
if the value is 0, it sets i to 1 and returns true.
Otherwise the value is not changed and
false is returned.
• The entire test-and-set function is carried out
atomically, i.e. it is not subject to
interruption.
• It is uninterruptible.
Mutual Exclusion with test-and-set
Shared data:
boolean bolt = false;

void p(int i)
{
    while (true)
    {
        while (!testset(bolt))
            /* do nothing */ ;
        /* critical section */
        bolt = false;
        /* remainder section */
    }
}

void main()
{
    bolt = false;
    parbegin(p(1), p(2), … , p(n));
}
Cont..
• The shared variable bolt is initialized to false.
• The only process that enters the CS is the one that
finds bolt false and sets it to true.
• A process may enter its CS only when it
finds bolt = 0 (false).
• Setting bolt = 1 (true) excludes the other processes
from entering the CS.
• Always, bolt+∑keyi = n
Cont..
• No other process can access the memory
location until the instruction has finished.
• It is used to put a "bolt" on a memory word.
• Before operating on a shared resource, a
process must perform the following
actions:
Cont..
1) Examine the value of the bolt, (test)
2) Set the bolt to 1,(set)
3) If the original value was 1, go back to
step 1.
•T & S can be used to implement mutual
exclusion as follows:
Contd..
• The only process that may enter its CS is
the one that finds bolt = 0 (false).
• All other processes willing to enter the CS go
into busy-waiting mode.
• When a process leaves its critical section,
it resets bolt to false.
• At this point, one and only one of the
waiting processes is granted access to its
CS (a runnable sketch follows below).
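A hedged sketch of the same bolt idea using C11's atomic_flag, whose atomic_flag_test_and_set is a hardware test-and-set. Note that it returns the previous value (true means the lock was already held), the opposite convention of the testset definition above. The thread count and loop bound are assumptions.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag bolt = ATOMIC_FLAG_INIT;   /* clear = lock free */
static int shared = 0;

static void *p(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&bolt))
            ;                                 /* busy wait: bolt was already set */
        shared++;                             /* critical section */
        atomic_flag_clear(&bolt);             /* bolt = false */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, p, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d (expected 400000)\n", shared);
    return 0;
}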
Contd..
• Properties:
- used for uniprocessor and
multiprocessor.
- simple, easy to verify
- can support multiple critical sections, each defined by
its own variable.
Contd..
• Disadvantages:
- Busy waiting is employed.
- starvation is possible.
- Deadlock possible.
Exchange Instruction
void exchange(int register, int memory)
{
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}
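A hedged sketch of how an atomic exchange is typically used for mutual exclusion (a bolt/key style protocol); C11 atomic_exchange stands in for the hardware instruction, and the local key variable, thread count, and loop bound are assumptions.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int bolt = 0;                     /* 0 = no process in its CS */
static int shared = 0;

static void *p(void *arg) {
    for (int i = 0; i < 100000; i++) {
        int key = 1;                            /* local key */
        do {
            key = atomic_exchange(&bolt, key);  /* atomically swap key <-> bolt */
        } while (key != 0);                     /* key == 0: we took the lock */
        shared++;                               /* critical section */
        atomic_exchange(&bolt, 0);              /* put the 0 back: release */
        /* remainder */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, p, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d (expected 400000)\n", shared);
    return 0;
}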
Semaphore
• A Semaphore S - integer variable
– used to represent number of abstract resources
• Can only be accessed via two indivisible
(atomic) operations
wait (S): while S <= 0 do no-op;
          S := S - 1;
signal (S): S := S + 1;
– P or wait used to acquire a resource, decrements count
– V or signal releases a resource and increments count
– If P is performed on a count <= 0, process must wait for
V or the release of a resource.
Contd..
• Modifications to the integer value of the
semaphore must be executed indivisibly i.e
when one process modifies the semaphore
value, no other process can simultaneously
modify that same value.
• The testing (s<0) and decrementing of s must
be done without interruption(i.e.atomically).
• Semaphores can be used to solve a variety of
synchronization problems.
Example: Critical Section for n
Processes
– Shared variables
var mutex: semaphore
initially mutex = 1

– Process Pi
repeat
wait(mutex);
critical section
signal (mutex);
remainder section
until false
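A hedged translation of this n-process structure to POSIX unnamed semaphores (sem_init/sem_wait/sem_post); the thread count, loop bound, and shared counter are assumptions, and unnamed semaphores are not available on every platform.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;              /* semaphore used for mutual exclusion, value 1 */
static int shared = 0;

static void *Pi(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* wait(mutex)   */
        shared++;                /* critical section */
        sem_post(&mutex);        /* signal(mutex) */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&mutex, 0, 1);      /* initially mutex = 1 */
    for (int i = 0; i < 5; i++) pthread_create(&t[i], NULL, Pi, NULL);
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    printf("shared = %d (expected 500000)\n", shared);
    sem_destroy(&mutex);
    return 0;
}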
Implementation of Semaphores
• The main disadvantage of the previous mutual
exclusion solutions is that they all require busy
waiting. While a process is in its critical
section, any other process that tries to enter
must loop continuously in the entry code. Busy
waiting wastes CPU cycles.
• To overcome busy waiting, we can modify the
definition of the wait and signal semaphore
operations.
Cont..
• When a process executes the wait operation
and finds that the semaphore value is not
positive, it must wait. However, rather than busy
waiting, the process can block itself.
• The block operation places the process into a
waiting queue associated with the
semaphore, and the state of the process is
switched to the waiting state. Control is then
transferred to the scheduler, which selects
another process to execute.
Cont..
• A process that is blocked should be
restarted when some other process
executes a signal operation.
• The process is started by a wakeup
operation,which changes its state from
waiting to ready.
Semaphore Implementation
• Define a semaphore as a record
type semaphore = record
value: integer;
L: list of processes;
end;
• Assume two simple operations
• block suspends the process that invokes it.
• wakeup(P) resumes the execution of a blocked
process P.
Semaphore
Implementation(cont.)
– Semaphore operations are now defined as
wait (S): S.value := S.value -1;
if S.value < 0
then begin
add this process to S.L;
block;
end;

signal (S): S.value := S.value +1;


if S.value <= 0
then begin
remove a process P from S.L;
wakeup(P);
end;
Types of Semaphores
• Two types of semaphores
1. Counting semaphore - the integer value can
range over an unrestricted domain. (A busy-waiting
implementation is sometimes called a spinlock.)
2. Binary semaphore - the integer value can
range only between 0 and 1; simpler to
implement.
Counting Semaphore
struct semaphore
{
    int count;
    queueType queue;
};

void wait(semaphore S)
{
    S.count = S.count - 1;
    if (S.count < 0)
    {
        /* add this process to S.queue and block it */
    }
}

void signal(semaphore S)
{
    S.count = S.count + 1;
    if (S.count <= 0)
    {
        /* remove a process from S.queue and place it in the ready queue */
    }
}
Binary Semaphore
struct b_semaphore
{
    enum {zero, one} value;
    queueType queue;
};

void waitB(b_semaphore S)
{
    if (S.value == one)
        S.value = zero;
    else
    {
        /* add this process to S.queue and block it */
    }
}

void signalB(b_semaphore S)
{
    if (S.queue is_empty())
        S.value = one;
    else
    {
        /* remove a process from S.queue and place it in the ready queue */
    }
}
Cont..
• A binary semaphore has the same expressive power as a counting
semaphore.
• It is easier to implement.
Classical Problems of
Synchronization
• The Producer-Consumer(Bounded Buffer)
Problem

• The Readers - Writers Problem

• The Dining-Philosophers Problem


The Producer-Consumer (Bounded
Buffer) Problem Using Semaphores
• The producer places items in a shared buffer
that can hold n items.
• The consumer takes items from the buffer.
• The producer must be prevented from placing
an item in a full buffer and the consumer must
be prevented from removing items from an
empty buffer.
• By using 2 semaphores, we can synchronize
the processes as follows: Initially set S1 = n and
S2 = 0
Cont..
Producer process Pp:
Produce:
    (produce an item)
    wait(S1)
    (put item in buffer)
    signal(S2)
    goto Produce

Consumer process Pc:
Consume:
    wait(S2)
    (remove item from buffer)
    signal(S1)
    (consume item)
    goto Consume
The Producer-Consumer Solution using
Semaphores
• Shared data
type item = ….;
var buffer: array[0..n-1] of item;
full, empty, mutex : semaphore;
nextp, nextc :item;
full := 0; empty := n; mutex := 1;
Cont..
• Producer process - creates filled buffers
repeat

produce an item in nextp { get an item to put in the
buffer}

wait (empty); {decrement no. of empty slots}
wait (mutex); {set mutex to 0; enter critical section}

add nextp to buffer {insert an item in the buffer}

signal (mutex); {set mutex to 1; leave critical section}
signal (full); {increment no. of items in buffer}
until false;
Cont..
• Consumer process - Empties filled buffers
Repeat
wait (full ); {decrement no. of items}
wait (mutex); {set mutex to 0; enter critical section}

remove an item from buffer to nextc
... {remove an item from the buffer}
signal (mutex); {set mutex to 1; leave critical section}
signal (empty); {increment no. of empty slots}

consume the next item in nextc {do something with
the item}

until false;
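A hedged sketch of the same full/empty/mutex scheme with POSIX semaphores; the buffer size N, the int item type, and the item counts are assumptions.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8
static int buffer[N];
static int in = 0, out = 0;
static sem_t full, empty, mutex;     /* full = 0, empty = N, mutex = 1 */

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);            /* wait(empty): one fewer free slot */
        sem_wait(&mutex);            /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);            /* leave critical section */
        sem_post(&full);             /* signal(full): one more item */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);             /* wait(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* signal(empty): one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}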
Readers-Writers Problem
• Data area shared among a number of
processes.
• The data area could be a file, block of main
memory, set of registers.
• A number of processes only read the data
area (readers).
• A number of processes only write to the data
area (writers).
• The conditions that must be satisfied are:
any number of readers may read simultaneously;
only one writer at a time may write; and
while a writer is writing, no reader may read.
Readers-Writers Problem
• Shared Data
var mutex, wrt: semaphore =1;
readcount: integer = 0;
• Writer Process
wait(wrt); {decrement to 0 to get access to
database}

writing is performed {update database}
...
signal(wrt); {increment to 1 to release database}
Readers-Writers Problem
• Reader process
wait(mutex);
readcount := readcount +1;
if readcount = 1 then wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount := readcount - 1;
if readcount = 0 then signal(wrt);
signal(mutex);
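A hedged, direct translation of this readers-preference solution to POSIX semaphores; the numbers of reader and writer threads and the shared integer standing in for the database are assumptions.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex, wrt;                    /* both initialized to 1 */
static int readcount = 0;
static int data = 0;                        /* the shared "database" */

static void *writer(void *arg) {
    sem_wait(&wrt);                         /* exclusive access */
    data++;                                 /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

static void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1) sem_wait(&wrt);   /* first reader locks out writers */
    sem_post(&mutex);

    printf("read %d\n", data);              /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0) sem_post(&wrt);   /* last reader releases writers */
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    pthread_t r[3], w;
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}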
Dining-Philosophers Problem
Contd...
- Five philosophers live together.
- The life of each philosopher consists principally
of thinking & eating.
- The eating arrangements are simple:
- a round table,
- 5 plates, one for each philosopher,
- five forks.
- A philosopher who wants to eat uses the 2 forks
on either side of his plate.
Solution using semaphores
- Each philosopher picks up first the fork on
the left & then the fork on the right.
- After the philosopher has finished eating,
the 2 forks are replaced on the table.
- If all the philosophers are hungry at the same
time, they all sit down, they all pick up the fork
on their left & they all reach out for the
other fork - they starve & deadlock.
contd...

- To overcome the risk of deadlock, we could
buy 5 additional forks, or teach the
philosophers to eat with just one fork, or
- allow only 4 philosophers at a time into
the dining room.
Dining Philosophers Problem
Shared Data
var chopstick: array [0..4] of semaphore (=1 initially);

– Philosopher i :
repeat
wait (chopstick[i]);
wait (chopstick[(i+1) mod 5]);

eat
...
signal (chopstick[i]);
signal (chopstick[(i+1) mod 5]);

think

until false;
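A hedged sketch combining this chopstick-per-semaphore structure with the "at most 4 philosophers in the room" deadlock-avoidance idea mentioned earlier, using a counting semaphore room initialized to 4; the thread handling and the number of eating rounds are assumptions.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t chopstick[5];
static sem_t room;                          /* at most 4 philosophers seated */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&room);                    /* enter the dining room */
        sem_wait(&chopstick[i]);            /* left chopstick  */
        sem_wait(&chopstick[(i + 1) % 5]);  /* right chopstick */
        printf("philosopher %d eats\n", i);
        sem_post(&chopstick[(i + 1) % 5]);
        sem_post(&chopstick[i]);
        sem_post(&room);                    /* leave the dining room */
    }
    return NULL;
}

int main(void) {
    pthread_t t[5];
    int id[5];
    sem_init(&room, 0, 4);
    for (int i = 0; i < 5; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < 5; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    return 0;
}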
Critical Regions
• High-level synchronization construct
• A shared variable v of type T is declared
as:
var v: shared T
• Variable v is accessed only inside
statement
region v when B do S
where B is a boolean expression.
While statement S is being executed, no other
process can access variable v.
Critical Regions (cont.)
• Regions referring to the same shared
variable exclude each other in time.
• When a process tries to execute the region
statement, the Boolean expression B is
evaluated.
• If B is true, statement S is executed.
• If it is false, the process is delayed until B becomes
true and no other process is in the region
associated with v.
Example - Bounded Buffer
– Shared variables
var buffer: shared record
pool:array[0..n-1] of item;
count,in,out: integer;
end;
– Producer Process inserts nextp into the
shared buffer
region buffer when count < n
do begin
pool[in] := nextp;
in := (in+1) mod n;
count := count + 1;
end;
Bounded Buffer Example
– Consumer Process removes an item from the
shared buffer and puts it in nextc
region buffer when count > 0
do begin
nextc := pool[out];
out := (out+1) mod n;
count := count -1;
end;
Implementing Regions
• Region x when B do S
var mutex, first-delay, second-delay: semaphore;
first-count, second-count: integer;
• Mutually exclusive access to the critical
section is provided by mutex.
If a process cannot enter the critical section because
the Boolean expression B is false,
it initially waits on the first-delay semaphore;
moved to the second-delay semaphore before it is allowed
to reevaluate B.
Implementation
• Keep track of the number of processes
waiting on first-delay and second-delay,
with first-count and second-count
respectively.
• The algorithm assumes a FIFO ordering in
the queueing of processes for a
semaphore.
• For an arbitrary queueing discipline, a
more complicated implementation is
required.
Implementing Regions
wait(mutex);
while not B
do begin
    first-count := first-count + 1;
    if second-count > 0
        then signal(second-delay)
        else signal(mutex);
    wait(first-delay);
    first-count := first-count - 1;
    second-count := second-count + 1;
    if first-count > 0
        then signal(first-delay)
        else signal(second-delay);
    wait(second-delay);
    second-count := second-count - 1;
end;
S;
if first-count > 0
    then signal(first-delay)
else if second-count > 0
    then signal(second-delay)
else signal(mutex);
Monitors
• High-level synchronization construct that allows
the safe sharing of an abstract data type among
concurrent processes.
type monitor-name = monitor
variable declarations
procedure entry P1 (…);
begin … end;
Cont..
procedure entry P2(…);
begin … end;

procedure entry Pn (…);
begin…end;
begin
initialization code
end
Cont…
• To allow a process to wait within the monitor, a
condition variable must be declared, as
var x, y: condition
• Condition variable can only be used with the
operations wait and signal.
• The operation
x.wait;
means that the process invoking this operation
is suspended until another process invokes
Cont..
x.signal;
• The x.signal operation resumes exactly one
suspended process. If no process is
suspended, then the signal operation has no
effect.
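A hedged approximation of a monitor in C (monitors are a language construct, so this is only an analogy, not part of the original slides): one pthread mutex plays the role of the monitor lock, and a pthread condition variable plays the role of x, with pthread_cond_wait/pthread_cond_signal standing in for x.wait/x.signal. The procedure names and the resource_busy flag are assumptions.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
static bool resource_busy = false;

/* procedure entry acquire() */
void acquire(void) {
    pthread_mutex_lock(&monitor_lock);         /* only one process inside the monitor */
    while (resource_busy)
        pthread_cond_wait(&x, &monitor_lock);  /* x.wait: sleep, releasing the lock */
    resource_busy = true;
    pthread_mutex_unlock(&monitor_lock);
}

/* procedure entry release() */
void release(void) {
    pthread_mutex_lock(&monitor_lock);
    resource_busy = false;
    pthread_cond_signal(&x);                   /* x.signal: resume one waiter */
    pthread_mutex_unlock(&monitor_lock);
}

static void *user(void *arg) {
    acquire();
    /* use the shared resource exclusively */
    release();
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, user, NULL);
    pthread_create(&b, NULL, user, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}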
(Figure: Schematic view of a monitor)

(Figure: Monitor with condition variables)
Cont..
type dining-philosophers = monitor
    var state: array [0..4] of (thinking, hungry, eating);
    var self: array [0..4] of condition;

    procedure entry pickup (i: 0..4);
    begin
        state[i] := hungry;
        test(i);
        if state[i] ≠ eating then self[i].wait;
    end;
Cont..
procedure entry putdown (i: 0..4);
begin
    state[i] := thinking;
    test((i+4) mod 5);
    test((i+1) mod 5);
end;
Dining Philosophers (Cont.)

procedure test (k: 0..4);
begin
    if state[(k+4) mod 5] ≠ eating
        and state[k] = hungry
        and state[(k+1) mod 5] ≠ eating
    then begin
        state[k] := eating;
        self[k].signal;
    end;
end;

begin
    for i := 0 to 4
        do state[i] := thinking;
end.
Monitor Implementation Using Semaphores

Variables
    var mutex: semaphore (init = 1)
    next: semaphore (init = 0)
    next-count: integer (init = 0)

Each external procedure F will be replaced by
    wait(mutex);
        body of F;
    if next-count > 0
        then signal(next)
        else signal(mutex);

Mutual exclusion within a monitor is ensured.
Monitor Implementation (Cont.)

For each condition variable x, we have:
    var x-sem: semaphore (init = 0)
    x-count: integer (init = 0)

The operation x.wait can be implemented as:
    x-count := x-count + 1;
    if next-count > 0
        then signal(next)
        else signal(mutex);
    wait(x-sem);
    x-count := x-count - 1;
Cont..
The operation x.signal can be implemented as:
if x-count > 0
then begin
next-count := next-count + 1;
signal(x-sem);
wait(next);
next-count := next-count – 1;
end;
Cont..
Conditional-wait construct: x.wait(c);
• c - an integer expression evaluated when the wait
operation is executed.
• The value of c (a priority number) is stored with the
name of the process that is suspended.
• When x.signal is executed, the process with the
smallest associated priority number is
resumed next.
Cont…
• Check two conditions to establish
correctness of system:
• User processes must always make their calls
on the monitor in a correct sequence.
• Must ensure that an uncooperative process
does not ignore the mutual-exclusion
gateway provided by the monitor, and try to
access the shared resource directly, without
using the access protocols.
