CSC 504-514 - Lecture 3 - Methods of Synchronization (Nov 14, 2023, contd.)

CSC 504 - Process Synchronization (contd)
Definitions
 critical section: a section of code that reads/writes shared data
 race condition: potential for interleaved execution of a critical section by multiple threads => results are non-deterministic
 mutual exclusion: synchronization mechanism that avoids race conditions by ensuring exclusive execution of critical sections
 deadlock: permanent blocking of threads
 starvation: the system keeps executing, but some thread makes no progress
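The race condition above can be made concrete: an increment like balance += 1 is really a read-modify-write, so two interleaved increments can lose an update. A minimal sketch that simulates one bad interleaving deterministically (no real threads needed; the variable names are ours):

```python
# Two "threads" A and B each try to increment balance once.
# In the interleaving below, both read before either writes,
# so one increment is lost.
balance = 0

tmp_a = balance + 1   # A reads balance (0) and computes 1
tmp_b = balance + 1   # B reads the same stale balance (0) and computes 1
balance = tmp_a       # A writes 1
balance = tmp_b       # B overwrites with 1 -- A's increment is lost

assert balance == 1   # two increments ran, but the result is 1, not 2
```

Under a different interleaving (A's read and write both before B's) the result would be 2; that divergence is exactly what "results are non-deterministic" means.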
Conventional solutions for ME
 Software reservation: a thread must register its intent to enter the CS and then wait until no other thread has registered a similar intention before proceeding
 Spin-locks using memory-interlocked instructions: require special hardware to ensure that a given location can be read, modified, and written without interruption (e.g., TAS: test-and-set instruction)
 Special variable-based mechanisms for ME: semaphores, monitors, message passing, lock files
 they are all equivalent!
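Python has no test-and-set instruction, but a spin-lock in the spirit of the second bullet can be sketched by using a non-blocking Lock.acquire as a stand-in for the atomic read-modify-write. A sketch under that assumption (the class name SpinLock is ours, not from the slides):

```python
import threading

class SpinLock:
    """Spin-lock sketch: busy-wait until the simulated test&set succeeds."""
    def __init__(self):
        self._flag = threading.Lock()  # stand-in for the interlocked memory word

    def lock(self):
        # test&set: atomically try to set the flag; spin while it was already set
        while not self._flag.acquire(blocking=False):
            pass  # busy-wait

    def unlock(self):
        self._flag.release()

# Demo: two threads increment a shared counter inside the critical section.
counter = 0
spin = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        spin.lock()
        counter += 1   # critical section
        spin.unlock()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 20_000
```

Busy-waiting wastes CPU while spinning, which is why the blocking mechanisms below (semaphores, monitors, message passing) are usually preferred.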
The critical section problem

• When a process executes code that manipulates shared data (or a resource), we say that the process is in its critical section (CS) for that shared data
• The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple CPUs)
• Each process must therefore request permission to enter its critical section (CS)
The critical section problem

• The section of code implementing this request is called the entry section
• The critical section (CS) might be followed by an exit section
• The remaining code is the remainder section
• The critical section problem is to design a protocol that the processes can use so that their actions do not depend on the order in which their execution is interleaved (possibly on many processors)
Framework for analysis of solutions
• Each process executes at nonzero speed, but no assumption is made about the relative speed of the n processes (may be SMP)
• Many CPUs may be present, but memory hardware prevents simultaneous access to the same memory location
• No assumption about the order of interleaved execution
• For solutions: we need to specify the entry and exit sections

General structure of a process:
repeat
  entry section
  critical section
  exit section
  remainder section
forever
Requirements for a valid solution to
the critical section problem
• Bounded Waiting
– After a process has made a request to enter its CS, there must be a bound on the number of times that the other processes are allowed to enter their CS
• otherwise the process will suffer from starvation
• Mutual Exclusion
– At any time, at most one process can be in its critical section (CS)
• Progress
– Only processes that are not executing in their RS (remainder section) can participate in the decision of who will enter the CS next
– This selection cannot be postponed indefinitely
• Hence, we must have no deadlock
OS Solutions: Semaphores
(Section 5.6, Silberschatz)

• A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations:
– wait(S): decrement the semaphore
– signal(S): increment the semaphore
Counting Semaphores
• A semaphore is a record (structure):
type semaphore = record
  count: integer;
  queue: list of processes
end;
var S: semaphore;
Semaphore’s operations
wait(S):
  S.count--;
  if (S.count < 0) {
    block this process
    place this process in S.queue
  }

signal(S):
  S.count++;
  if (S.count <= 0) {
    remove a process P from S.queue
    place this process P on the ready list
  }

S.count must be initialized to a nonnegative value (depending on the application)
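The wait/signal pseudocode above can be mirrored in Python. In this sketch a threading.Event stands in for blocking/unblocking a process and an internal lock makes the two operations atomic; both are our implementation choices, not part of the slide's definition:

```python
import threading
from collections import deque

class Semaphore:
    """Counting semaphore mirroring the slide's record: count + queue."""
    def __init__(self, count):
        assert count >= 0              # must be initialized to a nonnegative value
        self.count = count
        self.queue = deque()           # events for blocked threads
        self._atomic = threading.Lock()

    def wait(self):
        with self._atomic:
            self.count -= 1
            blocked = None
            if self.count < 0:
                blocked = threading.Event()
                self.queue.append(blocked)   # place this process in S.queue
        if blocked:
            blocked.wait()                   # block this process

    def signal(self):
        with self._atomic:
            self.count += 1
            if self.count <= 0:
                self.queue.popleft().set()   # move one process to the ready list

# Demo: mutual exclusion with S.count initialized to 1.
S = Semaphore(1)
shared = 0

def worker():
    global shared
    for _ in range(5_000):
        S.wait()
        shared += 1    # critical section
        S.signal()

ts = [threading.Thread(target=worker) for _ in range(3)]
for t in ts: t.start()
for t in ts: t.join()
assert shared == 15_000 and S.count == 1
```

Note that even if signal() fires before a blocked thread reaches blocked.wait(), the pre-set Event makes the wait return immediately, so no wakeup is lost.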
Semaphores: observations
• When S.count >= 0: the number of processes that can execute wait(S) without being blocked is S.count
• When S.count < 0: the number of processes waiting on S is |S.count|
• Atomicity and mutual exclusion: no 2 processes can be in wait(S) and signal(S) (on the same S) at the same time (even with multiple CPUs)
• Hence the blocks of code defining wait(S) and signal(S) are, in fact, critical sections
Using semaphores for solving critical section problems
• For n processes:
• Initialize S.count to 1
• Then only 1 process is allowed into the CS (mutual exclusion)
• To allow k processes into the CS, we initialize S.count to k

Process Pi:
repeat
  wait(S);
  CS
  signal(S);
  RS
forever
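The same pattern works with Python's built-in threading.Semaphore initialized to k. This sketch tracks how many threads are inside the CS at once and checks that the count never exceeds k (the tracking variables and sleep are ours, added only to make the concurrency observable):

```python
import threading, time

k = 2
S = threading.Semaphore(k)       # S.count initialized to k
in_cs = 0
max_in_cs = 0
tracker = threading.Lock()       # protects the two counters above

def worker():
    global in_cs, max_in_cs
    S.acquire()                  # wait(S)
    with tracker:
        in_cs += 1
        max_in_cs = max(max_in_cs, in_cs)
    time.sleep(0.01)             # simulate work in the CS
    with tracker:
        in_cs -= 1
    S.release()                  # signal(S)

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads: t.start()
for t in threads: t.join()
assert 1 <= max_in_cs <= k       # never more than k processes in the CS
```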
Binary semaphores
• The semaphores we have studied are called counting (or integer) semaphores
• There are also binary semaphores
– similar to counting semaphores except that “count” is Boolean valued
– counting semaphores can be implemented by binary semaphores...
– generally more difficult to use than counting semaphores (e.g., they cannot be initialized to an integer k > 1)
Binary semaphores

waitB(S):
  if (S.value = 1) {
    S.value := 0;
  } else {
    block this process
    place this process in S.queue
  }

signalB(S):
  if (S.queue is empty) {
    S.value := 1;
  } else {
    remove a process P from S.queue
    place this process P on the ready list
  }
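The claim above that counting semaphores can be built from binary ones can be sketched with Barz's well-known construction: one binary semaphore protects the counter, another acts as a gate that blocked processes wait on. Here Python Locks play the binary semaphores; the class name is ours:

```python
import threading

class CountingFromBinary:
    """Counting semaphore built from two binary semaphores (Barz's scheme)."""
    def __init__(self, k):
        self.count = k
        self.mutex = threading.Lock()   # binary: protects count
        self.gate = threading.Lock()    # binary: closed when count == 0
        if k == 0:
            self.gate.acquire()

    def wait(self):
        self.gate.acquire()             # pass the gate (blocks if closed)
        with self.mutex:
            self.count -= 1
            if self.count > 0:
                self.gate.release()     # leave the gate open for the next one

    def signal(self):
        with self.mutex:
            self.count += 1
            if self.count == 1:
                self.gate.release()     # reopen the gate

# Demo: with k == 1 it behaves as a mutual-exclusion semaphore.
C = CountingFromBinary(1)
total = 0

def worker():
    global total
    for _ in range(5_000):
        C.wait()
        total += 1    # critical section
        C.signal()

ts = [threading.Thread(target=worker) for _ in range(2)]
for t in ts: t.start()
for t in ts: t.join()
assert total == 10_000
```

The subtlety is that a waiter re-releases the gate only when slots remain, so the gate stays closed exactly while count == 0.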
Problems with semaphores

• Semaphores provide a powerful tool for enforcing mutual exclusion and coordinating processes
• But wait(S) and signal(S) are scattered among several processes, making their combined effect difficult to understand
• Usage must be correct in all the processes
• One bad (or malicious) process can cause the entire collection of processes to fail
Monitors

• High-level language constructs that provide functionality equivalent to that of semaphores but are easier to control
• Found in many concurrent programming languages
– Concurrent Pascal, Modula-3, C++, Java...
Monitor
• Is a software module containing:
– one or more procedures
– an initialization sequence
– local data variables
• Characteristics:
– local variables are accessible only by the monitor’s procedures
– a process enters the monitor by invoking one of its procedures
– only one process can be in the monitor at any one time
Monitor

• The monitor ensures mutual exclusion: no need to program this constraint explicitly
• Hence, shared data are protected by placing them in the monitor
– The monitor locks the shared data on process entry
• Process synchronization is done by the programmer by using condition variables that represent conditions a process may need to wait for before executing in the monitor
Condition variables

• are local to the monitor (accessible only within the monitor)
• can be accessed and changed only by two functions:
– cwait(a): blocks execution of the calling process on condition (variable) a
• the process can resume execution only if another process executes csignal(a)
– csignal(a): resumes execution of some process blocked on condition (variable) a
• If several such processes exist: choose any one
• If no such process exists: do nothing
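In Python, a monitor can be approximated by a class whose methods all run under one threading.Condition: cond.wait() plays the role of cwait and cond.notify()/notify_all() plays csignal. Note that Python uses signal-and-continue rather than the signal-and-urgent-wait scheme described below, so waiters must re-check their condition in a loop. A sketch of a one-slot buffer monitor (the names are ours):

```python
import threading

class OneSlotBuffer:
    """Monitor sketch: one lock/condition guards all entry points."""
    def __init__(self):
        self.slot = None
        self.cond = threading.Condition()   # monitor lock + condition variable

    def put(self, item):
        with self.cond:                     # enter the monitor
            while self.slot is not None:    # cwait(not_full), re-checked in a loop
                self.cond.wait()
            self.slot = item
            self.cond.notify_all()          # csignal(not_empty)

    def get(self):
        with self.cond:                     # enter the monitor
            while self.slot is None:        # cwait(not_empty), re-checked in a loop
                self.cond.wait()
            item, self.slot = self.slot, None
            self.cond.notify_all()          # csignal(not_full)
            return item

# Demo: one producer hands five items to one consumer through the monitor.
buf = OneSlotBuffer()
received = []

def consumer():
    for _ in range(5):
        received.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    buf.put(i)
t.join()
assert received == [0, 1, 2, 3, 4]
```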
Monitor
• Waiting processes are either in the entrance queue or in a condition queue
• A process puts itself into condition queue cn by issuing cwait(cn)
• csignal(cn) brings into the monitor one process from the cn condition queue
• Hence csignal(cn) blocks the calling process and puts it in the urgent queue (unless csignal is the last operation of the monitor procedure)
Message Passing
• Is a general method used for interprocess communication (IPC)
– for processes inside the same computer
– for processes in a distributed system
• Yet another means to provide process synchronization and mutual exclusion
• We have at least two primitives:
– send(destination, message)
– receive(source, message)
• In both cases, the process may or may not be blocked
Synchronization in message passing

• For the sender: it is more natural not to be blocked after issuing send(.,.)
– can send several messages to multiple destinations
– but the sender usually expects an acknowledgment of message receipt (in case the receiver fails)
• For the receiver: it is more natural to be blocked after issuing receive(.,.)
– the receiver usually needs the info before proceeding
– but could be blocked indefinitely if the sender process fails before send(.,.)
Synchronization in message passing
• Hence other possibilities are sometimes
offered
• Ex: blocking send, blocking receive:
– both are blocked until the message is received
– occurs when the communication link is unbuffered
(no message queue)
– provides tight synchronization (rendez-vous)
Addressing in message passing

• direct addressing:
– when a specific process identifier is used for
source/destination
– but it might be impossible to specify the source ahead
of time (ex: a print server)
• indirect addressing (more convenient):
– messages are sent to a shared mailbox which consists
of a queue of messages
– senders place messages in the mailbox, receivers pick
them up
Enforcing mutual exclusion with
message passing

• create a mailbox mutex shared by n processes
• send() is non-blocking
• receive() blocks when mutex is empty
• Initialization: send(mutex, “go”);
• The first Pi who executes receive() will enter the CS. Others will be blocked until Pi resends the msg.

Process Pi:
var msg: message;
repeat
  receive(mutex, msg);
  CS
  send(mutex, msg);
  RS
forever
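This token-in-a-mailbox scheme maps directly onto Python's queue.Queue as the shared mailbox: get() is the blocking receive and put() the non-blocking send (the demo names are ours):

```python
import queue, threading

mutex = queue.Queue()     # shared mailbox
mutex.put("go")           # initialization: send(mutex, "go")

shared = 0

def process():
    global shared
    for _ in range(5_000):
        msg = mutex.get()   # receive(mutex, msg) -- blocks when mailbox is empty
        shared += 1         # critical section
        mutex.put(msg)      # send(mutex, msg) -- pass the token on

threads = [threading.Thread(target=process) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
assert shared == 15_000
```

Only the thread currently holding the single "go" message can be in the CS, which is exactly the mutual-exclusion argument on the slide.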
The bounded-buffer P/C problem with message
passing

Producer:
var pmsg: message;
repeat
  receive(mayproduce, pmsg);
  pmsg := produce();
  send(mayconsume, pmsg);
forever

Consumer:
var cmsg: message;
repeat
  receive(mayconsume, cmsg);
  consume(cmsg);
  send(mayproduce, null);
forever
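The same scheme in Python, with two queue.Queue mailboxes. The slide leaves initialization implicit, so this sketch assumes mayproduce starts pre-filled with k null messages, one per buffer slot (k, N, and the variable names are our choices):

```python
import queue, threading

k = 3                               # buffer capacity (our choice)
mayproduce = queue.Queue()
mayconsume = queue.Queue()
for _ in range(k):                  # assumed initialization: k empty-slot tokens
    mayproduce.put(None)

N = 10                              # number of items to produce
consumed = []

def producer():
    for i in range(N):
        mayproduce.get()            # receive(mayproduce, pmsg): wait for a free slot
        mayconsume.put(i)           # send(mayconsume, pmsg): deliver the item

def consumer():
    for _ in range(N):
        item = mayconsume.get()     # receive(mayconsume, cmsg)
        consumed.append(item)       # consume(cmsg)
        mayproduce.put(None)        # send(mayproduce, null): return the slot

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
assert consumed == list(range(N))
```

The producer can run at most k messages ahead of the consumer, because it must receive a slot token before each send; that is the "bounded" part of the bounded buffer.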
Exercises