
CSC 323 – Principles of Operating Systems

Instructor: Dr. M. Hasan Jamal


Lecture# 06: Synchronization

Before we start …: Exercise # 1
count is a global variable initialized to 5

Thread 1:
    void foo() {
        count++;
    }

Thread 2:
    void bar() {
        count--;
    }

• After threads 1 and 2 finish, what is the value of count?
Before we start …: Exercise # 1
• count++ could be implemented as:
    register1 = count
    register1 = register1 + 1
    count = register1

• count-- could be implemented as:
    register2 = count
    register2 = register2 – 1
    count = register2

• Consider this execution interleaving with “count = 5” initially:

• S0: producer executes register1 = count              {register1 = 5}
• S1: producer executes register1 = register1 + 1      {register1 = 6}
• S2: consumer executes register2 = count              {register2 = 5}
• S3: consumer executes register2 = register2 – 1      {register2 = 4}
• S4: producer executes count = register1              {count = 6}
• S5: consumer executes count = register2              {count = 4}
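The schedule above can be replayed deterministically. This Python sketch (illustrative, not from the slides) simulates the six steps S0–S5 with explicit register variables:

```python
# Simulate the S0-S5 interleaving of count++ (producer) and count-- (consumer).
count = 5

register1 = count          # S0: producer reads count          {register1 = 5}
register1 = register1 + 1  # S1: producer computes count + 1   {register1 = 6}
register2 = count          # S2: consumer reads the OLD count  {register2 = 5}
register2 = register2 - 1  # S3: consumer computes count - 1   {register2 = 4}
count = register1          # S4: producer writes back          {count = 6}
count = register2          # S5: consumer overwrites (lost update)  {count = 4}

print(count)  # -> 4, not the 5 one might expect
```

The producer's increment is lost because the consumer read the old value before the producer wrote its result back.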
Atomic Operations
• To understand a concurrent program, we need to know what the underlying
indivisible operations are!

• Atomic Operation: an operation that always runs to completion or not at all


• It is indivisible: it cannot be stopped in the middle and state cannot be modified by
someone else in the middle
• Fundamental building block – if no atomic operations, then have no way for threads to
work together

• On most machines, memory references and assignments (i.e., loads and stores)
of words are atomic
• Note, though, that word-level atomicity alone does not rule out the interleaving
that produced “4” on the previous slide – that schedule uses only atomic loads
and stores

• Many instructions are not atomic


• Double-precision floating point store often not atomic
• VAX and IBM 360 had an instruction to copy a whole array
Synchronization Motivation
• Threads cooperate in multithreaded programs
• To share resources, access shared data structures
• To coordinate their execution

• For correctness and consistency, we need to control this cooperation


• Thread schedule is non-deterministic (i.e., behavior can change each time the program is re-run)
• Scheduling is not under program control
• Threads interleave executions arbitrarily and at different rates
• Multi-word operations are not atomic
• Compiler/hardware instruction reordering

Shared Resources
• We initially focus on controlling access to shared resources

• Basic Problem:
• If two concurrent threads (processes) are accessing a shared variable, and that
variable is read/modified/written by those threads, then access to the variable must
be controlled to avoid erroneous behavior

• Over the next couple of lectures, we will look at:


• Mechanisms to control access to shared resources
• Locks, mutexes, semaphores, monitors, condition variables, etc.
• Patterns for coordinating accesses to shared resources
• Bounded buffer, producer-consumer, etc.
Classic Example: Bank Account Balance
• TODO: Implement a function to handle withdrawals from a bank account:

    withdraw (account, amount) {
        balance = get_balance(account);
        balance = balance – amount;
        put_balance(account, balance);
        return balance;
    }

• Suppose that you and your significant other share a bank account with a
  balance of $1000.

• Then you each go to a separate ATM and simultaneously withdraw $100
  from the account.
Classic Example: Bank Account Balance
• We’ll represent the situation by creating a separate thread for each person to
  do the withdrawals

• These threads run on the same bank server, each executing:

    withdraw (account, amount) {
        balance = get_balance(account);
        balance = balance – amount;
        put_balance(account, balance);
        return balance;
    }

• What’s the problem with this implementation?

• Think about potential schedules of these two threads
Classic Example: Bank Account Balance
• The problem is that the execution of the two threads can be interleaved:

  Execution sequence seen by the CPU:

    Thread 1: balance = get_balance(account);
    Thread 1: balance = balance – amount;
        ---- context switch ----
    Thread 2: balance = get_balance(account);
    Thread 2: balance = balance – amount;
    Thread 2: put_balance(account, balance);
        ---- context switch ----
    Thread 1: put_balance(account, balance);

• What’s the balance of the account now?

• Is the bank happy with our implementation?
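Replaying that schedule sequentially shows the lost update. A small Python sketch (illustrative; `get_balance`/`put_balance` mirror the slides' pseudocode):

```python
# Replay the interleaved schedule: shared balance of $1000, two $100 withdrawals.
account = {"balance": 1000}

def get_balance(account):            # helper names follow the slides' pseudocode
    return account["balance"]

def put_balance(account, balance):
    account["balance"] = balance

amount = 100
t1_balance = get_balance(account)    # Thread 1: reads 1000
t1_balance = t1_balance - amount     # Thread 1: computes 900
# ---- context switch ----
t2_balance = get_balance(account)    # Thread 2: reads 1000 (stale!)
t2_balance = t2_balance - amount     # Thread 2: computes 900
put_balance(account, t2_balance)     # Thread 2: writes 900
# ---- context switch ----
put_balance(account, t1_balance)     # Thread 1: overwrites with 900

print(account["balance"])  # -> 900: the bank "gave away" $100
```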


How Interleaved Can It Get?
• How contorted can the interleavings be?

• We’ll assume that the only atomic operations are instructions
  • E.g., reads and writes of words
  • The hardware may not even give you that!

• We’ll assume that a context switch can occur at any time

• We’ll assume that you can delay a thread as long as you like,
  as long as it’s not delayed forever

    Thread 1                               Thread 2
    balance = get_balance(account);
                                           balance = get_balance(account);
                                           balance = balance – amount;
    balance = balance – amount;
                                           put_balance(account, balance);
    put_balance(account, balance);
Shared Resources
• Problem: Concurrent threads access a shared resource without any
  synchronization
  • Known as a race condition

• Race Condition: The situation where several processes access and
  manipulate shared data concurrently. The final value of the shared data
  depends upon which process finishes last.

• We need a mechanism to control access to these shared resources in the face
  of concurrency so we can reason about how the program will operate

• Our example was updating a shared bank account

• Also applies to any shared data structure


• Buffers, queues, lists, hash tables, etc.
When Are Resources Shared?
• Local variables are not shared (private)
• Refer to data on the stack
• Each thread has its own stack
• Never pass/share/store a pointer to a local variable on thread T1’s
  stack to another thread T2

• Dynamic objects and other heap objects are shared


• Allocated from heap with malloc/free or new/delete

• Global variables and static objects are shared


• Stored in the static data segment, accessible by any thread

Mutual Exclusion
• We want to use mutual exclusion to synchronize access to shared resources
allowing us to have larger atomic blocks

• Code that uses mutual exclusion to synchronize its execution is called a critical
section.

• The Critical Section Problem – ensure that when one thread is executing in
its critical section, no other thread is allowed to execute in its critical section.

    while (TRUE) {
        entry section
        critical section
        exit section
        remainder section
    }

    The entry section will allow only one process to enter and
    execute critical section code.
Critical-Section Requirements
• Mutual Exclusion – If thread Ti is executing in its critical section, then no other
threads can be executing in their critical sections

• Progress – If some thread T is not in the critical section, then T cannot prevent
some other thread S from entering the critical section. A thread in the critical
section will eventually leave it.

• Bounded Waiting – If some thread T is waiting on the critical section, then T
  will eventually enter the critical section (i.e., no starvation).
  • Assume that each process executes at a nonzero speed
  • No assumption concerning the relative speed of the n processes

• Performance – The overhead of entering and exiting the critical section is
  small with respect to the work being done within it.
“Too Much Milk” Problem
• Great thing about OS’s – analogy between problems in OS & problems in real life
• Help you understand real life problems better
• But computers are much stupider than people
• Example: People need to coordinate:

“Too Much Milk” Problem
• What are the correctness properties for the “Too much milk” problem???
• Only one person buys milk at a time
• Someone buys milk if you need it
• Restrict ourselves to use only atomic load and store operations as building blocks

“Too Much Milk” Problem: Solution #1
• Leave a note before buying (a version of “lock”)
• Remove note after buying (a version of “unlock”)
• Don’t buy any milk if there is note (wait)

• Suppose a computer tries this (remember, only memory reads/writes are atomic):

    Thread A                      Thread B
    if (noMilk && noNote) {       if (noMilk && noNote) {
        leave note;                   leave note;
        buy milk;                     buy milk;
        remove note;                  remove note;
    }                             }

• Does it work?
  • No – still too much milk, but only occasionally!
  • A thread can be context switched after checking milk & note but before buying milk!
  • This “solution” makes the problem worse, since it fails intermittently
“Too Much Milk” Problem: Solution #2
• How about using labeled notes so we can leave note before checking the milk?
    Thread A                  Thread B
    leave note A;             leave note B;
    if (noNote B) {           if (noNote A) {
        if (noMilk) {             if (noMilk) {
            buy milk;                 buy milk;
        }                         }
    }                         }
    remove note A;            remove note B;

• Does it work?
  • No – it is possible for neither thread to buy milk
  • Context switches at exactly the wrong times can lead each to think that the other is
    going to buy
  • Extremely unlikely that this would happen, but if it does, it will be at the worst
    possible time
  • This kind of lockup is called “starvation”!
“Too Much Milk” Problem: Solution #3
    Thread A                    Thread B
    leave note A;               leave note B;
    X: while (note B) {         Y: if (noNote A) {
           do nothing;                  if (noMilk) {
       }                                    buy milk;
    if (noMilk) {                       }
        buy milk;                   }
    }                           remove note B;
    remove note A;

• Does it work?
  • Yes. Each thread can guarantee that either it is safe to buy, or the other will buy,
    so it is OK to quit
  • At point X, either there is a note B or not:
    • If no note B, it is safe for A to buy, since B has either not started or already quit
    • Otherwise, A waits until there is no longer a note B, and either finds milk that B has
      bought or buys it if needed
  • At point Y, either there is a note A or not:
    • If no note A, it is safe for B to check & buy milk (A has not started yet)
    • Otherwise, A is either checking & buying milk or waiting for B to quit, so B quits by
      removing note B
“Too Much Milk” Problem: Solution #3
• Is Solution #3 a good solution?

• It is too complicated – it’s hard to convince ourselves this solution works.

• It is asymmetrical – thread A and B are different. Thus, adding more threads


would require different code for each new thread and modifications to existing
threads.

• A is busy waiting – A consumes CPU resources without doing any useful work.

• The solution relies on loads and stores to be atomic.


Mechanisms For Building Critical Sections
• Atomic read/write: Can it be done?

• Locks: Primitives, minimal semantics, used to build others

• Semaphores: Basic, easy to get the hang of, but hard to program with

• Monitors: High-level, requires language support, operations implicit

Mutex with Atomic R/W: Try # 1
    int turn = 0;

    T0                                  T1
    while (TRUE) {                      while (TRUE) {
        while (turn != 0);                  while (turn != 1);
        critical section                    critical section
        turn = 1;                           turn = 0;
        outside of critical section         outside of critical section
    }                                   }

• This is called alternation

• Does it satisfy the mutual exclusion requirement?


• Yes
• Does it satisfy the progress requirement?
  • No. If T0 sets turn = 1 and T1 is then delayed (or never tries to enter again),
    T0 spins forever even though the critical section is free.
Mutex with Atomic R/W: Peterson’s
Algorithm
    int turn = 1;
    boolean flag[2] = {FALSE, FALSE};

    T0                                     T1
    while (TRUE) {                         while (TRUE) {
        /* entry section */                    /* entry section */
        flag[0] = TRUE;                        flag[1] = TRUE;
        turn = 1;                              turn = 0;
        while (flag[1] && turn != 0);          while (flag[0] && turn != 1);
        critical section                       critical section
        /* exit section */                     /* exit section */
        flag[0] = FALSE;                       flag[1] = FALSE;
        outside of critical section            outside of critical section
    }                                      }

• Does it satisfy the mutual exclusion requirement?
• Does it satisfy the progress requirement?
Mutex with Atomic R/W: Peterson’s
Algorithm
• A two-process solution
• Assume that the load and store machine-language instructions are atomic; that
is, cannot be interrupted
• The two processes share two variables:
• The variable turn indicates whose turn it is to enter the critical section.
• The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready.

• Provable that the three critical section requirement are met:


1. Mutual exclusion is preserved, as Pi enters its critical section only if:
   • either flag[j] == FALSE or turn == i
2. Progress requirement is satisfied
3. Bounded waiting requirement is met
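As a rough illustration (not part of the lecture), Peterson's algorithm can be exercised with two real Python threads. CPython's GIL makes individual loads and stores atomic and sequentially consistent, which the algorithm assumes; on real hardware, memory barriers would be needed:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often to exercise interleavings

flag = [False, False]
turn = 0
count = 0          # shared data protected by Peterson's algorithm
N = 1000           # increments per thread

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # entry section: declare interest
        turn = j                       # give the other thread priority
        while flag[j] and turn == j:
            pass                       # busy-wait
        count += 1                     # critical section
        flag[i] = False                # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # -> 2000: no increments are lost
```

Because mutual exclusion holds, the final count always equals 2 × N.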
Mutex with Atomic R/W: Bakery Algorithm
• N process solution
• Before entering its critical section, a process receives a number (like in a bakery).
And the holder of the smallest number enters the critical section.
• The numbering scheme here always generates numbers in nondecreasing order;
  i.e., 1, 2, 3, 3, 3, 3, 4, 5, ...
• If processes Pi and Pj receive the same number, if i < j, then Pi is served first; else Pj
is served first (PID assumed unique).

• Choosing a number
• max (a0,…, an-1) is a number k, such that k ≥ ai for i = 0, …, n – 1

• Notation for lexicographical order (ticket #, PID #)


• (a,b) < (c,d) if a < c or if a == c and b < d
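Python tuples compare lexicographically, so the (ticket #, PID) order can be sketched directly (illustrative helper, not from the slides):

```python
# Bakery-algorithm tie-breaking: lexicographic order on (ticket, pid) pairs.
def precedes(ticket_a, pid_a, ticket_b, pid_b):
    """True if (ticket_a, pid_a) < (ticket_b, pid_b), i.e. A enters first."""
    return (ticket_a, pid_a) < (ticket_b, pid_b)

print(precedes(3, 0, 4, 1))  # lower ticket wins -> True
print(precedes(3, 2, 3, 5))  # equal tickets: lower PID wins -> True
print(precedes(3, 5, 3, 2))  # -> False
```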
Mutex with Atomic R/W: Bakery Algorithm
    boolean choosing[n] = FALSE;   // Is the process choosing its number?
    integer number[n] = 0;         // The ticket number for each process

    while (TRUE) {
        choosing[i] = TRUE;        // Process i is choosing a number
        number[i] = 1 + max(number[0], …, number[n – 1]);
        choosing[i] = FALSE;       // Process i has chosen its number

        for (j = 0; j < n; j++) {
            // Wait while process j is choosing its number
            while (choosing[j]);
            // Wait while process j has a lower ticket number, or the
            // same number with a lower process ID
            while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
        }

        critical section
        number[i] = 0;             // Process i is done with the critical section
        remainder section
    }
Locks
• A lock is an object in memory providing mutual exclusion to shared data with
two “atomic” operations

• acquire() – wait until the lock is free, then take it to enter a critical section

• release() – release the lock on leaving a critical section, waking up any thread
  waiting for it

• Threads pair calls to acquire and release


• Between acquire/release, the thread holds the lock
• acquire does not return until any previous holder releases
• What can happen if the calls are not paired?

• Lock can spin (a spinlock) or block (a mutex)


Locks
• Rules of using a lock:
• Always acquire the lock before accessing shared data
• Always release the lock after finishing with shared data
• Lock is initially free
• Do not lock again if already locked
• Do not unlock if not locked by you
• Do not spend large amounts of time in critical section

Using Locks
    withdraw (account, amount) {
        acquire(lock);
        balance = get_balance(account);     \
        balance = balance – amount;          |  Critical Section
        put_balance(account, balance);      /
        release(lock);
        return balance;
    }

  Two threads interleaved:

    Thread T1                              Thread T2
    acquire(lock);
    balance = get_balance(account);
    balance = balance – amount;
                                           acquire(lock);   // blocks: T1 holds the lock
    put_balance(account, balance);
    release(lock);                         // T2 wakes and now holds the lock
                                           balance = get_balance(account);
                                           balance = balance – amount;
                                           put_balance(account, balance);
                                           release(lock);

• What happens when the second thread tries to acquire the lock?

• Why is the “return” outside the critical section? Is this OK?

• What happens when a third thread calls acquire?
“Too Much Milk” Problem: Solution #4
• Implementing “Too Much Milk” with locks
    Thread A              Thread B
    acquire(lock);        acquire(lock);
    if (noMilk) {         if (noMilk) {
        buy milk;             buy milk;
    }                     }
    release(lock);        release(lock);

• The solution is clean and symmetric

• How do we make acquire() and release() atomic?


• if two threads are waiting for the lock and both see it’s free, only one succeeds to
grab the lock
• Once again, the section of code between acquire(lock) and release(lock)
  is called a “Critical Section”
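The same pattern can be run with a real lock. This Python sketch (illustrative; names mirror the slides' pseudocode) puts the whole read-modify-write inside one critical section, so two concurrent $100 withdrawals from $1000 always leave $800:

```python
import threading

lock = threading.Lock()
account = {"balance": 1000}

def withdraw(account, amount):
    lock.acquire()                      # enter critical section
    balance = account["balance"]        # get_balance
    balance = balance - amount
    account["balance"] = balance        # put_balance
    lock.release()                      # leave critical section
    return balance

t1 = threading.Thread(target=withdraw, args=(account, 100))
t2 = threading.Thread(target=withdraw, args=(account, 100))
t1.start(); t2.start()
t1.join(); t2.join()
print(account["balance"])  # -> 800 on every run; no lost update
```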
Implementing Locks
• How do we implement locks? Here is one attempt:

    struct lock {
        int held = 0;
    }

    void acquire (lock) {
        while (lock->held);   // busy-wait (spin) for the lock to be released
                              // <-- a context switch here causes a race condition
        lock->held = 1;
    }

    void release (lock) {
        lock->held = 0;
    }

• This is called a spinlock because a thread spins waiting for the lock to be released

• Does this work?
  • No. Two independent threads may both notice that the lock has been released and
    thereby both acquire it.
Implementing Locks
• The problem is that the implementation of locks has critical sections, too!

• How do we stop the recursion?

• The implementation of acquire/release must be atomic


• An atomic operation is one which executes as though it could not be interrupted
• Code that executes “all or nothing”

• How do we make them atomic?

• Need help from hardware


• Atomic instructions (e.g., test-and-set, compare-and-swap)
• Disable/enable interrupts (prevent context switches)
Atomic Instructions: Test-And-Set
• The semantics of test-and-set are:
  • Record the old value
  • Set the value to TRUE (marking the flag taken)
  • Return the old value

    bool test_and_set (bool *flag) {
        bool old = *flag;
        *flag = TRUE;
        return old;
    }

• Hardware executes it atomically!

• When executing test-and-set on “flag”:
  • What is the value of flag afterwards if it was initially FALSE? TRUE?
  • What is the return result if flag was initially FALSE? TRUE?

• Often used to implement locks. The idea is that if the value is 0 (unlocked), it
  can be set to 1 (locked), and the calling process obtains the lock. If it is already
  1, the process knows the lock is held by another.
Using Test-And-Set
• Here is the lock implementation with test-and-set:

    struct lock {
        int held = 0;
    }

    void acquire (lock) {
        while (test_and_set(&lock->held));
    }

    void release (lock) {
        lock->held = 0;
    }

• When will the while loop return? What is the value of held then?

• What about a multiprocessor?


Atomic Instructions: Compare-And-Swap
• Compare-and-swap is a more flexible atomic operation. It compares the value
of a memory location to a given expected value, and if they are equal, it swaps
the memory value with a new value. If the values are not equal, it leaves the
memory unchanged. The original value of the memory location is returned.

• CAS is commonly used in lock-free algorithms, such as in atomic counters or


lock-free queues. It ensures that changes are only made if the memory
location hasn't been modified by another thread in the meantime, providing
stronger concurrency control.

    int compare_and_swap (int *value, int expected, int new_value) {
        int temp = *value;
        if (*value == expected)
            *value = new_value;
        return temp;
    }
Using Compare-And-Swap
• Here is the lock implementation with compare-and-swap:

    struct lock {
        int held = 0;
    }

    void acquire (lock) {
        while (compare_and_swap(&lock->held, 0, 1) != 0);
    }

    void release (lock) {
        lock->held = 0;
    }

• Does it solve the critical-section problem?
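As a sanity check on the semantics, here is a sequential Python model (illustrative; real hardware does this atomically) of compare-and-swap driving an acquire loop:

```python
# Model of CAS: cell is a one-element list standing in for a memory word.
def compare_and_swap(cell, expected, new_value):
    """If cell[0] == expected, store new_value; always return the old value."""
    old = cell[0]
    if old == expected:
        cell[0] = new_value
    return old

lock = [0]                           # 0 = free, 1 = held
print(compare_and_swap(lock, 0, 1))  # free -> acquired, returns 0
print(compare_and_swap(lock, 0, 1))  # already held: returns 1, caller would spin
lock[0] = 0                          # release
print(compare_and_swap(lock, 0, 1))  # acquired again, returns 0
```

Acquire succeeds exactly when CAS returns 0 (the expected "free" value), which is what the `!= 0` spin loop tests.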


Bounded-waiting with Compare-And-Swap
    while (true) {
        waiting[i] = true;
        key = 1;
        while (waiting[i] && key == 1)
            key = compare_and_swap(&lock, 0, 1);
        waiting[i] = false;

        /* critical section */

        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = 0;
        else
            waiting[j] = false;

        /* remainder section */
    }
Atomic Variables
• Typically, instructions such as compare-and-swap are used as building blocks
for other synchronization tools.

• One tool is an atomic variable that provides atomic (uninterruptible) updates on


basic data types such as integers and booleans.

• For example: Let sequence be an atomic variable and let increment() be an
  operation on the atomic variable sequence. Then increment(&sequence)
  ensures sequence is incremented without interruption:

    void increment (atomic_int *v) {
        int temp;
        do {
            temp = *v;
        } while (temp != compare_and_swap(v, temp, temp + 1));
    }
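The CAS retry loop can be modeled in Python (illustrative; `cell` stands in for the atomic variable): re-read the value and retry until no other thread changed it between the read and the CAS.

```python
# Model of CAS, as before: cell is a one-element list standing in for a word.
def compare_and_swap(cell, expected, new_value):
    old = cell[0]
    if old == expected:
        cell[0] = new_value
    return old

def increment(cell):
    while True:
        temp = cell[0]                                   # read current value
        if compare_and_swap(cell, temp, temp + 1) == temp:
            return                                       # CAS succeeded: applied
        # else: someone changed the value in between; retry

sequence = [38]
increment(sequence)
print(sequence[0])  # -> 39
```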
Problems with Spinlocks
• The problem with spinlocks is that they are wasteful
• If a thread is spinning on a lock, then the thread holding the lock cannot make
progress (on a uniprocessor)

• How did the lock holder give up the CPU in the first place?
• Lock holder calls yield or sleep
• Involuntary context switch

• Only want to use spinlocks as primitives to build higher-level synchronization


constructs

Disabling Interrupts
• Another implementation of acquire/release is to disable interrupts

    struct lock {
    }

    void acquire (lock) {
        disable interrupts;
    }

    void release (lock) {
        enable interrupts;
    }

• Note that there is no state associated with the lock


• Can two threads disable interrupts simultaneously?
On Disabling Interrupts
• Disabling interrupts blocks notification of external events that could trigger a
context switch (e.g., timer)

• In a “real“ system, this is only available to the kernel


• Why?

• Disabling interrupts is insufficient on a multiprocessor


• Interrupts are only disabled on a per-core basis
• Back to atomic instructions

• Like spinlocks, we only want to disable interrupts to implement higher-level
  synchronization primitives
  • We don’t want interrupts disabled between acquire and release
Summarize Where We Are
• Goal: Use mutual exclusion to protect critical sections of code that access
  shared resources

• Method: Use locks (either spinlocks or disabling interrupts)

• Problem: Critical sections (CS) can be long

    acquire(lock)
    ...
    Critical Section
    ...
    release(lock)

  Spinlocks:
  • Threads waiting to acquire the lock spin in their test-and-set loops
  • Wastes CPU cycles
  • The longer the CS, the longer the spin, and the greater the chance for the
    lock holder to be interrupted

  Disabling interrupts:
  • Disabling interrupts for long periods of time can miss or delay important
    events (e.g., timer, I/O)
Implementing Locks
• Block waiters; interrupts are enabled within critical sections

    struct lock {
        int held = 0;
        queue Q;
    }

    void acquire (lock) {
        disable interrupts;
        while (lock->held) {
            put current thread on lock’s queue Q;
            block current thread;
        }
        lock->held = 1;
        enable interrupts;
    }

    void release (lock) {
        disable interrupts;
        if (Q nonempty)
            remove a waiting thread and unblock it;
        lock->held = 0;
        enable interrupts;
    }

    acquire(lock)        <- interrupts disabled
    ...
    Critical Section     <- interrupts enabled
    ...
    release(lock)        <- interrupts disabled
Higher-Level Synchronization
• Spinlocks and disabling interrupts are useful only for very short and simple
critical sections
• Wasteful otherwise
• These primitives are “primitive” – don’t do anything besides mutual exclusion

• Need higher-level synchronization primitives that:


• Block waiters, leave interrupts enabled within the critical section
• Provide semantics beyond mutual exclusion

• All synchronization requires atomicity, so we’ll use our “atomic” locks as


primitives to implement them
• Two common high-level mechanisms
• Semaphores
• Monitors 44

• Use them to solve common synchronization problems


Semaphores

Blocking in Semaphores
• Associated with each semaphore is a queue of waiting threads

• When wait() is called by a thread:


• If semaphore is open, thread continues
• If semaphore is closed, thread blocks on queue

• Then signal() opens the semaphore:


• If a thread is waiting on the queue, the thread is unblocked
• If no threads are waiting on the queue, the signal is remembered for the next thread

Semaphore Types
• Mutex Semaphore (or binary Semaphore)
• Represents single access to a resource
• Guarantees mutual exclusion for a critical section
• It is initialized to free (value = 1)

• Counting Semaphore (or general Semaphore)


• Represents a resource with many units available, or a resource that allows certain
kinds of unsynchronized concurrent access (e.g., reading)
• Multiple threads can pass the semaphore
• # of threads determined by the semaphore “count” initialized to # of resources (N)
• Can be used for other synchronization problems; e.g., for resource allocation

Semaphores Implementation
• Must guarantee that no two processes can execute the wait() and
signal() on the same semaphore at the same time
• Thus, the implementation becomes the critical section problem where the
wait() and signal() code are placed in the critical section
• Busy waiting implementation is not a good solution as many applications may
spend lots of time in critical sections
• In a no-busy-waiting implementation, a waiting queue is associated with each
  semaphore, and each semaphore has two data items:
  • value (of type integer)
  • a pointer to the list of waiting processes
• Two operations:
  • block – place the process invoking the operation on the appropriate waiting queue
  • wakeup – remove one process from the waiting queue and place it in the ready queue
Implementation With No Busy Waiting
• Semaphore with a waiting queue:

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

    wait (semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }

    signal (semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }
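The wait()/signal() implementation above can be modeled in Python (illustrative sketch; a negative value counts the blocked processes, and the class/method names are mine, not the textbook's):

```python
from collections import deque

class SemaphoreModel:
    """No-busy-waiting semaphore: negative value = number of blocked processes."""
    def __init__(self, value):
        self.value = value
        self.queue = deque()             # processes that called block()

    def wait(self, process):
        self.value -= 1
        if self.value < 0:
            self.queue.append(process)   # would call block()
            return False                 # process is now blocked
        return True                      # process continues

    def signal(self):
        self.value += 1
        if self.value <= 0:
            return self.queue.popleft()  # would call wakeup(P)
        return None                      # no one was waiting

S = SemaphoreModel(1)
print(S.wait("A"))   # True:  A enters (value now 0)
print(S.wait("B"))   # False: B blocks (value now -1)
print(S.signal())    # A signals; 'B' is removed from the queue and woken
```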
Using Semaphores
• Use is similar to locks, but the semantics are different

    struct Semaphore {
        int value;
        queue Q;
    }

    withdraw (account, amount) {
        wait(S);
        balance = get_balance(account);     \
        balance = balance – amount;          |  Critical Section
        put_balance(account, balance);      /
        signal(S);
        return balance;
    }

  Interleaved execution (S initialized to 1):

    Thread T1                            Threads T2 and T3
    wait(S);
    balance = get_balance(account);
    balance = balance – amount;
                                         wait(S);    // T2 blocks
                                         wait(S);    // T3 blocks
    put_balance(account, balance);
    signal(S);                           // one blocked thread is woken
                                         ...
                                         signal(S);
                                         ...
                                         signal(S);

  It is undefined which thread runs after a signal.
“Too Much Milk” Problem: Binary
Semaphores
• Implementing “Too Much Milk” with a semaphore S initialized to 1:

    Thread A              Thread B
    wait(S);              wait(S);
    if (noMilk)           if (noMilk)
        buy milk;             buy milk;
    signal(S);            signal(S);

• Semaphores can be used for three purposes:


• To ensure mutually exclusive execution of a critical section (as locks do)
• To control access to a shared pool of resources (using a counting semaphore)
• To cause one thread to wait for a specific action to be signaled from another thread.

Example: Counting Semaphores
• A library has 10 study rooms, to be used by one student at a time. At most 10
students can use the rooms concurrently. Additional students that need to use the
rooms need to wait until a room is free.
• Students must request a room from the front desk and return to the desk when
finished using a room. The clerk doesn’t keep track of which room is occupied or
who is using it. Upon a room request, the clerk decreases the count and upon room
release, the clerk increases this count. Front desk represents a semaphore, rooms
are the resources, and students represent processes. How can we code those
processes?
• Solution: One of the processes creates and initializes a semaphore to S = 10.

    wait (S);
    … use one instance of the resource …
    signal (S);

  Each process has to be coded in this manner.
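A Python sketch of the study-room example (illustrative; counters and names are mine) using `threading.Semaphore(10)`. A peak-occupancy counter confirms that at most 10 students are ever inside at once:

```python
import threading

rooms = threading.Semaphore(10)   # front desk: 10 rooms available
state_lock = threading.Lock()     # protects the occupancy counters below
inside = 0
peak = 0

def student():
    global inside, peak
    rooms.acquire()               # wait(S): take a room, or block until one frees
    with state_lock:
        inside += 1
        peak = max(peak, inside)
    # ... use the room ...
    with state_lock:
        inside -= 1
    rooms.release()               # signal(S): return the room to the desk

threads = [threading.Thread(target=student) for _ in range(25)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 10)  # -> True: the semaphore caps concurrency at 10
```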
Using Semaphores: Other Synchronization
Problems
• Assume we definitely want to have S1 (in P0) executed before S2 (in P1):

    P0              P1
    …               …
    S1;             S2;
    …               …

• Solution:

    semaphore x = 0;    // initialized to 0

    P0              P1
    …               …
    S1;             wait (x);
    signal (x);     S2;
    …               …
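The same ordering pattern in runnable form (illustrative Python sketch): a semaphore initialized to 0 forces S1 before S2 regardless of how the threads are scheduled.

```python
import threading

x = threading.Semaphore(0)   # initialized to 0: wait(x) blocks until signal(x)
trace = []

def p0():
    trace.append("S1")       # S1
    x.release()              # signal(x)

def p1():
    x.acquire()              # wait(x): blocks until P0 has finished S1
    trace.append("S2")       # S2

t1 = threading.Thread(target=p1)
t0 = threading.Thread(target=p0)
t1.start(); t0.start()       # start P1 first on purpose: it simply blocks
t0.join(); t1.join()
print(trace)  # -> ['S1', 'S2'] on every run
```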
Semaphore Exercise
• X and Y are shared semaphores with the busy-wait semantics below. The
  following 3 pseudo-coded threads are started with X = 0, Y = 1. What is the
  output, and what are the values of X and Y after every thread completes?

    wait (S) {
        while (S ≤ 0);   // busy wait
        S--;
    }

    signal (S) {
        S++;
    }

    Thread 1        Thread 2        Thread 3
    wait(X)         wait(X)         wait(Y)
    print “A”       wait(Y)         print “C”
    signal(Y)       print “B”       signal(X)
    signal(X)       signal(X)

• Answer: C A B, with X = 1 and Y = 0 afterwards
Semaphore Exercise
• Write pseudo code to synchronize processes A, B, C and D using semaphores,
  so that process B must finish executing before A starts, and process A must
  finish before process C or D starts. Show your solution. You may assume three
  semaphores X, Y and Z, all initialized to zero.

Solution:

    X = Y = Z = 0

    Process A          Process B          Process C          Process D
    wait(X)            Do work of B;      wait(Y)            wait(Y)
    Do work of A;      signal(X)          Do work of C;      Do work of D;
    signal(Y)
    signal(Y)
