Unit 2 Synchronization

Synchronization

Background

• Processes can execute concurrently
  – They may be interrupted at any time, partially completing execution
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
• Concurrent or parallel execution can contribute to issues involving the integrity of data shared by several processes
• Let's consider the bounded-buffer producer-consumer problem as an example (next slide)
Code for Producer Process

while (true)
{
    /* produce an item in next_produced */
    while (count == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Code for Consumer Process

while (true)
{
    while (count == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in next_consumed */
}
• Although the producer and consumer routines shown above
are correct separately, they may not function correctly when
executed concurrently
• As an illustration, suppose that the value of the variable count
is currently 5 and that the producer and consumer processes
concurrently execute the statements “count++” and “count--”
• Following the execution of these two statements, the value of
the variable count may be 4, 5 or 6!
• The only correct result, though, is count == 5, which is
generated correctly if the producer and consumer execute
separately
• How the value of count can become incorrect is shown as follows:
• The statement "count++" may be implemented in machine language as follows:
    register1 = count
    register1 = register1 + 1
    count = register1
  – where register1 is one of the local CPU registers
• Similarly, the statement "count--" is implemented as follows:
    register2 = count
    register2 = register2 − 1
    count = register2
  – where again register2 is one of the local CPU registers

• The concurrent execution of "count++" and "count--" is equivalent to a sequential execution in which the lower-level statements presented previously are interleaved in some arbitrary order
• One such interleaving is the following:

    T0: producer executes register1 = count             {register1 = 5}
    T1: producer executes register1 = register1 + 1     {register1 = 6}
    T2: consumer executes register2 = count             {register2 = 5}
    T3: consumer executes register2 = register2 − 1     {register2 = 4}
    T4: producer executes count = register1             {count = 6}
    T5: consumer executes count = register2             {count = 4}

• Notice that we have arrived at the incorrect state "count == 4", indicating that four buffers are full, when, in fact, five buffers are full
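This outcome can be reproduced deterministically by executing the six register-level steps in exactly the order T0–T5 (a sketch in Python; register1 and register2 are modeled as ordinary local variables):

```python
# Simulate the T0-T5 interleaving of count++ and count-- by hand.
count = 5

register1 = count              # T0: producer loads count        (register1 = 5)
register1 = register1 + 1      # T1: producer increments         (register1 = 6)
register2 = count              # T2: consumer loads count        (register2 = 5)
register2 = register2 - 1      # T3: consumer decrements         (register2 = 4)
count = register1              # T4: producer stores its result  (count = 6)
count = register2              # T5: consumer stores its result  (count = 4)

print(count)  # → 4, although the only correct result is 5
```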
• We would arrive at this incorrect state because we allowed
both processes to manipulate the variable count concurrently
• A situation like this, where several processes access and
manipulate the same data concurrently and the outcome of
the execution depends on the particular order in which the
access takes place, is called a race condition
• To guard against the race condition above, we need to ensure
that only one process at a time can be manipulating the
variable count.
• To make such a guarantee , we require that the processes be
synchronized in some way
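One common way to synchronize them (a Python sketch, not part of the slides) is to perform each manipulation of count under a lock, so the increments and decrements can no longer interleave at the register level:

```python
import threading

count = 5
lock = threading.Lock()
N = 100_000

def producer():
    global count
    for _ in range(N):
        with lock:       # only one process manipulates count at a time
            count += 1

def consumer():
    global count
    for _ in range(N):
        with lock:
            count -= 1

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(count)  # → 5: with the lock, no updates are lost
```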
The Critical-Section Problem

• Consider a system consisting of n processes {P0, P1, ..., Pn−1}
• Each process has a segment of code called its critical section
  – The process may be changing common variables, updating a table, writing a file, etc.

• When one process is in its critical section, no other process may be in its critical section
  – That is, no two processes are executing in their critical sections at the same time

• The critical-section problem is to design a protocol that the processes can use to synchronize their activity so as to cooperatively share data
• Each process must ask permission to enter its critical section
• The section of code implementing this request is the entry section
• The critical section may be followed by an exit section
• The remaining code is the remainder section

• The general structure of a typical process is shown in the


next slide
• General structure of process Pi
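The figure was not carried over; in the textbook's pseudocode style, the general structure of process Pi is:

```
while (true) {

    entry section

        critical section

    exit section

        remainder section

}
```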
• A solution to the critical-section problem must satisfy the
following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections.

2. Progress: If no process is executing in its critical section and some


processes wish to enter their critical sections, then only those
processes that are not executing in their remainder sections can
participate in deciding which will enter its critical section next, and
this selection cannot be postponed indefinitely.

3. Bounded waiting: There exists a bound, or limit, on the number of


times that other processes are allowed to enter their critical
sections after a process has made a request to enter its critical
section and before that request is granted.
• We assume that each process is executing at a nonzero speed
• However, we can make no assumption concerning the relative
speed of the n processes
• Race Conditions
– At a given point in time, many kernel-mode processes may be active in
the operating system
– As a result, the code implementing an operating system (kernel code)
is subject to several possible race conditions
– Consider as an example a kernel data structure that maintains a list of
all open files in the system
– This list must be modified when a new file is opened or closed
– If two processes were to open files simultaneously, the separate
updates to this list could result in a race condition
– Another example is illustrated in Figure below
• In this situation, two processes, P0 and P1, are creating child
processes using the fork() system call
• In this example, there is a race condition on the variable
kernel variable next_available_pid which represents the value
of the next available process identifier
• Unless mutual exclusion is provided, it is possible the same
process identifier number could be assigned to two separate
processes
• Other kernel data structures that are prone to possible race
conditions include structures for maintaining memory
allocation, for maintaining process lists, and for interrupt
handling
• It is up to kernel developers to ensure that the operating system is free from such race conditions
• The critical-section problem could be solved simply in a
single-core environment if we could prevent interrupts from
occurring while a shared variable was being modified
• In this way, we could be sure that the current sequence of
instructions would be allowed to execute in order without
preemption
• No other instructions would be run, so no unexpected
modifications could be made to the shared variable
• Unfortunately, this solution is not as feasible in a
multiprocessor environment
• Disabling interrupts on a multiprocessor can be time
consuming, since the message is passed to all the processors
• This message passing delays entry into each critical section,
and system efficiency decreases
• Also there are effects on a system’s clock if the clock is kept
updated by interrupts
• Two general approaches are used to handle critical sections in
operating systems:
– preemptive kernels

– nonpreemptive kernels

• A preemptive kernel allows a process to be preempted while


it is running in kernel mode
• A nonpreemptive kernel does not allow a process running in
kernel mode to be preempted
– a kernel-mode process will run until it exits kernel mode, blocks, or
voluntarily yields control of the CPU

• Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time
• The same cannot be said of preemptive kernels
• Preemptive kernels are especially difficult to design for SMP
architectures
• Why, then, would anyone favor a preemptive kernel over a
nonpreemptive one?
– more responsive, since there is less risk that a kernel-mode process
will run for an arbitrarily long period before relinquishing the
processor to waiting processes
– is more suitable for real-time programming, as it will allow a real-time
process to preempt a process currently running in the kernel
Peterson’s Solution

• Peterson's solution is a classic software-based solution to the critical-section problem
• It is restricted to two processes that alternate execution between their critical sections and remainder sections

• The processes are numbered P0 and P1

• For convenience, when presenting Pi we use Pj to denote the


other process; that is, j equals 1− i
• Peterson’s solution requires the two processes to share two
data items:
1. int turn;
2. boolean flag[2];

• The variable turn indicates whose turn it is to enter its


critical section
– That is, if turn == i, then process Pi is allowed to execute in its critical
section

• The flag array is used to indicate if a process is ready to enter


its critical section
– For example, if flag[i] is true, Pi is ready to enter its critical section
The structure of process Pi in Peterson’s solution

while (true) {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

    /* critical section */

    flag[i] = false;

    /* remainder section */
}
The structure of process Pj in Peterson’s solution

while (true) {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);

    /* critical section */

    flag[j] = false;

    /* remainder section */
}
• To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j
  – thereby asserting that if the other process wishes to enter the critical section, it can do so
• If both processes try to enter at the same time, turn will be


set to both i and j at roughly the same time
• Only one of these assignments will last; the other will occur
but will be overwritten immediately
– The eventual value of turn determines which of the two processes is
allowed to enter its critical section first
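As an illustration only, the algorithm can be transcribed into Python threads (a sketch; CPython's global interpreter lock gives these list and integer accesses the sequential behavior the algorithm assumes, whereas real hardware may reorder them, as discussed later in this unit):

```python
import sys, threading

sys.setswitchinterval(1e-4)   # switch threads often so the busy wait stays short

flag = [False, False]
turn = 0
count = 0                     # shared data protected by Peterson's algorithm
N = 1_000

def process(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True                   # announce intent to enter
        turn = j                         # give the other process priority
        while flag[j] and turn == j:
            pass                         # busy wait in the entry section
        count += 1                       # critical section
        flag[i] = False                  # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # → 2000: no increment was lost, consistent with mutual exclusion
```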
Correctness of Peterson’s Solution

• Provable that the three critical section requirements are met


1. Mutual exclusion is preserved

2. The progress requirement is satisfied

3. The bounded-waiting requirement is met

• To prove property 1, note that each Pi enters its critical


section only if either flag[j] == false or turn == i
• Also note that, if both processes can be executing in their
critical sections at the same time, then flag[0] == flag[1] ==
true
• These two observations imply that P0 and P1 could not have
successfully executed their while statements at about the same
time
– since the value of turn can be either 0 or 1 but cannot be both

• Hence, one of the processes—say, Pj must have successfully

executed the while statement, whereas Pi had to execute at


least one additional statement (“turn == j”)
• However, at that time, flag[j] == true and turn == j, and this
condition will persist as long as Pj is in its critical section; as a
result, mutual exclusion is preserved
• To prove properties 2 and 3, note that a process Pi can be
prevented from entering the critical section only if it is stuck
in the while loop with the condition flag[j] == true and
turn == j
– this loop is the only one possible

• If Pj is not ready to enter the critical section, then flag[j] ==

false, and Pi can enter its critical section

• If Pj has set flag[j] to true and is also executing in its while


statement, then either turn == i or turn == j
• If turn == i, then Pi will enter the critical section

• If turn == j, then Pj will enter the critical section

• However, once Pj exits its critical section, it will reset flag[j] to

false, allowing Pi to enter its critical section

• If Pj resets flag[j] to true, it must also set turn to i

• Thus, since Pi does not change the value of the variable turn

while executing the while statement, Pi will enter the critical

section (progress) after at most one entry by Pj (bounded


waiting)
• Peterson’s solution is not guaranteed to work on modern
computer architectures
– for the primary reason that, to improve system performance,
processors and/or compilers may reorder read and write operations
that have no dependencies

• For a single-threaded application, this reordering is immaterial as far as program correctness is concerned, as the final values are consistent with what is expected
• But for a multithreaded application with shared data, the
reordering of instructions may render inconsistent or
unexpected results
Example

• Consider the following data that are shared between two


threads:
– boolean flag = false;

– int x = 0;

• where Thread 1 performs the statements

    while (!flag)
        ;
    print x;

• and Thread 2 performs

    x = 100;
    flag = true;
• The expected behavior is, of course, that Thread 1 outputs the value 100 for variable x
• However, as there are no data dependencies between the
variables flag and x, it is possible that a processor may reorder
the instructions for Thread 2 so that flag is assigned true
before assignment of x = 100
• In this situation, it is possible that Thread 1 would output 0 for
variable x
• Less obvious is that the processor may also reorder the
statements issued by Thread 1 and load the variable x before
loading the value of flag
• If this were to occur, Thread 1 would output 0 for variable x
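In practice the ordering is restored with an explicit synchronization tool; for example (a Python sketch, not from the slides), the set()/wait() pair of threading.Event carries the ordering guarantee that the plain boolean flag lacks:

```python
import threading

x = 0
flag = threading.Event()   # synchronized replacement for the boolean flag
result = []

def thread1():
    flag.wait()            # blocks until thread2 calls flag.set()
    result.append(x)       # guaranteed to observe x == 100

def thread2():
    global x
    x = 100
    flag.set()             # cannot take effect before the store to x

t1 = threading.Thread(target=thread1)
t2 = threading.Thread(target=thread2)
t1.start(); t2.start()
t1.join(); t2.join()
print(result[0])  # → 100, never 0
```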
• How does this affect Peterson’s solution?
• Consider what happens if the assignments of the first two
statements that appear in the entry section of Peterson’s
solution are reordered
The structure of process Pi in Peterson’s solution

while (true) {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

    /* critical section */

    flag[i] = false;

    /* remainder section */
}
• It is possible that both threads may be active in their critical sections at the same time, as the reordered entry sections below illustrate
• Reordering of instructions in process Pi:

    turn = j;
    while (flag[j] && turn == j);
    flag[i] = true;

• The same reordering can happen for process Pj:

    turn = i;
    while (flag[i] && turn == i);
    flag[j] = true;

• The only way to preserve mutual exclusion is by using proper synchronization tools
Mutex Locks

• Operating-system designers build higher-level software tools


to solve the critical-section problem
• The simplest of these tools is the mutex lock
– In fact, the term mutex is short for mutual exclusion

• A mutex lock is used to protect critical sections and thus prevent race conditions
  – That is, a process must acquire the lock before entering a critical section
  – It releases the lock when it exits the critical section
• The acquire() function acquires the lock, and the release()
function releases the lock, as illustrated in Figure below:
while (true) {

acquire lock

critical section

release lock

remainder section
}
• The definition of acquire() is as follows:

    acquire() {
        while (!available)
            ; /* busy wait */
        available = false;
    }

• The definition of release() is as follows:

    release() {
        available = true;
    }
• Calls to either acquire() or release() must be performed
atomically
• The main disadvantage of the implementation given here is
that it requires busy waiting
• While a process is in its critical section, any other process
that tries to enter its critical section must loop continuously in
the call to acquire()
• This continual looping is clearly a problem in a real
multiprogramming system, where a single CPU core is shared
among many processes
• Busy waiting also wastes CPU cycles that some other process
might be able to use productively
• The type of mutex lock that is described here is also called a
spinlock because the process “spins” while waiting for the
lock to become available
• Spinlocks do have an advantage, however, in that no context
switch is required when a process must wait on a lock, and a
context switch may take considerable time
• In certain circumstances on multicore systems, spinlocks are
in fact the preferable choice for locking
• If a lock is to be held for a short duration, one thread can
“spin” on one processing core while another thread performs
its critical section on another core
• On modern multicore computing systems, spinlocks are
widely used in many operating systems
Semaphores

• Mutex locks are generally considered the simplest of


synchronization tools
• Semaphore is a more robust tool that can behave similarly to
a mutex lock but can also provide more sophisticated ways for
processes to synchronize their activities
• A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic
operations: wait() and signal()
• Semaphores were introduced by the Dutch computer scientist
Edsger Dijkstra
• The wait() operation was originally termed P (from the Dutch

proberen, “to test”)


• signal() was originally called V (from verhogen, “to
increment”)
• The definition of wait() is as follows:

    wait(S) {
        while (S <= 0)
            ; // busy wait
        S--;
    }

• The definition of signal() is as follows:

    signal(S) {
        S++;
    }
• All modifications to the integer value of the semaphore in the
wait() and signal() operations must be executed atomically
– That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value
• In addition, in the case of wait(S), the testing of the integer
value of S (S ≤ 0), as well as its possible modification (S--),
must be executed without interruption
• Operating systems often distinguish between counting and
binary semaphores
• The value of a counting semaphore can range over an
unrestricted domain
• The value of a binary semaphore can range only between 0
and 1
• Counting semaphores can be used to control access to a given
resource consisting of a finite number of instances
• The semaphore is initialized to the number of resources
available
• Each process that wishes to use a resource performs a wait()
operation on the semaphore (thereby decrementing the count)
• When a process releases a resource, it performs a signal()
operation (incrementing the count)
• When the count for the semaphore goes to 0, all resources
are being used
• After that, processes that wish to use a resource will block
until the count becomes greater than 0
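A short sketch of this pattern with Python's threading.Semaphore, initialized to three resource instances (the active/max_active counters are extra bookkeeping added only for demonstration):

```python
import threading, time

resources = threading.Semaphore(3)   # three interchangeable resource instances
guard = threading.Lock()             # protects the demonstration counters
active = 0
max_active = 0

def use_resource():
    global active, max_active
    resources.acquire()              # wait(): decrement the count, block at 0
    with guard:
        active += 1
        max_active = max(max_active, active)
    time.sleep(0.01)                 # hold the resource briefly
    with guard:
        active -= 1
    resources.release()              # signal(): increment the count

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(max_active)  # never exceeds 3: at most 3 processes held a resource at once
```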
Monitors
 A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
 Abstract data type, internal variables only accessible by code within
the procedure
 Only one process may be active within the monitor at a time
 Pseudocode syntax of a monitor:

monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }

procedure P2 (…) { …. }

procedure Pn (…) {……}

initialization code (…) { … }


}

Operating System Concepts – 10th Edition 6.54 Silberschatz, Galvin and Gagne ©2018
Schematic view of a Monitor

Monitor Implementation Using Semaphores

 Variables

    semaphore mutex;  // (initially = 1)

 Each procedure P is replaced by

wait(mutex);

body of P;

signal(mutex);

 Mutual exclusion within a monitor is ensured

Condition Variables
 condition x, y;
 Two operations are allowed on a condition variable:
• x.wait() – a process that invokes the operation is suspended
until x.signal()
• x.signal() – resumes one of processes (if any) that invoked
x.wait()
 If no process has invoked x.wait() on the variable, then x.signal() has no effect on the variable

Monitor with Condition Variables

Usage of Condition Variable Example
 Consider P1 and P2 that need to execute two statements S1 and S2
and the requirement that S1 to happen before S2
• Create a monitor with two procedures F1 and F2 that are
invoked by P1 and P2 respectively
• One condition variable “x” initialized to 0
• One Boolean variable “done”
• F1:
S1;
done = true;
x.signal();
• F2:
    if (done == false)
        x.wait();
    S2;
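The same ordering can be sketched with Python's threading.Condition, which bundles the monitor lock and the condition variable into one object (the names F1/F2 follow the slide):

```python
import threading

cond = threading.Condition()   # monitor lock + condition variable x
done = False
order = []

def F1():                      # invoked by P1
    global done
    with cond:                 # enter the monitor
        order.append("S1")     # S1
        done = True
        cond.notify()          # x.signal()

def F2():                      # invoked by P2
    with cond:
        while not done:        # if done == false ...
            cond.wait()        # ... x.wait()
        order.append("S2")     # S2

p2 = threading.Thread(target=F2)
p1 = threading.Thread(target=F1)
p2.start(); p1.start()
p1.join(); p2.join()
print(order)  # → ['S1', 'S2'] regardless of which thread runs first
```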

Monitor Implementation Using Semaphores

 Variables

    semaphore mutex;     // (initially = 1)
    semaphore next;      // (initially = 0)
    int next_count = 0;  // number of processes waiting inside the monitor

 Each function P will be replaced by

wait(mutex);

body of P;

if (next_count > 0)
    signal(next);
else
    signal(mutex);

 Mutual exclusion within a monitor is ensured

Implementation – Condition Variables
 For each condition variable x, we have:

semaphore x_sem; // (initially = 0)


int x_count = 0;

 The operation x.wait() can be implemented as:

x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;

Implementation (Cont.)

 The operation x.signal() can be implemented as:

if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}

Resuming Processes within a Monitor

 If several processes queued on condition variable x,


and x.signal() is executed, which process should
be resumed?
 FCFS frequently not adequate
 Use the conditional-wait construct of the form
x.wait(c)
where:
• c is an integer (called the priority number)
• The process with lowest number (highest priority) is
scheduled next

Single Resource allocation

 Allocate a single resource among competing processes using priority numbers that specify the maximum time a process plans to use the resource
 The process with the shortest time is allocated the resource first
 Let R is an instance of type ResourceAllocator (next slide)
 Access to ResourceAllocator is done via:

R.acquire(t);
...
access the resource;
...
R.release();

 where t is the maximum time a process plans to use the resource

A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = true;
}
void release() {
busy = false;
x.signal();
}
initialization code() {
busy = false;
}
}

Single Resource Monitor (Cont.)

 Usage:
acquire
...
release
 Incorrect uses of monitor operations:
• release() … acquire()
• acquire() … acquire()
• Omitting acquire() and/or release()

Classical Problems of Synchronization

• Examples
– The Readers–Writers Problem

– The Dining-Philosophers Problem

• These problems are used for testing nearly every newly


proposed synchronization scheme
The Readers–Writers Problem

• Suppose that a database is to be shared among several


concurrent processes
• Some of these processes may want only to read the database,
whereas others may want to update (that is, read and write)
the database
• These two types of processes are distinguished by referring to
the former as readers and to the latter as writers
• Obviously, if two readers access the shared data
simultaneously, no adverse effects will result
• However, if a writer and some other process (either a reader
or a writer) access the database simultaneously, chaos may
ensue
• To ensure that these difficulties do not arise, we require that
the writers have exclusive access to the shared database while
writing to the database
• This synchronization problem is referred to as the readers–
writers problem
• The readers–writers problem has several variations, all
involving priorities
• The simplest one, referred to as the first readers–writers
problem, requires that no reader be kept waiting unless a
writer has already obtained permission to use the shared
object
– In other words, no reader should wait for other readers to finish
simply because a writer is waiting
• The second readers–writers problem requires that, once a
writer is ready, that writer perform its write as soon as
possible
– In other words, if a writer is waiting to access the object, no new
readers may start reading

• A solution to either problem may result in starvation


– In the first case, writers may starve

– In the second case, readers may starve


• In the solution to the first readers–writers problem, the
reader processes share the following data structures
– semaphore rw_mutex = 1;

– semaphore mutex = 1;

– int read_count = 0;

• The binary semaphores mutex and rw_mutex are initialized to 1
• read_count is an integer initialized to 0
• The semaphore rw_mutex is common to both reader and
writer processes
• The mutex semaphore is used to ensure mutual exclusion
when the variable read_count is updated
• The read_count variable keeps track of how many processes
are currently reading the object
• The semaphore rw_mutex functions as a mutual exclusion
semaphore for the writers
• It is also used by the first or last reader that enters or exits
the critical section
• It is not used by readers that enter or exit while other readers
are in their critical sections
• The code for a writer process is shown in Figure below:

while (true) {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
}
• The code for a reader process is shown in Figure below:

while (true) {
    wait(mutex);           /* reader wants to enter the critical section */
    read_count++;          /* the number of readers has increased by 1 */
    /* at least one reader is now in the critical section;
       this ensures no writer can enter while even one reader is inside,
       thus giving preference to readers here */
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);         /* other readers can enter while this reader is inside */

    /* reading is performed */

    wait(mutex);           /* a reader wants to leave */
    read_count--;
    if (read_count == 0)   /* no reader is left in the critical section */
        signal(rw_mutex);  /* writers can enter */
    signal(mutex);         /* reader leaves */
}
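The two routines can be exercised together in Python (a sketch; readers_active and violations are extra bookkeeping added only to check that writing really is exclusive):

```python
import threading, time

rw_mutex = threading.Semaphore(1)   # writers' mutual-exclusion semaphore
mutex = threading.Semaphore(1)      # protects read_count
read_count = 0
track = threading.Lock()            # protects the demonstration counter
readers_active = 0
violations = []                     # readers observed during a write

def reader():
    global read_count, readers_active
    mutex.acquire()
    read_count += 1
    if read_count == 1:             # first reader locks out writers
        rw_mutex.acquire()
    mutex.release()
    with track:
        readers_active += 1
    time.sleep(0.005)               # reading is performed
    with track:
        readers_active -= 1
    mutex.acquire()
    read_count -= 1
    if read_count == 0:             # last reader lets writers back in
        rw_mutex.release()
    mutex.release()

def writer():
    rw_mutex.acquire()
    if readers_active != 0:         # writing must be exclusive
        violations.append(readers_active)
    time.sleep(0.005)               # writing is performed
    rw_mutex.release()

threads = [threading.Thread(target=reader) for _ in range(5)]
threads += [threading.Thread(target=writer) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(violations)  # → []: no reader was ever active during a write
```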
The Dining-Philosophers Problem

• Consider five philosophers who spend their lives thinking and


eating
• The philosophers share a circular table surrounded by five
chairs, each belonging to one philosopher
• In the center of the table is a bowl of rice, and the table is laid
with five single chopsticks (see Figure below)
• When a philosopher thinks, she does not interact with her
colleagues
• From time to time, a philosopher gets hungry and tries to pick
up the two chopsticks that are closest to her
– the chopsticks that are between her and her left and right neighbours

• A philosopher may pick up only one chopstick at a time


• Obviously, she cannot pick up a chopstick that is already in the
hand of a neighbour
• When a hungry philosopher has both her chopsticks at the
same time, she eats without releasing the chopsticks
• When she is finished eating, she puts down both chopsticks
and starts thinking again
• It is a simple representation of the need to allocate several
resources among several processes in a deadlock-free and
starvation-free manner
• One simple solution is to represent each chopstick with a
semaphore
• A philosopher tries to grab a chopstick by executing a wait()
operation on that semaphore
• She releases her chopsticks by executing the signal()
operation on the appropriate semaphores
• Thus, the shared data are
– semaphore chopstick[5];

• where all the elements of chopstick are initialized to 1


• The structure of philosopher i is shown in Figure in the next
slide
The structure of Philosopher i

while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    /* eat for a while */

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    /* think for a while */
}
• Although this solution guarantees that no two neighbours are
eating simultaneously, it nevertheless must be rejected
because it could create a deadlock
• Suppose that all five philosophers become hungry at the same
time and each grabs her left chopstick
• All the elements of chopstick will now be equal to 0
• When each philosopher tries to grab her right chopstick, she
will be delayed forever
• Several possible remedies to the deadlock problem are the
following:
– Allow at most four philosophers to be sitting simultaneously at the
table
– Allow a philosopher to pick up her chopsticks only if both chopsticks
are available
– Use an asymmetric solution: that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick
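The asymmetric remedy is easy to test (a Python sketch using one Semaphore per chopstick, following the wait()/signal() scheme above; if deadlock occurred, the joins would hang):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # all initialized to 1
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # odd-numbered: left first; even-numbered: right first (breaks circular wait)
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(100):
        chopstick[first].acquire()    # wait() on the first chopstick
        chopstick[second].acquire()   # wait() on the second chopstick
        meals[i] += 1                 # eat for a while
        chopstick[second].release()   # signal()
        chopstick[first].release()
        # think for a while

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # → [100, 100, 100, 100, 100]: everyone ate; no deadlock
```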
Monitor Solution to Dining Philosophers
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self [5];

void pickup (int i) {


state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}
void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}

Solution to Dining Philosophers (Cont.)

void test (int i) {


if ((state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal();
}
}

initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}

Solution to Dining Philosophers (Cont.)
 Each philosopher “i” invokes the operations pickup() and
putdown() in the following sequence:
DiningPhilosophers.pickup(i);
/** EAT **/

DiningPhilosophers.putdown(i);
 No deadlock, but starvation is possible

Thank you
