Chapter 2
Mutual Exclusion
2.1 Introduction
When processes share data, it is important to synchronize their access to the data so that updates are not
lost as a result of concurrent accesses and the data are not corrupted. This can be seen from the following
example. Assume that the initial value of a shared variable x is 0 and that there are two processes, P0 and
P1 such that each one of them increments x by the following statement in some high-level programming
language:
x=x+1
It is natural for the programmer to assume that the final value of x is 2 after both the processes have
executed. However, this may not happen if the programmer does not ensure that x = x + 1 is executed
atomically. The statement x = x + 1 may compile into machine-level code of the form

load  R, x    // load the value of x into register R
add   R, 1    // increment the register
store R, x    // store the register back into x

Thus, both processes may load the value 0 into their registers and finally store 1 into x, resulting in the "lost update" problem.
To avoid this problem, the statement x = x + 1 should be executed atomically. A section of the code
that needs to be executed atomically is also called a critical region or a critical section. The problem of
ensuring that a critical section is executed atomically is called the mutual exclusion problem. This is one
of the most fundamental problems in concurrent computing and we will study it in detail.
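The lost update can be reproduced deterministically by simulating the bad interleaving by hand; the class and method names below are illustrative only.

```java
// Simulates the interleaving in which both processes load x before either
// stores it back. The variables r0 and r1 play the role of the two
// processes' private registers.
class LostUpdate {
    static int simulate() {
        int x = 0;
        int r0 = x;   // P0 loads x (reads 0)
        int r1 = x;   // P1 loads x (also reads 0)
        x = r0 + 1;   // P0 stores 1
        x = r1 + 1;   // P1 stores 1, overwriting P0's update
        return x;     // final value is 1, not 2
    }
}
```

Here P0's increment is lost: the final value is 1 even though two increments were executed.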
The mutual exclusion problem can be abstracted as follows. We are required to implement the interface
shown in Figure 2.1. A process that wants to enter the critical section (CS) makes a call to requestCS with
its own identifier as the argument. The process or thread that makes this call returns from it only when it has exclusive access to the critical section. When the process has finished accessing the
critical section, it makes a call to the method releaseCS.
public interface Lock {
  public void requestCS(int pid); // may block
  public void releaseCS(int pid);
}
The entry protocol given by the method requestCS and the exit protocol given by the method releaseCS should be such that mutual exclusion is not violated.
To test the Lock, we use the program shown in Figure 2.2. This program tests the Bakery algorithm that will be presented later. The user of the program may test a different algorithm for a lock implementation by invoking the constructor of that lock implementation. The program launches N threads as specified by args[0]. Each thread is an object of the class MyThread. This class has two methods, nonCriticalSection and CriticalSection, and it overrides the run method of the Thread class as follows. Each thread repeatedly enters the critical section. After exiting from the critical section, it spends an undetermined amount of time in the noncritical section of the code. In our example, we simply sleep for a random duration in both the critical and the noncritical sections.
Let us now look at some protocols that one may attempt in order to solve the mutual exclusion problem. For simplicity, we first assume that there are only two processes, P0 and P1.
import java.util.Random;
public class MyThread extends Thread {
  int myId;
  Lock lock;
  Random r = new Random();
  public MyThread(int id, Lock lock) {
    myId = id;
    this.lock = lock;
  }
  void nonCriticalSection() {
    System.out.println(myId + " is not in CS");
    Util.mySleep(r.nextInt(1000));
  }
  void CriticalSection() {
    System.out.println(myId + " is in CS *****");
    // critical section code
    Util.mySleep(r.nextInt(1000));
  }
  public void run() {
    while (true) {
      lock.requestCS(myId);
      CriticalSection();
      lock.releaseCS(myId);
      nonCriticalSection();
    }
  }
  public static void main(String[] args) throws Exception {
    MyThread t[];
    int N = Integer.parseInt(args[0]);
    t = new MyThread[N];
    Lock lock = new Bakery(N); // or any other mutex algorithm
    for (int i = 0; i < N; i++) {
      t[i] = new MyThread(i, lock);
      t[i].start();
    }
  }
}
———————————————————————————–
class Attempt1 implements Lock {
  boolean openDoor = true;
  public void requestCS(int i) {
    while (!openDoor); // busy wait
    openDoor = false;
  }
  public void releaseCS(int i) {
    openDoor = true;
  }
}
———————————————————————————–
enter the critical section, then one of them will succeed. However, this protocol suffers from another problem: the two processes must alternate in entering the critical section. Thus, after process P0 exits from the critical section, it cannot enter it again until process P1 has entered the critical section. If process P1 is not interested in the critical section, then process P0 is simply stuck waiting for P1. This is not desirable.
By combining the previous two approaches, however, we get Peterson’s algorithm for the mutual ex-
clusion problem in a two-process system. In this protocol, shown in Figure 2.6, we maintain two flags,
wantCS[0] and wantCS[1], as in Attempt2, and the turn variable as in Attempt3. To request the critical
section, process Pi sets its wantCS flag to true at line 6 and then sets the turn to the other process Pj at
line 7. After that, it waits at line 8 so long as the following condition is true:

wantCS[j] ∧ (turn = j)

Thus a process enters the critical section only if either it is its turn to do so or the other process is not interested in the critical section.
To release the critical section, Pi simply resets the flag wantCS[i] at line 11. This allows Pj to enter
the critical section by making the condition for its while loop false.
Intuitively, Peterson’s algorithm uses the order of updates to turn to resolve the contention. If both
processes are interested in the critical section, then the process that updated turn last, loses and is required
to wait.
We show that Peterson’s algorithm satisfies the following desirable properties:
1. Mutual exclusion: Two processes cannot be in the critical section at the same time.
1  class PetersonAlgorithm implements Lock {
2    boolean wantCS[] = {false, false};
3    int turn = 1;
4    public void requestCS(int i) {
5      int j = 1 - i;
6      wantCS[i] = true;
7      turn = j;
8      while (wantCS[j] && (turn == j));
9    }
10   public void releaseCS(int i) {
11     wantCS[i] = false;
12   }
13 }
2. Progress: If one or more processes are trying to enter the critical section and there is no process
inside the critical section, then at least one of the processes succeeds in entering the critical section.
3. Starvation-freedom: If a process is trying to enter the critical section, then it eventually succeeds in
doing so.
We first prove that mutual exclusion is satisfied by Peterson’s algorithm by the method of contradiction.
Suppose, if possible, both processes P0 and P1 are in the critical section for some execution. Each of the
processes Pi must have set the variable turn to 1 − i. Without loss of generality, assume that P1 was the
last process to set the variable turn. This means that the value of turn was 0 when P1 checked the entry
condition for the critical section. Since P1 entered the critical section in spite of turn being 0, it must have
read wantCS[0] to be false. Therefore, we have the following sequence of events:
P0 sets turn to 1, P1 sets turn to 0, P1 reads wantCS[0] as false. However, P0 sets the turn variable to 1
after setting wantCS[0] to true. Since there are no other writes to wantCS[0], P1 reading it as false gives
us the desired contradiction.
We give a second proof of mutual exclusion due to Dijkstra. This proof does not reason on the sequence
of events; it uses an assertional proof. For the purposes of this proof, we introduce auxiliary variables
trying[0] and trying[1]. Whenever P0 reaches line 8, trying[0] becomes true. Whenever P0 reaches line 9,
i.e., it has acquired permission to enter the critical section, trying[0] becomes false.
Consider the predicate H(0) defined as

H(0) ≡ (wantCS[1] ∧ (turn = 1)) ⇒ trying[0]

Assuming that there is no interference from P1, it is clear that P0 makes this predicate true after executing turn = 1 at line 7, since trying[0] holds at that point. Similarly, the predicate

H(1) ≡ (wantCS[0] ∧ (turn = 0)) ⇒ trying[1]

is made true by P1 after it executes turn = 0 at line 7. Once established, H(0) can be falsified only by falsifying trying[0] in a state satisfying (turn = 1) ∧ trying[0]. Since wantCS[1] is also true, we look at falsification of (turn = 1) ∧ trying[0] ∧ wantCS[1]. P0 can falsify this only by setting trying[0] to false (i.e., by acquiring the permission to enter the critical section). But (turn = 1) ∧ (wantCS[1]) implies that the condition for the while statement at line 8 is true, so P0 cannot exit the while loop.
Now it is easy to show mutual exclusion. If P0 and P1 are both in the critical section, then wantCS[0], wantCS[1], ¬trying[0], and ¬trying[1] all hold, and H(0) ∧ H(1) then implies (turn = 0) ∧ (turn = 1), a contradiction.
It is easy to see that the algorithm satisfies the progress property. If both processes are forever checking the entry condition in the while loop, then we get

(wantCS[1] ∧ (turn = 1)) ∧ (wantCS[0] ∧ (turn = 0)),

which implies (turn = 0) ∧ (turn = 1), a contradiction. If only one process is forever checking the condition, then the other process is not interested in the critical section, so its wantCS flag is false and the waiting process can proceed.
The algorithm shown in Figure 2.8 requires a process Pi to go through two main steps before it can
enter the critical section. In the first step (lines 15–21), it is required to choose a number. To do that, it
reads the numbers of all other processes and chooses its number as one bigger than the maximum number
it read. We will call this step the doorway. In the second step the process Pi checks if it can enter the
critical section as follows. For every other process Pj , process Pi first checks whether Pj is currently in the
doorway at line 25. If Pj is in the doorway, then Pi waits for Pj to get out of the doorway. At lines 26–29, Pi waits until number[j] is 0 or (number[i], i) < (number[j], j), where pairs are compared lexicographically. When Pi has successfully verified this condition for all other processes, it can enter the critical section.
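The two steps above can be sketched in Java as follows. This is only a sketch, not the book's Figure 2.8: the line numbers cited in the proofs refer to that figure, and the use of AtomicIntegerArray for cross-thread visibility is an implementation choice made here.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// A sketch of the Bakery algorithm described above.
// choosing.get(j) == 1 means that Pj is currently in the doorway.
class Bakery {
    final int N;
    final AtomicIntegerArray choosing;
    final AtomicIntegerArray number;

    Bakery(int N) {
        this.N = N;
        choosing = new AtomicIntegerArray(N);
        number = new AtomicIntegerArray(N);
    }
    public void requestCS(int i) {
        // doorway: choose a number bigger than any number read
        choosing.set(i, 1);
        int max = 0;
        for (int j = 0; j < N; j++) max = Math.max(max, number.get(j));
        number.set(i, max + 1);
        choosing.set(i, 0);
        for (int j = 0; j < N; j++) {
            // wait until Pj is out of the doorway
            while (choosing.get(j) == 1) Thread.yield();
            // wait until number[j] is 0 or (number[i], i) < (number[j], j)
            while (number.get(j) != 0 &&
                   (number.get(j) < number.get(i) ||
                    (number.get(j) == number.get(i) && j < i)))
                Thread.yield();
        }
    }
    public void releaseCS(int i) {
        number.set(i, 0);
    }
}
```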
We use the following two assertions to prove correctness.
(A1) If a process Pi is in the critical section and some other process Pk has already chosen its number, then (number[i], i) < (number[k], k).
(A2) If a process Pi is in the critical section, then number[i] > 0.
To prove (A1), let t be the time when Pi read the value of choosing[k] to be false. If Pk had chosen its number before t, then Pi must have read Pk's number correctly. Since Pi managed to get out of the kth iteration of the for loop, ((number[i], i) < (number[k], k)) held at that iteration. If Pk chose its number after t, then Pk must have read the latest value of number[i] and is guaranteed to have number[k] > number[i]. Once ((number[i], i) < (number[k], k)) holds at the kth iteration, it continues to hold, because number[i] does not change and number[k] can only increase.
(A2) is true because it is clear from the program text that the value of any number is at least 0, and a process increments its number at line 20 before entering the critical section.
Showing that the bakery algorithm satisfies mutual exclusion is now trivial. If two processes Pi and
Pk are in critical section, then from (A2) we know that both of their numbers are nonzero. From (A1) it
follows that (number[i], i) < (number[k], k) and vice versa, which is a contradiction.
The bakery algorithm also satisfies starvation freedom because any process that is waiting to enter
the critical section will eventually have the smallest nonzero number. This process will then succeed in
entering the critical section.
It can be shown that the bakery algorithm does not make any assumptions on the atomicity of any read or write operation. Note also that the bakery algorithm does not use any variable that is written by more than one process: process Pi writes only the variables number[i] and choosing[i].
There are two main disadvantages of the bakery algorithm: (1) it requires O(N ) work by each process
to obtain the lock even if there is no contention, and (2) it requires each process to use timestamps that
are unbounded in size.
the CS and then request the CS again until it is about to write on A again. At this point both A and B are in a state consistent with no process being in the CS. Next, we let R run and enter the CS. Then, we run P and Q for one step, thereby overwriting any change that R may have made. One of them must be able to enter the CS to keep the algorithm deadlock-free. We then have a violation of mutual exclusion in that state, because R is already in the CS.
Our lower bound result assumed that processes are asynchronous. We now give an algorithm that uses timing assumptions to provide mutual exclusion with a single shared variable turn. The variable turn is either −1, signifying that the critical section is available, or holds the identifier of the process that has the right to enter the critical section. Whenever any process Pi finds that turn is −1, it must set turn to i within at most c time units. It must then wait for at least delta units of time, where delta is greater than c, before checking the variable turn again. If turn is still set to i, then Pi can enter the critical section.
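A sketch of this timing-based protocol (often called Fischer's algorithm) follows. The concrete value of DELTA_MS is an assumption; it stands for delta and must exceed c, the worst-case delay between a process reading turn as −1 and writing its own identifier.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the timing-based mutual exclusion algorithm described above.
class TimedMutex {
    static final long DELTA_MS = 20;  // assumed to exceed c
    final AtomicInteger turn = new AtomicInteger(-1);

    public void requestCS(int i) throws InterruptedException {
        while (true) {
            while (turn.get() != -1) Thread.yield(); // CS not available
            turn.set(i);                 // must occur within c time units
            Thread.sleep(DELTA_MS);      // wait at least delta > c
            if (turn.get() == i) return; // still our id: enter the CS
        }
    }
    public void releaseCS(int i) {
        turn.set(-1);
    }
}
```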
We first show mutual exclusion. Suppose both Pi and Pj are in the CS, and suppose that turn is i. This means that turn must have been j when Pj entered, and that turn was set to i later. But Pi can set turn to i only within c time units of Pj setting turn to j. However, Pj found turn still to be j after waiting delta > c time units, a contradiction. Hence, Pi and Pj cannot both be in the CS.
We leave the proof of deadlock-freedom as an exercise.
2.7.1 Splitter
A splitter is a method that splits processes into three disjoint groups: Left, Right, and Down. We can visualize a splitter as a box: processes enter from the top and either move to the left, move to the right, or go down, which explains the names of the groups. The key property a splitter satisfies is that at most one process goes in the down direction, and neither all processes go in the left direction nor all in the right direction.
The algorithm for the splitter is shown in Fig. 2.10.
Pi::
var
  door: {open, closed} initially open;
  last: pid initially -1;

  last := i;
  if (door == closed)
    return Left;
  else
    door := closed;
    if (last == i) return Down;
    else return Right;
end
A splitter consists of two variables: door and last. The door is initially open, and it is closed by any process that finds it open. The variable last records the last process that executed the statement last := i.
Each process Pi first records its pid in the variable last. It then checks whether the door is closed. All processes that find the door closed are put in the group Left. We claim that not all processes are put in the group Left, i.e., |Left| ≤ n − 1.
Proof: Initially, the door is open. At least one process must find the door open, because every process checks that the door is open before closing it. Since at least one process finds the door open, it follows that |Left| ≤ n − 1.
A process Pi that finds the door open closes it and then checks whether the variable last still contains its pid. If so, the process goes in the Down direction; otherwise, it goes in the Right direction.
We now claim that at most one process goes in the Down direction.
Proof: Let Pi be the first process that finds the door open and last equal to i (and then later returns Down). We have the following order of events: Pi wrote the last variable, Pi closed the door, Pi read the last variable as i. During this interval, no process Pj modified the last variable. Any process that modifies last after this interval will find the door closed and therefore cannot return Down. Now consider any process Pj that modified last before this interval. If Pj read last as j before the interval, then Pi is not the first process to find last equal to its own pid, a contradiction. If Pj reads last after Pi has written it, then Pj cannot find itself as the last process, since its pid was overwritten by Pi.
We also claim that not all processes go in the Right direction, i.e., |Right| ≤ n − 1.
Proof: Consider the last process that wrote its index in last. If it finds the door closed, then that process goes left. If it finds the door open, then it reads last as its own pid and goes down.
Note that the above code does not use any synchronization. In addition, the code does not have any
loop.
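The splitter pseudocode can be rendered in Java as follows; the volatile modifiers and the enum are implementation details added here so that writes are visible across threads.

```java
// A Java rendering of the splitter above. Each process calls split with its
// own id and receives one of the three directions.
class Splitter {
    enum Direction { LEFT, RIGHT, DOWN }
    volatile boolean doorOpen = true;
    volatile int last = -1;

    Direction split(int i) {
        last = i;                                 // record our id in last
        if (!doorOpen) return Direction.LEFT;     // door already closed
        doorOpen = false;                         // close the door
        if (last == i) return Direction.DOWN;     // no one overwrote last
        return Direction.RIGHT;
    }
}
```

For example, a process that runs the splitter alone goes down, and any process that arrives after the door has been closed goes left.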
var
  X, Y: int initially -1;
  flag: array[1..n] of {down, up};

acquire(int i)
{
  while (true) {
    flag[i] := up;
    X := i;
    if (Y != -1) {            // splitter's left
      flag[i] := down;
      waitUntil(Y == -1);
      continue;
    } else {
      Y := i;
      if (X == i)             // success with splitter
        return;               // fast path
      else {                  // splitter's right
        flag[i] := down;
        forall j:
          waitUntil(flag[j] == down);
        if (Y == i) return;   // slow path
        else {
          waitUntil(Y == -1);
          continue;
        }
      }
    }
  }
}

release(int i)
{
  Y := -1;
  flag[i] := down;
}
We claim that the algorithm is deadlock-free.
Proof: Consider the processes that found the door open, i.e., read Y to be −1. Let Q be the set of such processes that are stuck. If any one of them succeeded in being the last to write X, we are done; otherwise, the last process that wrote Y can enter the CS.
import java.util.concurrent.atomic.*;

public class GetAndSet implements MyLock {
  AtomicBoolean isOccupied = new AtomicBoolean(false);
  public void lock() {
    while (isOccupied.getAndSet(true)) {
      Thread.yield();
      // skip();
    }
  }
  public void unlock() {
    isOccupied.set(false);
  }
}
This algorithm satisfies the mutual exclusion and progress properties. However, it does not satisfy starvation freedom. Developing a starvation-free protocol is left as an exercise.
Most modern machines provide the instruction compareAndSet, which takes as arguments an expected value and a new value. It atomically sets the current value to the new value if the current value equals the expected value. It returns true if it succeeded in setting the current value; otherwise, it returns false. The reader is invited to design a mutual exclusion protocol using compareAndSet.
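One way to take up this invitation is sketched below; it is a minimal spin lock built from compareAndSet, not a protocol from the text, and the class name is illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal spin lock using compareAndSet: lock() repeatedly tries to flip
// occupied from false to true; at any moment only one thread can succeed.
class CASLock {
    final AtomicBoolean occupied = new AtomicBoolean(false);

    public void lock() {
        while (!occupied.compareAndSet(false, true)) {
            Thread.yield(); // someone else holds the lock
        }
    }
    public void unlock() {
        occupied.set(false);
    }
}
```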
We now consider an alternative implementation of locks using the getAndSet operation. In this implementation, a thread first checks whether the lock is available using the get operation. It calls the getAndSet operation only when it finds the lock available. If getAndSet succeeds, the thread enters the critical section; otherwise, it goes back to spinning on the get operation. The implementation, called GetAndGetAndSet (or testAndTestAndSet), is shown in Fig. 2.13.
Although the implementations in Fig. 2.12 and Fig. 2.13 are functionally equivalent, the second
implementation usually results in faster accesses to the critical section on current multiprocessors. Can
you guess why?
The answer to the above question is based on current architectures, which use a shared bus and a local cache with each core. Since an access to the shared memory via the bus is much slower than an
access to the local cache, each core checks for a data item in its cache before issuing a memory request.

import java.util.concurrent.atomic.*;

public class GetAndGetAndSet implements MyLock {
  AtomicBoolean isOccupied = new AtomicBoolean(false);
  public void lock() {
    while (true) {
      while (isOccupied.get()) {
      }
      if (!isOccupied.getAndSet(true)) return;
    }
  }
  public void unlock() {
    isOccupied.set(false);
  }
}

Any data item that is found in the local cache is termed a cache hit and can be served locally. Otherwise, we get a cache miss, and the item must be served from the main memory or from the cache of some other core. Caches improve the performance of the program but require that the system ensure the coherence and consistency of the caches. In particular, an instruction such as getAndSet requires all other cores to invalidate their local copies of the data item on which getAndSet is called. When cores spin using the getAndSet instruction, they repeatedly access the bus, resulting in high contention and a slowdown of the system. In the second implementation, threads spin on the variable isOccupied using get. If the memory location corresponding to isOccupied is in the cache, a thread only reads the cached value and therefore avoids using the shared data bus.
Even though the idea of getAndGetAndSet reduces contention on the bus, it still suffers from high contention whenever a thread exits the critical section. Suppose a large number of threads are spinning on their cached copies of isOccupied, and the thread holding the lock leaves the critical section. When it updates isOccupied, the cached copies of all spinning threads are invalidated. All these threads now observe that isOccupied is false and try to set it to true using getAndSet. Only one of them succeeds, but all of them contribute to contention on the bus. A useful idea for reducing this contention is called backoff: whenever a thread finds that it failed in getAndSet after a successful get, instead of immediately retrying, it backs off for a random period of time. Exponential backoff doubles the maximum period of time a thread may have to wait after every unsuccessful attempt. The resulting implementation is shown in Fig. 2.14.
Another implementation sometimes used for building locks is based on acquiring a ticket number, similar to the Bakery algorithm. This implementation is also not scalable, since it results in high contention on currentTicket.
import java.util.concurrent.atomic.*;
import java.util.Random;

public class MutexWithBackOff {
  AtomicBoolean isOccupied = new AtomicBoolean(false);
  Random random = new Random();
  int limit = 1; // backoff window; doubles after every failed attempt (races on limit are benign)
  public void lock() {
    while (true) {
      while (isOccupied.get()) {
      }
      if (!isOccupied.getAndSet(true)) return;
      else {
        int timeToSleep = calculateDuration();
        try {
          Thread.sleep(timeToSleep);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    }
  }
  int calculateDuration() { // exponential backoff: random duration within the current window
    int duration = random.nextInt(limit) + 1;
    if (limit < 1024) limit = 2 * limit;
    return duration;
  }
  public void unlock() {
    isOccupied.set(false);
  }
}
import java.util.concurrent.atomic.*;

public class TicketMutex {
  AtomicInteger nextTicket = new AtomicInteger(0);
  AtomicInteger currentTicket = new AtomicInteger(0);
  public void lock() {
    int myticket = nextTicket.getAndIncrement();
    while (myticket != currentTicket.get()) {
      // skip();
    }
  }
  public void unlock() {
    currentTicket.getAndIncrement();
  }
}
The lock implementations discussed in this section maintain a queue of threads waiting to enter the critical section. Anderson's lock uses a fixed-size array, the CLH lock uses an implicit linked list, and the MCS lock uses an explicit linked list for the queue. One of the key challenges in designing these algorithms is that we cannot use locks to update the queue.
Anderson's lock uses a circular array Available of size n, which is at least as big as the number of threads that may contend for the critical section. The array is circular in the sense that any index into it is a value in the range 0..n − 1 and is incremented modulo n. Different threads waiting for the critical section spin on different slots of this array, thus avoiding the problem of multiple threads spinning on the same variable. An atomic integer tailSlot (initialized to 0) points to the next available slot in the array. Any thread that wants the lock reads the value of tailSlot into its local variable mySlot and advances it, in one atomic operation, using getAndIncrement(). It then spins on Available[mySlot] until the slot becomes available. Whenever a thread finds that the entry for its slot is true, it can enter the critical section. The algorithm maintains the invariant that distinct threads have distinct slots and that at most one entry in Available is true. To unlock, the thread sets its own slot to false and the entry in the next slot to true. Whenever a thread sets the next slot to true, any thread that was spinning on that slot can enter the critical section. Note that in Anderson's lock, only one thread is affected when a thread leaves the critical section; the other threads continue to spin on other slots, which are cached, and thus only access their local caches.
Note that the above description assumes that each slot is big enough that adjacent slots do not share a cache line. Hence, even though a single bit suffices to store Available[i], it is important to pad each slot to avoid the problem of false sharing. Also note that since Anderson's lock assigns slots to threads in FCFS manner, it guarantees fairness and therefore freedom from starvation.
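The description above can be sketched as follows. An AtomicIntegerArray stands in for the Available array so that updates are visible across threads, and the cache-line padding discussed above is omitted for brevity; both are implementation choices made here.

```java
import java.util.concurrent.atomic.*;

// A sketch of Anderson's array-based queue lock. available.get(s) == 1
// means that the thread holding slot s may enter the critical section.
class AndersonLock {
    final int n;
    final AtomicInteger tailSlot = new AtomicInteger(0);
    final AtomicIntegerArray available;
    final ThreadLocal<Integer> mySlot = new ThreadLocal<>();

    AndersonLock(int n) {
        this.n = n;
        available = new AtomicIntegerArray(n);
        available.set(0, 1);                // initially the first slot holds the lock
    }
    public void lock() {
        int slot = tailSlot.getAndIncrement() % n; // claim the next slot atomically
        mySlot.set(slot);
        while (available.get(slot) == 0) Thread.yield(); // spin on our own slot
    }
    public void unlock() {
        int slot = mySlot.get();
        available.set(slot, 0);             // give up our slot
        available.set((slot + 1) % n, 1);   // pass the lock to the next slot
    }
}
```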
A problem with Anderson lock is that it requires a separate array of size n for each lock. Hence, a
system that uses m locks shared among n threads will use up O(nm) space.
import java.util.concurrent.atomic.*;
public class CLHLock implements MyLock {
  class Node {
    volatile boolean locked; // volatile so that spinning threads see updates
  }
  AtomicReference<Node> tailNode;
  ThreadLocal<Node> myNode;
  ThreadLocal<Node> pred;

  public CLHLock() {
    tailNode = new AtomicReference<Node>(new Node());
    tailNode.get().locked = false;
    myNode = new ThreadLocal<Node>() {
      protected Node initialValue() {
        return new Node();
      }
    };
    pred = new ThreadLocal<Node>();
  }
  public void lock() {
    myNode.get().locked = true;
    pred.set(tailNode.getAndSet(myNode.get()));
    while (pred.get().locked) { Thread.yield(); }
  }
  public void unlock() {
    myNode.get().locked = false;
    myNode.set(pred.get()); // reusing predecessor node for future use
  }
}
2.10 Problems
2.1. Show that any of the following modifications to Peterson’s algorithm makes it incorrect:
(a) A process in Peterson’s algorithm sets the turn variable to itself instead of setting it to the
other process.
(b) A process sets the turn variable before setting the wantCS variable.
2.2. Show that Peterson’s algorithm also guarantees freedom from starvation.
2.3. Show that the bakery algorithm does not work in absence of choosing variables.
2.4. Prove the correctness of the Filter algorithm in Figure 2.7. (Hint: Show that at each level, the algorithm guarantees that at least one process loses the competition.)
2.5. Consider the software protocol shown in Figure 2.19 for mutual exclusion between two processes.
Does this protocol satisfy (a) mutual exclusion, and (b) livelock freedom (both processes trying to
enter the critical section and none of them succeeding)? Does it satisfy starvation freedom?
2.6. Modify the bakery algorithm to solve k-mutual exclusion problem, in which at most k processes can
be in the critical section concurrently.
2.7. Give a mutual exclusion algorithm that uses atomic swap instruction.
2.8. Give a mutual exclusion algorithm that uses TestAndSet instruction and is free from starvation.
*2.9. Give a mutual exclusion algorithm on N processes that requires O(1) time in absence of contention.
import java.util.concurrent.atomic.*;

public class MCSLock implements MyLock {
  class QNode {
    volatile boolean locked; // volatile so that spinning threads see updates
    volatile QNode next;
    QNode() {
      locked = true;
      next = null;
    }
  }
  AtomicReference<QNode> tailNode = new AtomicReference<QNode>(null);
  ThreadLocal<QNode> myNode;

  public MCSLock() {
    myNode = new ThreadLocal<QNode>() {
      protected QNode initialValue() {
        return new QNode();
      }
    };
  }
  public void lock() {
    QNode pred = tailNode.getAndSet(myNode.get());
    if (pred != null) {
      myNode.get().locked = true;
      pred.next = myNode.get();
      while (myNode.get().locked) { Thread.yield(); }
    }
  }
  public void unlock() {
    if (myNode.get().next == null) {
      if (tailNode.compareAndSet(myNode.get(), null)) return;
      while (myNode.get().next == null) { Thread.yield(); }
    }
    myNode.get().next.locked = false;
    myNode.get().next = null;
  }
}
———————————————————————————–
class Dekker implements Lock {
  boolean wantCS[] = {false, false};
  int turn = 1;
  public void requestCS(int i) { // entry protocol
    int j = 1 - i;
    wantCS[i] = true;
    while (wantCS[j]) {
      if (turn == j) {
        wantCS[i] = false;
        while (turn == j); // busy wait
        wantCS[i] = true;
      }
    }
  }
  public void releaseCS(int i) { // exit protocol
    turn = 1 - i;
    wantCS[i] = false;
  }
}
———————————————————————————–