Java Multithreading For Senior Engineering Interviews
This lesson details the reasons why threads exist and what benefits they provide. We also discuss the problems that come with threads.
Introduction
Threads, like most computer science concepts, aren't physical objects. The closest tangible manifestation of threads can be seen in a debugger. The screenshot below shows the threads of our program suspended in the debugger.
Consider an IDE: when you edit one of your code files and click save, that click initiates a workflow which causes bytes to be written out to the underlying physical disk. However, IO is an expensive operation, and the CPU will be idle while bytes are being written out to the disk.
Whilst IO takes place, the idle CPU could work on something useful and
here is where threads come in - the IO thread is switched out and the UI
thread gets scheduled on the CPU so that if you click elsewhere on the
screen, your IDE is still responsive and does not appear hung or frozen.
Threads can give the illusion of multitasking even though at any given point in time the CPU is executing only one thread. Each thread gets a slice of time on the CPU and then gets switched out, either because it initiates a task which requires waiting without utilizing the CPU, or because it completes its time slot on the CPU. There are many more nuances and intricacies in how thread scheduling works, but what we just described forms the basis of it.
Benefits of Threads
class Demonstration {
    public static void main( String args[] ) throws InterruptedException {
        SumUpExample.runTest();
    }
}

class SumUpExample {

    long startRange;
    long endRange;
    long counter = 0;
    static long MAX_NUM = Integer.MAX_VALUE;

    public SumUpExample(long startRange, long endRange) {
        this.startRange = startRange;
        this.endRange = endRange;
    }

    public void add() {
        for (long i = startRange; i <= endRange; i++) {
            counter += i;
        }
    }

    static public void oneThread() {
        long start = System.currentTimeMillis();
        SumUpExample s = new SumUpExample(1, MAX_NUM);
        s.add();
        long end = System.currentTimeMillis();
        System.out.println("Single thread final count = " + s.counter + " took " + (end - start) + " ms");
    }

    static public void twoThreads() throws InterruptedException {
        long start = System.currentTimeMillis();
        SumUpExample s1 = new SumUpExample(1, MAX_NUM / 2);
        SumUpExample s2 = new SumUpExample(1 + (MAX_NUM / 2), MAX_NUM);

        Thread t1 = new Thread(() -> {
            s1.add();
        });
        Thread t2 = new Thread(() -> {
            s2.add();
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();

        long finalCount = s1.counter + s2.counter;
        long end = System.currentTimeMillis();
        System.out.println("Two threads final count = " + finalCount + " took " + (end - start) + " ms");
    }

    public static void runTest() throws InterruptedException {
        oneThread();
        twoThreads();
    }
}
In my run, the two-threads scenario completes in 652 milliseconds whereas the single-thread scenario takes 886 milliseconds. You may observe different numbers, and on a single-core machine the two-thread version may even be slower, but with multiple CPUs available the time taken by two threads will generally be less than the time taken by a single thread. The performance gains can be many fold depending on the availability of multiple CPUs and the nature of the problem being solved. However, there will always be problems that don't yield well to a multi-threaded approach and may very well be solved efficiently using a single thread.
1. Usually very hard to find bugs, some of which may only rear their head in production environments
This lesson discusses the differences between a program, process and a thread. Also included is an example of a
thread-unsafe program.
Program
Process
Thread
special attention needs to be paid when any thread tries to read or write
to this global shared state. There are several constructs offered by various
programming languages to guard and discipline the access to this global
state, which we will cover in further detail in upcoming lessons.
Notes
int counter = 0;

void incrementCounter() {
    counter++;
}
The increment, though a single line of code, is not atomic. Under the hood it takes three steps:

A. Read the value of the variable counter from memory into a register

B. Add one to the value just read

C. Write the updated value back to memory
Now imagine if we have two threads trying to execute the same function
incrementCounter then one of the ways the execution of the two threads
can take place is as follows:
Let's call one thread T1 and the other T2. Say the counter value is equal to 7.

1. T1 gets scheduled, executes step A, and reads the value 7 into a register.

2. Before T1 can perform steps B and C, it gets context switched out by the operating system.
3. T2 gets scheduled and luckily gets to complete all the three steps A, B
and C before getting switched out for T1. It reads the value 7, adds
one to it and stores 8 back.
4. T1 comes back and since its state was saved by the operating system,
it still has the stale value of 7 that it read before being context
switched. It doesn't know that behind its back the value of the
variable has been updated. It unfortunately thinks the value is still 7,
adds one to it and overwrites the existing 8 with its own computed 8.
If the threads executed serially the final value would have been 9.
import java.util.Random;

class DemoThreadUnsafe {

    static Random random = new Random(System.currentTimeMillis());

    public static void main(String args[]) throws InterruptedException {
        ThreadUnsafeCounter badCounter = new ThreadUnsafeCounter();

        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 100; i++) {
                    badCounter.increment();
                    DemoThreadUnsafe.sleepRandomlyForLessThan10Secs();
                }
            }
        });

        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 100; i++) {
                    badCounter.decrement();
                    DemoThreadUnsafe.sleepRandomlyForLessThan10Secs();
                }
            }
        });

        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();

        // expected to print 0, but usually won't because of the race
        badCounter.printFinalCounterValue();
    }

    public static void sleepRandomlyForLessThan10Secs() {
        try {
            Thread.sleep(random.nextInt(10));
        } catch (InterruptedException ie) {
        }
    }
}

class ThreadUnsafeCounter {

    int count = 0;

    void increment() {
        count++;
    }

    void decrement() {
        count--;
    }

    void printFinalCounterValue() {
        System.out.println("counter is: " + count);
    }
}
This lesson clarifies the common misunderstandings and confusions around concurrency and parallelism.
Introduction
Serial Execution
When programs are serially executed, they are scheduled one at a time on
the CPU. Once a task gets completed, the next one gets a chance to run.
Each task is run from the beginning to the end without interruption. The
analogy for serial execution is a circus juggler who can only juggle one
ball at a time. Definitely not very entertaining!
Concurrency
A concurrent system can have two programs in progress at the same time
where progress doesn't imply execution. One program can be suspended
while the other executes. Both programs are able to make progress as
their execution is interleaved. In concurrent systems, the goal is to
maximize throughput and minimize latency. For example, a browser
running on a single core machine has to be responsive to user clicks but
also be able to render HTML on screen as quickly as possible. Concurrent
systems achieve lower latency and higher throughput when programs
running on the system require frequent network or disk I/O.
Going back to our circus analogy, a concurrent juggler is one who can
juggle several balls at the same time. However, at any one point in time,
he can only have a single ball in his hand while the rest are in flight. Each
ball gets a time slice during which it lands in the juggler's hand and then
is thrown back up. A concurrent system is in a similar sense juggling
several processes at the same time.
Parallelism
Revisiting our juggler analogy, a parallel system would map to at least two
or more jugglers juggling one or more balls. In the case of an operating
system, if it runs on a machine with say four CPUs then the operating
system can execute four tasks at the same time, making execution
parallel. Either a single (large) problem can be executed in parallel or
distinct programs can be executed in parallel on a system supporting
parallel execution.
Concurrency vs Parallelism
This lesson details the differences between the two common models of multitasking.
Introduction
Preemptive Multitasking
Cooperative Multitasking
Preemptive Multitasking
Cooperative Multitasking
Cooperative multitasking requires well-behaved programs to voluntarily give up control to the scheduler so that another program can run. A
program or thread may give up control after a period of time has expired
or if it becomes idle or logically blocked. The operating system’s scheduler
has no say in how long a program or thread runs for. A malicious
program can bring the entire system to a halt by busy waiting or running
an infinite loop and not giving up control. The process scheduler for an
operating system implementing cooperative multitasking is called a
cooperative scheduler. As the name implies, the participating programs
or threads are required to cooperate to make the scheduling scheme
work.
Cooperative vs Preemptive
This lesson discusses the differences between asynchronous and synchronous programming which are often
talked about in the context of concurrency.
Synchronous
Asynchronous
We delve into the characteristics of programs with different resource-use profiles and how that can affect program design choices.
CPU Time
Memory
Networking Resources
Disk Storage
CPU Bound
I/O Bound
I/O bound programs are the opposite of CPU bound programs. Such
programs spend most of their time waiting for input or output operations
to complete while the CPU sits idle. I/O operations can consist of
operations that write or read from main memory or network interfaces.
Because the CPU and main memory are physically separate a data bus
exists between the two to transfer bits to and fro. Similarly, data needs to
be moved between network interfaces and CPU/memory. Even though the physical distances are tiny, the time taken to move the data across is long enough for several thousand CPU cycles to go to waste. This is why I/O
bound programs would show relatively lower CPU utilization than CPU
bound programs.
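To make the contrast concrete, here is a small sketch of our own (not part of the original lesson) that times a CPU bound task against a simulated I/O bound one. The Thread.sleep call stands in for a disk read or network round trip during which the CPU sits idle.

```java
class BoundDemo {
    // CPU bound: the processor is busy for the entire duration
    static long cpuBoundSum(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    // I/O bound (simulated): the thread mostly waits while the CPU is idle.
    // Thread.sleep stands in for a disk or network wait; returns elapsed ms.
    static long ioBoundTask(int waitMillis) {
        long start = System.currentTimeMillis();
        try {
            Thread.sleep(waitMillis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long sum = cpuBoundSum(100_000_000L);
        System.out.println("CPU bound sum = " + sum + " took "
                + (System.currentTimeMillis() - start) + " ms of busy CPU");

        long waited = ioBoundTask(200);
        System.out.println("I/O bound task waited ~" + waited
                + " ms with the CPU idle");
    }
}
```

A profiler run against the two methods would show near 100% CPU utilization for the first and near zero for the second, which is exactly the difference the lesson describes.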
Notes
This lesson discusses throughput and latency in the context of concurrent systems.
Throughput
Throughput is defined as the rate of doing work or how much work gets
done per unit of time. If you are an Instagram user, you could define
throughput as the number of images your phone or browser downloads
per unit of time.
Latency
Throughput vs Latency
The two terms are more frequently used when describing networking links and have more precise meanings in that domain. In the context of concurrency, throughput can be thought of as the amount of work completed per unit of time, and latency as the time taken to finish a given program or computation. For instance, imagine a program that is given hundreds of files containing integers and asked to sum up all the numbers. Since addition is commutative and associative, each file can be worked on in parallel. In a single-threaded environment, each file will be sequentially processed, but in a concurrent system several threads can work in parallel on distinct files. Of course, there will be some overhead to manage the state, including the already processed files. However, such a program will complete the task much faster than a single thread. The performance difference becomes more and more apparent as the number of input files increases. The throughput in this example can be defined as the number of files processed by the program in a minute, and the latency as the total time taken to completely process all the files. As you can observe, in a multithreaded implementation throughput goes up and latency goes down: more work gets done in less time. In general, the two have an inverse relationship.
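As a sketch of the file-summing scenario above (our own illustration, with the file contents replaced by in-memory arrays for simplicity), each worker sums a distinct "file" on the thread pool and the partial results are combined at the end:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelSum {
    // Sums each "file" (here an int array) on its own pooled thread
    // and combines the partial sums into a total.
    static long sumAll(List<int[]> files, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> partials = new ArrayList<>();
        for (int[] file : files) {
            partials.add(pool.submit(() -> {
                long s = 0;
                for (int v : file) s += v;
                return s;
            }));
        }
        long total = 0;
        try {
            for (Future<Long> f : partials) total += f.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) {
        List<int[]> files = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            files.add(new int[]{1, 2, 3, 4, 5});
        }
        // Throughput: files processed per unit of time.
        // Latency: total time to process all 100 files.
        long start = System.currentTimeMillis();
        long total = sumAll(files, 4);
        System.out.println("Total = " + total + " in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}
```

With more worker threads (and enough CPUs), more files are in flight at once, so throughput rises and the overall latency of the batch falls.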
Critical Sections & Race Conditions
This section exhibits how incorrect synchronization in a critical section can lead to race conditions and buggy code.
The concepts of critical section and race condition are explained in depth. Also included is an executable example
of a race condition.
Critical Section
A critical section is any piece of code that can be executed concurrently by more than one thread of the application and that accesses shared data or resources used by the application.
Race Condition
Race conditions typically follow a pattern where a thread first tests a condition and then takes an action based on it; this sequence is called test-then-act. The pitfall here is that the state can be mutated by a second thread just after the test by the first thread and before the first thread takes action based on the test. A different thread changes the predicate in between the test and the act. In this case, the action by the first thread is not justified since the predicate doesn't hold when the action is executed.
Consider the snippet below. We have two threads working on the same variable randInt . The modifier thread perpetually updates the value of randInt in a loop while the printer thread prints the value of randInt only if randInt is divisible by 5. If you let this program run, you'll notice some values get printed even though they aren't divisible by 5, demonstrating a thread-unsafe version of test-then-act.
The below program spawns two threads. One thread prints the value of a
shared variable whenever the shared variable is divisible by 5. A race
condition happens when the printer thread executes a test-then-act if
clause, which checks if the shared variable is divisible by 5 but before the
thread can print the variable out, its value is changed by the modifier
thread. Some of the printed values aren't divisible by 5 which verifies the
existence of a race condition in the code.
import java.util.*;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        RaceCondition.runTest();
    }
}

class RaceCondition {

    int randInt;
    Random random = new Random(System.currentTimeMillis());

    void printer() {
        int i = 1000000;
        while (i != 0) {
            if (randInt % 5 == 0) {
                // if the modifier raced past us, the value may no
                // longer be divisible by 5 when we print it
                if (randInt % 5 != 0)
                    System.out.println(randInt);
            }
            i--;
        }
    }

    void modifier() {
        int i = 1000000;
        while (i != 0) {
            randInt = random.nextInt(1000);
            i--;
        }
    }

    public static void runTest() throws InterruptedException {
        final RaceCondition rc = new RaceCondition();

        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                rc.printer();
            }
        });
        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                rc.modifier();
            }
        });

        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
    }
}
Even though the outer if condition checks for a value divisible by 5 and only then prints randInt , it is just after that check and before the print statement executes that the modifier thread can change the value of randInt . The inner check catches exactly those cases.
For the impatient, the fix is presented below, where we guard the read and write of the randInt variable using the RaceCondition object as the monitor. Don't fret if the solution doesn't make sense for now; it will once we cover various topics in the lessons ahead.
import java.util.*;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        RaceCondition.runTest();
    }
}

class RaceCondition {

    int randInt;
    Random random = new Random(System.currentTimeMillis());

    void printer() {
        int i = 1000000;
        while (i != 0) {
            synchronized (this) {
                if (randInt % 5 == 0) {
                    // with the lock held, this can never be true
                    if (randInt % 5 != 0)
                        System.out.println(randInt);
                }
            }
            i--;
        }
    }

    void modifier() {
        int i = 1000000;
        while (i != 0) {
            synchronized (this) {
                randInt = random.nextInt(1000);
            }
            i--;
        }
    }

    public static void runTest() throws InterruptedException {
        final RaceCondition rc = new RaceCondition();

        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                rc.printer();
            }
        });
        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                rc.modifier();
            }
        });

        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
    }
}
We discuss important concurrency concepts deadlock, liveness, live-lock, starvation and reentrant locks in depth.
Also included are executable code examples for illustrating these concepts.
Deadlock
Deadlocks occur when two or more threads aren't able to make any
progress because the resource required by the first thread is held by the
second and the resource required by the second thread is held by the
first.
Liveness
Live-Lock
Imagine two people, John and Arun, trying to pass each other in a narrow hallway. John moves to his left to let Arun pass, and Arun moves to his right to let John pass. Both now block each other. John sees he's blocking Arun again and moves to his right, while Arun moves to his left seeing he's blocking John. They never cross each other and keep blocking each other. This scenario is an example of a livelock: a process seems to be running and not deadlocked, but in reality it isn't making any progress.
Starvation
Deadlock Example
void increment(){

  acquire MUTEX_A
  acquire MUTEX_B

  // do work here

  release MUTEX_B
  release MUTEX_A
}

void decrement(){

  acquire MUTEX_B
  acquire MUTEX_A

  // do work here

  release MUTEX_A
  release MUTEX_B
}
The above code can potentially result in a deadlock. Note that deadlock
may not always happen, but for certain execution sequences, deadlock
can occur. Consider the below execution sequence that ends up in a
deadlock:
T1 acquires MUTEX_A
T2 acquires MUTEX_B
T1 tries to acquire MUTEX_B but blocks, since T2 holds it
T2 tries to acquire MUTEX_A but blocks, since T1 holds it

Now each thread waits on a mutex the other will never release: a deadlock.
You can come back to the examples presented below as they require an
understanding of the synchronized keyword that we cover in later
sections. Or you can just run the examples and observe the output for
now to get a high-level overview of the concepts we discussed in this
lesson.
If you run the code snippet below, you'll see that the statements for
acquiring locks: lock1 and lock2 print out but there's no progress after
that and the execution times out. In this scenario, the deadlock occurs
because the locks are being acquired in a nested fashion.
class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        Deadlock deadlock = new Deadlock();
        deadlock.runTest();
    }
}

class Deadlock {

    private int counter = 0;
    private Object lock1 = new Object();
    private Object lock2 = new Object();

    Runnable incrementer = new Runnable() {
        @Override
        public void run() {
            try {
                for (int i = 0; i < 100; i++) {
                    incrementCounter();
                    System.out.println("Incrementing " + i);
                }
            } catch (InterruptedException ie) {
            }
        }
    };

    Runnable decrementer = new Runnable() {
        @Override
        public void run() {
            try {
                for (int i = 0; i < 100; i++) {
                    decrementCounter();
                    System.out.println("Decrementing " + i);
                }
            } catch (InterruptedException ie) {
            }
        }
    };

    public void runTest() throws InterruptedException {
        Thread thread1 = new Thread(incrementer);
        Thread thread2 = new Thread(decrementer);

        thread1.start();
        // sleep to make sure thread 1 gets a chance to acquire lock1
        Thread.sleep(100);
        thread2.start();

        thread1.join();
        thread2.join();
    }

    // locks are acquired in the order lock1 -> lock2
    void incrementCounter() throws InterruptedException {
        synchronized (lock1) {
            System.out.println("Acquired lock1");
            Thread.sleep(100);
            synchronized (lock2) {
                counter++;
            }
        }
    }

    // locks are acquired in the opposite order lock2 -> lock1
    void decrementCounter() throws InterruptedException {
        synchronized (lock2) {
            System.out.println("Acquired lock2");
            Thread.sleep(100);
            synchronized (lock1) {
                counter--;
            }
        }
    }
}
Example of a Deadlock
Reentrant Lock
Take a minute to read the code and convince yourself that any object of this class, if locked twice in succession by the same thread, would result in a deadlock. The thread gets blocked on itself, and the program is unable to make any further progress. If you click run, the execution will time out.
class Demonstration {
    public static void main(String args[]) throws Exception {
        NonReentrantLock nreLock = new NonReentrantLock();

        // first attempt to lock succeeds
        nreLock.lock();
        System.out.println("Acquired first lock");

        // second attempt blocks the calling thread on itself forever
        System.out.println("Trying to acquire second lock");
        nreLock.lock();
        System.out.println("Acquired second lock");
    }
}

class NonReentrantLock {

    boolean isLocked;

    public NonReentrantLock() {
        isLocked = false;
    }

    public synchronized void lock() throws InterruptedException {
        while (isLocked) {
            wait();
        }
        isLocked = true;
    }

    public synchronized void unlock() {
        isLocked = false;
        notify();
    }
}
The concepts of, and the difference between, a mutex and a semaphore will draw befuddled expressions on most developers' faces. We discuss the differences between these two fundamental concurrency constructs offered by almost all language frameworks. The difference between a mutex and a semaphore makes for a pet interview question in senior engineering interviews!
Mutex
Semaphore
Mutex Example
The following illustration shows how two threads acquire and release a
mutex one after the other to gain access to shared data. Mutex guarantees
the shared state isn't corrupted when competing threads work on it.
When a Semaphore Masquerades as a Mutex
A semaphore can potentially act as a mutex if the number of permits it can give out is set to 1. However, the most important difference between the two is that in the case of a mutex the same thread must call acquire and the subsequent release on the mutex, whereas in the case of a binary semaphore, different threads can call acquire and release on the semaphore. The pthreads library documentation states this in the pthread_mutex_unlock() method's description.
If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results.
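The following sketch of ours (not from the pthreads documentation) shows this difference in Java: a java.util.concurrent.Semaphore initialized with zero permits is released by one thread and acquired by another, something a mutex would forbid.

```java
import java.util.concurrent.Semaphore;

class CrossThreadSemaphore {
    // Zero permits: acquire() blocks until some other thread releases
    static final Semaphore semaphore = new Semaphore(0);
    static volatile boolean childRanFirst = false;

    static void demo() {
        Thread child = new Thread(() -> {
            childRanFirst = true;
            // A different thread performs the release: legal for a
            // semaphore, but a mutex requires the locking thread to unlock
            semaphore.release();
        });
        child.start();
        try {
            semaphore.acquire(); // this thread blocks until child releases
            child.join();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        demo();
        System.out.println("Main thread proceeded after child's release: "
                + childRanFirst);
    }
}
```

Because the release happens-before the corresponding acquire, the main thread is guaranteed to see childRanFirst as true, which also illustrates the signalling role semaphores play between threads.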
Learn what a monitor is and how it is different from a mutex. Monitors are advanced concurrency constructs and specific to language frameworks.
To understand monitors, let's first see the problem they solve. Usually, in
multi-threaded applications, a thread needs to wait for some program
predicate to be true before it can proceed forward. Think about a
producer/consumer application. If the producer hasn't produced anything
the consumer can't consume anything, so the consumer must wait on a
predicate that lets the consumer know that something has indeed been
produced. What could be a crude way of accomplishing this? The
consumer could repeatedly check in a loop for the predicate to be set to
true. The pattern would resemble the pseudocode below:
void busyWaitFunction() {
// acquire mutex
while (predicate is false) {
// release mutex
// acquire mutex
}
// do something useful
// release mutex
}
Within the while loop we'll first release the mutex giving other threads a
chance to acquire it and set the loop predicate to true. And before we
check the loop predicate again, we make sure we have acquired the
mutex again. This works but is an example of "spin waiting" which
wastes a lot of CPU cycles. Next, let's see how condition variables solve the
spin-waiting issue.
Condition Variables
Now imagine a producer places an item in the buffer. The predicate, the
size of the buffer, just changed and the producer wants to let the
consumer threads know that there is an item to be consumed. This
producer thread would then invoke signal() on the condition variable.
The signal() method when called on a condition variable causes one of
the threads that has been placed in the wait queue to get ready for
execution. Note we didn't say the woken up thread starts executing, it just
gets ready - and that could mean being placed in the ready queue. It is
only after the producer thread which calls the signal() method has
released the associated mutex that the thread in the ready queue
starts executing. The thread in the ready queue must wait to acquire the
mutex associated with the condition variable before it can start executing.
void efficientWaitingFunction() {
mutex.acquire()
while (predicate == false) {
condVar.wait()
}
// Do something useful
mutex.release()
}
void changePredicate() {
mutex.acquire()
set predicate = true
condVar.signal()
mutex.release()
}
Note that the order of signaling the condition variable and releasing the
mutex can be interchanged, but generally, the preference is to signal first
and then release the mutex. However, the ordering might have
ramifications on thread scheduling depending on the threading
implementation.
The wary reader would have noticed us using a while loop to test for the predicate. After all, the pseudocode could have been written as follows:

void efficientWaitingFunction() {
  mutex.acquire()
  if (predicate == false) {
    condVar.wait()
  }
  // Do something useful
  mutex.release()
}

The if version is broken for two reasons. First, many implementations permit spurious wakeups: a thread can return from wait() without any thread having signaled it. Second, between the moment the waiting thread is signaled and the moment it re-acquires the mutex, another thread may have run and made the predicate false again. The only safe approach is to re-test the predicate in a loop after every wakeup.
Monitor Explained
After the above discussion, we can now realize that a monitor is made
up of a mutex and one or more condition variables. A single monitor
can have multiple condition variables but not vice versa. Theoretically,
another way to think about a monitor is to consider it as an entity having
two queues or sets where threads can be placed. One is the entry set and
the other is the wait set. When a thread A enters a monitor, it is placed into the entry set. If no other thread owns the monitor, which is equivalent to saying that no thread is actively executing within the monitor section, then thread A will acquire the monitor and is said to own it. Thread A will continue to execute within the monitor section till it exits the monitor or calls wait() on an associated condition variable, at which point it is
placed into the wait set. While thread A owns the monitor no other
thread will be able to execute any of the critical sections protected by the
monitor. New threads requesting ownership of the monitor get placed
into the entry set.
Practically, in Java each object is a monitor and implicitly has a lock and is
a condition variable too. You can think of a monitor as a mutex with a
wait set. Monitors allow threads to exercise mutual exclusion as well as
cooperation by allowing them to wait and signal on conditions.
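To tie this together, here is a minimal sketch of our own (assuming a single producer and single consumer) of a Java object used as a monitor: synchronized provides the mutual exclusion, and wait()/notify() provide the cooperation.

```java
class MonitorSketch {
    private final Object monitor = new Object();
    private int item = 0;
    private boolean available = false;

    void produce(int value) {
        synchronized (monitor) {
            item = value;
            available = true;
            monitor.notify();   // wake one thread from the wait set
        }
    }

    int consume() throws InterruptedException {
        synchronized (monitor) {
            // while, not if: re-test the predicate after every wakeup
            while (!available) {
                monitor.wait(); // releases the monitor, joins the wait set
            }
            available = false;
            return item;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorSketch sketch = new MonitorSketch();
        Thread producer = new Thread(() -> sketch.produce(42));
        producer.start();
        // blocks in the wait set until the producer signals
        System.out.println("Consumed: " + sketch.consume());
        producer.join();
    }
}
```

Here the monitor object plays both roles described above: its lock is the entry set's gate, and wait()/notify() move threads in and out of the wait set.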
Continues the discussion of the differences between a mutex and a monitor and also looks at Java's
implementation of the monitor.
Java's Monitor
In Java every object is a condition variable and has an associated lock that
is hidden from the developer. Each Java object exposes wait() and notify() methods.
class BadSynchronization {
    public static void main(String args[]) throws InterruptedException {
        Object dummyObject = new Object();
        // Attempting to call wait() without owning the object's
        // monitor results in an IllegalMonitorStateException
        dummyObject.wait();
    }
}

Below is another example where we synchronize on one object but call notify() on another. The call to lock.notify() is legal because we hold lock 's monitor, but dummyObject.notify() throws an IllegalMonitorStateException.

class BadSynchronization {
    public static void main(String args[]) {
        Object dummyObject = new Object();
        Object lock = new Object();
        synchronized (lock) {
            lock.notify();        // ok, we own lock's monitor
            dummyObject.notify(); // IllegalMonitorStateException
        }
    }
}
Once the asleep thread is signaled and wakes up, you may ask why it needs to check the condition again: hasn't the signaling thread just set the condition to true? The reason is that between the signal and the woken thread re-acquiring the monitor, a third thread may have run and falsified the condition again, so the woken thread must always re-test its predicate before proceeding.
The Difference
Blindly adding threads to speed up program execution may not always be a good idea. Find out what Amdahl's Law says about parallelizing a program.
Definition
If you have a poultry farm where a hundred hens lay eggs each day, then
no matter how many people you hire to process the laid eggs, you still
need to wait an entire day for the 100 eggs to be laid. Increasing the
number of workers on the farm can't shorten the time it takes for a hen to
lay an egg. Similarly, software programs consist of parts which can't be
sped up even if the number of processors is increased. These parts of the
program must execute serially and aren't amenable to parallelism.
Example
Amdahl's law states that if P is the fraction of a program that can be parallelized, the maximum speed-up achievable with n processors is

S(n) = 1 / ((1 - P) + P/n)

Say our program has a parallelizable portion of P = 90% = 0.9. Now let's see how the speed-up grows as we increase the number of processors:

n = 1 processor, speed-up = 1x
n = 2 processors, speed-up ≈ 1.82x
n = 5 processors, speed-up ≈ 3.57x
n = 10 processors, speed-up ≈ 5.26x
n = 100 processors, speed-up ≈ 9.17x
n = 1000 processors, speed-up ≈ 9.91x
n = infinite processors, speed-up = 10x

Note that even with unlimited processors the speed-up is capped at 1/(1 - P) = 10x, because the serial 10% of the program can never be sped up.
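The numbers for the P = 0.9 example can be reproduced with a few lines of code (a quick sketch of the formula written for this discussion, not part of the original lesson):

```java
class AmdahlsLaw {
    // S(n) = 1 / ((1 - P) + P/n) for parallelizable fraction P
    static double speedup(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.9;
        for (int n : new int[]{1, 2, 5, 10, 100, 1000}) {
            System.out.printf("n = %4d processors, speed-up = %.2fx%n",
                    n, speedup(p, n));
        }
        // As n grows without bound, the speed-up approaches 1/(1 - P)
        System.out.println("Upper bound: " + (1.0 / (1.0 - p)) + "x");
    }
}
```

Plugging in other values of P shows how quickly the serial fraction dominates: even at P = 0.99, a thousand processors yield less than a 100x speed-up.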
One should take calculations using Amdahl's law with a grain of salt. If the formula spits out a speed-up of 5x, it doesn't imply that in reality one would observe a similar speed-up. There are other factors, such as the memory architecture, cache misses, and network and disk I/O, that can affect the execution time of a program, and the actual speed-up might be less than the calculated one.
Amdahl's law works on a problem of fixed size. However, as computing resources improve, algorithms run on larger and larger datasets. As the dataset size grows, the parallelizable portion of the program grows faster than the serial portion, and a more realistic assessment of performance is given by Gustafson's law, which we won't discuss here as it is beyond the scope of this text.
Moore's Law
Initially, the clock speeds of processors also doubled along with the
transistor count. This is because as transistors get smaller, their
frequency increases and propagation delays decrease because now the
transistors are packed closer together. However, the promise of
exponential growth by Moore’s law came to an end more than a decade
ago with respect to clock speeds. The increase in clock speeds of
processors has slowed down much faster than the increase in number of
transistors that can be placed on a microchip. If we plot clock speeds, we find that the exponential growth stopped after 2003 and the trend line flattened out. The clock speed (proportional to the difference between
supply voltage and threshold voltage) cannot increase because the supply
voltage is already down to an extent where it cannot be decreased to get
dramatic gains in clock speed. In 10 years from 2000 to 2009, clock
speed just increased from 1.3 GHz to 2.8 GHz merely doubling in a
decade rather than increasing 32 times as expected by Moore's law.
The following plot shows the clock speeds flattening out towards 2010.
Since processors aren't getting faster as quickly as they used to, we need
alternative measures to achieve performance gains. One of the ways to do
that is to use multicore processors. Introduced in the early 2000s,
multicore processors have more than one CPU on the same machine. To
exploit this processing power, programs must be written as multi-
threaded applications. A single-threaded application running on an octa-
core processor will only use 1/8th of the total throughput of that machine,
which is unacceptable in most scenarios.
Another analogy is to think of a bullock cart being pulled by an ox. We
can breed the ox to be stronger and more powerful to pull more load but
eventually there's a limit to how strong the ox can get. To pull more load,
an easier solution is to attach several oxen to the bullock cart. The
computing industry is also going in the direction of this analogy.
Thread Safety & Synchronized
This lesson explains thread-safety and the use of the synchronized keyword.
With the abstract concepts discussed, we'll now turn to the concurrency
constructs offered by Java and use them in later sections to solve practical
coding problems.
Thread Safe
A class and its public APIs are labelled thread safe if multiple threads can consume the exposed APIs without causing race conditions or state corruption for the class. Note that a composition of two or more thread-safe classes doesn't guarantee that the resulting type is thread-safe.
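For instance, every method of ConcurrentHashMap is individually thread-safe, yet the check-then-act composition in the sketch below (our own illustration, not from the lesson) is not: another thread can insert the key between the containsKey check and the put.

```java
import java.util.concurrent.ConcurrentHashMap;

class UnsafeComposition {
    private final ConcurrentHashMap<String, Integer> map =
            new ConcurrentHashMap<>();

    // NOT thread-safe as a whole: another thread can put the key
    // between the containsKey check and the put (test-then-act race),
    // even though each individual call is thread-safe
    void putIfAbsentUnsafe(String key, int value) {
        if (!map.containsKey(key)) {
            map.put(key, value);
        }
    }

    // Thread-safe: the check and the insert happen atomically
    void putIfAbsentSafe(String key, int value) {
        map.putIfAbsent(key, value);
    }

    Integer get(String key) {
        return map.get(key);
    }

    public static void main(String[] args) {
        UnsafeComposition demo = new UnsafeComposition();
        demo.putIfAbsentSafe("answer", 42);
        demo.putIfAbsentSafe("answer", 0); // ignored; key already present
        System.out.println("answer = " + demo.get("answer"));
    }
}
```

The lesson is that thread safety is a property of how operations compose, not just of the individual calls, which is why libraries provide atomic compound operations like putIfAbsent.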
Synchronized
Each object in Java has an entity associated with it called the "monitor
lock" or just monitor. Think of it as an exclusive lock. Once a thread gets
hold of the monitor of an object, it has exclusive access to all the methods
marked as synchronized. Any other thread attempting to invoke a synchronized method on the object will block till the first thread releases the monitor, which is equivalent to the first thread exiting the synchronized method.
Note carefully:
1. For static methods, the monitor will be the class object, which is
distinct from the monitor of each instance of the same class.
class Employee {

    // shared variable
    private String name;

    // method is synchronized on the 'this' object
    public synchronized void setName(String name) {
        this.name = name;
    }

    // also synchronized on 'this'
    public synchronized void resetName() {
        this.name = "";
    }

    // also synchronized on 'this'
    public synchronized String getName() {
        return this.name;
    }
}
As an example, look at the Employee class above. All three methods are synchronized on the this object. If we create an object and three different threads attempt to execute each method of the object, only one will get access, and the other two will block. If we synchronized on an object other than this , which given how the code is written is only possible for the getName method, then the critical sections of the program become protected by two different locks. In that scenario, since setName and resetName would still be synchronized on the this object, only one of the two could execute at any given point in time. However, getName would be synchronized independently of the other two methods and could execute alongside either of them. The change would look as follows:
class Employee {

    // shared variable
    private String name;
    private Object lock = new Object();

    // synchronized on 'this'
    public synchronized void setName(String name) {
        this.name = name;
    }

    // synchronized on 'this'
    public synchronized void resetName() {
        this.name = "";
    }

    // synchronized on a different object, so it can run
    // alongside setName or resetName
    public String getName() {
        synchronized (lock) {
            return this.name;
        }
    }
}
All the sections of code that you guard with synchronized blocks on the
same object can have at most one thread executing inside of them at any
given point in time. These sections of code may belong to different
methods, classes or be spread across the code base.
Note that with the use of the synchronized keyword, Java forces you to implicitly acquire and release the monitor lock for the object within the same method! One can't explicitly acquire and release the monitor in different methods. This has an important ramification: the same thread will acquire and release the monitor! In contrast, if we used a semaphore, we could acquire/release it in different methods or from different threads.
class Demonstration {
    public static void main( String args[] ) throws InterruptedException {
        IncorrectSynchronization.runExample();
    }
}

class IncorrectSynchronization {

    Boolean flag = true;

    public void example() throws InterruptedException {

        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                // synchronizing on a field that another thread
                // reassigns is the bug being demonstrated
                synchronized (flag) {
                    try {
                        while (flag) {
                            System.out.println("First thread about to sleep");
                            Thread.sleep(5000);
                            System.out.println("Woke up and about to invoke wait()");
                            flag.wait();
                        }
                    } catch (InterruptedException ie) {
                    }
                }
            }
        });

        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                // reassigns flag to a different object without
                // holding the monitor t1 synchronized on
                flag = false;
                System.out.println("Boolean assignment done.");
            }
        });

        t1.start();
        Thread.sleep(1000);
        t2.start();
        t1.join();
        t2.join();
    }

    public static void runExample() throws InterruptedException {
        new IncorrectSynchronization().example();
    }
}
wait()
The wait method is exposed on each java object. Each Java object can act
as a condition variable. When a thread executes the wait method, it
releases the monitor for the object and is placed in the wait queue. Note
that the thread must be inside a synchronized block of code that
synchronizes on the same object as the one on which wait() is being
called, or in other words, the thread must hold the monitor of the
object on which it'll call wait. If not, an IllegalMonitorStateException is raised!
notify()
Like the wait method, notify() can only be called by the thread which owns the monitor for the object on which notify() is being called; otherwise an IllegalMonitorStateException is thrown. The notify method awakens one of the threads in the associated wait queue, i.e., one of the threads waiting on the object's monitor.
notifyAll()
This method is the same as the notify() one except that it wakes up all
the threads that are waiting on the object's monitor.
Interrupting Threads
Interrupted Exception
You'll often come across this exception being thrown from functions. When a thread wait()-s or sleep()-s, one way for it to give up waiting/sleeping is to be interrupted. If a thread is interrupted while waiting/sleeping, it'll wake up and immediately throw an InterruptedException.
The thread class exposes the interrupt() method which can be used to
interrupt a thread that is blocked in a sleep() or wait() call. Note that
invoking the interrupt method only sets a flag that is polled periodically
by sleep or wait to know the current thread has been interrupted and an
interrupted exception should be thrown.
class Demonstration {
    public static void main( String args[] ) throws InterruptedException {
        InterruptExample.example();
    }
}

class InterruptExample {

    static public void example() throws InterruptedException {

        final Thread sleepyThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("I am too sleepy... Let me sleep for an hour.");
                    Thread.sleep(1000 * 60 * 60);
                } catch (InterruptedException ie) {
                    System.out.println("The interrupt flag is cleared: " + Thread.interrupted() + " " + Thread.currentThread().isInterrupted());
                    // restore the interrupt status
                    Thread.currentThread().interrupt();
                    System.out.println("Oh someone woke me up!");
                    System.out.println("The interrupt flag is set now: " + Thread.currentThread().isInterrupted() + " " + Thread.interrupted());
                }
            }
        });

        sleepyThread.start();

        System.out.println("About to wake up the sleepy thread...");
        sleepyThread.interrupt();
        System.out.println("Woke up sleepy thread...");

        sleepyThread.join();
    }
}
In the catch block, we invoke interrupt() on the current thread to restore the interrupt status, which is why the last print shows the status as true.
Note that there are two methods to check the interrupt status of a thread. One is the static method Thread.interrupted() and the other is the instance method Thread.currentThread().isInterrupted() . The important difference between the two is that the static method returns the interrupt status and also clears it at the same time. In the last print statement we deliberately call the instance method first, followed by the static method. If we reversed the order of the two calls, the output would be true and false instead of true and true.
Volatile
Reentrant Lock
Java's answer to the traditional mutex is the reentrant lock, which comes
with additional bells and whistles. It is similar to the implicit monitor lock
accessed when using synchronized methods or blocks. With the reentrant
lock, you are free to lock and unlock it in different methods but not with
different threads. If you attempt to unlock a reentrant lock object by a
thread which didn't lock it initially, you'll get an
IllegalMonitorStateException. This behavior is similar to when a thread
attempts to unlock a pthread mutex.
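As an illustrative sketch (a hypothetical ReentrantLockDemo class, not the course's code), locking in one method and unlocking in another is fine as long as the same thread does both, whereas unlocking without holding the lock throws IllegalMonitorStateException:

```java
import java.util.concurrent.locks.ReentrantLock;

class ReentrantLockDemo {

    static final ReentrantLock lock = new ReentrantLock();
    static int counter = 0;

    // Locking in one method and unlocking in another is legal,
    // as long as the same thread performs both operations.
    static void acquireAndIncrement() {
        lock.lock();
        counter++;
    }

    static void release() {
        lock.unlock();
    }

    public static void main(String[] args) {
        acquireAndIncrement();
        release();

        // Unlocking a lock the current thread doesn't hold
        // throws IllegalMonitorStateException.
        try {
            lock.unlock();
        } catch (IllegalMonitorStateException e) {
            System.out.println("Can't unlock a lock we don't hold");
        }
    }
}
```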
Condition Variable
java.util.concurrent
Missed Signals
In later sections, you'll learn that the way we are using the condition
variable's await method is incorrect. The idiomatic way of using await is
in a while loop with an associated boolean condition. For now, observe
the possibility of losing signals between threads.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
class Demonstration {
class MissedSignalExample {
}
});
lock.lock();
try {
condition.await();
System.out.println("Received signal");
} catch (InterruptedException ie) {
// handle interruption
}
lock.unlock();
}
});
signaller.start();
signaller.join();
waiter.start();
waiter.join();
System.out.println("Program Exiting.");
}
}
The above code, when run, will never print the statement Program Exiting
and execution will time out. Apart from refactoring the code to match
the idiomatic usage of condition variables in a while loop, another
possible fix is to use a semaphore for signalling between the two threads,
as shown below.
import java.util.concurrent.Semaphore;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        FixedMissedSignalExample.example();
    }
}

class FixedMissedSignalExample {

    public static void example() throws InterruptedException {

        final Semaphore semaphore = new Semaphore(0);

        // A semaphore "remembers" a release that happens before the
        // acquire, so the signal can't be missed.
        Thread signaller = new Thread(() -> {
            semaphore.release();
            System.out.println("Sent signal");
        });

        Thread waiter = new Thread(() -> {
            try {
                semaphore.acquire();
                System.out.println("Received signal");
            } catch (InterruptedException ie) {
                // handle interruption
            }
        });

        signaller.start();
        signaller.join();

        Thread.sleep(5000);

        waiter.start();
        waiter.join();

        System.out.println("Program Exiting.");
    }
}
Semaphore
import java.util.concurrent.Semaphore;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        IncorrectSemaphoreExample.example();
    }
}

class IncorrectSemaphoreExample {

    public static void example() throws InterruptedException {

        final Semaphore semaphore = new Semaphore(1);

        Thread badThread = new Thread(() -> {
            while (true) {
                try {
                    semaphore.acquire();
                } catch (InterruptedException ie) {
                    // handle thread interruption
                }
                // Exception thrown before the semaphore is released,
                // so the permit is lost for good
                throw new RuntimeException("exception happens");
            }
        });

        Thread goodThread = new Thread(() -> {
            System.out.println("Good thread patiently waiting to be signalled.");
            try {
                semaphore.acquire();
            } catch (InterruptedException ie) {
                // handle thread interruption
            }
        });

        badThread.start();
        Thread.sleep(1000);
        goodThread.start();
        badThread.join();
        goodThread.join();
        System.out.println("Exiting Program");
    }
}
The above code, when run, times out and shows that one of the
threads threw an exception. The code is never able to release the
semaphore, causing the other thread to block forever. Whenever using
locks or semaphores, remember to unlock or release the semaphore in a
finally block. The corrected version appears below.
import java.util.concurrent.Semaphore;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {

        final Semaphore semaphore = new Semaphore(1);

        Thread badThread = new Thread(() -> {
            while (true) {
                try {
                    semaphore.acquire();
                    try {
                        throw new RuntimeException("exception happens");
                    } catch (Exception e) {
                        // handle any program logic exception and exit the function
                        return;
                    } finally {
                        // The finally block guarantees the permit is returned
                        // even when an exception is thrown
                        System.out.println("Bad thread releasing semaphore.");
                        semaphore.release();
                    }
                } catch (InterruptedException ie) {
                    // handle thread interruption
                }
            }
        });

        Thread goodThread = new Thread(() -> {
            System.out.println("Good thread patiently waiting to be signalled.");
            try {
                semaphore.acquire();
            } catch (InterruptedException ie) {
                // handle thread interruption
            }
        });

        badThread.start();
        Thread.sleep(1000);
        goodThread.start();
        badThread.join();
        goodThread.join();
        System.out.println("Exiting Program");
    }
}
Running the above code will print the Exiting Program statement.
Spurious Wakeups
Lock Fairness
We'll briefly touch on the topic of fairness in locks since it's otherwise out
of scope for this course. When locks get acquired by threads, there's no
guarantee of the order in which threads are granted access to a lock. A
thread requesting lock access more frequently may be able to acquire the
lock an unfairly greater number of times than other threads. Java locks can
be turned into fair locks by passing true for the fairness constructor
parameter. However, fair locks exhibit lower throughput and are slower
compared to their unfair counterparts.
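For instance, a fair reentrant lock can be requested as below (a minimal sketch; isFair() simply reports the policy the lock was constructed with):

```java
import java.util.concurrent.locks.ReentrantLock;

class FairLockDemo {
    public static void main(String[] args) {
        // Passing true requests a fair ordering policy: the longest
        // waiting thread is granted the lock next, at the cost of
        // lower overall throughput.
        ReentrantLock fairLock = new ReentrantLock(true);
        ReentrantLock unfairLock = new ReentrantLock(); // default is unfair

        System.out.println("fair lock: " + fairLock.isFair());     // true
        System.out.println("unfair lock: " + unfairLock.isFair()); // false
    }
}
```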
Thread Pools
Java offers thread pools via its Executor Framework. The framework
includes classes such as the ThreadPoolExecutor for creating thread pools.
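A minimal sketch of using the framework (the pool size and tasks here are arbitrary choices for illustration) could look like:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ThreadPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Executors is a convenience factory over ThreadPoolExecutor;
        // here we create a fixed pool of 3 worker threads.
        ExecutorService pool = Executors.newFixedThreadPool(3);

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " ran task " + taskId));
        }

        // Stop accepting new tasks and wait for submitted ones to finish
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```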
Java Memory Model
This lesson lays out the ground work for understanding the Java Memory Model.
Consider the below code snippet, executed in our main thread. Assume
the application also spawns a couple of other threads that'll execute the
method runMethodForOtherThreads() .
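A minimal version of the snippet, consistent with the surrounding discussion (myVariable and runMethodForOtherThreads() come from the text; the class name and the rest are an assumed sketch), would be:

```java
class VisibilityExample {

    int myVariable = 0;

    // Executed by the main thread
    void runMethodForMainThread() {
        myVariable = 7;
    }

    // Executed by the other threads; without proper synchronization
    // they may keep seeing the stale value 0
    void runMethodForOtherThreads() {
        System.out.println(myVariable);
    }
}
```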
Now you would expect the other threads to see the myVariable
value change to 7 as soon as the main thread executes the assignment.
This assumption is false on modern architectures, and other threads
may see the change in the value of the variable myVariable with a delay
or not at all. Below are some of the reasons that can cause this to happen.

One likely scenario is that the variable is updated with the new value
in the processor's cache but not in the main memory. When another
thread running on another core requests the variable myVariable 's value
from the memory, it still sees the stale value of 0. This is a specific
example of the cache coherence problem. Different processor
architectures have different policies as to when an individual processor's
cache is reconciled with the main memory.
Within-Thread as-if-serial
Let's see in the next lesson how these optimizations can give surprising
results in multithreaded scenarios.
Reordering Effects
This lesson discusses the compiler, runtime or hardware optimizations that can cause reordering of program
instructions
Take a look at the following program and try to come up with all the
possible outcomes for the variables ping and pong .
class Demonstration {
    public static void main(String args[]) throws Exception {
        (new ReorderingExample()).reorderTest();
    }
}

class ReorderingExample {

    // Fields and the body of t2 are reconstructed to match the
    // discussion that follows; t2 mirrors t1 with the variable
    // pairs swapped
    private int ping = 0;
    private int pong = 0;
    private int foo = 0;
    private int bar = 0;

    public void reorderTest() throws InterruptedException {

        Thread t1 = new Thread(() -> {
            bar = 1;
            pong = foo;
        });

        Thread t2 = new Thread(() -> {
            foo = 1;
            ping = bar;
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(ping + " " + pong);
    }
}
1 and 1
1 and 0
0 and 1
However, it might surprise many but the program can very well print 0
and 0! How is that even possible? Think from the point of view of a
compiler, it sees the following instructions for thread t1's run() method:
bar = 1;
pong = foo;
The compiler doesn't know that the variable bar is being used by another
thread so it may take the liberty to reorder the statements like so:
pong = foo;
bar = 1;
The two statements don't have a dependence on each other in the sense
that they are working off of completely different variables. For
performance reasons, the compiler may decide to switch their ordering.
Other forces are also at play, for instance, the value of one of the variables
may get flushed out to the main memory from the processor cache but
not for the other variable.
Note that with the reordering of the statements the JVM still is able to
honor the within-thread as-if-serial semantics and is completely justified
to move the statements around. Such performance and optimization
tricks by the compiler, runtime or hardware catch unsuspecting
developers off guard and lead to bugs which are very hard to reproduce
in production.
Platform
Java touts the famous write once, run anywhere mantra as one of its
strengths. However, this isn't possible without Java shielding us from the
vagaries of the multitude of memory architectures that exist in the
wild. For instance, the frequency of reconciling a processor's cache with
the main memory depends on the processor architecture. A processor
may relax its memory coherence guarantees in favor of better
performance. The architecture's memory model specifies the guarantees
a program can expect from the memory model. It will also specify
instructions required to get additional memory coordination guarantees
when data is being shared among threads. These instructions are usually
called memory fences or barriers but the Java developer can rely on the
JVM to interface with the underlying platform's memory model through
its own memory model called JMM (Java Memory Model) and insert these
platform memory specific instructions appropriately. Conversely, the JVM
relies on the developer to identify when data is shared through the use of
proper synchronization.
synchronized (myLock) {
    myVariable = 3;
}
[Figure: The JVM's Java memory model ensures correct working across
processor architectures 1, 2 and 3.]
The happens-before Relationship
Total Order
You are already familiar with total ordering: the sequence of natural
numbers, i.e. 1, 2, 3, 4, ..., is a total ordering. Each element is either greater
or smaller than any other element in the set of natural numbers
(totality). If 2 < 4 and 4 < 7, then we know that 2 < 7 necessarily
(transitivity). And finally, if 3 < 5 then 5 can't be less than 3
(asymmetry).
Partial Order
If two fields X and Y are being assigned but don't depend on each
other, then the compiler is free to reorder them
Note that all these reorderings may happen behind the scenes in a single-
threaded program but the program sees no ill-effects of these reorderings
as the JMM guarantees that the outcome of the program would be the
same as if these reorderings never happened.
However, when multiple threads are involved then these reorderings take
on an altogether different meaning. Without proper synchronization,
these same optimizations can wreak havoc and program output would be
unpredictable.
The JMM is defined in terms of actions, which include reads and writes of
variables, locks and unlocks of monitors, and starting and joining of
threads. Consider the snippet below:
int x = 3;
int y = 7;
int a = 4;
int b = 9;

Object lock1 = new Object();
Object lock2 = new Object();

// BLOCK#1
// The statements in block#1 and block#2 aren't dependent
// on each other and the two blocks can be reordered by the
// compiler
x = a;

// BLOCK#2
// These two writes within block#2 can't be reordered, as
// they are dependent on each other. Though this block can
// be ordered before block#1
y += y;
y *= y;

// BLOCK#3
// Because this block uses x and y, it can't be placed before
// the assignments to the two variables, i.e. block#1 and block#2
synchronized (lock1) {
    x *= x;
    y *= y;
}

// BLOCK#4
// Since this block is also not dependent on block#3, it can be
// placed before block#3 or block#2. But it can't be placed before
// block#1, as that would assign a different value to x
synchronized (lock2) {
    a *= a;
    b *= b;
}
Now note that even though all this reordering magic can happen in the
background, the notion of program order is still maintained, i.e., the
final outcome is exactly the same as if the reorderings never happened.
Furthermore, block#1 will appear to happen-before block#2 even if
block#2 gets executed first. Also note that block#2 and block#4 have no
ordering dependency on each other.
One can see that there's no partial ordering between block#1 and
block#2 but there's a partial ordering between block#1 and block#3
where block#3 must come after block#1.
public void readerThread() {

    a *= 10;

    // BLOCK#4
    // Moved out to here from the writerThread method
    synchronized (lock2) {
        a *= a;
        b *= b;
    }
}
Note we moved block#4 out into the new method readerThread . If the
readerThread runs to completion, it is possible for the writerThread to
never see the updated value of the variable a , as it may never have been
flushed out to the main memory, where the writerThread would attempt
to read it from. There's no happens-before relationship between the two
code snippets executed in two different threads!
To make sure that the changes done by one thread to shared data are
visible immediately to the next thread accessing those same variables, we
must establish a happens-before relationship between the executions of the
two threads, for instance by having both synchronize on the same monitor.
This implies that any memory operations which were visible to a thread
before exiting a synchronized block are visible to any thread after it
enters a synchronized block protected by the same monitor, since all the
memory operations happen before the release, and the release happens
before the acquire. Exiting a synchronized block causes the cache to be
flushed to the main memory so that the writes made by the exiting thread
are visible to other threads. Similarly, entering a synchronized block has
the effect of invalidating the local processor cache and reloading of
variables from the main memory so that the entering thread is able to see
the latest values.
Note that a happens-before relationship doesn't mean that one thread
executes before the other. All it means is that when readerThread releases
the monitor, whatever shared variables it has manipulated up to that point
will have their latest values visible to the writerThread as soon as it
acquires the same monitor. If it acquires a different monitor, then there's
no happens-before relationship and it may or may not see the latest values
for the shared variables.
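The idea can be sketched as below (a hypothetical HappensBefore class, not the lesson's code): all writes made while holding the monitor become visible to whichever thread acquires the same monitor next.

```java
class HappensBefore {

    private final Object monitor = new Object();
    private int sharedCount = 0;

    // All writes made while holding the monitor (and before releasing
    // it) become visible to whichever thread acquires the monitor next.
    void increment() {
        synchronized (monitor) {
            sharedCount++;
        }
    }

    // Acquiring the same monitor establishes the happens-before edge
    // with the previous release, so the latest value is observed.
    int read() {
        synchronized (monitor) {
            return sharedCount;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        HappensBefore hb = new HappensBefore();
        Thread writer = new Thread(hb::increment);
        writer.start();
        writer.join(); // join() also establishes a happens-before edge
        System.out.println(hb.read()); // guaranteed to print 1
    }
}
```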
[Figure: Happens-Before Relationship — Thread 1 sets y = 17, acquires
lock L, sets x = 5 and z = 6, then releases lock L. Releasing lock L causes
the variable values written inside the synchronized block and before it to
be flushed out to the main memory. Thread 2 then acquires the same lock
L, which establishes a happens-before relationship; the variables are
reloaded from the main memory and the latest values x = 5, y = 17 and
z = 6 become visible to Thread 2.]
Blocking Queue | Bounded Buffer | Consumer Producer
Classical synchronization problem involving a limited size buffer which can have items added to it or removed from
it by different producer and consumer threads. This problem is known by different names: consumer producer
problem, bounded buffer problem or blocking queue problem.
Problem
[Figure: Producer threads add items to a bounded buffer while consumer
threads remove items from it.]
Solution
Our queue will have a finite size that is passed in via the constructor.
Additionally, we'll use an array as the data structure for backing our
queue. Furthermore, we'll expose the APIs enqueue and dequeue for our
blocking queue class. We'll also need a head and a tail pointer to keep
track of the front and back of the queue and a size variable to keep track
of the queue size at any given point in time. Given this, the skeleton of our
blocking queue class would look something like below:
class BlockingQueue<T> {

    T[] array;
    int size = 0;
    int capacity;
    int head = 0;
    int tail = 0;

    public void enqueue(T item) {
    }

    public T dequeue() {
    }
}
Let's start with the enqueue method. If the current size of the queue ==
capacity then we know we'll need to block the caller of the method. We
can do so by appropriately calling wait() method in a while loop. The
while loop is conditioned on the size of the queue being equal to the max
capacity. The loop's predicate would become false, as soon as, another
thread performs a dequeue.
Note that whenever we test for the value of the size variable, we also
need to make sure that no other thread is manipulating the size variable.
This can be achieved by the synchronized keyword as it'll only allow a
single thread to invoke the enqueue/dequeue methods on the queue
object.
Finally, as the queue grows, it'll reach the end of our backing array, so we
need to reset the tail of the queue back to zero. Notice that since we only
proceed to add an item when the size of the queue is less than its capacity,
the tail can never overwrite an existing item.
Note that in the end we are calling notifyAll() method. Since we just
added an item to the queue, it is possible that a consumer thread is
blocked in the dequeue method of the queue class waiting for an item to
become available so it's necessary we send a signal to wake up any
waiting threads.
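Putting the steps above together, the enqueue() logic can be sketched as below (using a lock object and notifyAll(), consistent with the complete listing later in this lesson; the sketch class is trimmed to just this method):

```java
class BlockingQueueSketch<T> {

    T[] array;
    Object lock = new Object();
    int size = 0;
    int capacity;
    int tail = 0;

    @SuppressWarnings("unchecked")
    BlockingQueueSketch(int capacity) {
        array = (T[]) new Object[capacity];
        this.capacity = capacity;
    }

    public void enqueue(T item) throws InterruptedException {
        synchronized (lock) {
            // Block while the queue is full; dequeue() notifies us
            while (size == capacity) {
                lock.wait();
            }
            // Wrap tail around once it reaches the end of the array
            if (tail == capacity) {
                tail = 0;
            }
            array[tail] = item;
            size++;
            tail++;
            // Wake any consumer blocked waiting for an item
            lock.notifyAll();
        }
    }
}
```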
We need to reset head of the queue back to zero in-case it's pointing past
the end of the array. We need to decrement the size variable too since the
queue will now have one less item.
Finally, we remember to call notifyAll() since if the queue were full then
there might be producer threads blocked in the enqueue method. This
logic in code appears as below:
public T dequeue() throws InterruptedException {

    T item = null;

    synchronized (lock) {

        while (size == 0) {
            lock.wait();
        }

        if (head == capacity) {
            head = 0;
        }

        item = array[head];
        array[head] = null;
        head++;
        size--;

        lock.notifyAll();
    }

    return item;
}
We could have combined the statements item = array[head] and head++
into the single statement item = array[head++] , but for better readability
we choose to expand this operation into two lines.
Complete Code
class Demonstration {
    public static void main(String args[]) throws Exception {

        final BlockingQueue<Integer> q = new BlockingQueue<Integer>(5);

        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 50; i++) {
                        q.enqueue(new Integer(i));
                        System.out.println("enqueued " + i);
                    }
                } catch (InterruptedException ie) {
                }
            }
        });

        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 25; i++) {
                        System.out.println("Thread 2 dequeued: " + q.dequeue());
                    }
                } catch (InterruptedException ie) {
                }
            }
        });

        Thread t3 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 25; i++) {
                        System.out.println("Thread 3 dequeued: " + q.dequeue());
                    }
                } catch (InterruptedException ie) {
                }
            }
        });

        t1.start();
        Thread.sleep(4000);
        t2.start();
        t2.join();

        t3.start();
        t1.join();
        t3.join();
    }
}
class BlockingQueue<T> {

    T[] array;
    Object lock = new Object();
    int size = 0;
    int capacity;
    int head = 0;
    int tail = 0;

    @SuppressWarnings("unchecked")
    public BlockingQueue(int capacity) {
        // The casting results in a warning
        array = (T[]) new Object[capacity];
        this.capacity = capacity;
    }

    public void enqueue(T item) throws InterruptedException {

        synchronized (lock) {

            while (size == capacity) {
                lock.wait();
            }

            if (tail == capacity) {
                tail = 0;
            }

            array[tail] = item;
            size++;
            tail++;
            lock.notifyAll();
        }
    }

    public T dequeue() throws InterruptedException {

        T item = null;

        synchronized (lock) {

            while (size == 0) {
                lock.wait();
            }

            if (head == capacity) {
                head = 0;
            }

            item = array[head];
            array[head] = null;
            head++;
            size--;
            lock.notifyAll();
        }

        return item;
    }
}
The test case in our example creates two dequeuer threads and one
enqueuer thread. The enqueuer thread initially fills up the queue and
gets blocked until the dequeuer threads start and remove elements
from the queue. The output shows enqueue and dequeue
activity interleaved after the first 5 enqueues.
Follow Up Question
This lesson explains how to solve the producer-consumer problem using a mutex.
Let's start with the enqueue() method. If the current size of the queue ==
capacity then we know we need to block the caller of the method until
the queue has space for a new item. Since a mutex only allows locking, we
give up the mutex at this point. The logic is shown below.
public void enqueue(T item) {

    lock.lock();

    while (size == capacity) {
        // Release the mutex to give other threads
        // a chance to change the queue's state
        lock.unlock();
        // Reacquire the mutex before checking the
        // condition again
        lock.lock();
    }

    if (tail == capacity) {
        tail = 0;
    }

    array[tail] = item;
    size++;
    tail++;

    lock.unlock();
}
The most important point to realize in the above code is the weird-looking
while loop construct, where we release the lock and then immediately
attempt to reacquire it. Convince yourself that whenever we test the
while loop condition size == capacity , we do so while holding the mutex!
Also, it may not be immediately obvious but a different thread can
acquire the mutex just when a thread releases the mutex and attempts to
reacquire it within the while loop. Lastly, we modify the array variable
only when holding the mutex.
We also need to manage the tail as the queue grows. Once it reaches the
end of our backing array, we reset it to zero. Realize that since we only
proceed to add an item when size of queue < maxSize we are
guaranteed that tail will never overwrite an existing item.
Now let us see the code for the dequeue() method which is analogous to
the enqueue() one.
public T dequeue() {

    T item = null;

    lock.lock();

    while (size == 0) {
        lock.unlock();
        lock.lock();
    }

    if (head == capacity) {
        head = 0;
    }

    item = array[head];
    array[head] = null;
    head++;
    size--;

    lock.unlock();
    return item;
}
Again note that we always test for the condition size == 0 when holding
the lock. Additionally, all shared state is manipulated in mutual exclusion.
Additionally, we reset head of the queue back to zero in case it's pointing
past the end of the array. We need to decrement the size variable too
since the queue will now have one less item. The complete code appears
in the widget below. It also runs a simulation of several producers and
consumers that constantly write and retrieve from an instance of the
blocking queue, for one second.
main.java
BlockingQueueWithMutex.java
class Demonstration {
    public static void main(String args[]) throws InterruptedException {

        final BlockingQueueWithMutex<Integer> q = new BlockingQueueWithMutex<Integer>(5);

        // Producers continuously add to the queue while consumers
        // continuously remove from it. (The runnable bodies here are a
        // plausible reconstruction of the widget code.)
        Runnable producerTask = () -> {
            while (true) {
                q.enqueue(1);
            }
        };

        Runnable consumerTask = () -> {
            while (true) {
                q.dequeue();
            }
        };

        Thread producer1 = new Thread(producerTask);
        Thread producer2 = new Thread(producerTask);
        Thread producer3 = new Thread(producerTask);

        Thread consumer1 = new Thread(consumerTask);
        Thread consumer2 = new Thread(consumerTask);
        Thread consumer3 = new Thread(consumerTask);

        producer1.setDaemon(true);
        producer2.setDaemon(true);
        producer3.setDaemon(true);

        consumer1.setDaemon(true);
        consumer2.setDaemon(true);
        consumer3.setDaemon(true);

        producer1.start();
        producer2.start();
        producer3.start();

        consumer1.start();
        consumer2.start();
        consumer3.start();

        // Let the simulation run for one second
        Thread.sleep(1000);
    }
}
Faulty Implementation
public T dequeue() {
T item = null;
while (size == 0) { }
lock.lock();
if (head == capacity) {
head = 0;
}
item = array[head];
array[head] = null;
head++;
size--;
lock.unlock();
return item;
}
and,
public void enqueue(T item) {

    // Busy-wait without holding the lock: another thread may
    // change size between this check and lock.lock() below
    while (size == capacity) { }

    lock.lock();

    if (tail == capacity) {
        tail = 0;
    }

    array[tail] = item;
    size++;
    tail++;

    lock.unlock();
}
main.java
FaultyBlockingQueueWithMutex
class Demonstration {
    public static void main(String args[]) throws InterruptedException {

        final FaultyBlockingQueueWithMutex<Integer> q = new FaultyBlockingQueueWithMutex<Integer>(5);

        // Runnable bodies are a plausible reconstruction of the widget code
        Runnable producerTask = () -> {
            while (true) {
                q.enqueue(1);
            }
        };
        Runnable consumerTask = () -> {
            while (true) {
                q.dequeue();
            }
        };

        Thread producer1 = new Thread(producerTask);
        Thread producer2 = new Thread(producerTask);
        Thread producer3 = new Thread(producerTask);
        Thread consumer1 = new Thread(consumerTask);
        Thread consumer2 = new Thread(consumerTask);
        Thread consumer3 = new Thread(consumerTask);

        producer1.setDaemon(true);
        producer2.setDaemon(true);
        producer3.setDaemon(true);
        consumer1.setDaemon(true);
        consumer2.setDaemon(true);
        consumer3.setDaemon(true);

        producer1.start();
        producer2.start();
        producer3.start();
        consumer1.start();
        consumer2.start();
        consumer3.start();

        Thread.sleep(20000);
    }
}
... continued
This lesson explains how to solve the producer-consumer problem using semaphores.
To make consumer threads block on an initially empty buffer, we
initialize the semConsumer semaphore with zero permits, i.e., we
set all the permits as currently given out. Let's look at the implementation
of the enqueue() method.
public void enqueue(T item) throws InterruptedException {

    semProducer.acquire();

    if (tail == capacity) {
        tail = 0;
    }

    array[tail] = item;
    size++;
    tail++;

    semConsumer.release();
}
Suppose the size of the buffer is N. If you study the code above, it should
be evident that only N items can be enqueued in the items buffer. At the
end of the method, we signal any consumer threads waiting on the
semConsumer semaphore. However, the code is not yet complete. We have
only solved the problem of coordinating between the producer and the
consumer threads. The astute reader would immediately realize that
multiple producer threads can manipulate the code lines between the
first and the last semaphore statements in the above enqueue() method.
In our earlier implementations, we were able to guard the critical section
by synchronizing on objects that ensured only a single thread is active in
the critical section at a time. We need similar functionality using
semaphores. Recall that we can use a binary semaphore to exercise
mutual exclusion, however, any thread is free to signal the semaphore,
not just the one that acquired it. We'll introduce a semLock semaphore
that acts as a mutex. The complete version of the enqueue() method
appears below:
public void enqueue(T item) throws InterruptedException {

    semProducer.acquire();
    semLock.acquire();

    if (tail == capacity) {
        tail = 0;
    }

    array[tail] = item;
    size++;
    tail++;

    semLock.release();
    semConsumer.release();
}
Realize that we have modeled each item in the buffer as a permit. When
the buffer is full, the consumer threads have N permits to perform
dequeue() and when the buffer is empty the producer threads have N
permits to perform enqueue() . The code for dequeue() is similar and
appears below:
public T dequeue() throws InterruptedException {

    T item = null;

    semConsumer.acquire();
    semLock.acquire();

    if (head == capacity) {
        head = 0;
    }

    item = array[head];
    array[head] = null;
    head++;
    size--;

    semLock.release();
    semProducer.release();

    return item;
}
The complete code appears in the code widget below. We also include a
simple test with one producer and two consumer threads.
main.java
BlockingQueueWithSemaphore
CountingSemaphore.java
class BlockingQueueWithSemaphore<T> {

    T[] array;
    int size = 0;
    int capacity;
    int head = 0;
    int tail = 0;
    // semLock acts as a mutex: one permit, initially available
    CountingSemaphore semLock = new CountingSemaphore(1, 1);
    CountingSemaphore semProducer;
    CountingSemaphore semConsumer;

    @SuppressWarnings("unchecked")
    public BlockingQueueWithSemaphore(int capacity) {
        // The casting results in a warning
        array = (T[]) new Object[capacity];
        this.capacity = capacity;
        this.semProducer = new CountingSemaphore(capacity, capacity);
        this.semConsumer = new CountingSemaphore(capacity, 0);
    }

    public T dequeue() throws InterruptedException {

        T item = null;

        semConsumer.acquire();
        semLock.acquire();

        if (head == capacity) {
            head = 0;
        }

        item = array[head];
        array[head] = null;
        head++;
        size--;

        semLock.release();
        semProducer.release();

        return item;
    }

    public void enqueue(T item) throws InterruptedException {

        semProducer.acquire();
        semLock.acquire();

        if (tail == capacity) {
            tail = 0;
        }

        array[tail] = item;
        size++;
        tail++;

        semLock.release();
        semConsumer.release();
    }
}
Rate Limiting Using Token Bucket Filter
Problem
Imagine you have a bucket that gets filled with tokens at the rate of 1
token per second. The bucket can hold a maximum of N tokens.
Implement a thread-safe class that lets threads get a token when one is
available. If no token is available, then the token-requesting threads
should block.
The class should expose an API called getToken that various threads can
call to get a token.
[Figure: New tokens get added to the bucket at the rate of 1 token/second;
tokens are requested from the bucket by consumer threads.]
Solution
The key to the problem is to find a way to track the number of available
tokens when a consumer requests for a token. Note the rate at which the
tokens are being generated is constant. So if we know when the token
bucket was instantiated and when a consumer called getToken() we can
take the difference of the two instants and know the number of possible
tokens we would have collected so far. However, we'll need to tweak our
solution to account for the max number of tokens the bucket can hold.
Let's start with the skeleton of our class
public class TokenBucketFilter {

    private int MAX_TOKENS;
    private long lastRequestTime = System.currentTimeMillis();
    private long possibleTokens = 0;

    public TokenBucketFilter(int maxTokens) {
        MAX_TOKENS = maxTokens;
    }

    public synchronized void getToken() throws InterruptedException {
    }
}
Note how getToken() doesn't return any token type! The fact that a thread
can return from the getToken call implies that the thread has the
token, which is nothing more than a permission to undertake some
action.

Note that we mark getToken as synchronized; this means that
only a single thread can try to get a token at a time, which makes sense
since we'll be computing the available tokens in a critical section.
We need to think about the following three cases to roll out our algorithm.
Let's assume the maximum allowed tokens our bucket can hold is 5.
The last request for token was more than 5 seconds ago: In this
scenario, each elapsed second would have generated one token
which may total more than five tokens since the last request was
more than 5 seconds ago. We simply need to set the maximum tokens
available to 5 since that is the most the bucket will hold and return
one token out of those 5.
The last request for token was within a window of 5 seconds: In this
scenario, we need to calculate the new tokens generated since the
last request and add them to the unused tokens we already have. We
then return 1 token from the count.
The last request was within a 5-second window and all the tokens are
used up: In this scenario, there's no option but to sleep for a whole
second to guarantee that a token would become available and then
let the thread return. While we sleep(), the monitor would still be
held by the token-requesting thread and any new threads invoking
getToken would get blocked, waiting for the monitor to become
available.
public synchronized void getToken() throws InterruptedException {

    // Tokens accrued since the last request, capped at the maximum
    possibleTokens += (System.currentTimeMillis() - lastRequestTime) / 1000;
    if (possibleTokens > MAX_TOKENS) {
        possibleTokens = MAX_TOKENS;
    }

    if (possibleTokens == 0) {
        Thread.sleep(1000);
    } else {
        possibleTokens--;
    }

    lastRequestTime = System.currentTimeMillis();

    System.out.println("Granting " + Thread.currentThread().getName()
            + " token at " + (System.currentTimeMillis() / 1000));
}
You can see the final solution comes out to be quite simple, without the
requirement of creating a bucket-filling thread of sorts that runs
perpetually and increments a counter every second to reflect the addition
of a token to the bucket. Many candidates initially get off-track by taking
this approach. Though you might be able to solve the problem using the
mentioned approach, the code would be unnecessarily complex and
unwieldy.
If you execute the code below, you'll see we create a token bucket with
max tokens set to 1 and have ten threads request for a token. The threads
are shown being granted tokens at exactly 1 second intervals instead of
all at once. The program output displays the timestamps at which each
thread gets the token and we can verify the timestamps are 1 second
apart.
Complete Code
import java.util.HashSet;
import java.util.Set;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        TokenBucketFilter.runTestMaxTokenIs1();
    }
}

class TokenBucketFilter {

    private int MAX_TOKENS;
    private long lastRequestTime = System.currentTimeMillis();
    private long possibleTokens = 0;

    public TokenBucketFilter(int maxTokens) {
        MAX_TOKENS = maxTokens;
    }

    public synchronized void getToken() throws InterruptedException {

        possibleTokens += (System.currentTimeMillis() - lastRequestTime) / 1000;
        if (possibleTokens > MAX_TOKENS) {
            possibleTokens = MAX_TOKENS;
        }

        if (possibleTokens == 0) {
            Thread.sleep(1000);
        } else {
            possibleTokens--;
        }

        lastRequestTime = System.currentTimeMillis();

        System.out.println("Granting " + Thread.currentThread().getName()
                + " token at " + (System.currentTimeMillis() / 1000));
    }

    public static void runTestMaxTokenIs1() throws InterruptedException {

        Set<Thread> allThreads = new HashSet<Thread>();
        final TokenBucketFilter tokenBucketFilter = new TokenBucketFilter(1);

        for (int i = 0; i < 10; i++) {
            Thread thread = new Thread(() -> {
                try {
                    tokenBucketFilter.getToken();
                } catch (InterruptedException ie) {
                    System.out.println("We have a problem");
                }
            });
            thread.setName("Thread_" + (i + 1));
            allThreads.add(thread);
        }

        for (Thread t : allThreads) {
            t.start();
        }
        for (Thread t : allThreads) {
            t.join();
        }
    }
}
Below is a more involved test where we let the token bucket filter object
receive no token requests for the first 10 seconds.
import java.util.HashSet;
import java.util.Set;

class Demonstration {
    public static void main(String args[]) throws InterruptedException {
        TokenBucketFilter.runTestMaxTokenIsTen();
    }
}

class TokenBucketFilter {

    private int MAX_TOKENS;
    private long lastRequestTime = System.currentTimeMillis();
    private long possibleTokens = 0;

    public TokenBucketFilter(int maxTokens) {
        MAX_TOKENS = maxTokens;
    }

    public synchronized void getToken() throws InterruptedException {

        possibleTokens += (System.currentTimeMillis() - lastRequestTime) / 1000;
        if (possibleTokens > MAX_TOKENS) {
            possibleTokens = MAX_TOKENS;
        }

        if (possibleTokens == 0) {
            Thread.sleep(1000);
        } else {
            possibleTokens--;
        }

        lastRequestTime = System.currentTimeMillis();

        System.out.println("Granting " + Thread.currentThread().getName()
                + " token at " + (System.currentTimeMillis() / 1000));
    }

    public static void runTestMaxTokenIsTen() throws InterruptedException {

        Set<Thread> allThreads = new HashSet<Thread>();
        // The bucket holds a maximum of 5 tokens and receives no token
        // requests for the first 10 seconds, so it fills up completely.
        // (The exact thread count is a reconstruction.)
        final TokenBucketFilter tokenBucketFilter = new TokenBucketFilter(5);
        Thread.sleep(10000);

        for (int i = 0; i < 12; i++) {
            Thread thread = new Thread(() -> {
                try {
                    tokenBucketFilter.getToken();
                } catch (InterruptedException ie) {
                    System.out.println("We have a problem");
                }
            });
            thread.setName("Thread_" + (i + 1));
            allThreads.add(thread);
        }

        for (Thread t : allThreads) {
            t.start();
        }
        for (Thread t : allThreads) {
            t.join();
        }
    }
}
The output will show that the first five threads are granted tokens
immediately at the same second granularity instant. After that, the
subsequent threads are slowly given tokens at an interval of 1 second
since one token gets generated every second.
The astute reader would have noticed a deficiency in our
solution: we wait an entire second before we let a thread return with a
token. Say we were 20 milliseconds away from getting the
next token; we'd still end up waiting the full 1000 milliseconds before
declaring a token available. We could eliminate this inefficiency by
maintaining more state; however, for an interview problem, the given
solution is sufficient.
Follow-up Exercise
This lesson explains how to solve the token bucket filter problem using threads.
public MultithreadedTokenBucketFilter(int maxTokens) {
    MAX_TOKENS = maxTokens;
}

void getToken() throws InterruptedException {

    synchronized (this) {
        while (possibleTokens == 0) {
            this.wait();
        }
        possibleTokens--;
    }

    System.out.println("Granting " + Thread.currentThread().getName()
            + " token at " + System.currentTimeMillis() / 1000);
}
private void daemonThread() {
    while (true) {
        synchronized (this) {
            if (possibleTokens < MAX_TOKENS) {
                possibleTokens++;
            }
            this.notify();
        }
        try {
            Thread.sleep(ONE_SECOND);
        } catch (InterruptedException ie) {
            // swallow exception
        }
    }
}
import java.util.HashSet;
import java.util.Set;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
Set<Thread> allThreads = new HashSet<Thread>();
final MultithreadedTokenBucketFilter tokenBucketFilter = new MultithreadedTokenBucketFilter(1);
}
}
class MultithreadedTokenBucketFilter {

    private long possibleTokens = 0;
    private final int MAX_TOKENS;
    private final int ONE_SECOND = 1000;

    public MultithreadedTokenBucketFilter(int maxTokens) {
        MAX_TOKENS = maxTokens;

        // Daemon thread that adds a token to the bucket every second.
        // Starting a thread in the constructor is criticized below.
        Thread dt = new Thread(() -> {
            daemonThread();
        });
        dt.setDaemon(true);
        dt.start();
    }

    private void daemonThread() {
        while (true) {
            synchronized (this) {
                if (possibleTokens < MAX_TOKENS) {
                    possibleTokens++;
                }
                this.notify();
            }
            try {
                Thread.sleep(ONE_SECOND);
            } catch (InterruptedException ie) {
                // swallow exception
            }
        }
    }

    void getToken() throws InterruptedException {

        synchronized (this) {
            while (possibleTokens == 0) {
                this.wait();
            }
            possibleTokens--;
        }

        System.out.println("Granting " + Thread.currentThread().getName()
                + " token at " + System.currentTimeMillis() / 1000);
    }
}
We reuse the test case from the previous lesson, where we create a token
bucket with max tokens set to 1 and have ten threads request a token.
The threads are shown being granted tokens at exactly 1-second intervals
instead of all at once. The program output displays the timestamps at
which each thread gets the token, and we can verify the timestamps are 1
second apart. Additionally, we mark the token-adding thread as a daemon
so that it exits when the application terminates.
Using a Factory
The problem with the above solution is that we start our thread in the
constructor. Never start a thread in a constructor as the child thread
can attempt to use the not-yet-fully constructed object using this .
This is an anti-pattern. Some candidates present this solution when
attempting to solve token bucket filter problem using threads. However,
when checked, few candidates can reason why starting threads in a
constructor is a bad choice.
There are two ways to overcome this problem. The naive but correct
solution is to start the daemon thread outside of the
MultithreadedTokenBucketFilter object. However, the con of this
approach is that the management of the daemon thread spills outside the
class. Ideally, we want the class to encapsulate all the operations related
to the management of the token bucket filter and only expose the
public API to the consumers of our class, as per good object-oriented
design. This situation is a great fit for the Simple Factory design
pattern. We'll create a factory class which produces token bucket filter
objects and also starts the daemon thread only when the object is fully
constructed. If you are unaware of this pattern, I'll take the liberty to insert
a shameless marketing plug here and refer you to this design patterns
course to get up to speed.
The complete code with the same test case appears below.
main.java
TokenBucketFilter.java
TokenBucketFilterFactory.java
import java.util.HashSet;
import java.util.Set;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
Set<Thread> allThreads = new HashSet<Thread>();
TokenBucketFilter tokenBucketFilter = TokenBucketFilterFactory.makeTokenBucketFilter(1);
}
}
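A sketch of what the factory could look like appears below. The class internals here (the nested class and the initTokenAdderThread() helper) are assumptions for illustration, not necessarily the course's exact code; the key point is that the factory method starts the daemon thread only after the constructor has returned:

```java
class TokenBucketFilterFactory {

    // Private so instances can only be obtained via the factory method
    private TokenBucketFilterFactory() { }

    public static TokenBucketFilter makeTokenBucketFilter(int capacity) {
        TokenBucketFilter filter = new TokenBucketFilter(capacity);
        // The daemon thread is started only after the constructor
        // has returned and the object is fully constructed.
        filter.initTokenAdderThread();
        return filter;
    }

    static class TokenBucketFilter {

        private long possibleTokens = 0;
        private final int maxTokens;

        private TokenBucketFilter(int maxTokens) {
            this.maxTokens = maxTokens;
        }

        private void initTokenAdderThread() {
            Thread dt = new Thread(() -> {
                while (true) {
                    synchronized (this) {
                        if (possibleTokens < maxTokens) {
                            possibleTokens++;
                        }
                        this.notify();
                    }
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ie) {
                        // swallow exception
                    }
                }
            });
            dt.setDaemon(true);
            dt.start();
        }

        public void getToken() throws InterruptedException {
            synchronized (this) {
                while (possibleTokens == 0) {
                    this.wait();
                }
                possibleTokens--;
            }
        }
    }
}
```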
Thread Safe Deferred Callback
Asynchronous programming involves being able to execute functions at a future occurrence of some event.
Designing a thread-safe deferred callback class becomes a challenging interview question.
Problem
Solution
One naive way to solve this problem is to have a busy thread that
continuously loops over the list of callbacks and executes them as they
become due. However, the challenge here is to design a solution which
doesn't involve a busy thread.
Let's see what the skeleton of our class would look like:
Class Skeleton
/**
 * Represents the class which holds the callback. For simplicity, instead of
 * executing a method, we print a message.
 */
static class CallBack {
    long executeAt;
    String message;
Now let's come to the meat of our solution, which is to design the execution
thread's workflow. The thread will run the start() method and enter
into a perpetual loop. The flow will be as follows:
Initially the queue will be empty and the execution thread should
just wait indefinitely on the condition variable to be signaled.
When the first callback gets registered, we note how many seconds
after its arrival it needs to be executed, and await() on the
condition variable for that many seconds.
Now two things are possible at this point. No new callbacks arrive, in
which case the executor thread completes waiting and polls the
queue for tasks that should be executed and starts executing them.
    while (true) {
        // lock the critical section
        lock.lock();
        lock.unlock();
    }
}
Initially, the queue is empty and the executor thread will simply
await() indefinitely on the condition newCallbackArrived to be
signaled. Note we wrap the waiting in a while loop to cater for
spurious wakeups.
If the queue is not empty, say if the executor thread is created later
than the consumer threads, then the executor will fall into the
second while loop and either wait for the callback to become due or
if one is already due break out of the while loop and execute the due
callback.
For all other happy path cases, adding a callback to the queue will
always signal the awaiting executor thread to wake up and
recalculate the time it needs to sleep before the next callback is ready
to be executed.
Note that both the await() calls are properly enclosed by while loops
to cater for spurious wakeups. In the second while loop, if a spurious
wakeup happens, the executor thread recalculates the sleep time,
finds it to be greater than zero, and goes back to sleeping until a
callback becomes due.
Complete Code
The complete code with the test case appears below. We insert ten
callbacks sequentially waiting randomly between insertions. The output
shows the epoch seconds at which the callback was expected to be
executed and the actual time at which it got executed. Both of these
values, for all the callbacks, should be the same or differ very slightly to
account for the minuscule time it takes for the executor thread to wake up
and execute the callback.
The code includes a test case where ten callbacks are executed. Since the
code runs in the browser, it might timeout before printing the complete
output.
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
DeferredCallbackExecutor.runTestTenCallbacks();
}
}
class DeferredCallbackExecutor {
while (true) {
lock.lock();
while (q.size() == 0) {
newCallbackArrived.await();
}
while (q.size() != 0) {
sleepFor = findSleepDuration();
if(sleepFor <=0)
break;
newCallbackArrived.await(sleepFor, TimeUnit.MILLISECONDS);
}
CallBack cb = q.poll();
System.out.println(
"Executed at " + System.currentTimeMillis()/1000 + " required at " + cb.e
+ ": message:" + cb.message);
lock.unlock();
}
}
}
}
});
service.start();
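The executor loop above leans on a findSleepDuration() helper that is easy to gloss over. Below is a sketch of how it could be implemented, assuming that each CallBack stores its due time as epoch milliseconds and that callbacks sit in a min-heap ordered by executeAt. The wrapper class name SleepDurationSketch is ours, not the lesson's.

```java
import java.util.PriorityQueue;

// Sketch of the helper the executor thread calls to decide how long to wait.
class SleepDurationSketch {

    static class CallBack {
        long executeAt;   // epoch milliseconds at which to run
        String message;

        CallBack(long executeAt, String message) {
            this.executeAt = executeAt;
            this.message = message;
        }
    }

    // Min-heap: peek() always returns the callback that is due soonest.
    final PriorityQueue<CallBack> q =
            new PriorityQueue<>((a, b) -> Long.compare(a.executeAt, b.executeAt));

    // Milliseconds until the earliest callback is due. Assumes at least one
    // callback is queued; zero or negative means it is already due.
    long findSleepDuration() {
        return q.peek().executeAt - System.currentTimeMillis();
    }
}
```

Because the queue is a min-heap keyed on executeAt, the executor only ever needs to inspect the head to compute its next sleep interval.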
Here's another test case, which first submits a callback that should get
executed after eight seconds. Three seconds later, another callback is
submitted which should be executed after only one second. The callback
submitted later should execute first. The test run would time out if
run in the browser, since the callback service is a perpetual thread, but
from the output you can observe the callback submitted second executing
first.
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
DeferredCallbackExecutor.runLateThenEarlyCallback();
}
}
class DeferredCallbackExecutor {
while (true) {
lock.lock();
while (q.size() == 0) {
newCallbackArrived.await();
}
while (q.size() != 0) {
sleepFor = findSleepDuration();
if(sleepFor <=0)
break;
newCallbackArrived.await(sleepFor, TimeUnit.MILLISECONDS);
}
CallBack cb = q.poll();
System.out.println(
"Executed at " + System.currentTimeMillis()/1000 + " required at " + cb.e
+ ": message:" + cb.message);
lock.unlock();
}
}
service.start();
Thread.sleep(3000);
lateThread.join();
earlyThread.join();
}
}
Implementing Semaphore
Problem
Solution
Given the above definition we can now start to think of what functions
our Semaphore class will need to expose. We need a function to "gain the
permit" and a function to "return the permit".
The skeleton for our Semaphore class looks something like this so far.
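A sketch of that skeleton appears below; the constructor parameter name is an assumption, and the method bodies are intentionally left empty at this stage.

```java
// Skeleton sketch of the counting semaphore we are building. Both methods
// are synchronized, so only one thread at a time can run either of them.
class CountingSemaphore {
    int usedPermits = 0;   // permits currently handed out
    int maxCount;          // maximum permits allowed

    public CountingSemaphore(int count) {
        this.maxCount = count;
    }

    public synchronized void acquire() throws InterruptedException {
        // to be filled in
    }

    public synchronized void release() throws InterruptedException {
        // to be filled in
    }
}
```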
Note we have added the synchronized keyword to both the class methods.
Adding the synchronized keyword causes only a single thread to execute
either of the methods. If a thread is currently executing acquire() then
another thread can't execute release() on the same semaphore object.
The astute observer would question why we don't take the locking to a
finer-grained level and use Java's explicit locks so that multiple threads can
call either of the two functions. With synchronized, only one thread can call
either release or acquire. However, the counter to that is that even with
finer-grained locking, the entire code blocks within the two methods would
still be guarded by a single lock, and that would be pretty much the same as
putting synchronized on the method definitions.
Now let us fill in the implementation for our acquire method. When can a
thread not be allowed to acquire a semaphore? When all the permits are
out! This implies we'll need to wait() when usedPermits == maxCount . If
this condition isn't true, we simply increment usedPermits to simulate
giving out a permit.
This might seem counter-intuitive: you might ask why someone would
call release() before calling acquire() . This is entirely possible since a
semaphore can also be used for signalling between threads. A thread can
call release() on a semaphore object before another thread calls
acquire() on the same semaphore object. There is no concept of
ownership for a semaphore! Hence different threads can call acquire or
release methods as they deem fit.
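Java's built-in java.util.concurrent.Semaphore exhibits the same no-ownership property, which makes for a quick demonstration of release-before-acquire signalling. The helper method name below is ours:

```java
import java.util.concurrent.Semaphore;

class SemaphoreSignalDemo {

    // Returns true if a permit released beforehand satisfies a later acquire.
    static boolean releaseBeforeAcquire() throws InterruptedException {
        Semaphore signal = new Semaphore(0); // zero permits: a pure signal

        // A different thread releases before anyone has acquired.
        Thread producer = new Thread(signal::release);
        producer.start();
        producer.join(); // release() has definitely happened by now

        // The permit is remembered, so acquire() returns immediately,
        // even though this thread never "owned" the semaphore.
        signal.acquire();
        return signal.availablePermits() == 0;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("released-then-acquired ok: " + releaseBeforeAcquire());
    }
}
```

Contrast this with a mutex, where the thread that locks must be the one that unlocks.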
Complete Code
The complete code appears below along with a test. Note how we acquire
and release the semaphore in different threads in different methods,
something not possible with a mutex. Thread t1 always acquires the
semaphore while thread t2 always releases it. The semaphore has a max
permit of 1 so you'll see the output interleaved between the two threads.
You might see the print statements from the two threads not interleaving
each other and instead appearing twice in succession. This is possible
because of how threads get scheduled for execution and also because we
start with an unused permit.
The astute reader would also observe that the given solution will always
block if the semaphore is initialized with zero permits.
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
@Override
public void run() {
try {
for (int i = 0; i < 5; i++) {
cs.acquire();
System.out.println("Ping " + i);
}
} catch (InterruptedException ie) {
}
}
});
@Override
public void run() {
for (int i = 0; i < 5; i++) {
try {
cs.release();
System.out.println("Pong " + i);
} catch (InterruptedException ie) {
}
}
}
});
t2.start();
t1.start();
t1.join();
t2.join();
}
}
class CountingSemaphore {
    int usedPermits = 0;
    int maxCount;

    public CountingSemaphore(int count) {
        this.maxCount = count;
    }

    public synchronized void acquire() throws InterruptedException {
        while (usedPermits == maxCount)
            wait();
        notify();
        usedPermits++;
    }

    public synchronized void release() throws InterruptedException {
        while (usedPermits == 0)
            wait();
        usedPermits--;
        notify();
    }
}
ReadWrite Lock
We discuss a common interview question involving synchronization of multiple reader threads and a single writer
thread.
Problem
Imagine you have an application where you have multiple readers and
multiple writers. You are asked to design a lock which lets multiple
readers read at the same time, but only one writer write at a time.
Solution
First of all, let us define the APIs our class will expose. We'll need two for
the writer and two for the readers. These are:
acquireReadLock
releaseReadLock
acquireWriteLock
releaseWriteLock
Note that all the methods are synchronized on the ReadWriteLock object
itself.
Let's start with the reader use case. We can have multiple readers acquire
the read lock and to keep track of all of them; we'll need a count. We
increment this count whenever a reader acquires a read lock and
decrement it whenever a reader releases it.
Releasing the read lock is easy but before we acquire the read lock, we
need to be sure that no other writer is currently writing. Again, we'll need
some variable to keep track of whether a writer is writing. Since only a
single writer can write at a given point in time, we can just keep a
boolean variable to denote if the write lock is acquired or not. Let's
translate what we have discussed so far into code.
synchronized void acquireReadLock() throws InterruptedException {
    while (isWriteLocked) {
        wait();
    }
    readers++;
}
For the writer case, releasing the lock would be as simple as setting the
isWriteLocked variable to false but don't forget to call notify() too since
there might be readers waiting in the acquireReadLock() method.
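The two release methods described above can be sketched as follows; the wrapper class name is ours, and the fields are assumed from the surrounding discussion:

```java
// Sketch of the release side of the read-write lock. Both methods hold the
// object's monitor, so the notify() calls are legal.
class ReadWriteLockReleases {
    boolean isWriteLocked = false;
    int readers = 0;

    synchronized void releaseReadLock() {
        readers--;
        notify(); // a writer may be waiting for readers to drain to zero
    }

    synchronized void releaseWriteLock() {
        isWriteLocked = false;
        notify(); // readers may be blocked in acquireReadLock()
    }
}
```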
Acquiring the write lock is a little tricky; we have to check two things:
whether any other writer has already set isWriteLocked to true, and also
whether any reader has incremented the readers variable. If isWriteLocked
equals false and no reader is reading, then the writer should proceed
forward.
synchronized void acquireWriteLock() throws InterruptedException {
    while (isWriteLocked)
        wait();
    while (readers != 0)
        wait();
    isWriteLocked = true;
}
The astute reader will notice a bug in the code we have so far. Try finding
it, before reading ahead !
synchronized void acquireWriteLock() throws InterruptedException {
    while (isWriteLocked || readers != 0) {
        wait();
    }
    isWriteLocked = true;
}
The complete code with a test-case appears below. Run the code and
examine the output messages. We start a reader and a writer thread
initially. The writer blocks until the read lock is released. Also, we release
the reader-lock through another reader thread.
A second writer thread is blocked forever since the first writer thread
never releases the write-lock. The execution eventually times out.
class Demonstration {
@Override
public void run() {
try {
}
}
});
@Override
public void run() {
try {
@Override
public void run() {
try {
rwl.acquireReadLock();
System.out.println("Read lock acquired: " + System.currentTimeMillis());
}
}
});
@Override
public void run() {
System.out.println("Read lock about to release: " + System.currentTimeMillis(
rwl.releaseReadLock();
System.out.println("Read lock released: " + System.currentTimeMillis());
}
});
tReader1.start();
t1.start();
Thread.sleep(3000);
tReader2.start();
Thread.sleep(1000);
t2.start();
tReader1.join();
tReader2.join();
t2.join();
}
}
class ReadWriteLock {
    synchronized void acquireReadLock() throws InterruptedException {
        while (isWriteLocked)
            wait();
        readers++;
    }

    synchronized void acquireWriteLock() throws InterruptedException {
        while (isWriteLocked || readers != 0)
            wait();
        isWriteLocked = true;
    }
}
The write-lock is acquired only after the read-lock is released. If you look
at the output, the write-lock acquisition timestamp may even appear between
the timestamps of the statements "read lock about to release" and "read
lock released". This is because the timestamps aren't granular enough,
so the read-lock's release timestamp and the write-lock's acquisition
timestamp might be the same.
Also, the read-lock's release statement might get printed after the write-
lock's acquisition statement; this can happen if the thread tReader2 gets
context-switched as soon as it releases the lock and before it gets a chance
to execute the print statement.
Last but not the least, running the above test in the browser would show
execution timing out. This is expected as our t1 thread is modelled as a
writer thread that never releases the write-lock.
Unisex Bathroom Problem
A synchronization practice problem requiring us to synchronize the usage of a single bathroom by both the
genders.
Problem
A bathroom is being designed for the use of both males and females in an
office but requires the following constraints to be maintained:
There cannot be men and women in the bathroom at the same time.
There should never be more than three employees in the bathroom
simultaneously.
The solution should avoid deadlocks. For now, though, don’t worry about
starvation.
[Figure: At most 3 company employees can occupy the bathroom]
Solution
First, let us come up with the skeleton of our UnisexBathroom class. We
want to model the problem programmatically first. We'll need two APIs,
one that is called by a male to use the bathroom and another that is
called by a female to use the bathroom. Initially, our class looks like the
following:
Let us try to address the first problem of allowing either men or women
to use the bathroom. We'll worry about the max employees later. We need
to maintain state in a variable which would tell us which gender is
currently using the bathroom. Let's call this variable inUseBy . To make
the code more readable we'll make the type of the variable inUseBy a
string which can take on the values men, women or none.
We'll also have a method useBathroom() that'll mock a person using the
bathroom. The implementation of this method will simply sleep the
thread using the bathroom for some time.
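Putting the pieces described so far together, the skeleton might look like the sketch below. The string constants and the sleep duration in useBathroom() are assumptions; the synchronization logic is filled in later in the lesson.

```java
// Skeleton sketch: the two entry APIs plus the mocked useBathroom() method.
class UnisexBathroom {
    static final String WOMEN = "women";
    static final String MEN = "men";
    static final String NONE = "none";

    String inUseBy = NONE;   // which gender currently occupies the bathroom
    int empsInBathroom = 0;  // how many employees are inside

    // Mock a person using the bathroom by sleeping the calling thread.
    void useBathroom(String name) throws InterruptedException {
        System.out.println(name + " is using the bathroom");
        Thread.sleep(30);
    }

    void maleUseBathroom(String name) throws InterruptedException {
        // synchronization logic discussed below goes here
    }

    void femaleUseBathroom(String name) throws InterruptedException {
        // mirror image of maleUseBathroom()
    }
}
```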
Assume there's no one in the bathroom and a male thread invokes the
maleUseBathroom() method, the thread has to check first whether the
bathroom is being used by a female thread. If it is indeed being used by a
female, then the male thread has to wait for the bathroom to be empty. If
the male thread already finds the bathroom empty, which in our scenario
it does, the thread simply updates the inUseBy variable to "MEN" and
proceeds to use the bathroom. After using the bathroom, however, it must
let any waiting female threads know that it is done and they can now use
the bathroom.
The astute reader would immediately realize that we'll need to guard the
variable inUseBy since it can possibly be both read and written to by
different threads at the same time. Does that mean we should mark our
methods as synchronized?
32.
33. if (empsInBathroom == 0) inUseBy = NONE;
34. // Since we might have just updated the value of
35. // inUseBy, we should notifyAll waiting threads
36. this.notifyAll();
37. }
38. }
39.
40. void femaleUseBathroom(String name) throws InterruptedException {
41.
42. synchronized (this) {
43. while (inUseBy.equals(MEN)) {
44. this.wait();
45. }
46. empsInBathroom++;
47. inUseBy = WOMEN;
48. }
49.
50. useBathroom(name);
51.
52. synchronized (this) {
53. empsInBathroom--;
54.
55. if (empsInBathroom == 0) inUseBy = NONE;
56. // Since we might have just updated the value of
57. // inUseBy, we should notifyAll waiting threads
58. this.notifyAll();
59. }
60. }
61.}
The code so far allows any number of men or women to gather in the
bathroom. However, it allows only one gender to do so. The methods are
mirror images of each other with only gender-specific variable changes.
Let's discuss the important portions of the code.
Lines 17-26: Since Java monitors are mesa monitors, we use a while
loop to check the variable inUseBy . If it is set to MEN or NONE,
then we know the bathroom is either empty or already has men, and
therefore it is safe to proceed ahead. If inUseBy is set to WOMEN,
then the male thread invokes wait() on line 23. Note that the thread
gives up the monitor for the object on which it is
synchronized, thus allowing other threads to synchronize on the
same object and possibly update the inUseBy variable.
Lines 30-37: After using the bathroom, the male thread is about to
leave the method so it should remember to decrement the number of
occupants in the bathroom. As soon as it does that, it has to check if it
were the last member of its gender to leave the bathroom and if so
then it should also update the inUseBy variable to NONE. Finally, the
thread notifies any other waiting threads that they are free to check
the value of inUseBy in case it has updated it. Question: Why did we
use notifyAll() instead of notify() ?
Complete Code
import java.util.concurrent.Semaphore;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
UnisexBathroom.runTest();
}
}
class UnisexBathroom {
synchronized (this) {
while (inUseBy.equals(WOMEN)) {
this.wait();
}
maxEmps.acquire();
empsInBathroom++;
inUseBy = MEN;
}
useBathroom(name);
maxEmps.release();
synchronized (this) {
empsInBathroom--;
synchronized (this) {
while (inUseBy.equals(MEN)) {
this.wait();
}
maxEmps.acquire();
empsInBathroom++;
inUseBy = WOMEN;
}
useBathroom(name);
maxEmps.release();
synchronized (this) {
empsInBathroom--;
}
}
});
}
}
});
}
}
});
}
}
});
}
}
});
female1.start();
male1.start();
male2.start();
male3.start();
male4.start();
female1.join();
male1.join();
male2.join();
male3.join();
male4.join();
}
}
If you look at the program output, you'd notice that the number of employees
in the bathroom is at one point printed out as 4 when the max allowed
is 3. This is just an outcome of how the code is structured; read on below
for an explanation.
In our test case we have four males and one female aspiring to use the
bathroom. We let the female thread use the bathroom first and then let all
the male threads loose. From the output, you'll observe, that no male
thread is inside the bathroom until Lisa is done using the bathroom. After
that, three male threads get access to the bathroom at the same instant.
The fourth male thread is held back until one of the male threads exits the
bathroom.
Imagine, if there are already three men in the bathroom and a fourth one
comes along, then he gets blocked on line#31. This thread still holds the
bathroom object's monitor when it becomes dormant due to non-
availability of permits. This prevents any female thread from changing
inUseBy to WOMEN under any circumstance, nor can the value of
empsInBathroom be changed.
Next, note that the threads returning from the useBathroom method release
the semaphore. We must release the semaphore here, because if we do not,
then the blocked fourth male thread would never release the object's
monitor, and the returning threads would never be able to enter the second
synchronized block.
On releasing the semaphore, the blocked male thread will increment the
empsInBathroom variable to 4, before the thread that signaled the
semaphore enters the second synchronized block and decrements itself
from the count. It is also possible that male threads pile up before the
second synchronized block, while newly arriving threads are chosen by the
system to run through the first synchronized block. In such a scenario,
the count empsInBathroom will keep increasing as threads returning from
the bathroom wait to synchronize on the this object and decrement the
count in the second synchronized block. Eventually, though, these
threads will be able to synchronize and the count will reach zero.
Also, note that this solution isn't fair to the genders. If the first thread to
get bathroom access is male, and before it's done using the bathroom, a
steady stream of male threads start to arrive for bathroom use, then any
waiting female threads will starve.
Follow up
Problem
We can immediately realize that our solution will need a count variable to
track the number of threads that have arrived at the barrier. If we have n
threads, then n-1 threads must wait for the nth thread to arrive. This
suggests we have the n-1 threads execute the wait method and the nth
thread wakes up all the asleep n-1 threads.
Notice how we are resetting the count to zero in line 19. This is done so
that we are able to re-use the barrier.
Below is the working code, along with a test case. The test case creates
three threads and has them synchronize on a barrier three times. We
introduce sleeps accordingly so that thread 1 reaches the barrier first,
then thread 2, and finally thread 3. None of the threads is able to move
forward until all the threads reach the barrier. This is verified by the
order in which each thread prints itself in the output.
First Cut
class Demonstration {
public static void main( String args[] ) throws Exception{
Barrier.runTest();
}
}
class Barrier {
    int count = 0;
    int totalThreads;

    public synchronized void await() throws InterruptedException {
        count++;
        if (count == totalThreads) {
            notifyAll();
            count = 0;
        } else {
            wait();
        }
    }
p1.start();
p2.start();
p3.start();
p1.join();
p2.join();
p3.join();
}
}
When you run the above code, you'll see that the threads print themselves
in order i.e. first thread 1 then thread 2 and finally thread 3 prints.
Thread 1 after reaching the barrier waits for the other two threads to
reach the barrier before moving forward.
The above code has a subtle but very crucial bug! Can you spot the bug
and try to fix it before reading on?
Second Cut
The previous code would have been hunky dory if we were guaranteed
that no spurious wake-ups could ever occur. The wait() method
invocation without the while loop is an error. We discussed in previous
sections that wait() should always be used with a while loop that checks
for a condition and if found false should make the thread wait again.
The condition the while loop can check for is simply how many threads
have incremented the count variable so far. A thread that wakes up
spuriously should go back to sleep if the count is less than the total
number of threads. We can check for this condition as follows:
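A sketch of this second cut appears below; it guards the wait() with the predicate just described, so a spuriously woken thread re-checks the condition and goes back to sleep. Note that it still carries the count-reset issue examined in the following paragraph. The class name is ours.

```java
// Second-cut sketch: predicate-guarded wait() to survive spurious wakeups.
class BarrierSecondCut {
    int count = 0;
    final int totalThreads;

    BarrierSecondCut(int totalThreads) {
        this.totalThreads = totalThreads;
    }

    public synchronized void await() throws InterruptedException {
        count++;
        if (count == totalThreads) {
            notifyAll();
            count = 0; // resetting here is the flaw discussed in the text
        } else {
            // A spurious wakeup re-enters the loop and waits again.
            while (count < totalThreads)
                wait();
        }
    }
}
```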
The while loop introduces another problem. When the last thread does a
notifyAll() it also resets the count to 0, which means the threads that
are legitimately woken up will always be stuck in the while loop because
count is immediately set to zero. What we really want is not to reset the
count variable to zero until all the threads escape the while condition
when count becomes totalThreads . Below is the improved version:
The above code introduces a new variable released that keeps track of
how many threads exit the barrier; when the last thread exits the
barrier, it resets count to zero so that the barrier object can be reused in
the future.
There is still a bug in the above code! Can you guess what it is?
Final Cut
To understand why the above code is broken, consider three threads t1,
t2, and t3 trying to await() on a barrier object in an infinite loop. Note
the following sequence of events
4. With count equal to 4, t3 will not block at the barrier and will exit,
which breaks the contract for the barrier.
6. Another flaw with the above code is that it can cause a deadlock. Suppose
we wanted the three threads t1, t2, and t3 to congregate at a barrier
twice. The first invocation was in the order [t1, t2, t3] and the second
was in the order [t3, t2, t1]. If t3 immediately invoked await() after the
first barrier, it would go past the second barrier without stopping,
while t2 and t1 would be stranded at the second barrier, since
count would never equal totalThreads .
The fix requires us to block any new threads from proceeding until all the
threads that have reached the previous barrier are released. The code
with the fix appears below:
class Demonstration {
public static void main( String args[] ) throws Exception{
Barrier.runTest();
}
}
class Barrier {
int count = 0;
int released = 0;
int totalThreads;
try {
System.out.println("Thread 1");
barrier.await();
System.out.println("Thread 1");
barrier.await();
System.out.println("Thread 1");
barrier.await();
} catch (InterruptedException ie) {
}
}
});
p1.start();
p2.start();
p3.start();
p1.join();
p2.join();
p3.join();
}
public synchronized void await() throws InterruptedException {
    // block new threads until all threads from the
    // previous barrier use have been released
    while (count == totalThreads)
        wait();
    count++;
    if (count == totalThreads) {
        notifyAll();
        released = totalThreads;
    } else {
        while (count < totalThreads)
            wait();
    }
    released--;
    if (released == 0) {
        count = 0;
        // remember to wake up any threads
        // waiting in the while loops above
        notifyAll();
    }
}
Uber Ride Problem
This lesson solves the constraints of an imaginary Uber ride problem where Republicans and Democrats can't be
seated as a minority in a four passenger car.
Problem
Solution
First let us model the problem as a class. We'll have two methods one
called by a Democrat and one by a Republican to get a ride home. When
either one gets a seat on the next ride, it'll call the seated() method.
Realize we'll also need a barrier at which all four threads selected for
the Uber ride arrive before riding away. This is
analogous to the four riders being seated in the car and the doors being
shut.
Once the doors are shut, one of the riders has to tell the driver to drive
which we simulate with a call to the drive() method. Note that exactly
one thread makes the shout-out to the driver to drive() .
void seated() {
}
void drive() {
}
}
Let's focus on the seatDemocrat() method first. For simplicity imagine the
first thread is a democrat and invokes seatDemocrat() . Since there's no
other rider available, it should be put to wait. We can use a semaphore to
make this thread wait. We'll not use a barrier, because we don't know
what party loyalty the threads arriving in future would have. It might be
that the next four threads are all republican and this Democrat isn't
placed on the next Uber ride. To differentiate between waiting democrats
and waiting republicans, we'll use two different semaphores demsWaiting
and repubsWaiting . Our first democrat thread will end up acquire() -ing
the demsWaiting semaphore.
Now it's easy to reason about how we select the threads for a ride. A
democrat thread has to check the following cases:
If there are already three democrat threads waiting, then together with
the current thread we have four democrats, so the current thread can
signal the demsWaiting semaphore three times to release the other
three waiting democrats, and all four ride together.
If there are two or more republican threads waiting and at least two
democrat threads (including the current thread) waiting, then the
current democrat thread can signal the repubsWaiting semaphore
twice to release the two waiting republican threads and signal the
demsWaiting semaphore once to release one more democrat thread.
Together the four of them would make up the next ride consisting of
two republicans and two democrats.
If the above two conditions aren't true, then the current democrat
thread should simply wait itself at the demsWaiting semaphore and
release the lock object so that other threads can enter the critical
sections.
The logic we discussed so far is translated into code below:
democrats++;
if (democrats == 4) {
// Seat all the democrats in the Uber ride.
demsWaiting.release(3);
democrats -= 4;
rideLeader = true;
} else if (democrats == 2 && republicans >= 2) {
// Seat 2 democrats & 2 republicans
demsWaiting.release(1);
repubsWaiting.release(2);
rideLeader = true;
democrats -= 2;
republicans -= 2;
} else {
lock.unlock();
demsWaiting.acquire();
}
seated();
barrier.await();
if (rideLeader == true) {
drive();
lock.unlock();
}
}
The thread that signals other threads to come along for the ride marks
itself as the rideLeader . This thread is responsible for informing the
driver to drive() . We can come up with some other criteria to choose the
ride leader, but given the logic we implemented, it is easiest to make the
thread that determines an acceptable ride combination the ride leader.
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
UberSeatingProblem.runTest();
}
}
class UberSeatingProblem {
void drive() {
System.out.println("Uber Ride on Its wayyyy... with ride leader " + Thread.currentThr
System.out.flush();
}
democrats++;
if (democrats == 4) {
// Seat all the democrats in the Uber ride.
demsWaiting.release(3);
democrats -= 4;
rideLeader = true;
} else if (democrats == 2 && republicans >= 2) {
// Seat 2 democrats & 2 republicans
demsWaiting.release(1);
repubsWaiting.release(2);
rideLeader = true;
democrats -= 2;
republicans -= 2;
} else {
    lock.unlock();
    demsWaiting.acquire();
}
seated();
barrier.await();
if (rideLeader) {
    drive();
    lock.unlock();
}
}
void seated() {
System.out.println(Thread.currentThread().getName() + " seated");
System.out.flush();
}
republicans++;
if (republicans == 4) {
// Seat all the republicans in the Uber ride.
repubsWaiting.release(3);
rideLeader = true;
republicans -= 4;
} else if (republicans == 2 && democrats >= 2) {
// Seat 2 democrats & 2 republicans
repubsWaiting.release(1);
demsWaiting.release(2);
rideLeader = true;
republicans -= 2;
democrats -= 2;
} else {
lock.unlock();
repubsWaiting.acquire();
}
seated();
barrier.await();
if (rideLeader) {
drive();
lock.unlock();
}
}
}
});
thread.setName("Democrat_" + (i + 1));
allThreads.add(thread);
Thread.sleep(50);
}
The output of the program will show the members of each ride. Since we
create four more republican threads than democrat threads, you should
see at least one ride with all republican riders.
The astute reader may wonder what factor determines whether a ride is
evenly split between members of the two parties or entirely made up of
members of the same party, given that enough riders exist so that both
combinations are possible. The key is to realize that each thread enters
Dining Philosophers
This chapter discusses the famous Dijkstra's Dining Philosopher's problem. Two different solutions are explained
at length.
Problem
The arrangement of the philosophers and the forks are shown in the
diagram.
Design a solution where each philosopher gets a chance to eat his food
without causing a deadlock
Philosopher 0
Fork 4
Fork 0
Philosopher 4
Philosopher 1
Fork 3 Fork 1
Philosopher 3 Philosopher 2
Fork 2
Dining Philosophers
Solution
For no deadlock to occur at all and for all the philosophers to be able to eat,
we would need ten forks, two for each philosopher. With five forks
available, at most two philosophers can eat at a time, while a
third hungry philosopher may hold onto a fifth fork and wait for another
one to become available before he can eat.
Let's try to model the problem in code before we even attempt to find a
solution. Each fork represents a resource that two of the philosophers on
either side can attempt to acquire. This intuitively suggests using a
semaphore with a permit value of 1 to represent a fork. Each philosopher
can then be thought of as a thread that tries to acquire the forks to the left
and right of it. Given this, let's see what our class would look like.
while (true) {
contemplate();
eat(id);
}
}
// This method will have the meat of the solution, where the
// philosopher is trying to eat.
void eat(int id) throws InterruptedException {
}
}
That was easy enough. Now think about the eat method: when a
philosopher wants to eat, he needs the forks to the left and right of him. So:
This means each thread (philosopher) will also need to tell us what ID it is
before we can attempt to lock the appropriate forks for him. That is why
you see the eat() method take in an ID parameter.
forks[id]
forks[(id+4) % 5]
public DiningPhilosophers() {
forks[0] = new Semaphore(1);
forks[1] = new Semaphore(1);
forks[2] = new Semaphore(1);
forks[3] = new Semaphore(1);
forks[4] = new Semaphore(1);
}
while (true) {
contemplate();
eat(id);
}
}
void contemplate() throws InterruptedException {
Thread.sleep(random.nextInt(500));
}
}
If you run the above code, it'll eventually end up in a
deadlock. Realize that if all the philosophers simultaneously grab their left
fork, none would be able to eat. Below we discuss a couple of ways to
avoid this deadlock and arrive at the final solution.
A very simple fix is to allow only four philosophers at any given point in
time to even try to acquire forks. Convince yourself that with five forks
and four philosophers deadlock is impossible, since at any point in time,
even if each philosopher grabs one fork, there will still be one fork left
that can be acquired by one of the philosophers to eat. Implementing this
solution requires us to introduce another semaphore with a permit of 4
which guards the logic for lifting/grabbing of the forks by the
philosophers. The code appears below.
public DiningPhilosophers() {
forks[0] = new Semaphore(1);
forks[1] = new Semaphore(1);
forks[2] = new Semaphore(1);
forks[3] = new Semaphore(1);
forks[4] = new Semaphore(1);
}
while (true) {
contemplate();
eat(id);
}
}
maxDiners.acquire();
forks[id].acquire();
forks[(id + 1) % 5].acquire();
System.out.println("Philosopher " + id + " is eating");
forks[id].release();
forks[(id + 1) % 5].release();
maxDiners.release();
}
}
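Putting the pieces together, a self-contained sketch of the four-diners scheme might look like the following. The mealsServed counter is an illustrative addition for verification; the fork and maxDiners semaphores follow the text.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

class DiningPhilosophers {

    // one binary semaphore per fork
    private final Semaphore[] forks = new Semaphore[5];
    // at most four philosophers may reach for forks at any time
    private final Semaphore maxDiners = new Semaphore(4);
    final AtomicInteger mealsServed = new AtomicInteger();

    public DiningPhilosophers() {
        for (int i = 0; i < 5; i++) {
            forks[i] = new Semaphore(1);
        }
    }

    void eat(int id) throws InterruptedException {
        maxDiners.acquire();               // cap concurrent diners at four
        forks[id].acquire();               // left fork
        forks[(id + 1) % 5].acquire();     // right fork
        System.out.println("Philosopher " + id + " is eating");
        mealsServed.incrementAndGet();
        forks[id].release();
        forks[(id + 1) % 5].release();
        maxDiners.release();
    }
}
```

Because at most four threads ever hold a fork at the same time, at least one philosopher can always acquire both forks, so the acquisition chain can never close into a cycle.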
public DiningPhilosophers2() {
forks[0] = new Semaphore(1);
forks[1] = new Semaphore(1);
forks[2] = new Semaphore(1);
forks[3] = new Semaphore(1);
forks[4] = new Semaphore(1);
}
public void lifecycleOfPhilosopher(int id) throws InterruptedExce
ption {
while (true) {
contemplate();
eat(id);
}
}
Below is the code for the first solution we discussed, along with a test. The
philosopher threads are perpetual, so the widget execution times out. For
the limited time the test runs, one can see all philosophers take turns to
eat without any deadlock.
import java.util.Random;
import java.util.concurrent.Semaphore;
class Demonstration {
class DiningPhilosophers {
public DiningPhilosophers() {
forks[0] = new Semaphore(1);
forks[1] = new Semaphore(1);
forks[2] = new Semaphore(1);
forks[3] = new Semaphore(1);
forks[4] = new Semaphore(1);
}
while (true) {
contemplate();
eat(id);
}
}
maxDiners.acquire();
forks[id].acquire();
forks[(id + 1) % 5].acquire();
System.out.println("Philosopher " + id + " is eating");
forks[id].release();
forks[(id + 1) % 5].release();
maxDiners.release();
}
}
}
p1.start();
p2.start();
p3.start();
p4.start();
p5.start();
p1.join();
p2.join();
p3.join();
p4.join();
p5.join();
}
}
Dining Philosopher Solution
Barber Shop
This lesson visits the synchronization issues when programmatically modeling a hypothetical barber shop and
how they are solved using Java's concurrency primitives.
Problem
(Figure: the barber shop, showing the barber, a customer getting a haircut, and the waiting customers.)
Solution
First of all, we need to understand the different state transitions for this
problem before we devise a solution. Let's look at them piecemeal:
If any of the N chairs is free, the customer takes up the chair to wait
for his turn. Note this translates to using a semaphore on which
threads that have found a free chair wait on before being called in by
the barber for a haircut.
If a customer enters the shop and the barber is asleep, it implies there
are no other customers in the shop. The just-entered customer thread
wakes up the barber thread. This sounds like using a signaling
construct to wake up the barber thread.
We'll have a class which will expose two APIs one for the barber thread to
execute and the other for customers. The skeleton of the class would look
like the following:
Now let's think about the customer thread. It enters the shop, acquires a
lock to test the value of the counter waitingCustomers . We must test the
value of this variable while no other thread can modify its value, hinting
that we'll wrap the test under a lock. If the value equals the number of
chairs available, then the customer thread gives up the lock and returns
from the method.
Next, the customer thread itself needs to wait on a semaphore before the
barber comes over, greets the customer and leads him to the salon chair.
Let's call this semaphore waitForBarberToGetReady . This is the same
semaphore the barber signals as soon as it wakes up. All customer
threads waiting for a haircut will block on this waitForBarberToGetReady
semaphore. The barber signaling this semaphore is akin to letting one
customer come through and sit on the barber chair for a haircut. This
logic when coded looks like the following:
lock.lock();
if (waitingCustomers == CHAIRS) {
System.out.println("Customer walks out, all chairs occupied.");
// Remember to unlock before leaving
lock.unlock();
return;
}
waitingCustomers++;
lock.unlock();
// Let the barber know you are here, in case he's asleep
waitForCustomerToEnter.release();
// Wait for the barber to come take you to the salon chair wh
en its your turn
waitForBarberToGetReady.acquire();
// TODO: complete the rest of the logic.
}
Now let's work with the barber code. This should be a perpetual loop,
where the barber initially waits on the semaphore
waitForCustomerToEnter to simulate no customers in the shop. If woken
up, then it implies that there's at least one customer in the shop who
needs a hair-cut and the barber gets up, greets the customer and leads
him to his chair before starting the haircut. This sequence is translated
into code as the barber thread signaling the waitForBarberToGetReady
semaphore. Next, the barber simulates a haircut by sleeping for 50
milliseconds.
Once the haircut is done, the barber needs to inform the customer thread
too; it does so by signaling the waitForBarberToCutHair semaphore. The
customer thread should already be waiting on this semaphore.
Finally, to make the barber thread know that the current customer thread
has left the barber chair and the barber can bring in the next customer,
we make the barber thread wait on yet another semaphore
waitForCustomerToLeave . This is the same semaphore the customer thread
needs to signal before exiting. The barber thread's implementation
appears below:
while (true) {
// wait till a customer enters a shop
waitForCustomerToEnter.acquire();
// let the customer know barber is ready
waitForBarberToGetReady.release();
lock.lock();
if (waitingCustomers == CHAIRS) {
System.out.println("Customer walks out, all chairs occupied");
lock.unlock();
return;
}
waitingCustomers++;
lock.unlock();
lock.lock();
waitingCustomers--;
lock.unlock();
}
Complete Code
The entire code along with the test appears below. Since the barber thread
is perpetual, the widget execution will time out.
import java.util.HashSet;
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
BarberShopProblem.runTest();
}
}
class BarberShopProblem {
lock.lock();
if (waitingCustomers == CHAIRS) {
System.out.println("Customer walks out, all chairs occupied");
lock.unlock();
return;
}
waitingCustomers++;
lock.unlock();
waitForCustomerToEnter.release();
waitForBarberToGetReady.acquire();
waitForBarberToCutHair.acquire();
waitForCustomerToLeave.release();
lock.lock();
waitingCustomers--;
lock.unlock();
}
while (true) {
waitForCustomerToEnter.acquire();
waitForBarberToGetReady.release();
hairCutsGiven++;
System.out.println("Barber cutting hair..." + hairCutsGiven);
Thread.sleep(50);
waitForBarberToCutHair.release();
waitForCustomerToLeave.acquire();
}
}
}
}
});
barberThread.start();
}
}
});
set.add(t);
}
set.clear();
Thread.sleep(800);
}
}
});
set.add(t);
}
for (Thread t : set) {
t.start();
}
barberThread.join();
}
}
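Condensed into a single runnable class, the four-semaphore scheme described above might look like the following sketch. The three-customer main method is illustrative; with three customers and three chairs, nobody walks out.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

class BarberShop {
    static final int CHAIRS = 3;

    final Semaphore waitForCustomerToEnter = new Semaphore(0);
    final Semaphore waitForBarberToGetReady = new Semaphore(0);
    final Semaphore waitForBarberToCutHair = new Semaphore(0);
    final Semaphore waitForCustomerToLeave = new Semaphore(0);
    final ReentrantLock lock = new ReentrantLock();

    int waitingCustomers = 0;
    volatile int hairCutsGiven = 0;

    void customerWalksIn() throws InterruptedException {
        lock.lock();
        if (waitingCustomers == CHAIRS) {
            System.out.println("Customer walks out, all chairs occupied");
            lock.unlock();
            return;
        }
        waitingCustomers++;
        lock.unlock();

        waitForCustomerToEnter.release();   // wake the barber if he's asleep
        waitForBarberToGetReady.acquire();  // wait to be led to the barber chair
        waitForBarberToCutHair.acquire();   // wait for the haircut to finish
        waitForCustomerToLeave.release();   // vacate the barber chair

        lock.lock();
        waitingCustomers--;
        lock.unlock();
    }

    void barber() throws InterruptedException {
        while (true) {
            waitForCustomerToEnter.acquire();   // sleep until a customer arrives
            waitForBarberToGetReady.release();  // seat one waiting customer
            hairCutsGiven++;
            System.out.println("Barber cutting hair..." + hairCutsGiven);
            Thread.sleep(50);                   // simulate the haircut
            waitForBarberToCutHair.release();
            waitForCustomerToLeave.acquire();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BarberShop shop = new BarberShop();
        Thread barberThread = new Thread(() -> {
            try { shop.barber(); } catch (InterruptedException ignored) { }
        });
        barberThread.setDaemon(true);  // let the JVM exit once customers are served
        barberThread.start();

        Thread[] customers = new Thread[3];
        for (int i = 0; i < customers.length; i++) {
            customers[i] = new Thread(() -> {
                try { shop.customerWalksIn(); } catch (InterruptedException ignored) { }
            });
            customers[i].start();
        }
        for (Thread t : customers) {
            t.join();
        }
        System.out.println("Haircuts given: " + shop.hairCutsGiven);
    }
}
```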
The execution output would show 6 customers getting a haircut and the
rest walking out since there are only three chairs available at the barber
shop.
If we tweak our implementation to compensate for this change (lines 37, 38, and 39
in the below widget), then we'll see the above test give eight haircuts
instead of six. The change entails decrementing the waitingCustomers
variable right after the barber seats a customer. The code with the change
appears below. If you run the widget, you'll see eight threads getting a
haircut.
import java.util.HashSet;
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
BarberShopProblem.runTest();
}
}
class BarberShopProblem {
lock.lock();
if (waitingCustomers == CHAIRS) {
System.out.println("Customer walks out, all chairs occupied");
lock.unlock();
return;
}
waitingCustomers++;
lock.unlock();
waitForCustomerToEnter.release();
waitForBarberToGetReady.acquire();
waitForBarberToCutHair.acquire();
waitForCustomerToLeave.release();
}
void barber() throws InterruptedException {
while (true) {
waitForCustomerToEnter.acquire();
waitForBarberToGetReady.release();
hairCutsGiven++;
System.out.println("Barber cutting hair..." + hairCutsGiven);
Thread.sleep(50);
waitForBarberToCutHair.release();
waitForCustomerToLeave.acquire();
}
}
}
}
});
barberThread.start();
}
}
});
set.add(t);
}
set.clear();
Thread.sleep(500);
}
}
});
set.add(t);
}
for (Thread t : set) {
t.start();
Thread.sleep(5);
}
barberThread.join();
}
}
Superman Problem
Problem
You are designing a library of superheroes for a video game that your
fellow developers will consume. Your library should always create a
single instance of any of the superheroes and return the same instance to
all the requesting consumers.
Say, you start with the class Superman . Your task is to make sure that other
developers using your class can never instantiate multiple copies of
superman. After all, there is only one superman!
Solution
You probably guessed we are going to use the singleton pattern to solve
this problem. The singleton pattern sounds very naive and simple but
when it comes to implementing it correctly in Java, it's no cakewalk.
First let us understand what the pattern is. A singleton pattern allows
only a single object/instance of a class to ever exist during an
application run.
// Object method
public void fly() {
System.out.println("I am Superman & I can fly !");
}
}
Here's what your interviewer will tell you when you write this code:
What if no one likes Superman and the game creates Batman
instead? You just created Superman and he kept waiting without ever
being called upon to save the world. That's a waste of Superman's time
and also of the memory and other resources he'll consume.
The next version is what most candidates would write and is incorrect.
private SupermanWithFlaws() {
// Object method
public void fly() {
System.out.println("I am Superman & I can fly !");
}
}
As any reader of this course should realize by now (if I have done a good
job of teaching), the getInstance() method would fail miserably in a
multithreaded scenario. A thread can be context-switched out just before it
initializes the Superman object, causing later threads to also fall into the if
clause and end up creating multiple Superman objects.
The naive way to fix this issue is to use our good friend synchronized and
either add synchronized to the signature of the getInstance() method or
add a synchronized block within the method body. The mutual exclusion
ensures that only one thread gets to initialize the object.
private SupermanCorrectButSlow() {
// Object method
public void fly() {
System.out.println("I am Superman & I can fly !");
}
}
private SupermanSlightlyBetter() {
return superman;
}
}
The above solution seems almost correct. In fact, it'll appear correct
unless you understand how the intricacies of Java's memory model and
compiler optimizations can affect thread behaviors. The memory model
defines what state a thread may see when it reads a memory location
modified by other threads. The above solution needs one last missing
piece but before we add that consider the below scenario:
1. Thread A comes along and gets to the second if check and allocates
memory for the superman object but doesn't complete construction
of the object and gets switched out. The Java memory model doesn't
guarantee that the constructor completes before the reference to the
new object is assigned to the instance variable. It is possible that the
variable superman is non-null while the object it points to is still being
initialized in the constructor by another thread.
2. Thread B wants to use the superman object and since the memory is
already allocated for the object it fails the first if check and returns a
semi-constructed superman object. Attempting to use a partially
constructed object can result in a crash or undefined behavior.
To fix the above issue, we mark our superman static object as volatile .
The happens-before semantics of volatile guarantee that the faulty
scenario of threads A and B never happens.
Last but not least, note that double-checked locking (DCL) is now
considered an antipattern; its utility has dwindled over time as JVM
startup and uncontended synchronization speeds have improved.
class Superman {

    private static volatile Superman superman;

    private Superman() {
    }

    public static Superman getInstance() {
        if (superman == null) {
            synchronized (Superman.class) {
                if (superman == null) {
                    superman = new Superman();
                }
            }
        }
        return superman;
    }
}
This lesson continues the discussion on implementing the Singleton pattern in Java.
private Superman() {
}
static {
try {
superman = new Superman();
} catch (Exception e) {
// Handle exception here
}
}
private Superman() {
}
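A related variant worth knowing is the initialization-on-demand holder idiom, which gets both lazy initialization and thread safety from the JVM's class-initialization guarantees, without volatile or synchronized. A sketch; the nested Holder class is loaded only on the first call to getInstance().

```java
class Superman {

    private Superman() {
    }

    // The holder class is initialized, and INSTANCE constructed, only on the
    // first call to getInstance(); the JVM guarantees class initialization
    // happens exactly once and is thread-safe.
    private static class Holder {
        static final Superman INSTANCE = new Superman();
    }

    public static Superman getInstance() {
        return Holder.INSTANCE;
    }

    public void fly() {
        System.out.println("I am Superman & I can fly !");
    }
}
```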
class Demonstration {
public static void main( String args[] ) {
Superman superman = Superman.getInstance();
superman.fly();
}
}
class Superman {
private static Superman superman = new Superman();
private Superman() {
}
private Superman() {
}
if (superman == null) {
superman = new Superman();
}
return superman;
}
}
Thread safe
private Superman() {
}
if (superman == null) {
superman = new Superman();
}
return superman;
}
}
class Demonstration {
public static void main( String args[] ) {
Superman superman = Superman.getInstance();
superman.fly();
}
}
class Superman {
private static Superman superman;
private Superman() {
}
if (superman == null) {
superman = new Superman();
}
return superman;
}
class Demonstration {
public static void main( String args[] ) {
Superman superman = Superman.getInstance();
superman.fly();
}
}
class Superman {
private Superman() {
}
Merge Sort
In the case of merge sort, we divide the given array into two arrays of
equal size, i.e. we divide the original problem into sub-problems to be
solved recursively.
T(n) = cost to divide into 2 unsorted arrays + 2 * T(n/2) + cost to merge 2 sorted arrays, when n > 1
T(n) = O(1), when n = 1
Solving this recurrence gives a running time of O(n lg n).
Let's first implement the single threaded version of Merge Sort and then
attempt to make it multithreaded. Note that merge sort can be
implemented without using extra space but the implementation becomes
complex so we'll allow ourselves the luxury of using extra space and stick
to a simple-to-follow implementation.
class SingleThreadedMergeSort {

    private static int[] scratch = new int[10];

    public static void main( String args[] ) {
        int[] input = new int[]{ 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 };
        printArray(input, "Before: ");
        mergeSort(0, input.length - 1, input);
        printArray(input, "After: ");
    }

    private static void mergeSort(int start, int end, int[] input) {

        if (start == end) {
            return;
        }

        int mid = start + ((end - start) / 2);

        // sort first half
        mergeSort(start, mid, input);

        // sort second half
        mergeSort(mid + 1, end, input);

        // merge the two sorted arrays
        int i = start;
        int j = mid + 1;
        int k;

        for (k = start; k <= end; k++) {
            scratch[k] = input[k];
        }

        k = start;
        while (k <= end) {

            if (i <= mid && j <= end) {
                input[k] = Math.min(scratch[i], scratch[j]);

                if (input[k] == scratch[i]) {
                    i++;
                } else {
                    j++;
                }
            } else if (i <= mid && j > end) {
                input[k] = scratch[i];
                i++;
            } else {
                input[k] = scratch[j];
                j++;
            }
            k++;
        }
    }

    private static void printArray(int[] input, String msg) {
        System.out.print(msg);
        for (int v : input) {
            System.out.print(v + " ");
        }
        System.out.println();
    }
}
Below is the multithreaded code for merge sort. Note that the code is slightly
different from the single-threaded version to account for the changes
required for concurrent code.
import java.util.Random;
class Demonstration {
System.out.println("Unsorted Array");
printArray(input);
long start = System.currentTimeMillis();
(new MultiThreadedMergeSort()).mergeSort(0, input.length - 1, input);
long end = System.currentTimeMillis();
System.out.println("\n\nTime taken to sort = " + (end - start) + " milliseconds");
System.out.println("Sorted Array");
printArray(input);
}
}
class MultiThreadedMergeSort {
void mergeSort(final int start, final int end, final int[] input) {
if (start == end) {
return;
}
try {
worker1.join();
worker2.join();
} catch (InterruptedException ie) {
// swallow
}
k = start;
while (k <= end) {
if (input[k] == scratch[i]) {
i++;
} else {
j++;
}
} else if (i <= mid && j > end) {
input[k] = scratch[i];
i++;
} else {
input[k] = scratch[j];
j++;
}
k++;
}
}
}
We create two threads on lines 51 and 59 and then wait for them to finish
on lines 67-68. On smaller datasets the speed-up achieved may not be
visible, but for larger datasets processed on multiprocessor machines,
the speed-up will be much more pronounced.
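For reference, a compact, self-contained version of the sort-each-half-in-a-thread idea might look like the following sketch. The constructor argument sizes the scratch array, and spawning two threads per recursive call is illustrative rather than efficient; a real implementation would cut over to the single-threaded sort below some size threshold.

```java
class MultiThreadedMergeSort {

    private final int[] scratch;

    MultiThreadedMergeSort(int size) {
        scratch = new int[size];
    }

    void mergeSort(final int start, final int end, final int[] input) {
        if (start >= end) {
            return;
        }

        final int mid = start + (end - start) / 2;

        // sort each half in its own thread
        Thread worker1 = new Thread(() -> mergeSort(start, mid, input));
        Thread worker2 = new Thread(() -> mergeSort(mid + 1, end, input));
        worker1.start();
        worker2.start();
        try {
            worker1.join();
            worker2.join();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }

        // merge the two sorted halves; sibling threads write disjoint
        // ranges of scratch, so no locking is needed
        for (int k = start; k <= end; k++) {
            scratch[k] = input[k];
        }

        int i = start;
        int j = mid + 1;
        for (int k = start; k <= end; k++) {
            if (i <= mid && (j > end || scratch[i] <= scratch[j])) {
                input[k] = scratch[i++];
            } else {
                input[k] = scratch[j++];
            }
        }
    }
}
```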
Asynchronous to Synchronous Problem
Problem
Executor Class
Callback Interface
class Demonstration {
public static void main( String args[] ) throws Exception{
Executor executor = new Executor();
executor.asynchronousExecution(() -> {
System.out.println("I am done");
});
interface Callback {
class Executor {
Note how the main thread exits before the asynchronous execution is
completed.
Since we can’t modify the original code, we’ll extend a new class
SynchronousExecutor from the given Executor class and override the
asynchronousExecution() method. The trick here is to invoke the original
asynchronous implementation using super.asynchronousExecution()
inside the overridden method. The overridden method would look like:
Note that the variable signal gets captured in the scope of the new
callback that we define. However, the captured variable must be defined
final or be effectively final . Since we are assigning the variable only
once, it is effectively final . The code so far defines the basic structure of
the solution and we need to add a few missing pieces for it to work.
Note that the invariant here is isDone which is set to true after the
asynchronous execution is complete. The last problem here is that isDone
isn't final. We can't declare it final because isDone gets assigned
after initialization. At this point a slightly less elegant but workable
solution is to use a boolean array of size 1 to represent our boolean. The
array variable can be final because it is assigned memory at initialization,
but the contents of the array can be changed later without compromising
the finality of the variable.
@Override
public void asynchronousExecution(Callback callback) throws Exceptio
n {
Object signal = new Object();
final boolean[] isDone = new boolean[1];
Callback cb = new Callback() {
@Override
public void done() {
callback.done();
synchronized (signal) {
signal.notify();
isDone[0] = true;
}
}
};
// Call the asynchronous executor
super.asynchronousExecution(cb);
synchronized (signal) {
while (!isDone[0]) {
signal.wait();
}
}
}
class Demonstration {
public static void main( String args[] ) throws Exception {
SynchronousExecutor executor = new SynchronousExecutor();
executor.asynchronousExecution(() -> {
System.out.println("I am done");
});
interface Callback {
@Override
public void asynchronousExecution(Callback callback) throws Exception {
@Override
public void done() {
callback.done();
synchronized (signal) {
signal.notify();
isDone[0] = true;
}
}
};
synchronized (signal) {
while (!isDone[0]) {
signal.wait();
}
}
}
}
class Executor {
Note that the main thread's print statement appears after the
asynchronous execution thread prints its statement, verifying that
the execution is now synchronous.
The way we have constructed the logic, all the variables in the overridden
method are created on the stack of each invoking thread; therefore the
method is thread-safe and multiple threads can execute it in parallel.
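As an aside, the same blocking behavior can be expressed with a CountDownLatch, which sidesteps the boolean-array workaround entirely. In this sketch the Executor class is a stand-in for the original, with the sleep simulating asynchronous work.

```java
import java.util.concurrent.CountDownLatch;

interface Callback {
    void done();
}

// A stand-in for the original executor: runs the callback on another thread.
class Executor {
    public void asynchronousExecution(Callback callback) {
        new Thread(() -> {
            try {
                Thread.sleep(100);   // simulate some asynchronous work
            } catch (InterruptedException ignored) {
            }
            callback.done();
        }).start();
    }
}

class SynchronousExecutor extends Executor {
    @Override
    public void asynchronousExecution(Callback callback) {
        final CountDownLatch latch = new CountDownLatch(1);
        Callback cb = () -> {
            callback.done();
            latch.countDown();       // signal completion; no flag or lock needed
        };
        super.asynchronousExecution(cb);
        try {
            latch.await();           // block until the async work finishes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The latch is both the signal and the "is it done yet?" state, so there is no missed-notification window to guard against.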
Epilogue
C. H. Afzal.
You can find the solutions to the problems discussed in this course at the
following github repo:
Github Repo
Every great product is a result of team effort and so is this course.
Collaborators on this course included the following good folks:
Last but not least, it is only human to err, and so have I during the
composition of this course. I am very grateful to the folks who kindly
apprised me of omissions and errors in the content, and as a thank-you,
I acknowledge them below.
Sergey Lobov
Andriy Tskitishvili
Hanna Najjar
Fahim ul Haq
Bohan Zhang
Manish Narula
Stefan Cross
Sanjeev Panday
Diptanu Sarkar
Chinmay Das
Ordered Printing
Problem
Suppose there are three threads t1, t2 and t3. t1 prints First, t2 prints
Second and t3 prints Third. The code for the class is as follows:
Solution
We present two solutions for this problem; one using the basic wait() &
notifyAll() functions and the other using CountDownLatch.
Solution 1
In this solution, we have a class OrderedPrinting that consists of a private
variable; count . The class consists of 3 functions
printFirst() , printSecond() and printThird() . The structure of the class
is as follows:
class OrderedPrinting {
int count;
public OrderedPrinting() {
count = 1;
}
synchronized(this) {
System.out.println("First");
count++; //for printing Second, increment count
this.notifyAll();
}
}
synchronized(this) {
while(count != 3) {
this.wait();
}
System.out.println("Third");
}
}
The third method works in the same way as the second, the only
difference being the check for count to be equal to 3. If it is, then "Third"
is printed; otherwise the calling thread waits.
}
}
//for printing "Second"
else if ("second".equals(method)) {
try {
obj.printSecond();
}
catch(InterruptedException e) {
}
}
//for printing "Third"
else if ("third".equals(method)) {
try {
obj.printThird();
}
catch(InterruptedException e) {
}
}
}
}
We will be creating 3 threads in the Main class for testing each solution.
Each thread will be passed the same object of OrderedPrinting . t1 will call
printFirst() , t2 will call printSecond() and t3 will call printThird() . The
output shows printing done in the proper order, i.e. First, Second, and Third,
irrespective of the order in which the threads are started.
class OrderedPrinting {
int count;
public OrderedPrinting() {
count = 1;
synchronized(this){
System.out.println("First");
count++;
this.notifyAll();
}
}
synchronized(this){
while(count != 2){
this.wait();
}
System.out.println("Second");
count++;
this.notifyAll();
}
synchronized(this){
while(count != 3){
this.wait();
}
System.out.println("Third");
}
}
}
}
}
//for printing "Third"
else if ("third".equals(method))
{
try
{
obj.printThird();
}
catch(InterruptedException e)
{
}
}
}
}
t2.start();
t3.start();
t1.start();
}
}
Solution 2
notified and control is given back to the main thread that has been
waiting for others to finish.
class OrderedPrinting {
CountDownLatch latch1;
CountDownLatch latch2;
public OrderedPrinting() {
latch1 = new CountDownLatch(1);
latch2 = new CountDownLatch(1);
}
}
System.out.println("Third");
}
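Filled in, the latch-based class sketched above might look like the following. A sketch: printFirst() counts down latch1, printSecond() waits on latch1 and counts down latch2, and printThird() waits on latch2; the package-private fields are for illustration.

```java
import java.util.concurrent.CountDownLatch;

class OrderedPrinting {

    final CountDownLatch latch1 = new CountDownLatch(1);
    final CountDownLatch latch2 = new CountDownLatch(1);

    void printFirst() {
        System.out.println("First");
        latch1.countDown();               // unblock printSecond()
    }

    void printSecond() throws InterruptedException {
        latch1.await();                   // wait until "First" is printed
        System.out.println("Second");
        latch2.countDown();               // unblock printThird()
    }

    void printThird() throws InterruptedException {
        latch2.await();                   // wait until "Second" is printed
        System.out.println("Third");
    }
}
```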
import java.util.concurrent.CountDownLatch;
class OrderedPrinting
{
CountDownLatch latch1;
CountDownLatch latch2;
public OrderedPrinting()
{
latch1 = new CountDownLatch(1);
latch2 = new CountDownLatch(1);
}
{
try
{
obj.printFirst();
}
catch(InterruptedException e)
{
}
}
else if ("second".equals(method))
{
try
{
obj.printSecond();
}
catch(InterruptedException e)
{
}
}
else if ("third".equals(method))
{
try
{
obj.printThird();
}
catch(InterruptedException e)
{
}
}
}
}
t3.start();
t2.start();
t1.start();
}
}
Printing Foo Bar n Times
Learn how to execute threads in a speci c order for a user speci ed number of iterations.
Problem
Suppose there are two threads t1 and t2. t1 prints Foo and t2 prints Bar.
You are required to write a program which takes a user input n. Then the
two threads print Foo and Bar alternately n number of times. The code
for the class is as follows:
class PrintFooBar {
The two threads will run sequentially. You have to synchronize the two
threads so that the functions PrintFoo() and PrintBar() are executed in
order. The workflow is shown below:
Workflow
Solution
We will solve this problem using the basic utilities of wait() and
notifyAll() in Java. The basic structure of FooBar class is given below:
class FooBar {
private int n;
private int flag = 0;
public FooBar(int n) {
this.n = n;
}
n is the user input that tells how many times "Foo" and "Bar" should be
printed. flag is an integer based on which the words are printed. When
the value of flag is 0, the word "Foo" will be printed and it will be
incremented. This way "Bar" can be printed next. flag is initialized with
0 because the printing has to start with "Foo". The class consists of two
methods foo() and bar() and their structures are given below:
FooBar fooBar;
String method;
To test our code, We will create two threads; t1 and t2. An object of
FooBar is initialized with 3 . Both threads will be passed the same object
of FooBar . t1 calls foo() & t2 calls bar() .
class FooBar {
private int n;
private int flag = 0;
public FooBar(int n) {
this.n = n;
}
}
}
System.out.print("Foo");
flag = 1;
this.notifyAll();
}
}
}
}
}
System.out.println("Bar");
flag = 0;
this.notifyAll();
}
}
}
}
FooBar fooBar;
String method;
t2.start();
t1.start();
}
}
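Filled in, the foo() and bar() methods described above might look like the following sketch, using the flag field and wait()/notifyAll() as in the text.

```java
class FooBar {

    private final int n;
    private int flag = 0;   // 0 => print "Foo" next, 1 => print "Bar" next

    public FooBar(int n) {
        this.n = n;
    }

    public synchronized void foo() throws InterruptedException {
        for (int i = 0; i < n; i++) {
            while (flag != 0) {
                wait();                 // not our turn yet
            }
            System.out.print("Foo");
            flag = 1;
            notifyAll();                // hand over to bar()
        }
    }

    public synchronized void bar() throws InterruptedException {
        for (int i = 0; i < n; i++) {
            while (flag != 1) {
                wait();                 // not our turn yet
            }
            System.out.println("Bar");
            flag = 0;
            notifyAll();                // hand over to foo()
        }
    }
}
```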
Printing Number Series (Zero, Even, Odd)
This problem is about repeatedly executing threads which print a speci c type of number. Another variation of this
problem; print even and odd numbers; utilizes two threads instead of three.
Problem
class PrintNumberSeries {
public PrintNumberSeries(int n) {
this.n = n;
}
You are required to write a program which takes a user input n and
outputs the number series using three threads. The three threads work
together to print zero, even and odd numbers. The threads should be
synchronized so that the functions PrintZero(), PrintOdd() and PrintEven()
are executed in order.
Solution
class PrintNumberSeries {
private int n;
private Semaphore zeroSem, oddSem, evenSem;
public PrintNumberSeries(int n) {
}
n is the user input that prints the series till nth number. The constructor
of this class appears below:
public PrintNumberSeries(int n) {
this.n = n;
zeroSem = new Semaphore(1);
oddSem = new Semaphore(0);
evenSem = new Semaphore(0);
}
oddSem.acquire();
System.out.print(i);
zeroSem.release();
}
}
import java.util.concurrent.*;
class PrintNumberSeries {
private int n;
private Semaphore zeroSem, oddSem, evenSem;
public PrintNumberSeries(int n) {
this.n = n;
zeroSem = new Semaphore(1);
oddSem = new Semaphore(0);
evenSem = new Semaphore(0);
}
zeroSem.release();
}
}
PrintNumberSeries zeo;
String method;
t2.start();
t1.start();
t3.start();
}
}
We were told to use three threads in the problem statement but the
solution can be achieved using two threads as well. Since zero is printed
before every number, we do not need to dedicate a special thread for it.
We can simply print a zero before printing every odd or even number.
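A complete sketch of the three-semaphore scheme follows. The out StringBuilder is an illustrative addition that records the printed sequence for testing; the semaphore handoff follows the constructor shown above.

```java
import java.util.concurrent.Semaphore;

class PrintNumberSeries {

    private final int n;
    private final Semaphore zeroSem = new Semaphore(1);
    private final Semaphore oddSem  = new Semaphore(0);
    private final Semaphore evenSem = new Semaphore(0);
    final StringBuilder out = new StringBuilder();   // records output for testing

    PrintNumberSeries(int n) {
        this.n = n;
    }

    void printZero() throws InterruptedException {
        for (int i = 1; i <= n; i++) {
            zeroSem.acquire();
            out.append("0");
            System.out.print("0");
            // hand off to the odd or even printer depending on i
            if (i % 2 == 1) {
                oddSem.release();
            } else {
                evenSem.release();
            }
        }
    }

    void printOdd() throws InterruptedException {
        for (int i = 1; i <= n; i += 2) {
            oddSem.acquire();
            out.append(i);
            System.out.print(i);
            zeroSem.release();
        }
    }

    void printEven() throws InterruptedException {
        for (int i = 2; i <= n; i += 2) {
            evenSem.acquire();
            out.append(i);
            System.out.print(i);
            zeroSem.release();
        }
    }
}
```

Because exactly one semaphore has a permit at any moment, the three threads alternate in strict 0-odd-0-even order with no explicit lock.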
Build a Molecule
This problem simulates the creation of a water molecule by grouping three threads representing
hydrogen and oxygen atoms.
Problem
class H2OMachine {
public H2OMachine() {
}
}
The input to the machine can be in any order. Your program should
enforce a 2:1 ratio for Hydrogen and Oxygen threads, and stop more than
the required number of threads from entering the machine.
Solution
The problem is solved by using basic utility functions like notify() and
wait(). The class consists of 3 private members: sync for synchronization,
molecule which is a string array with a capacity of 3 elements (atoms),
and count to store the current index into the molecule array.
class H2OMachine {
Object sync;
String[] molecule;
int count;
public H2OMachine() {
}
public H2OMachine() {
molecule = new String[3];
count = 0;
sync = new Object();
}
molecule[count] = "H";
count++;
}
}
Once the molecule is full and count is 3, the molecule is printed and the
thread exits the machine. The array molecule is reset (initialized with
null) and count goes back to 0 so a new molecule can be built. At the end
of the method, the waiting threads (atoms) are notified using
notifyAll(). The complete code for HydrogenAtom() is given below:
public void HydrogenAtom() {
synchronized (sync) {
molecule[count] = "H";
count++;
// if molecule is full, then exit.
if(count == 3) {
class H2OMachine {
Object sync;
String[] molecule;
int count;
public H2OMachine() {
molecule = new String[3];
count = 0;
sync = new Object();
}
molecule[count] = "H";
count++;
molecule[count] = "O";
count++;
H2OMachine molecule;
String atom;
}
catch (Exception e) {
}
}
else if ("O".equals(atom)) {
try {
molecule.OxygenAtom();
}
catch (Exception e) {
}
}
}
}
import java.util.Arrays;
import java.util.Collections;
class H2OMachine {
Object sync;
String[] molecule;
int count;
public H2OMachine() {
molecule = new String[3];
count = 0;
sync = new Object();
}
molecule[count] = "H";
count++;
molecule[count] = "O";
count++;
H2OMachine molecule;
String atom;
t2.start();
t1.start();
t4.start();
t3.start();
}
}
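A self-contained sketch of the machine described above follows. The frequency() and finishIfComplete() helper names are illustrative; the fields mirror the sync, molecule and count members from the text.

```java
class H2OMachine {

    private final Object sync = new Object();
    String[] molecule = new String[3];
    int count = 0;

    void hydrogenAtom() throws InterruptedException {
        synchronized (sync) {
            // wait while the current molecule already has two hydrogens
            while (frequency("H") == 2) {
                sync.wait();
            }
            molecule[count++] = "H";
            finishIfComplete();
        }
    }

    void oxygenAtom() throws InterruptedException {
        synchronized (sync) {
            // wait while the current molecule already has one oxygen
            while (frequency("O") == 1) {
                sync.wait();
            }
            molecule[count++] = "O";
            finishIfComplete();
        }
    }

    // counts how many slots of the current molecule hold the given atom
    private int frequency(String atom) {
        int f = 0;
        for (String s : molecule) {
            if (atom.equals(s)) f++;
        }
        return f;
    }

    // must be called while holding sync
    private void finishIfComplete() {
        if (count == 3) {
            System.out.println(String.join("", molecule));  // e.g. HHO
            molecule = new String[3];                       // reset for next molecule
            count = 0;
        }
        sync.notifyAll();
    }
}
```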
Fizz Buzz Problem
This problem explores a multi-threaded solution to the very common Fizz Buzz programming task
Problem
Suppose we have four threads t1, t2, t3 and t4. Thread t1 checks if the
number is divisible by 3 and prints fizz. Thread t2 checks if the number is
divisible by 5 and prints buzz. Thread t3 checks if the number is divisible
by both 3 and 5 and prints fizzbuzz. Thread t4 prints numbers that are
not divisible by 3 or 5. The workflow of the program is shown below:
class MultithreadedFizzBuzz {
private int n;
public MultithreadedFizzBuzz(int n) {
this.n = n;
}
For an input integer n, the program should output a string containing the
words fizz, buzz and fizzbuzz representing certain numbers. For
example, for n = 15, the output should be: 1, 2, fizz, 4, buzz, fizz, 7, 8, fizz,
buzz, 11, fizz, 13, 14, fizzbuzz.
Solution
We will solve this problem using the basic Java functions; wait() and
notifyAll() . The basic structure of the class is given below.
class MultithreadedFizzBuzz {
private int n;
private int num = 1;
public MultithreadedFizzBuzz(int n) {
}
public void fizzbuzz() {
}
public void fizz() {
}
public MultithreadedFizzBuzz(int n) {
this.n = n;
}
The second function in the class, fizz() prints "fizz" only if the current
number is divisible by 3. The first loop checks if num (current number) is
smaller than or equal to n (user input). Then num is checked for its
divisibility by 3. We check if num is divisible by 3 and not by 5 because
some multiples of 3 are also multiples of 5. If the condition is met, then
"fizz" is printed and num is incremented. The waiting threads are notified
via notifyAll() . If the condition is not met, the thread goes into wait() .
The next function buzz() works in the same manner as fizz(). The only
difference here is the check to see if num is divisible by 5 and not by 3.
The reasoning is the same: some multiples of 5 are also multiples of 3 and
notifyAll();
}
else {
wait();
}
}
}
class MultithreadedFizzBuzz {
private int n;
private int num = 1;
public MultithreadedFizzBuzz(int n) {
this.n = n;
}
MultithreadedFizzBuzz obj;
String method;
try {
obj.buzz();
}
catch (Exception e) {
}
}
else if ("FizzBuzz".equals(method)) {
try {
obj.fizzbuzz();
}
catch (Exception e) {
}
}
else if ("Number".equals(method)) {
try {
obj.number();
}
catch (Exception e) {
}
}
}
}
To test our solution, we will be making 4 threads: t1,t2, t3 and t4. Three
threads will check for divisibility by 3, 5 and 15 and print fizz, buzz, and
fizzbuzz accordingly. Thread t4 prints numbers that are not divisible by 3
or 5.
class MultithreadedFizzBuzz {
private int n;
private int num = 1;
public MultithreadedFizzBuzz(int n) {
this.n = n;
}
public synchronized void fizz() throws InterruptedException {
while (num <= n) {
if (num % 3 == 0 && num % 5 != 0) {
System.out.println("Fizz");
num++;
notifyAll();
} else {
wait();
}
}
}
MultithreadedFizzBuzz obj;
String method;
}
}
t2.start();
t1.start();
t4.start();
t3.start();
}
}
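Putting all four methods together, the complete class might look like the following sketch, consistent with the fizz() method shown above; buzz(), fizzbuzz() and number() follow the same wait()/notifyAll() pattern.

```java
class MultithreadedFizzBuzz {

    private final int n;
    private int num = 1;

    MultithreadedFizzBuzz(int n) {
        this.n = n;
    }

    public synchronized void fizz() throws InterruptedException {
        while (num <= n) {
            if (num % 3 == 0 && num % 5 != 0) {
                System.out.println("Fizz");
                num++;
                notifyAll();
            } else {
                wait();
            }
        }
    }

    public synchronized void buzz() throws InterruptedException {
        while (num <= n) {
            if (num % 5 == 0 && num % 3 != 0) {
                System.out.println("Buzz");
                num++;
                notifyAll();
            } else {
                wait();
            }
        }
    }

    public synchronized void fizzbuzz() throws InterruptedException {
        while (num <= n) {
            if (num % 15 == 0) {
                System.out.println("FizzBuzz");
                num++;
                notifyAll();
            } else {
                wait();
            }
        }
    }

    public synchronized void number() throws InterruptedException {
        while (num <= n) {
            if (num % 3 != 0 && num % 5 != 0) {
                System.out.println(num);
                num++;
                notifyAll();
            } else {
                wait();
            }
        }
    }
}
```

Each thread re-checks the loop condition after every wakeup, so once num passes n, every thread exits cleanly after the final notifyAll().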
Next Steps
Creating Threads
Runnable Interface
class Demonstration {
public static void main( String args[] ) {
Thread t = new Thread(new Runnable() {
class Demonstration {
public static void main( String args[] ) {
The second way to set up threads is to subclass the Thread class itself as
shown below.
class Demonstration {
public static void main( String args[] ) throws Exception {
ExecuteMe executeMe = new ExecuteMe();
executeMe.start();
executeMe.join();
}
}
class ExecuteMe extends Thread {
@Override
public void run() {
System.out.println("I ran after extending Thread class");
}
The con of the second approach is that one is forced to extend the Thread
class, which limits the code's flexibility. Passing in an object of a class
implementing the Runnable interface is usually the better choice.
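Since Runnable is a functional interface, the first approach can also be written with a Java 8 lambda. A minimal sketch (class name and message are illustrative, not from the lesson):

```java
class LambdaRunnableExample {
    public static void main(String[] args) throws InterruptedException {
        // Pass a lambda as the Runnable instead of subclassing Thread
        Thread t = new Thread(() -> System.out.println("I ran from a lambda Runnable"));
        t.start();
        t.join();
    }
}
```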
Joining Threads
class Demonstration {
public static void main( String args[] ) throws InterruptedException {
If we want the main thread to wait for the innerThread to finish before
proceeding forward, we can direct the main thread to suspend its
execution by calling join method on the innerThread object right after
we start the innerThread. The change would look like the following.
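The changed snippet isn't reproduced in this copy; it would amount to something like the following sketch (assuming innerThread wraps a simple Runnable):

```java
class JoinExample {
    public static void main(String[] args) throws InterruptedException {
        Thread innerThread = new Thread(() -> System.out.println("innerThread finished"));
        innerThread.start();
        innerThread.join(); // main suspends here until innerThread completes
        System.out.println("Main thread exiting.");
    }
}
```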
Daemon Threads
innerThread.setDaemon(true);
Note that if a spawned thread isn't marked as daemon, the JVM will wait for
it to finish before tearing down the process, even after the main thread has
finished execution.
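A compact sketch of this behavior (the infinite loop stands in for any long-running background work):

```java
class DaemonExample {
    public static void main(String[] args) {
        Thread innerThread = new Thread(() -> {
            while (true) {
                // background work that never finishes
            }
        });
        // Marked daemon: the JVM will exit without waiting for this thread
        innerThread.setDaemon(true);
        innerThread.start();
        System.out.println("Main thread exiting; daemon thread is abandoned");
    }
}
```

Without the setDaemon(true) call, this program would never terminate.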
Sleeping Threads
A thread can be made dormant for a specified period using the sleep
method. However, be wary of using sleep as a means of coordination
among threads; that is a common newbie mistake. The Java class library
offers other constructs for thread synchronization that'll be discussed
later.
class SleepThreadExample {
public static void main( String args[] ) throws Exception {
ExecuteMe executeMe = new ExecuteMe();
Thread innerThread = new Thread(executeMe);
innerThread.start();
innerThread.join();
System.out.println("Main thread exiting.");
}
    static class ExecuteMe implements Runnable {
        public void run() {
            try {
                // sleep for 1 second
                Thread.sleep(1000);
            } catch (InterruptedException ie) {
            }
        }
    }
}
In the above example, the innerThread is made to sleep for 1 second and
from the output of the program, one can see that main thread exits only
after innerThread is done processing. If we remove the join statement
on line-6, then the main thread may print its statement before
innerThread is done executing.
Interrupting Threads
In the previous code snippets, we wrapped the calls to join and sleep in
try/catch blocks. Imagine a situation where if a rogue thread sleeps
forever or goes into an infinite loop, it can prevent the spawning thread
from moving ahead because of the join call. Java allows us to force such
a misbehaved thread to come to its senses by interrupting it. An example
appears below.
class HelloWorld {
    public static void main( String args[] ) throws InterruptedException {
        ExecuteMe executeMe = new ExecuteMe();
        Thread innerThread = new Thread(executeMe);
        innerThread.start();

        // Interrupt innerThread, e.g. if it sleeps forever or loops infinitely
        innerThread.interrupt();
        innerThread.join();
    }
}
Executor Framework
Task
Executor Framework
In Java, the primary abstraction for executing logical units of work is the
Executor framework, not the Thread class. The classes in the Executor
framework separate out:
Task Submission
Task Execution
The framework allows us to specify different policies for task execution.
Java offers three interfaces, which classes can implement to manage
thread lifecycle. These are:
Executor Interface
ExecutorService
ScheduledExecutorService
The Executor interface forms the basis for the asynchronous task
execution framework in Java.
import java.util.concurrent.Executor;
class ThreadExecutorExample {
}
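The body of ThreadExecutorExample didn't survive in this copy; a minimal sketch of a direct Executor implementation, which runs each submitted task on a fresh thread (class name is illustrative), might look like:

```java
import java.util.concurrent.Executor;

// Illustrative: a naive Executor that spawns one thread per task
class ThreadPerTaskExecutor implements Executor {
    @Override
    public void execute(Runnable task) {
        new Thread(task).start();
    }
}

class ThreadExecutorExample {
    public static void main(String[] args) {
        Executor executor = new ThreadPerTaskExecutor();
        executor.execute(() -> System.out.println("task executed"));
    }
}
```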
A Dumb Thread Executor
Sequential Approach
The method simply accepts an order and tries to execute it. The method
blocks other requests till it has completed processing the current request.
void receiveAndExecuteClientOrders() {
while (true) {
Order order = waitForNextOrder();
order.execute();
}
}
You'd write the above code if you had never worked with concurrency. It
processes each buy order sequentially and will be neither responsive nor
deliver acceptable throughput.
void receiveAndExecuteClientOrdersConcurrently() {
    while (true) {
        final Order order = waitForNextOrder();
        Thread thread = new Thread(new Runnable() {
            public void run() { order.execute(); }
        });
        thread.start();
    }
}
Active threads consume memory even when idle. If there are fewer
processors than threads, then several of them will sit idle, tying up
memory.
Note that the above improvement may still make the application
unresponsive. Imagine if several hundred requests are received between
the time it takes for the method to receive an order request and spawn off
a thread to deal with the request. In such a scenario, the method will end
up with a growing backlog of requests and may cause the program to
crash.
This lesson introduces thread pools and their utility in concurrent programming.
Thread Pools
A thread pool can be tuned for the number of threads it holds. A thread
pool may also replace a thread that dies of an unexpected exception. Using
a thread pool immediately alleviates the ills of manually creating
threads:
The system will not go out of memory because threads are not
created without any limits
Fine tuning the thread pool will allow us to control the throughput of
the system. We can have enough threads to keep all processors busy
but not so many as to overwhelm the system.
The application will degrade gracefully if the system is under load.
Below is the updated version of the stock order method using a thread
pool.
void receiveAndExecuteClientOrdersBest() {
    while (true) {
        final Order order = waitForNextOrder();
        executor.execute(new Runnable() {
            public void run() { order.execute(); }
        });
    }
}
In the above code, we have used a factory method exposed by the
Executors class to get an instance of a thread pool. We discuss the
different types of thread pools available in Java in the next section.
Types of Thread Pools
This lesson details the different types of thread pools available in the Java class library.
There is also another kind of pool, which we'll mention only in passing as
it's not widely used: ForkJoinPool . A preconfigured version of it can be
instantiated using the factory method Executors.newWorkStealingPool() .
These pools are used for tasks that fork into smaller subtasks and then
join the results once the subtasks finish to produce a combined result. It's
essentially the divide-and-conquer paradigm applied to tasks.
Using thread pools we are able to control the order in which a task is
executed, the thread in which a task is executed, the maximum number of
tasks that can be executed concurrently, maximum number of tasks that
can be queued for execution, the selection criteria for rejecting tasks
when the system is overloaded and finally actions to take before or after
execution of tasks.
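The commonly used pool flavors are obtained from the Executors factory class. A sketch of the factory methods (pool sizes here are arbitrary examples):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

class PoolTypesExample {
    public static void main(String[] args) {
        // Fixed number of threads; tasks queue up when all workers are busy
        ExecutorService fixed = Executors.newFixedThreadPool(5);
        // Grows on demand; idle threads are reclaimed after a timeout
        ExecutorService cached = Executors.newCachedThreadPool();
        // A single worker executing tasks sequentially
        ExecutorService single = Executors.newSingleThreadExecutor();
        // Supports delayed and periodic task execution
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        fixed.shutdown();
        cached.shutdown();
        single.shutdown();
        scheduled.shutdown();
    }
}
```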
Executor Lifecycle
Running
Shutting Down
Terminated
As mentioned earlier, the JVM can't exit unless all non-daemon threads have
terminated. Executors can be shut down either abruptly or
gracefully. In the former case, the executor attempts to cancel all
tasks in progress and doesn't work on any enqueued ones; in the latter, the
executor gives tasks already in execution a chance to complete and also
works through the enqueued tasks. Once shutdown is initiated, the executor
refuses to accept new tasks; if any are submitted, they can be handled by
providing a RejectedExecutionHandler .
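The two shutdown styles map onto the shutdown() and shutdownNow() methods; a common idiom (the timeout value is illustrative) combines them:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ShutdownExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        executor.execute(() -> System.out.println("task ran"));

        executor.shutdown(); // graceful: finish queued tasks, accept no new ones
        if (!executor.awaitTermination(1, TimeUnit.SECONDS)) {
            // abrupt: interrupt in-progress tasks and drop the queue
            executor.shutdownNow();
        }
    }
}
```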
An Example: Timer vs ScheduledThreadPool
Timer
The Achilles' heel of the Timer class is its use of a single worker thread
to execute all user-submitted tasks. Issues with this approach are detailed
below:
If a task misbehaves and never terminates, all other tasks would not
be executed
class Demonstration {
public static void main( String args[] ) throws Exception {
Timer timer = new Timer();
TimerTask badTask = new TimerTask() {
@Override
public void run() {
// run forever
while (true)
;
}
        };

        TimerTask goodTask = new TimerTask() {

            @Override
            public void run() {
                // never gets to run because badTask never terminates
            }
        };
        timer.schedule(badTask, 100);
        timer.schedule(goodTask, 500);
import java.util.Timer;
import java.util.TimerTask;
class Demonstration {
    public static void main( String args[] ) throws Exception {
        Timer timer = new Timer();

        TimerTask badTask = new TimerTask() {

            @Override
            public void run() {
                throw new RuntimeException("Something Bad Happened");
            }
        };

        TimerTask goodTask = new TimerTask() {

            @Override
            public void run() {
                System.out.println("Hello I am a well-behaved task");
            }
        };

        timer.schedule(badTask, 10);
        Thread.sleep(500);
        timer.schedule(goodTask, 10);
}
}
Callable Interface
Callable Interface
Note the interface also allows a task to throw an exception. A task goes
through the various stages of its life which include the following:
created
submitted
started
completed
Let's say we want to compute the sum of numbers from 1 to n. Our task
should accept an integer n and return the sum. Below are two ways to
represent the task using the Callable interface: a dedicated class, or an
anonymous class.
class SumTask implements Callable<Integer> {

    int n;

    public SumTask(int n) {
        this.n = n;
    }

    public Integer call() throws Exception {
        if (n <= 0)
            return 0;
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }
}
final int n = 10;
Callable<Integer> sumTask = new Callable<Integer>() {
Now we know how to represent our tasks using the Callable interface. In
the next section we'll explore the Future interface which will help us
manage a task's lifecycle as well as retrieve results from it.
Future Interface
Future Interface
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
Future<Integer> f = threadPool.submit(sumTask);
return f.get();
}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
Future<Integer> f = threadPool.submit(sumTask);
try {
result = f.get();
} catch (ExecutionException ee) {
System.out.println("Something went wrong. " + ee.getCause());
}
return result;
}
On line 31 of the above code, we make a get method call. The method
throws an execution exception, which we catch. The reason for the
exception can be determined by using the getCause method of the
execution exception. If you run the above snippet, you'll see it prints the
runtime exception that we throw on line 24.
The get method is a blocking call. It'll block till the task completes. We
can also write a polling version, where we poll periodically to check if the
task is complete or not. Future also allows us to cancel tasks. If a task has
been submitted but not yet executed, then it'll be cancelled. However, if a
task is currently running, then it may or may not be cancellable. We'll
discuss cancelling tasks in detail in future lessons.
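A sketch of the polling version using isDone() (the sleep interval and task are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class PollingExample {
    public static void main(String[] args) throws Exception {
        ExecutorService threadPool = Executors.newSingleThreadExecutor();
        Future<Integer> f = threadPool.submit(() -> {
            Thread.sleep(100); // simulate a slow task
            return 42;
        });
        // Poll periodically instead of blocking indefinitely on get()
        while (!f.isDone()) {
            System.out.println("Waiting for task to complete");
            Thread.sleep(50);
        }
        System.out.println("Result: " + f.get());
        threadPool.shutdown();
    }
}
```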
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
int sum = 0;
for (int i = 1; i <= n; i++)
sum += i;
return sum;
}
};
Future<Integer> f1 = threadPool.submit(sumTask1);
Future<Void> f2 = threadPool.submit(randomTask);
return result;
}
}
On line 45, the second task submitted doesn't return any value, so the
future is parametrized with Void .
On line 52, we cancel the second task. Since our thread pool consists
of a single thread and the first task sleeps for a bit before it starts
executing, we can assume that the second task will not have started
executing and can be cancelled. This is verified by checking for and
printing the value of the isCancelled method later in the program.
On lines 56 - 58, we repeatedly poll for the status of the first task.
The final output of the program shows messages from polling and the
status of the second task cancellation request.
FutureTask
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
class Demonstration {
@SuppressWarnings("unchecked")
public static void main( String args[] ) throws Exception{
if(duplicateFuture.isDone() != futureTask.isDone()){
System.out.println("This should never happen.");
}
System.out.println((int)futureTask.get());
threadPool.shutdown();
}
}
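The snippet above lost its setup in this copy; a self-contained sketch of the same idea — a FutureTask is both a Runnable (so an executor can run it) and a Future (so the result can be retrieved) — might look like:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

class FutureTaskExample {
    public static void main(String[] args) throws Exception {
        // Wrap a Callable in a FutureTask
        FutureTask<Integer> futureTask = new FutureTask<>(() -> 3 + 4);

        ExecutorService threadPool = Executors.newSingleThreadExecutor();
        threadPool.execute(futureTask);       // submitted as a Runnable
        System.out.println(futureTask.get()); // read as a Future; prints 7
        threadPool.shutdown();
    }
}
```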
CompletionService Interface
CompletionService Interface
import java.util.Random;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
int n;
public TrivialTask(int n) {
this.n = n;
}
threadPool.shutdown();
}
}
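The widget code for this lesson is fragmentary in this copy; a minimal sketch of ExecutorCompletionService, whose take() hands back futures in completion order rather than submission order (task delays are illustrative), is:

```java
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class CompletionServiceExample {
    public static void main(String[] args) throws Exception {
        ExecutorService threadPool = Executors.newFixedThreadPool(3);
        ExecutorCompletionService<Integer> service =
                new ExecutorCompletionService<>(threadPool);

        // Submit tasks with varying delays
        for (int i = 1; i <= 3; i++) {
            final int delay = i * 50;
            service.submit(() -> {
                Thread.sleep(delay);
                return delay;
            });
        }
        // take() blocks until some task completes, whichever finishes first
        for (int i = 0; i < 3; i++) {
            Future<Integer> done = service.take();
            System.out.println("Completed task with delay: " + done.get());
        }
        threadPool.shutdown();
    }
}
```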
ThreadLocal
ThreadLocal
void process(int val) {
    int count = 5;
    count += val;
    System.out.println(val);
}
Do you think the above method is thread-safe? If multiple threads call this
method, then each executing thread will create a copy of the local
variables on its own thread stack. There would be no shared variables
amongst the threads and the instance method by itself would be thread-
safe.
However, if we moved the count variable out of the method and declared
it as an instance variable then the same code will not be thread-safe.
We can have a copy of an instance (or a class) variable for each thread
that accesses it by declaring the instance variable ThreadLocal. Look at
the thread unsafe code below. If you run it multiple times, you'll see
different results. The count variable is incremented 100 times by 100
threads so in a thread-safe world the final value of the variable should
come out to be 10,000.
import java.util.concurrent.Executors;
class Demonstration {
public static void main( String args[] ) throws Exception{
System.out.println(usc.count);
}
}
class UnsafeCounter {
// Instance variable
int count = 0;
void increment() {
count = count + 1;
}
}
Now we'll change the code to make the instance variable threadlocal. The
change is:
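The snippet showing the change didn't survive in this copy; based on the code that follows, it amounts to something like this sketch (class name is illustrative):

```java
// The instance variable becomes a per-thread value via ThreadLocal
class ThreadSafeCounter {

    ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    void increment() {
        // Only the calling thread's copy of the counter changes
        counter.set(counter.get() + 1);
    }
}
```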
ThreadLocal variables get tricky when used with an executor service
(thread pool), since pool threads don't terminate and are instead returned
to the pool, so their thread-local values aren't garbage collected. For
interesting scenarios, please see Quiz 8.
class Demonstration {
public static void main( String args[] ) throws Exception{
UnsafeCounter usc = new UnsafeCounter();
Thread[] tasks = new Thread[100];
System.out.println(usc.counter.get());
});
tasks[i] = t;
t.start();
}
System.out.println(usc.counter.get());
}
}
class UnsafeCounter {

    ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    void increment() {
        counter.set(counter.get() + 1);
    }
}
CountDownLatch
CountDownLatch
If the CountDownLatch is initialized with zero, the thread would not wait
for any other thread(s) to complete. The count passed is basically the
number of times countDown() must be invoked before threads can pass
through await() . If the CountDownLatch has reached zero and countDown()
is again invoked, the latch will remain released hence making no
difference.
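As a compact, runnable sketch of this behavior (worker names and messages are illustrative) before the full Worker/Master example below:

```java
import java.util.concurrent.CountDownLatch;

class LatchSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);

        Runnable worker = () -> {
            System.out.println(Thread.currentThread().getName() + " done");
            latch.countDown(); // each worker decrements the count once
        };
        new Thread(worker, "A").start();
        new Thread(worker, "B").start();

        latch.await(); // blocks until the count reaches zero
        System.out.println("Master proceeds");
    }
}
```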
Two workers, A & B, are being executed concurrently (two back to back
threads initiated) while the master thread waits for them to finish. Every
time a worker completes execution, the counter in the CountDownLatch is
decremented by 1. Once all the workers have completed execution, the
counter reaches 0 and notifies the threads blocked on the await()
method. Subsequently, the latch opens and allows the master thread to
run.
/**
* The worker thread that has to complete its tasks first
*/
public class Worker extends Thread
{
    private CountDownLatch countDownLatch;

    public Worker(CountDownLatch countDownLatch, String name)
    {
        super(name);
        this.countDownLatch = countDownLatch;
    }
@Override
public void run()
{
        System.out.println("Worker " + Thread.currentThread().getName() + " started");
try
{
Thread.sleep(3000);
}
catch (InterruptedException ex)
{
ex.printStackTrace();
}
        System.out.println("Worker " + Thread.currentThread().getName() + " finished");
        countDownLatch.countDown();
    }
}
/**
 * The master thread that has to wait for the worker to complete its operations first
 */
public class Master extends Thread
{
public Master(String name)
{
super(name);
}
@Override
public void run()
{
        System.out.println("Master executed " + Thread.currentThread().getName());
try
{
Thread.sleep(2000);
}
catch (InterruptedException ex)
{
ex.printStackTrace();
}
}
}
/**
* The main thread that executes both the threads in a particular ord
er
*/
public class Main
{
    public static void main(String[] args) throws InterruptedException
    {
        //Created CountDownLatch for 2 threads
        CountDownLatch countDownLatch = new CountDownLatch(2);
        Worker A = new Worker(countDownLatch, "A");
        Worker B = new Worker(countDownLatch, "B");
        A.start();
        B.start();
        //When the two threads (A and B) complete their tasks, the counter reaches 0
        countDownLatch.await();
CyclicBarrier
CyclicBarrier
/**
* Runnable task for each thread.
*/
class Task implements Runnable {
/**
* Main thread that demonstrates how to use CyclicBarrier.
*/
public class Main {
public static void main (String args[]) {
t1.start();
t2.start();
t3.start();
}
}
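The Task/Main classes above are fragmentary in this copy; a self-contained sketch of the same CyclicBarrier idea (thread names and messages are illustrative) is:

```java
import java.util.concurrent.CyclicBarrier;

class BarrierSketch {
    public static void main(String[] args) {
        // The barrier action runs once, after all three parties arrive
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("All parties arrived at the barrier"));

        Runnable task = () -> {
            try {
                System.out.println(Thread.currentThread().getName() + " waiting");
                barrier.await(); // each thread blocks here until all arrive
            } catch (Exception e) {
                e.printStackTrace();
            }
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
        new Thread(task, "t3").start();
    }
}
```

Unlike a CountDownLatch, the barrier resets after tripping and can be reused for successive rounds.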
Concurrent Collections
This lesson gives a brief introduction about Java's concurrent collection classes.
Concurrent Collections
CopyOnWrite Example
Let's take an example with a regular ArrayList alongside a
CopyOnWriteArrayList . We'll measure the time it takes to add an item to
an already initialized list. The output from running the code widget
below demonstrates that the CopyOnWriteArrayList takes much more time
than a regular ArrayList because, under the hood, all the elements of the
CopyOnWriteArrayList object get copied on every insert, making the
operation that much more expensive.
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.*;
/**
* Java program to illustrate CopyOnWriteArrayList
*/
public class main
{
public static void main(String[] args)
throws InterruptedException
{
        //Initializing a regular ArrayList with 500,000 numbers
        ArrayList<Integer> array_list = new ArrayList<>();
        array_list.ensureCapacity(500000);
        for (int i = 0; i < 500000; i++)
            array_list.add(i);

        //Initializing a CopyOnWriteArrayList with the same 500,000 numbers
        CopyOnWriteArrayList<Integer> numbers = new CopyOnWriteArrayList<>(array_list);
}
}
Quiz 1
Question # 1
A thread has some state private to itself but threads of a process can
share the resources allocated to the process including memory
address space.
Question # 2
Given the below code, can you identify what the coder missed?
};
threadPool.submit(sumTask);
f.get();
}
The above code forgets to shut down the executor thread pool. The thread
pool, when instantiated, creates 5 worker threads. If we don't shut down
the executor when exiting the main method, the JVM won't exit either: it
keeps waiting for the pool's worker threads to finish, since they aren't
marked as daemon. As an example, execute the below code snippet.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
public static void main( String args[] ) throws Exception {
ExecutorService threadPool = Executors.newFixedThreadPool(5);
threadPool.submit(someTask).get();
The above program execution will show execution timed out, even though
both the string messages are printed. You can fix the above code by
adding threadPool.shutdown() as the last line of the method.
Question # 3
class ThreadsWithLambda {
void compute(Runnable r) {
System.out.println("Runnable invoked");
r.run();
}
The lambda expression returns the string done; therefore, the compiler
matches the call to the second compute method and the expression is
considered an instance of the Callable interface. You can run the below
snippet and verify the output to convince yourself.
import java.util.concurrent.Callable;
class Demonstration {
public static void main( String args[] ) throws Exception{
(new LambdaTargetType()).getWorking();
}
}
class LambdaTargetType {
public void getWorking() throws Exception {
compute(() -> "done");
}
    void compute(Runnable r) {
        System.out.println("Runnable invoked");
        r.run();
    }
    void compute(Callable<String> c) throws Exception {
        System.out.println("Callable invoked");
        c.call();
    }
}
Question # 4
Question # 5
Given the code snippet below, how many times will the innerThread
print its messages?
innerThread.start();
System.out.println("Main thread exiting");
}
Question # 6
Given the below code snippet how many messages will the
innerThread print?
innerThread.setDaemon(true);
innerThread.start();
System.out.println("Main thread exiting");
}
Question # 7
Say your program takes exactly 10 minutes to run. After reading this
course, you become excited about introducing concurrency in your
program. However, you only use two threads in your program.
Holding all other variables constant, what is the minimum time your
improved program can theoretically run in?
Question # 8
Quiz 2
Question # 1
Question # 2
class Sum {

    int getSum(int... vals) {
        int total = 0;
        for (int i = 0; i < vals.length; i++) {
            total += vals[i];
        }
        return total;
    }
}
The class Sum is stateless i.e. it doesn't have any member variables. All
stateless objects and their corresponding classes are thread-safe. Since the
actions of a thread accessing a stateless object can't affect the correctness
of operations in other threads, stateless objects are thread-safe.
However, note that the method takes in variable arguments, and the
class wouldn't be thread-safe anymore if the passed-in argument were
an array instead of individual integer variables and, at the same
time, the sum method performed a write operation on the passed-in
array.
Question # 3
Question # 4
Given the following code snippet, can you work out a scenario that
causes a race condition?
1. class HitCounter {
2.
3. long count = 0;
4.
5. void hit() {
6. count++;
7. }
8.
9. long getHits() {
10. return this.count;
11. }
}
1. Say count = 7
2. Thread A invokes hit() and reads count as 7
3. Thread A computes count + 1 = 8 but is context-switched out before
   writing the result back
4. Thread B invokes hit(), reads count as 7, computes 8, and writes it back
5. Thread A is scheduled again and writes its stale computed value 8 to count
6. The net effect is count ends up with a value 8 when it should have
   been 9. This is an example of a read-modify-write type of race
   condition.
Question # 5
Given the following code snippet, can you work out a scenario that
causes a race condition?
1. class MySingleton {
2.
3. MySingleton singleton;
4.
5. private MySingleton() {
6. }
7.
8. MySingleton getInstance() {
9. if (singleton == null)
10. singleton = new MySingleton();
11.
12. return singleton;
13. }
14. }
This is the classic problem in Java for creating a singleton object. The
following sequence will result in a race condition:
1. Thread A reaches line#9, finds the singleton object null and
   proceeds to line#10
2. Before thread A executes line#10, thread B gets scheduled, also finds
   singleton null at line#9, and proceeds to line#10
3. Both threads construct an instance, and the two threads may end up
   holding two different objects of the supposedly singleton class
Quiz 3
Question # 1
t.start();
t.join();
class Demonstration {
public static void main( String args[]) throws Exception {
t.start();
t.join();
}
}
Question # 2
        // Anonymous class
        Callable<Void> task = new Callable<Void>() {

            @Override
            public Void call() throws Exception {
                System.out.println("Using callable indirectly with instance of thread class");
                return null;
            }
        };
        // Anonymous class
        Callable<Void> task = new Callable<Void>() {

            @Override
            public Void call() throws Exception {
                System.out.println("Using callable indirectly with instance of thread class");
                return null;
            }
        };

        ExecutorService executorService = Executors.newFixedThreadPool(5);
        executorService.submit(task);
        executorService.shutdown();
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
class Demonstration {
public static void main( String args[] ) throws Exception {
usingExecutorService();
usingThread();
@Override
public Void call() throws Exception {
System.out.println("Using callable with executor service.");
return null;
}
};
@Override
public Void call() throws Exception {
System.out.println("Using callable indirectly with instance of thread class");
return null;
}
};
}
Question # 3
We can extend from the Thread class to represent our task. Below is an
example of a class that computes the square roots of given numbers. The
Task class encapsulates the logic for the task being performed.
T item;
class Demonstration {
public static void main( String args[] ) throws Exception{
for(int i = 0;i<10;i++) {
tasks[i].join();
}
}
}
T item;
Quiz 4
Question # 1
Synchronized blocks guarded by the same lock will execute one at a time.
These blocks can be thought of as being executed atomically. Locks
provide serialized access to the code paths they guard.
class ContactBook {
Question # 2
void doubleSynchronization() {
synchronized (this) {
synchronized (this) {
System.out.println("Is this line unreachable ?");
}
}
}
Question # 3
Consider the below class which has a synchronized method. Can you
tell what object does the thread invoking the addName() method
synchronize on?
class ContactBook {
Question # 4
Quiz 5
Question # 1
class Sum {

    int count = 0;

    int getSum(int... vals) {
        count++;
        int total = 0;
        for (int i = 0; i < vals.length; i++) {
            total += vals[i];
        }
        return total;
    }

    void printInvocations() {
        System.out.println(count);
    }
}
Question # 2
What are the different ways in which we can make the Sum class
thread-safe?
class Sum {

    AtomicInteger count = new AtomicInteger(0);

    int getSum(int... vals) {
        count.getAndIncrement();
        int total = 0;
        for (int i = 0; i < vals.length; i++) {
            total += vals[i];
        }
        return total;
    }

    void printInvocations() {
        System.out.println(count.get());
    }
}
We can also fix the Sum class by synchronizing on the object instance.
Using Synchronization on this
class Sum {
    int count = 0;

    synchronized int getSum(int... vals) {
        count++;
        int total = 0;
        for (int i = 0; i < vals.length; i++) {
            total += vals[i];
        }
        return total;
    }

    synchronized void printInvocations() {
        System.out.println(count);
    }
}
We could also use another object other than this for synchronization.
The code would then be as follows:
class Sum {
    int count = 0;
    Object lock = new Object();

    int getSum(int... vals) {
        synchronized (lock) {
            count++;
        }
        int total = 0;
        for (int i = 0; i < vals.length; i++) {
            total += vals[i];
        }
        return total;
    }

    void printInvocations() {
        synchronized (lock) {
            System.out.println(count);
        }
    }
}
Question # 3
In the above question, when we fixed the Sum class for thread safety
we synchronized the printInvocations() method. What will happen if
we didn't synchronize the printInvocations() method?
Question # 4
int total = 0;
for (int i = 0; i < vals.length; i++) {
total += vals[i];
}
return total;
}
Quiz 6
Question # 1
class VisibilityDemo {
    int myvalue = 2;
    boolean done = false;

    void thread1() {
        while (!done);
        System.out.println(myvalue);
    }

    void thread2() {
        myvalue = 5;
        done = true;
    }
}
We create an object of the above class and have two threads run
each of the two methods like so:
thread1.start();
thread2.start();
thread1.join();
thread2.join();
Question # 2
Will the following change guarantee that thread1 sees the changes
made to shared variables by thread2?
void thread1() {
synchronized (this) {
while (!done);
System.out.println(myvalue);
}
}
void thread2() {
myvalue = 5;
done = true;
}
}
Question # 3
int myvalue = 2;
boolean done = false;

void thread1() throws InterruptedException {
    synchronized (this) {
        while (!done)
            this.wait();
        System.out.println(myvalue);
    }
}
void thread2() {
synchronized (this) {
myvalue = 5;
done = true;
this.notify();
}
}
}
Question # 4
Question # 5
int myvalue = 2;
volatile boolean done = false;
void thread1() {
    while (!done);
    System.out.println(myvalue);
}

void thread2() {
    myvalue = 5;
    done = true;
}
}
It is intuitive to think that declaring just the boolean flag volatile will
prevent the infinite loop, but that the latest value of myvalue may not get
printed, since myvalue itself is not declared volatile. However, that is not
true: a write to a volatile variable establishes a happens-before
relationship with a subsequent read of it, so all writes made before setting
done become visible, and we can get away with declaring only the boolean
flag volatile. Note that declaring both shared variables volatile is
acceptable too.
Question # 6
Thread 2 changes the value of myvalue to 5 and sets the volatile flag
done to true
Question # 7
When locking isn't required for reading the variable, or the
variable doesn't participate in maintaining an invariant with other state
variables
Quiz 7
Question # 1
Can you enumerate the implications of the poor design choice for the
below class?
The above class is a bad design choice for the following reasons:
import java.io.File;
class Demonstration {
public static void main( String args[] ) throws Exception {
BadClassDesign bcd = (new BadClassDesign());
}
}
class BadClassDesign {
// Private field
private File file;
Question # 2
All local variables live on the executing thread's stack and are confined to
the executing thread. This intrinsically makes such a snippet of code
thread-safe. For instance, consider the following instance method of a class:
int getSum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}
Primitive local types are always stack confined but care has to be
exercised when dealing with local reference types as returning them from
methods or storing a reference to them in shared variables can allow
simultaneous manipulation by multiple threads thus breaking stack
confinement.
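A minimal sketch of such an escape (class and field names are illustrative): the list is stack-confined until it is stored in a shared field or returned.

```java
import java.util.ArrayList;
import java.util.List;

class EscapeExample {
    // Shared field: storing a local reference here breaks stack confinement
    static List<Integer> leaked;

    static List<Integer> makeList(int n) {
        List<Integer> local = new ArrayList<>(); // confined while only local
        for (int i = 1; i <= n; i++) {
            local.add(i);
        }
        leaked = local; // now other threads can reach and mutate it
        return local;   // returning it likewise publishes the reference
    }
}
```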
Quiz 8
Question # 1
class Counter {

    ThreadLocal<Integer> counter = new ThreadLocal<>();

    public Counter() {
        counter.set(10);
    }

    void increment() {
        counter.set(counter.get() + 1);
    }
}
Question # 2
Given the same Counter class as in the previous question, what is the
output of println statement below:
return counter.counter.get();
});
}
es.shutdown();
}
Question # 3
What would have been the output of the print statement from the
previous question if we created a pool with 20 threads?
The code for all three scenarios discussed above appears below.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
public static void main( String args[] ) throws Exception {
usingThreads();
usingSingleThreadPool();
usingMultiThreadsPool();
}
System.out.println(counter.counter.get());
}
@SuppressWarnings("unchecked")
static void usingSingleThreadPool() throws Exception {
return counter.counter.get();
});
}
System.out.println(tasks[99].get());
es.shutdown();
}
@SuppressWarnings("unchecked")
static void usingMultiThreadsPool() throws Exception {
return counter.counter.get();
});
}
System.out.println(tasks[99].get());
es.shutdown();
}
class Counter {

    ThreadLocal<Integer> counter = new ThreadLocal<>();

    public Counter() {
        counter.set(0);
    }

    void increment() {
        counter.set(counter.get() + 1);
    }
}
Question # 4
int countTo100() {
return count.get();
ExecutorService es = Executors.newFixedThreadPool(1);
Future<Integer>[] tasks = new Future[100];
What would be the output of the print statement for the 100 tasks?
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
class Demonstration {
@SuppressWarnings("unchecked")
public static void main( String args[] ) throws Exception {
ExecutorService es = Executors.newFixedThreadPool(1);
Future<Integer>[] tasks = new Future[100];
es.shutdown();
}
count.set(count.get() + 1);
return count.get();
}
}
Question # 5
int countTo100() {
return count.get();