Java Multithreading for Senior Engineering Interviews Part II
Reordering Effects
Different processor architectures have different policies for when an individual
processor’s cache is reconciled with main memory, and a processor may relax its
memory coherence guarantees in favor of better performance. The architecture’s
memory model specifies the guarantees a program can expect in such an
environment, along with the special instructions required to get additional
memory coordination guarantees when data is shared among threads. These
instructions are usually called memory fences or barriers. The Java developer,
however, can rely on the JVM to interface with the underlying platform’s memory
model through its own memory model, the JMM (Java Memory Model), and insert
these platform-specific instructions appropriately. Conversely, the JVM relies
on the developer to identify shared data through the use of proper
synchronization.
[Java Developer]
  ├─> Proper synchronization used
  └─> Improper synchronization
Elements of a set exhibit partial ordering when they possess transitivity
and asymmetry but not totality. As an example, think about your family tree.
Your father is your ancestor, and your grandfather is your father’s ancestor. By
transitivity, your grandfather is also your ancestor. However, neither your
father nor your grandfather is an ancestor of your mother; in that sense, the
two are incomparable.
The JMM is defined in terms of actions, which can be any of the following:
reads and writes of variables, locks and unlocks of monitors, and starting and
joining of threads. The JMM enforces a happens-before ordering on these
actions. When an action A happens-before an action B, A is guaranteed to be
ordered before B and its effects are visible to B. Reordering tricks are
harmless in a single-threaded program, but all hell breaks loose when we
introduce another thread that shares the data being read or written in the
writerThread method.
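To make the danger concrete, here is a minimal sketch of that writer/reader scenario. The class and method names (ReorderingDemo, writerThread, readerThread) are illustrative; the point is that without synchronization the JMM permits the reader to observe the writes out of order, or never at all.
public class ReorderingDemo {
    // No synchronization and no volatile: the JMM provides no happens-before
    // edge between the writer and the reader.
    private static int data = 0;
    private static boolean ready = false;

    private static void writerThread() {
        data = 42;      // (1)
        ready = true;   // (2) may be reordered with (1), or become visible first
    }

    private static void readerThread() {
        while (!ready) {
            Thread.yield();
        }
        // Without a happens-before edge, this may legally print 0 instead of 42,
        // and the loop above may never even terminate.
        System.out.println(data);
    }

    public static void main(String[] args) {
        new Thread(ReorderingDemo::writerThread).start();
        new Thread(ReorderingDemo::readerThread).start();
    }
}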
workerThread.start(); // Thread start rule: everything the parent thread did before start() is visible to workerThread
workerThread.join();  // Thread join rule: everything workerThread did is visible to the parent once join() returns
6. Finalizer Rule
The constructor for an object happens-before the start of the finalizer for
that object. This ensures that an object’s finalizer sees the fully constructed
state of the object.
// Illustrative wrapper class (name assumed) demonstrating the finalizer rule:
class Resource {
    private final int data = 42;   // written during construction

    @Override
    protected void finalize() throws Throwable {
        System.out.println(data);  // Finalizer reads data; guaranteed to see the constructed value
        super.finalize();
    }
}
workerThread.start();
workerThread.interrupt(); // Interruption rule: the call to interrupt() happens-before
                          // the interrupted thread detects the interrupt
    }
}
This implies that any memory operations which were visible to a thread
before exiting a synchronized block are visible to any thread after it enters a
synchronized block protected by the same monitor, since all the memory
operations happen before the release, and the release happens before the
acquire. Exiting a synchronized block causes the cache to be flushed to the
main memory so that the writes made by the exiting thread are visible to
other threads. Similarly, entering a synchronized block has the effect of
invalidating the local processor cache and reloading of variables from the
main memory so that the entering thread is able to see the latest values.
All this means is that when the writerThread releases the monitor, whatever
shared variables it has manipulated up to that point will have their latest
values visible to the readerThread as soon as it acquires the same monitor. If
the readerThread acquires a different monitor, there is no happens-before
relationship and it may or may not see the latest values of the shared variables.
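A minimal sketch of that guarantee, with illustrative names (MonitorVisibility, sharedMonitor, sharedCounter): both threads synchronize on the same object, so the writer’s release happens-before the reader’s subsequent acquire.
public class MonitorVisibility {
    private static final Object sharedMonitor = new Object();
    private static int sharedCounter = 0;

    public static void main(String[] args) {
        Thread writerThread = new Thread(() -> {
            synchronized (sharedMonitor) {
                sharedCounter = 42;        // write made while holding the monitor
            }                              // release: everything written so far is published
        });

        Thread readerThread = new Thread(() -> {
            synchronized (sharedMonitor) { // acquire of the SAME monitor
                // If the writer's release happened first, 42 is guaranteed to be visible here.
                // Synchronizing on a different object would give no such guarantee.
                System.out.println(sharedCounter);
            }
        });

        writerThread.start();
        readerThread.start();
    }
}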
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
producerThread.start();
consumerThread.start();
}
}
@Override
public void run() {
try {
for (int i = 1; i <= produceCount; i++) {
queue.put(i);
System.out.println("Produced: " + i);
Thread.sleep(100); // Simulate time taken to produce an item
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
@Override
public void run() {
try {
for (int i = 1; i <= consumeCount; i++) {
Integer item = queue.take();
System.out.println("Consumed: " + item);
Thread.sleep(150); // Simulate time taken to process an item
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Thread Sleep: Thread.sleep() calls are used to simulate the time taken to
produce and consume items, allowing observation of the blocking
behavior.
Considerations:
Thread Interruption: Both put() and take() methods can throw
InterruptedException if the thread is interrupted while waiting. Proper
handling of this exception is essential to ensure thread termination or
other interruption policies are respected.
Blocking queues are a powerful tool for coordinating work between multiple
threads, simplifying the implementation of producer-consumer patterns by
handling synchronization internally.
For a visual explanation and further insights into BlockingQueue in Java, you
might find the following video helpful:
Java BlockingQueue
Let’s see what the implementation would look like if we were restricted to
using a mutex. There’s no direct equivalent of a theoretical mutex in Java,
since each object already has an implicit monitor associated with it. For this
question, we’ll use a ReentrantLock and pretend it offers only mutual exclusion,
without the ability to wait on or signal a condition, just like a theoretical
mutex. Without the ability to wait or signal, the implication is that a blocked
thread must repeatedly poll in a loop for a predicate/condition to become true
before making progress. This is an example of a busy-wait solution, sketched
below after the outline of the producer and consumer loops.
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
producer.start();
consumer.start();
}
Producer Thread:
If the queue is full, it releases the lock, yields the processor to allow other
threads to execute, and then reacquires the lock to check the condition
again. This loop continues until space becomes available.
Once space is available, it adds an item to the queue and releases the
lock.
Consumer Thread:
If the queue is empty, it releases the lock, yields the processor, and then
reacquires the lock to check the condition again. This loop continues
until an item is available.
Once an item is available, it removes the item from the queue and
releases the lock.
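Here is one possible busy-wait sketch of the two loops just described, using an illustrative BusyWaitBuffer class; ReentrantLock is used purely for mutual exclusion, never for waiting or signaling.
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;

public class BusyWaitBuffer<T> {
    private final Queue<T> queue = new LinkedList<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock(); // mutual exclusion only

    public BusyWaitBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void enqueue(T item) {
        while (true) {
            lock.lock();
            try {
                if (queue.size() < capacity) { // space available: add and return
                    queue.add(item);
                    return;
                }
            } finally {
                lock.unlock();                 // queue full: release before retrying
            }
            Thread.yield();                    // give other threads a chance to run
        }
    }

    public T dequeue() {
        while (true) {
            lock.lock();
            try {
                if (!queue.isEmpty()) {        // item available: remove and return
                    return queue.poll();
                }
            } finally {
                lock.unlock();                 // queue empty: release before retrying
            }
            Thread.yield();
        }
    }
}
The Thread.yield() call is only a hint to the scheduler; the loops still burn CPU while waiting, which is exactly the drawback of a busy-wait design.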
Considerations:
Alternative Approach:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
producer.start();
consumer.start();
}
Scenario:
public T dequeue() {
// Check condition without locking
while (queue.isEmpty()) {
// Busy-wait until there's an item
}
// Proceed to remove item
T item = queue.poll();
System.out.println("Dequeued: " + item);
return item;
}
}
Two threads see the queue as not full and both proceed to enqueue,
potentially exceeding the capacity.
Two threads see the queue as not empty and both proceed to dequeue,
leading to null returns or errors.
synchronized Methods: Ensure that only one thread can execute either
enqueue() or dequeue() at a time, preventing race conditions.
wait() causes the current thread to release the lock and wait until
notified.
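For contrast, here is a minimal sketch of the synchronized/wait()/notifyAll() approach described in the bullets above; the class name SynchronizedBoundedQueue is illustrative.
import java.util.LinkedList;
import java.util.Queue;

public class SynchronizedBoundedQueue<T> {
    private final Queue<T> queue = new LinkedList<>();
    private final int capacity;

    public SynchronizedBoundedQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void enqueue(T item) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();                // release the monitor until space becomes available
        }
        queue.add(item);
        notifyAll();               // wake consumers waiting for an item
    }

    public synchronized T dequeue() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                // release the monitor until an item arrives
        }
        T item = queue.poll();
        notifyAll();               // wake producers waiting for space
        return item;
    }
}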
Conclusion:
Key Concepts:
We’ll use a custom CountingSemaphore class that allows setting the maximum
number of permits and the number of permits already given out.
Implementation Steps:
int bufferSize = 5;
// Second constructor argument = permits already given out (per the CountingSemaphore description above)
CountingSemaphore semProducer = new CountingSemaphore(bufferSize, 0);          // producer may still claim bufferSize empty slots
CountingSemaphore semConsumer = new CountingSemaphore(bufferSize, bufferSize); // consumer must wait until something is produced
We’ll use a fixed-size queue to represent the buffer and synchronize access
to it using the semaphores, as sketched below.
import java.util.LinkedList;
import java.util.Queue;
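The article’s custom CountingSemaphore class isn’t reproduced in full here, so the following BoundedBuffer sketch substitutes java.util.concurrent.Semaphore, which plays the same role: one semaphore counts empty slots, one counts filled slots, and a binary semaphore guards the queue itself. The names BoundedBuffer, emptySlots, filledSlots, and mutex are illustrative.
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class BoundedBuffer<T> {
    private final Queue<T> queue = new LinkedList<>();
    private final Semaphore emptySlots;                      // permits = free slots (producer side)
    private final Semaphore filledSlots = new Semaphore(0);  // permits = items available (consumer side)
    private final Semaphore mutex = new Semaphore(1);        // guards the queue itself

    public BoundedBuffer(int capacity) {
        this.emptySlots = new Semaphore(capacity);
    }

    public void enqueue(T item) throws InterruptedException {
        emptySlots.acquire();        // wait for a free slot
        mutex.acquire();
        try {
            queue.add(item);
        } finally {
            mutex.release();
        }
        filledSlots.release();       // signal that an item is available
    }

    public T dequeue() throws InterruptedException {
        filledSlots.acquire();       // wait for an item
        T item;
        mutex.acquire();
        try {
            item = queue.poll();
        } finally {
            mutex.release();
        }
        emptySlots.release();        // signal that a slot has been freed
        return item;
    }
}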
We’ll create producer and consumer threads that use the enqueue() and
dequeue() methods of the BoundedBuffer .
@Override
public void run() {
try {
for (int i = 0; i < 10; i++) {
buffer.enqueue(i);
System.out.println("Produced: " + i);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
We’ll set up the buffer and start the producer and consumer threads.
producerThread.start();
consumerThread.start();
try {
producerThread.join();
consumerThread.join();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
This is an actual interview question asked at Uber and Oracle. Imagine you
have a bucket that gets filled with tokens at the rate of 1 token per second.
The bucket can hold a maximum of N tokens. Implement a thread-safe class
that lets threads get a token when one is available. If no token is available,
then the token-requesting threads should block. The class should expose an
API called getToken that various threads can call to get a token.
The key to the problem is to find a way to track the number of available
tokens when a consumer requests a token. Note that the rate at which tokens are
generated is constant. So if we know when the token bucket was instantiated and
when a consumer called getToken(), we can take the difference of the two
instants and compute the number of tokens we would have collected so far.
However, we’ll need to cap that count at the maximum number of tokens the
bucket can hold.
Key Concepts:
Implementation Strategy:
Implementation Steps:
Fields:
Constructor:
Initializes the bucket with the maximum number of tokens and sets the
last refill timestamp to the current time.
refill Method:
Calculates the number of tokens to add based on the elapsed time since
the last refill.
Ensures that the number of available tokens does not exceed the
maximum capacity.
getToken Method:
If no tokens are available, calculates the time to wait for the next token to
become available and waits on the tokensAvailable condition.
Once a token is available, decrements the available tokens and signals all
waiting threads.
Defines a task where each thread attempts to acquire a token and prints a
message upon success.
We need to think about the following three cases to roll out our algorithm.
Let’s assume the maximum number of tokens our bucket can hold is 5.
1. The last request for a token was more than 5 seconds ago: each elapsed
second would have generated one token, which may total more than five tokens
since the last request. We simply set the available tokens to 5, since that is
the most the bucket will hold, and return one token out of those 5.
2. The last request for a token was within a window of 5 seconds: we calculate
the new tokens generated since the last request, add them to the unused tokens
we already have, and return 1 token from the count.
3. The last request was within a 5-second window and all the tokens are used
up: there’s no option but to sleep for a whole second to guarantee that a token
becomes available and then let the thread return. While we sleep(), the monitor
is still held by the token-requesting thread, and any new threads invoking
getToken get blocked, waiting for the monitor to become available.
You can see that the final solution turns out to be fairly simple, without the
need to create a bucket-filling thread that runs perpetually and increments a
counter every second to reflect the addition of a token to the bucket. Many
candidates initially get off track by taking that approach. Though you might be
able to solve the problem that way, the code becomes unnecessarily complex and
unwieldy. Note that we achieve thread safety by simply adding synchronized to
the getToken method. We could attempt finer-grained synchronization inside the
method, but that wouldn’t help, since the entire body of the method is a
critical section and must be guarded by a lock.
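A minimal sketch of that synchronized approach, assuming a refill rate of 1 token per second; the field names possibleTokens and lastRequestTime are illustrative.
public class TokenBucketFilter {
    private final long maxTokens;       // bucket capacity (N)
    private long possibleTokens = 0;    // tokens accumulated so far
    private long lastRequestTime = System.currentTimeMillis();

    public TokenBucketFilter(long maxTokens) {
        this.maxTokens = maxTokens;
    }

    public synchronized void getToken() throws InterruptedException {
        // Add one token for every full second elapsed since the last request.
        possibleTokens += (System.currentTimeMillis() - lastRequestTime) / 1000;
        if (possibleTokens > maxTokens) {
            possibleTokens = maxTokens;      // case 1: cap at the bucket's capacity
        }
        if (possibleTokens == 0) {
            Thread.sleep(1000);              // case 3: wait a full second so a token is guaranteed
        } else {
            possibleTokens--;                // case 2: consume an accumulated token
        }
        lastRequestTime = System.currentTimeMillis();
        System.out.println("Granting token to " + Thread.currentThread().getName());
    }
}
Sleeping inside the synchronized method is deliberate: it keeps other getToken() callers blocked on the monitor, so in the exhausted case tokens are handed out at most one per second.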
Implementation Details:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
while (true) {
lock.lock();
try {
if (waitingThreads.peek() == currentThread) { // Check if it's this thread's turn (head of the queue)
refill();
if (availableTokens >= 1) {
availableTokens--;
waitingThreads.take(); // Remove this thread from the queue
return;
}
}
} finally {
lock.unlock();
}
Thread.sleep(100); // Sleep briefly before retrying
}
}
}
Fields:
Constructor:
Initializes the bucket with the specified maximum tokens and token
generation rate.
Sets the initial available tokens to the maximum and records the current
time.
refill Method:
Calculates the number of tokens to add based on the elapsed time and the
token generation rate.
Ensures that the number of available tokens does not exceed the
maximum capacity.
getToken Method:
Enters a loop where it acquires the lock and checks if it’s the thread’s turn
(i.e., it’s at the front of the queue).
If not enough tokens are available or it’s not the thread’s turn, releases the
lock and sleeps briefly before retrying.
Defines a task where each thread attempts to acquire a token and prints a
message upon success.
Fields:
Constructor:
Initializes the bucket with the specified maximum tokens.
Methods:
Token Addition:
The running flag controls the execution of the daemon thread, allowing
for a graceful shutdown.
Token Acquisition:
The getToken() method allows threads to acquire a token. If no tokens
are available ( currentTokens == 0 ), the thread waits ( lock.wait() ) until
notified ( lock.notifyAll() ) that a token has been added.
Synchronization:
Daemon Thread:
Graceful Shutdown:
The stop() method sets the running flag to false and interrupts the
daemon thread, allowing it to terminate gracefully.
Considerations:
Thread Safety:
Performance:
Daemon Threads:
Daemon threads are suitable for background tasks that should not
prevent the JVM from exiting. However, care should be taken to manage
their lifecycle appropriately to avoid resource leaks.
Never start a thread in a constructor, as the child thread can attempt to use
the not-yet-fully-constructed object via this. This is an anti-pattern. Some
candidates present this solution when attempting to solve the token bucket
filter problem using threads. However, when probed, few candidates can explain
why starting threads in a constructor is a bad choice.
There are two ways to overcome this problem. The naive but correct solution is
to start the daemon thread outside of the MultithreadedTokenBucketFilter
object. However, the con of this approach is that management of the daemon
thread spills outside the class. Ideally, we want the class to encapsulate all
operations related to managing the token bucket filter and only expose the
public API to the consumers of our class, as per good object-oriented design.
The second way is to ensure the daemon thread starts only after full
construction: a factory method starts the daemon thread just before returning
the fully constructed object.
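A sketch of that factory-based design. The types TokenBucketFilter, TokenBucketFilterFactory, and MultithreadedTokenBucketFilter follow the names used in this section; the field and method names inside are illustrative. The essential point is that initialize() starts the daemon thread only after the constructor has returned.
public abstract class TokenBucketFilter {
    public abstract void getToken() throws InterruptedException;
}

public final class TokenBucketFilterFactory {

    private TokenBucketFilterFactory() { }   // no instances; use the static factory method

    public static TokenBucketFilter makeTokenBucketFilter(int capacity) {
        MultithreadedTokenBucketFilter filter = new MultithreadedTokenBucketFilter(capacity);
        filter.initialize();   // daemon thread starts only after construction has completed
        return filter;
    }

    private static class MultithreadedTokenBucketFilter extends TokenBucketFilter {
        private long possibleTokens = 0;
        private final int maxTokens;

        private MultithreadedTokenBucketFilter(int maxTokens) {
            this.maxTokens = maxTokens;   // note: no thread is started in the constructor
        }

        private void initialize() {
            Thread daemon = new Thread(this::daemonLoop);
            daemon.setDaemon(true);       // does not prevent the JVM from exiting
            daemon.start();
        }

        private void daemonLoop() {
            while (true) {
                synchronized (this) {
                    if (possibleTokens < maxTokens) {
                        possibleTokens++;
                    }
                    notify();             // wake one thread waiting for a token
                }
                try {
                    Thread.sleep(1000);   // one token per second
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        @Override
        public void getToken() throws InterruptedException {
            synchronized (this) {
                while (possibleTokens == 0) {
                    wait();               // block until the daemon adds a token
                }
                possibleTokens--;
            }
            System.out.println("Granting token to " + Thread.currentThread().getName());
        }
    }
}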
Usage Example:
public class Main {
    public static void main(String[] args) throws InterruptedException {
        // Create a token bucket filter with a capacity of 5 tokens and a refill rate of 1 token per second
        TokenBucketFilter filter = TokenBucketFilterFactory.makeTokenBucketFilter(5);
        filter.getToken(); // each call blocks until a token is available
    }
}
This design ensures that consumers can only create token bucket filter
instances through the factory, enforcing proper initialization and
encapsulation.
Implementation Steps:
The method will accept a Runnable representing the callback and a delay
interval in seconds.
It will schedule the callback for execution after the specified delay.
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
Usage Example:
DeferredCallbackExecutor Class:
Manages the scheduling and execution of deferred callbacks.
registerCallback Method:
Schedules the callback for execution after the specified delay using the
scheduler .
Ensures that the callback is removed from the queue after execution to
prevent memory leaks.
Ensure that no new callbacks can be registered once the executor is shut
down.
Usage Example:
Design Overview:
Use a single execution thread to monitor the queue and execute callbacks
when their scheduled time arrives.
Implementation:
@Override
public void execute() {
System.out.println(message);
}
}
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
@Override
public int compareTo(ScheduledCallback other) {
return Long.compare(this.scheduledTime, other.scheduledTime);
}
}
}
Usage Example:
try {
Thread.sleep(15000); // Wait for 15 seconds to allow callbacks to execute
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
executor.shutdown();
}
}
Considerations:
import java.util.PriorityQueue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
public DeferredCallbackExecutor() {
    this.callbackQueue = new PriorityQueue<>();
    this.lock = new ReentrantLock();
    this.condition = lock.newCondition();
    this.isRunning = true;
}
@Override
public int compareTo(ScheduledCallback other) {
return Long.compare(this.scheduledTime, other.scheduledTime);
}
}
}
DeferredCallbackExecutor Class:
The shutdown method stops the executor and signals the execution thread
to terminate.
Considerations:
Watch on
Key Considerations:
Callback Timing: A callback becomes due once the delay specified at
registration has elapsed, measured from the moment the callback was registered.
Usage Example:
executor.shutdown();
}
}
Expected Output:
Thread Coordination: The executor thread waits until the earliest callback
becomes due and then executes it. It uses a Condition variable to be notified
whenever a new callback is registered, since the new callback may be due
earlier than anything already queued.
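Putting those pieces together, here is one possible sketch of the executor, reusing the field names seen above (callbackQueue, lock, condition, isRunning, ScheduledCallback); the start() and runLoop() methods are illustrative. A timed Condition.await() lets the thread sleep exactly until the earliest callback is due or a new registration arrives.
import java.util.PriorityQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class DeferredCallbackExecutor {

    // A callback plus the absolute time (in millis) at which it becomes due.
    private static class ScheduledCallback implements Comparable<ScheduledCallback> {
        final Runnable callback;
        final long scheduledTime;

        ScheduledCallback(Runnable callback, long scheduledTime) {
            this.callback = callback;
            this.scheduledTime = scheduledTime;
        }

        @Override
        public int compareTo(ScheduledCallback other) {
            return Long.compare(this.scheduledTime, other.scheduledTime);
        }
    }

    private final PriorityQueue<ScheduledCallback> callbackQueue = new PriorityQueue<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition condition = lock.newCondition();
    private volatile boolean isRunning = true;

    public void start() {
        Thread executionThread = new Thread(this::runLoop);
        executionThread.setDaemon(true);
        executionThread.start();
    }

    public void registerCallback(Runnable callback, long delaySeconds) {
        lock.lock();
        try {
            callbackQueue.add(new ScheduledCallback(callback, System.currentTimeMillis() + delaySeconds * 1000));
            condition.signal();   // the earliest deadline may have changed
        } finally {
            lock.unlock();
        }
    }

    public void shutdown() {
        lock.lock();
        try {
            isRunning = false;
            condition.signal();
        } finally {
            lock.unlock();
        }
    }

    private void runLoop() {
        while (isRunning) {
            lock.lock();
            try {
                while (isRunning && callbackQueue.isEmpty()) {
                    condition.await();                        // nothing scheduled yet
                }
                while (isRunning && !callbackQueue.isEmpty()) {
                    long waitMillis = callbackQueue.peek().scheduledTime - System.currentTimeMillis();
                    if (waitMillis <= 0) {
                        callbackQueue.poll().callback.run();  // due: run it (kept under the lock for simplicity)
                    } else {
                        condition.await(waitMillis, TimeUnit.MILLISECONDS);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } finally {
                lock.unlock();
            }
        }
    }
}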
Considerations:
// Acquire a permit
public synchronized void acquire() throws InterruptedException {
    while (usedPermits == maxPermits) {
        wait();    // Wait until a permit is released
    }
    usedPermits++;
    notify();      // Wake a thread blocked in release(), if any
}

// Release a permit
public synchronized void release() {
    while (usedPermits == 0) {
        try {
            wait();    // Wait until a permit has been acquired
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;    // give up on the release if interrupted; otherwise wait() would throw again immediately
        }
    }
    usedPermits--;
    notify();      // Wake a thread blocked in acquire(), if any
}
}
Usage Example:
public class SemaphoreTest {
public static void main(String[] args) {
CustomSemaphore semaphore = new CustomSemaphore(1); // Binary semaphore
t1.start();
t2.start();
}
}
Expected Output:
This implementation ensures that only one thread can access the shared
resource at a time, effectively simulating a binary semaphore.
Considerations:
Thread Safety: The synchronized keyword ensures that only one thread
can execute either the acquire() or release() method at a time,
preventing race conditions.
Waiting and Notification: The wait() and notify() methods are used to
manage the availability of permits. Threads wait when no permits are
available and are notified when a permit is released.
ReadWrite Lock
Imagine you have an application where you have multiple readers and
multiple writers. You are asked to design a lock which lets multiple readers
read at the same time, but only one writer write at a time.
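One possible implementation matching the acquireReadLock()/releaseReadLock()/acquireWriteLock()/releaseWriteLock() API used in the example below; this is a simple class of our own (not the java.util.concurrent.locks.ReadWriteLock interface), and the internal fields readers and isWriteLocked are illustrative.
public class ReadWriteLock {
    private int readers = 0;
    private boolean isWriteLocked = false;

    public synchronized void acquireReadLock() throws InterruptedException {
        while (isWriteLocked) {
            wait();                 // wait while a writer holds the lock
        }
        readers++;
    }

    public synchronized void releaseReadLock() {
        readers--;
        if (readers == 0) {
            notifyAll();            // last reader out: a waiting writer may proceed
        }
    }

    public synchronized void acquireWriteLock() throws InterruptedException {
        while (isWriteLocked || readers > 0) {
            wait();                 // wait for all readers and any other writer to finish
        }
        isWriteLocked = true;
    }

    public synchronized void releaseWriteLock() {
        isWriteLocked = false;
        notifyAll();                // wake both waiting readers and writers
    }
}
Note that this simple version can starve writers if readers keep arriving; a production-grade lock would add writer preference or fairness.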
Usage Example:
// Reader thread
Thread readerThread = new Thread(() -> {
try {
rwLock.acquireReadLock();
System.out.println("Reader thread is reading.");
Thread.sleep(1000); // Simulate reading
rwLock.releaseReadLock();
System.out.println("Reader thread has finished reading.");
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
// Writer thread
Thread writerThread = new Thread(() -> {
try {
rwLock.acquireWriteLock();
System.out.println("Writer thread is writing.");
Thread.sleep(1000); // Simulate writing
rwLock.releaseWriteLock();
System.out.println("Writer thread has finished writing.");
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
readerThread.start();
writerThread.start();
}
}
Expected Output:
The reader thread acquires the read lock, performs its reading task, and
then releases the read lock.
The writer thread waits for the reader to release the read lock, acquires
the write lock, performs its writing task, and then releases the write lock.
Considerations:
Thread Safety: The synchronized blocks ensure that only one thread can
execute the critical sections at a time, preventing race conditions.
Designing a unisex bathroom system that allows both males and females to use
the facility, while ensuring that no more than three individuals are present
simultaneously and that both genders never use it at the same time, requires
careful synchronization to avoid deadlocks and starvation.
Problem Constraints:
1. The bathroom cannot be used by males and females at the same time.
2. There can never be more than three people in the bathroom at the same time.
Solution Approach:
Key Components:
Implementation:
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
useBathroom("Male");
lock.lock();
try {
currentCount--;
bathroomSemaphore.release();
if (currentCount == 0) {
inUseBy = "none";
condition.signalAll();
}
} finally {
lock.unlock();
}
}
useBathroom("Female");
lock.lock();
try {
currentCount--;
bathroomSemaphore.release();
if (currentCount == 0) {
inUseBy = "none";
condition.signalAll();
}
} finally {
lock.unlock();
}
}
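A consolidated sketch of the class those fragments belong to, reusing the field names seen above (bathroomSemaphore, currentCount, inUseBy, lock, condition); the enter() helper and the simulated useBathroom() body are illustrative.
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class UnisexBathroom {
    private final Semaphore bathroomSemaphore = new Semaphore(3); // at most 3 occupants
    private final Lock lock = new ReentrantLock();
    private final Condition condition = lock.newCondition();
    private String inUseBy = "none";   // "Male", "Female", or "none"
    private int currentCount = 0;

    private void useBathroom(String gender) throws InterruptedException {
        System.out.println(gender + " is using the bathroom; occupants: " + currentCount);
        Thread.sleep(1000);            // simulate time spent inside
    }

    private void enter(String gender) throws InterruptedException {
        lock.lock();
        try {
            while (!inUseBy.equals("none") && !inUseBy.equals(gender)) {
                condition.await();     // the other gender is inside: wait
            }
            inUseBy = gender;
            currentCount++;
        } finally {
            lock.unlock();
        }
        bathroomSemaphore.acquire();   // cap occupancy at three
    }

    public void maleUseBathroom() throws InterruptedException {
        enter("Male");
        useBathroom("Male");
        lock.lock();
        try {
            currentCount--;
            bathroomSemaphore.release();
            if (currentCount == 0) {   // last one out hands over the bathroom
                inUseBy = "none";
                condition.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }

    public void femaleUseBathroom() throws InterruptedException {
        enter("Female");
        useBathroom("Female");
        lock.lock();
        try {
            currentCount--;
            bathroomSemaphore.release();
            if (currentCount == 0) {
                inUseBy = "none";
                condition.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }
}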
Usage Example:
Considerations:
This implementation ensures that the bathroom is used safely and efficiently
by multiple threads, adhering to the specified constraints and preventing
both deadlocks and starvation.
Implementing a Barrier
A barrier can be thought of as a point in the program code which all (or some)
of the threads need to reach before any one of them is allowed to proceed
further.
Barrier Action: An optional Runnable that is executed once the last thread
arrives at the barrier.
if (partiesAwait > 0) {
    this.wait();                       // not the last thread: wait for the rest to arrive
                                       // (a production-grade barrier also guards against
                                       // spurious wakeups, e.g. with a generation counter)
} else {
    // All parties have arrived
    partiesAwait = initialParties;     // Reset for reuse
    if (barrierAction != null) {
        barrierAction.run();
    }
    notifyAll();                       // release every waiting thread
}
}
}
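The fragment above resets partiesAwait directly, which works in the common case; a more robust reusable barrier also tracks a generation counter so that spurious wakeups and threads from the next cycle are handled correctly. A sketch follows; the class name Barrier and the generation field are illustrative.
public class Barrier {
    private final int initialParties;
    private final Runnable barrierAction;    // optional action run by the last thread to arrive
    private int partiesAwait;
    private long generation = 0;             // distinguishes successive uses of the barrier

    public Barrier(int parties, Runnable barrierAction) {
        this.initialParties = parties;
        this.partiesAwait = parties;
        this.barrierAction = barrierAction;
    }

    public synchronized void await() throws InterruptedException {
        long arrivalGeneration = generation;
        partiesAwait--;
        if (partiesAwait == 0) {
            // Last thread to arrive: run the action, reset for reuse, release everyone.
            if (barrierAction != null) {
                barrierAction.run();
            }
            partiesAwait = initialParties;
            generation++;
            notifyAll();
        } else {
            // Wait until the current generation completes; the generation check
            // protects against spurious wakeups and threads from the next cycle.
            while (arrivalGeneration == generation) {
                wait();
            }
        }
    }
}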
@Override
public void run() {
try {
System.out.println(Thread.currentThread().getName() + " is performing work.");
// Simulate work
Thread.sleep((long) (Math.random() * 1000));
System.out.println(Thread.currentThread().getName() + " is waiting at the barrier.");
barrier.await();
System.out.println(Thread.currentThread().getName() + " has crossed the barrier.");
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Output:
Considerations:
Barrier Action: An optional action that executes once all threads have
reached the barrier.
import java.util.concurrent.Semaphore;
Worker(int workerId) {
this.workerId = workerId;
}
@Override
public void run() {
try {
// Acquiring a permit before accessing the shared resource
semaphore.acquire();
System.out.println("Worker " + workerId + " is accessing the res
// Simulating work by sleeping for 2 seconds
Thread.sleep(2000);
System.out.println("Worker " + workerId + " is releasing the res
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
// Releasing the permit after accessing the resource
semaphore.release();
}
}
}
}
After completing its work, the thread releases the permit, allowing other
threads to acquire it.
import java.util.concurrent.locks.ReentrantLock;
The ReentrantLock ensures that only one thread at a time can execute the
critical section where the shared counter is incremented.
Each thread acquires the lock before modifying the shared counter and
releases it afterward, ensuring thread-safe operations.
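A minimal sketch of that counter; the names SharedCounter and count are illustrative.
import java.util.concurrent.locks.ReentrantLock;

public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();              // acquire before touching shared state
        try {
            count++;              // critical section
        } finally {
            lock.unlock();        // always release, even if the critical section throws
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Final count: " + counter.getCount()); // always 2000
    }
}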
A barrier allows multiple threads to wait for each other at a specific point
before proceeding. This ensures that threads reach a particular execution
point before any can continue, facilitating coordinated behavior.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
Task(int taskId) {
this.taskId = taskId;
}
@Override
public void run() {
try {
System.out.println("Task " + taskId + " is performing work.");
// Simulating work by sleeping for a random time
Thread.sleep((long) (Math.random() * 3000));
System.out.println("Task " + taskId + " is waiting at the barrie
// Waiting at the barrier
barrier.await();
System.out.println("Task " + taskId + " has crossed the barrier.
} catch (Exception e) {
Thread.currentThread().interrupt();
}
}
}
}
Each Task thread performs some work, waits at the barrier, and then
proceeds once all threads have reached the barrier.
import java.util.concurrent.Semaphore;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.locks.ReentrantLock;
Implementation Overview:
Synchronization Primitives:
Semaphores:
Shared Counters:
Methods:
drive() : Called once all four passengers are seated to start the ride.
Semaphore Usage:
Threads that cannot form a valid group immediately release the lock and
acquire the respective semaphore ( demsWaiting or repubsWaiting ),
causing them to wait until enough threads arrive to form a valid group.
When a valid group is formed (either four of the same party or two of
each), the leading thread releases the appropriate number of waiting
threads by calling release() on the semaphores.
Ride Leader: The thread that forms a valid group becomes the ride
leader, responsible for calling drive() to start the ride. This thread also
ensures the lock is released after starting the ride.
1. Define the UberRide Class: This class contains the synchronization
logic as previously discussed, including methods for Democrats and
Republicans to request seats, and the drive() method to start the ride (a
sketch follows after these steps).
2. Create Rider Threads: We’ll define a Rider class that extends Thread .
3. Monitor the Output: Each rider will print messages when seated and
when the ride starts. This output will help us verify that the riders are
grouped correctly and that the system behaves as expected under
concurrent conditions.
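Here is one possible UberRide implementation consistent with the description above: it uses the demsWaiting and repubsWaiting semaphores, a ReentrantLock held by the ride leader while a group is being formed, and a CyclicBarrier so the ride starts only after all four riders are seated. The counter and barrier names are illustrative.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class UberRide {
    private int democratsWaiting = 0;
    private int republicansWaiting = 0;

    private final Semaphore demsWaiting = new Semaphore(0);         // blocked Democrat riders
    private final Semaphore repubsWaiting = new Semaphore(0);       // blocked Republican riders
    private final ReentrantLock lock = new ReentrantLock();
    private final CyclicBarrier rideBarrier = new CyclicBarrier(4); // all 4 must be seated before the ride starts

    void drive() {
        System.out.println("Ride is starting with " + Thread.currentThread().getName() + " as the leader.");
    }

    public void seatDemocrat() throws InterruptedException {
        boolean rideLeader = false;
        lock.lock();
        democratsWaiting++;
        if (democratsWaiting == 4) {
            demsWaiting.release(3);          // complete a group of four Democrats
            democratsWaiting -= 4;
            rideLeader = true;
        } else if (democratsWaiting == 2 && republicansWaiting >= 2) {
            demsWaiting.release(1);          // two Democrats + two Republicans
            repubsWaiting.release(2);
            democratsWaiting -= 2;
            republicansWaiting -= 2;
            rideLeader = true;
        } else {
            lock.unlock();                   // no valid group yet: wait to be released
            demsWaiting.acquire();
        }

        System.out.println(Thread.currentThread().getName() + " is seated.");
        awaitOthers();                       // wait until all four riders are seated

        if (rideLeader) {
            drive();
            lock.unlock();                   // the leader held the lock while forming the group
        }
    }

    public void seatRepublican() throws InterruptedException {
        boolean rideLeader = false;
        lock.lock();
        republicansWaiting++;
        if (republicansWaiting == 4) {
            repubsWaiting.release(3);
            republicansWaiting -= 4;
            rideLeader = true;
        } else if (republicansWaiting == 2 && democratsWaiting >= 2) {
            repubsWaiting.release(1);
            demsWaiting.release(2);
            republicansWaiting -= 2;
            democratsWaiting -= 2;
            rideLeader = true;
        } else {
            lock.unlock();
            repubsWaiting.acquire();
        }

        System.out.println(Thread.currentThread().getName() + " is seated.");
        awaitOthers();

        if (rideLeader) {
            drive();
            lock.unlock();
        }
    }

    private void awaitOthers() throws InterruptedException {
        try {
            rideBarrier.await();
        } catch (BrokenBarrierException e) {
            throw new IllegalStateException("Ride barrier broken while waiting for riders", e);
        }
    }
}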
Rider(String party) {
this.party = party;
}
@Override
public void run() {
try {
if ("Democrat".equals(party)) {
uberRide.seatDemocrat();
} else {
uberRide.seatRepublican();
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Output:
The program will print messages indicating when each rider is seated and
when a ride starts. For example:
Thread-0 is seated.
Thread-1 is seated.
Thread-2 is seated.
Thread-3 is seated.
Ride is starting with Thread-0 as the leader.
Thread-4 is seated.
Thread-5 is seated.
Thread-6 is seated.
Thread-7 is seated.
Ride is starting with Thread-4 as the leader.
Note: The order of thread execution may vary due to the concurrent nature
of threads. The provided implementation ensures that the synchronization
constraints are met, and acceptable rider combinations are formed for each
ride.
https://github.com/yugal-nandurkar || https://www.linkedin.com/in/yugal-
nandurkar/ || https://medium.com/@microteam93