Java Multithreading for Senior Engineering Interviews (Part I)

yugal-nandurkar · 59 min read · Jan 8, 2025


Why do threads exist and what benefits do they provide?

Knowledge of concurrent programming principles exhibits the maturity and technical depth of a candidate.

Note that thread creation is lightweight in comparison to spawning a brand new process. Web servers that use threads instead of creating new processes when fielding web requests consume far fewer resources.

Performance Gains via Multi-Threading

Calculating the sum of all integers from 0 to Integer.MAX_VALUE in Java requires careful consideration due to the vast range of values involved. The sum of integers from 0 to Integer.MAX_VALUE is a known mathematical quantity: ((long) Integer.MAX_VALUE * ((long) Integer.MAX_VALUE + 1)) / 2, which equals 2305843008139952128. This value fits in a long (whose maximum is 2^63 - 1, or 9223372036854775807), but it far exceeds the capacity of an int, so the accumulator must be a long.

Attempting to compute this sum directly in Java, whether using a single thread or multiple threads, still encounters significant challenges:

1. Overflow Issues: Accumulating into an int, or multiplying int operands without first widening them to long, will overflow and lead to incorrect results.

2. Performance Constraints: Iterating through such a large range, even with multiple threads, is computationally intensive and impractical for real-time execution.

Given these constraints, if the goal is to measure the performance difference between single-threaded and multi-threaded computations, consider using a smaller, more manageable range of integers. This approach allows for meaningful performance comparisons without encountering the issues mentioned above.

Here’s how you can implement both scenarios with a smaller range:

Single-Threaded Summation:

public class SingleThreadSum {

    public static void main(String[] args) {
        int start = 0;
        int end = 1_000_000_000; // A smaller, manageable range
        long sum = 0;

        long startTime = System.currentTimeMillis();
        for (int i = start; i <= end; i++) {
            sum += i;
        }
        long endTime = System.currentTimeMillis();

        System.out.println("Single-threaded sum: " + sum);
        System.out.println("Time taken: " + (endTime - startTime) + " milliseconds");
    }
}

Multi-Threaded Summation:

public class MultiThreadSum {

    private static final int NUM_THREADS = 2;
    private static final int START = 0;
    private static final int END = 1_000_000_000; // A smaller, manageable range
    private static long sum = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[NUM_THREADS];
        int range = (END - START + 1) / NUM_THREADS;

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < NUM_THREADS; i++) {
            final int threadStart = START + i * range;
            final int threadEnd = (i == NUM_THREADS - 1) ? END : threadStart + range - 1;

            threads[i] = new Thread(() -> {
                long threadSum = 0;
                for (int j = threadStart; j <= threadEnd; j++) {
                    threadSum += j;
                }
                addPartialSum(threadSum);
            });
            threads[i].start();
        }

        for (Thread thread : threads) {
            thread.join();
        }
        long endTime = System.currentTimeMillis();

        System.out.println("Multi-threaded sum: " + sum);
        System.out.println("Time taken: " + (endTime - startTime) + " milliseconds");
    }

    private static synchronized void addPartialSum(long partialSum) {
        sum += partialSum;
    }
}

Key Points:

Range Selection: The range [0, 1_000_000_000] is chosen for demonstration purposes. Adjust the range based on your system's capabilities.

Thread Management: In the multi-threaded example, the range is divided equally among the threads. The addPartialSum method is synchronized to prevent race conditions when updating the shared sum variable.

Performance Measurement: Execution time is measured using System.currentTimeMillis(). For more precise timing, especially for shorter durations, consider using System.nanoTime(), as sketched below.

Overflow Consideration: Ensure that the chosen range does not cause the sum to exceed the maximum value of the long data type to avoid overflow.
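
A minimal timing sketch using System.nanoTime() (the summation loop is just a stand-in for whatever work is being measured):

long startTime = System.nanoTime();
long sum = 0;
for (int i = 0; i <= 1_000_000_000; i++) {
    sum += i;
}
long elapsedNanos = System.nanoTime() - startTime;
System.out.println("Sum: " + sum);
// nanoTime is monotonic; convert to milliseconds only for display
System.out.println("Time taken: " + (elapsedNanos / 1_000_000) + " ms");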

By using a manageable range, you can effectively compare the performance


of single-threaded and multi-threaded summation in Java without
encountering the limitations associated with extremely large computations.

Problems with Threads:

1. It is usually very hard to find bugs, some of which may only rear their heads in production environments.

2. Higher cost of code maintenance, since the code inherently becomes harder to reason about.

3. Increased utilization of system resources. Creation of each thread consumes additional memory, CPU cycles for book-keeping, and time wasted in context switches.

4. Programs may experience slowdown, as coordination amongst threads comes at a price. Acquiring and releasing locks adds to program execution time. Threads fighting over acquiring locks cause lock contention.

Program vs Process vs Thread

A program is a set of instructions and associated data that resides on the disk and is loaded by the operating system to perform some task.

In order to run a program, the operating system's kernel is first asked to create a new process, which is an environment in which a program executes. A process is a program in execution.

A process is an execution environment that consists of instructions, user-data, and system-data segments, as well as lots of other resources such as CPU, memory, address space, disk and network I/O acquired at runtime. A program can have several copies of it running at the same time, but a process necessarily belongs to only one program.

Thread is the smallest unit of execution in a process.

Usually, there is some state associated with the process that is shared among all the threads, and in turn each thread has some state private to itself. Various programming languages offer several constructs to guard and discipline access to this global state when it is accessed by multiple threads.

Processes don't share any resources amongst themselves, whereas threads of a process can share the resources allocated to that particular process, including memory address space. Operating systems and language runtimes do, however, provide facilities to enable inter-process communication.

Without properly guarding access to mutable variables or data structures, threads can cause hard-to-find bugs.

Thread Unsafe Class

In Java, a class is considered thread-unsafe if it does not function correctly when accessed concurrently by multiple threads. This can lead to issues such as race conditions, data corruption, and unpredictable behavior.

Example of a Thread-Unsafe Class:

Consider a simple counter class that increments a value:

public class Counter {

    private int count = 0;

    public void increment() {
        count++; // Not atomic: read, increment, write
    }

    public int getCount() {
        return count;
    }
}

In a single-threaded environment, this class works as expected. However, in a multi-threaded context, simultaneous access to the increment() method can cause inconsistent results due to race conditions: count++ is really three steps (read the current value, add one, write it back), and two threads can interleave those steps so that one update is lost.

Demonstrating Thread-Unsafe Behavior:

To illustrate the thread-unsafe nature of the Counter class, we can create multiple threads that increment the counter concurrently:

public class ThreadUnsafeDemo {

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        int numberOfThreads = 1000;
        Thread[] threads = new Thread[numberOfThreads];

        for (int i = 0; i < numberOfThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }

        for (Thread thread : threads) {
            thread.join();
        }

        System.out.println("Final count: " + counter.getCount());
    }
}

In this example, we create 1,000 threads, each incrementing the counter 1,000 times. Ideally, the final count should be 1,000,000. However, due to the lack of synchronization in the Counter class, the actual result is often less than expected, demonstrating its thread-unsafe behavior.

Making the Class Thread-Safe:

To ensure correct behavior in a multi-threaded environment, we can make the Counter class thread-safe by synchronizing the increment() method:

public class Counter {

    private int count = 0;

    public synchronized void increment() {
        count++;
    }

    // Note: for guaranteed visibility of the latest value across threads,
    // getCount() could be declared synchronized as well.
    public int getCount() {
        return count;
    }
}

By adding the synchronized keyword, we ensure that only one thread can
execute the increment() method at a time, preventing race conditions and
ensuring accurate results.

Alternative Approach Using Atomic Variables:

Another way to achieve thread safety is by using atomic variables from the
java.util.concurrent.atomic package:

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {

    private AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet(); // Atomic read-modify-write
    }

    public int getCount() {
        return count.get();
    }
}

The AtomicInteger class provides thread-safe operations without the need for
explicit synchronization, often resulting in better performance in highly
concurrent scenarios.
When designing classes intended for use in multi-threaded environments,
it’s crucial to ensure thread safety to prevent unpredictable behavior and
data inconsistencies. This can be achieved through synchronization
mechanisms or by utilizing thread-safe classes provided by the Java API.

For more detailed information on thread safety and how to achieve it in Java,
you can refer to resources like GeeksforGeeks.

Concurrency vs Parallelism

To clarify the concept, we’ll borrow a juggler from a circus to use as an analogy.
Consider the juggler to be a machine and the balls he juggles as processes.

Serial Execution

The analogy for serial execution is a circus juggler who can only juggle one ball at a time. Definitely not very entertaining!

Concurrency

A concurrent program is one that can be decomposed into constituent parts and
each part can be executed out of order or in partial order without affecting the
final outcome.

A system capable of running several distinct programs or more than one independent unit of the same program in overlapping time intervals is called a concurrent system.

In concurrent systems, the goal is to maximize throughput and minimize latency. For example, a browser running on a single core machine has to be responsive to user clicks but also be able to render HTML on screen as quickly as possible.

The classic example of a concurrent system is that of an operating system running on a single core machine. Such an operating system is concurrent but not parallel. It can only process one task at any given point in time, but all the tasks being managed by the operating system appear to make progress because the operating system is designed for concurrency.

Going back to our circus analogy, a concurrent juggler is one who can juggle several balls at the same time. However, at any one point in time, he can only have a single ball in his hand while the rest are in flight. Each ball gets a time slice during which it lands in the juggler's hand and then is thrown back up. A concurrent system is in a similar sense juggling several processes at the same time.

Parallelism

A parallel system is one which necessarily has the ability to execute multiple programs at the same time.

Remember, an individual problem has to be concurrent in nature (that is, portions of it can be worked on independently without affecting the final outcome) before it can be executed in parallel.

Example problems include matrix multiplication, 3D rendering, data analysis, and particle simulation. Either a single (large) problem can be executed in parallel, or distinct programs can be executed in parallel on a system supporting parallel execution.

Concurrency vs Parallelism

A concurrent system need not be parallel, whereas a parallel system is indeed concurrent.

Additionally, a system can be both concurrent and parallel, e.g. a multitasking operating system running on a multicore machine.

Concurrency is a property of a program or a system, whereas parallelism is a runtime behaviour of executing multiple tasks.

Single-processor concurrency is akin to alternately serving customers from two queues with a single coffee machine, while parallelism is similar to serving each customer queue with a dedicated coffee machine.

Cooperative Multitasking vs Preemptive Multitasking

A system can achieve concurrency by employing one of the following multitasking models:

1. Preemptive Multitasking

2. Cooperative Multitasking

Preemptive Multitasking

In preemptive multitasking, the operating system preempts a program to allow another waiting task to run on the CPU.

A thread or program, once taken off of the CPU by the scheduler, can't determine when it will get on the CPU next.

As a consequence, if a malicious program initiates an infinite loop, it only hurts itself without affecting other programs or threads. Lastly, the programmer isn't burdened with deciding when to give up control back to the CPU in code.

Cooperative Multitasking

Cooperative multitasking requires well-behaved programs to voluntarily give up control back to the scheduler so that another program can run. The operating system's scheduler has no say in how long a program or thread runs for.
Synchronous vs Asynchronous
Synchronous execution refers to line-by-line execution of code. If a function is invoked, the program execution waits until the function call is completed. Synchronous execution blocks at each method call before proceeding to the next line of code. A program executes in the same sequence as the code in the source code file. Synchronous execution is synonymous with serial execution.

Asynchronous programming is a means of parallel programming in which a unit of work runs separately from the main application thread and notifies the calling thread of its completion, failure, or progress.

Async execution can invoke a method and move on to the next line of code without waiting for the invoked function to complete or receive its result. Usually, such methods return an entity, sometimes called a future or promise, that is a representation of an in-progress computation. The program can query for the status of the computation via the returned future or promise and retrieve the result once completed. Another pattern is to pass a callback function to the asynchronous function call, which is invoked with the results when the asynchronous function is done processing.

Asynchronous programming is an excellent choice for applications that do extensive network or disk I/O and spend most of their time waiting. As an example, JavaScript enables concurrency using asynchronous method calls such as AJAX requests.

In non-threaded environments, asynchronous programming provides an alternative to threads in order to achieve concurrency, and falls under the cooperative multitasking model.

Asynchronous programming in Java allows tasks to execute independently without blocking the main thread, enhancing application responsiveness and performance. This approach is particularly beneficial for I/O-bound operations, such as file access or network communication, where waiting for resources can impede the application's flow.

Asynchronous Programming Without Explicit Thread Management:

In Java, asynchronous behavior is often achieved using the CompletableFuture class, introduced in Java 8. This class enables writing non-blocking, asynchronous code without directly managing threads.

Example: Asynchronous File Reading Using CompletableFuture

Let's consider an example where we read a file asynchronously using CompletableFuture:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class AsyncFileReader {

    public static void main(String[] args) {
        String filePath = "example.txt";

        // Asynchronously read the file
        CompletableFuture<String> fileContentFuture = readFileAsync(filePath);

        // Perform other tasks while the file is being read
        System.out.println("Performing other tasks...");

        // Retrieve and print the file content once it's available
        fileContentFuture.thenAccept(content -> {
            System.out.println("File content:");
            System.out.println(content);
        }).exceptionally(ex -> {
            System.err.println("An error occurred: " + ex.getMessage());
            return null;
        });

        // Keep the main thread alive until the asynchronous operation completes
        try {
            fileContentFuture.get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    public static CompletableFuture<String> readFileAsync(String filePath) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                // Read file content
                byte[] fileBytes = Files.readAllBytes(Paths.get(filePath));
                return new String(fileBytes);
            } catch (IOException e) {
                throw new RuntimeException("Failed to read file", e);
            }
        });
    }
}

CompletableFuture.supplyAsync: This method initiates an asynchronous computation that supplies a result. The provided lambda expression reads the file content.

Non-Blocking Operations: While the file is being read asynchronously, the main thread can perform other tasks, demonstrating cooperative multitasking.

Handling Results: The thenAccept method processes the file content once it's available. The exceptionally method handles any exceptions that occur during the asynchronous operation.

Waiting for Completion: The get method is called to ensure the main thread waits for the asynchronous operation to complete before exiting. In a real-world application, especially in a server or GUI context, you might not need this, as the application would continue running, allowing the asynchronous tasks to complete naturally.

Benefits of Asynchronous Programming:

Improved Responsiveness: By performing I/O operations asynchronously, the application remains responsive, as the main thread isn't blocked.

Efficient Resource Utilization: Asynchronous tasks can utilize system resources more efficiently, leading to better performance, especially in I/O-bound applications.

Considerations:

Error Handling: Proper error handling is crucial in asynchronous operations to manage exceptions that may occur during task execution.

Thread Management: While CompletableFuture abstracts thread management, understanding the underlying thread pool is important. By default, supplyAsync uses the common ForkJoinPool, but you can provide a custom executor if needed, as sketched below.
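
A minimal sketch of passing a custom executor to supplyAsync (the pool size and task body are illustrative):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CustomExecutorDemo {

    public static void main(String[] args) {
        // A dedicated pool instead of the common ForkJoinPool
        ExecutorService ioPool = Executors.newFixedThreadPool(4);

        CompletableFuture<String> future = CompletableFuture.supplyAsync(
                () -> "result from " + Thread.currentThread().getName(), ioPool);

        System.out.println(future.join());
        ioPool.shutdown(); // Release the pool's threads once work is done
    }
}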

Asynchronous programming in Java, facilitated by classes like CompletableFuture, enables writing efficient, non-blocking code without the complexity of manual thread management. This approach aligns with the cooperative multitasking model, allowing multiple tasks to make progress by yielding control appropriately, leading to more responsive and scalable applications.

For more detailed information on asynchronous programming in Java, you can refer to resources like Baeldung.

I/O Bound vs CPU Bound

We delve into the characteristics of programs with different resource-use profiles and how that can affect program design choices.

We write programs to solve problems. Programs utilize various resources of the computer systems on which they run. For instance, a program running on your machine will broadly require: CPU time, memory, networking resources, and disk storage.

Programs which are compute-intensive, i.e. program execution requires very high utilization of the CPU (close to 100%), are called CPU bound programs. Such programs primarily depend on improving CPU speed to decrease program completion time.

I/O bound programs are the opposite of CPU bound programs. Such programs spend most of their time waiting for input or output operations to complete while the CPU sits idle. I/O operations can consist of operations that write or read from main memory or network interfaces.
Throughput vs Latency

If you are an Instagram user, you could define throughput as the number of images your phone or browser downloads per unit of time. The time it takes for a web browser to download an Instagram image from the internet is the latency for downloading that image.

In general, the two have an inverse relationship.

Critical Sections & Race Conditions

A critical section is any piece of code that has the possibility of being executed concurrently by more than one thread of the application and exposes any shared data or resources used by the application for access.

Race conditions happen when threads run through critical sections without thread synchronization. The threads "race" through the critical section to write or read shared resources, and depending on the order in which threads finish the "race", the program output changes.

As an example, consider a thread that tests for a state/condition, called a predicate, and then based on the condition takes subsequent action. This sequence is called test-then-act.

In multithreaded programming, a race condition occurs when multiple threads access and manipulate shared data concurrently, leading to unpredictable and erroneous outcomes. In this example, we'll demonstrate a race condition involving two threads interacting with a shared variable.

Scenario:

Modifier Thread: Continuously increments a shared variable.

Printer Thread: Checks if the shared variable is divisible by 5 and prints its value if true.

Due to the lack of proper synchronization, the Printer Thread may sometimes print values that aren't divisible by 5, illustrating a race condition.

Implementation:

public class RaceConditionDemo {

    private static int sharedVariable = 0;
    private static final int MAX_ITERATIONS = 1000;
    private static volatile boolean running = true;

    public static void main(String[] args) {
        Thread modifierThread = new Thread(() -> {
            for (int i = 0; i < MAX_ITERATIONS; i++) {
                sharedVariable++;
                // Introduce a small delay to increase the chance of context switching
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
            running = false; // Signal the printer thread to stop
        });

        Thread printerThread = new Thread(() -> {
            while (running || sharedVariable % 5 == 0) {
                if (sharedVariable % 5 == 0) {
                    System.out.println("Shared variable is divisible by 5: " + sharedVariable);
                    // Introduce a small delay to increase the chance of context switching
                    try {
                        Thread.sleep(1);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });

        modifierThread.start();
        printerThread.start();

        try {
            modifierThread.join();
            printerThread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Shared Variable: sharedVariable is accessed and modified by both threads without synchronization, making it susceptible to race conditions.

Modifier Thread: Increments sharedVariable in a loop, introducing a slight delay (Thread.sleep(1)) to increase the likelihood of context switching between threads.

Printer Thread: Continuously checks if sharedVariable is divisible by 5. If true, it prints the value and introduces a slight delay to enhance the chance of context switching. The loop continues while running is true or the sharedVariable is divisible by 5, to ensure it prints the last divisible value before stopping.

Volatile Keyword: The running flag is declared as volatile to ensure visibility across threads, allowing the Printer Thread to detect when it should stop.

Expected Outcome:

Due to the race condition, the Printer Thread may sometimes print values of sharedVariable that aren't divisible by 5. This occurs because the Modifier Thread may increment the variable between the check (sharedVariable % 5 == 0) and the print statement in the Printer Thread.

Illustration of the Race Condition:

1. Printer Thread checks if sharedVariable % 5 == 0 and finds it true (e.g., sharedVariable is 10).

2. Context Switch: Before the Printer Thread executes the print statement, the Modifier Thread increments sharedVariable (now 11).

3. Printer Thread resumes and prints the value 11, which isn't divisible by 5.

Mitigation:

To prevent such race conditions, proper synchronization mechanisms should be employed, such as using synchronized blocks or locks to control access to the shared variable, as in the sketch below.
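
A minimal sketch of one possible fix, assuming a shared lock object guards both the increment and the check-then-print so the two become mutually exclusive:

public class RaceConditionFixedDemo {

    private static int sharedVariable = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread modifierThread = new Thread(() -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (lock) {
                    sharedVariable++; // Increment is now mutually exclusive
                }
            }
        });

        Thread printerThread = new Thread(() -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (lock) {
                    // Check and print happen atomically: no increment can sneak in between
                    if (sharedVariable % 5 == 0) {
                        System.out.println("Divisible by 5: " + sharedVariable);
                    }
                }
            }
        });

        modifierThread.start();
        printerThread.start();
        modifierThread.join();
        printerThread.join();
    }
}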

Note:

This example is designed to demonstrate a race condition and may not consistently produce the erroneous behavior due to the non-deterministic nature of thread scheduling. Adjusting the number of iterations and sleep durations can influence the likelihood of encountering the race condition.

For a deeper understanding of race conditions and their implications in Java concurrency, you may find the following video resource helpful: Race Condition vs Data Races in Java.

Deadlocks, Liveness & Reentrant Locks

Logical follies committed in multithreaded code, while trying to avoid race conditions and guarding critical sections, can lead to a host of subtle and hard-to-find bugs and side-effects. Some of these incorrect usage patterns have their own names and are discussed below.

Deadlocks occur when two or more threads aren't able to make any progress because the resource required by the first thread is held by the second and the resource required by the second thread is held by the first, as in the sketch below.
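
A minimal sketch of a lock-ordering deadlock, assuming two threads that acquire the same two locks in opposite order:

public class DeadlockDemo {

    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        Thread thread1 = new Thread(() -> {
            synchronized (lockA) {
                sleepQuietly(100); // Give thread2 time to grab lockB
                synchronized (lockB) { // Blocks forever: thread2 holds lockB
                    System.out.println("Thread 1 acquired both locks");
                }
            }
        });

        Thread thread2 = new Thread(() -> {
            synchronized (lockB) {
                sleepQuietly(100); // Give thread1 time to grab lockA
                synchronized (lockA) { // Blocks forever: thread1 holds lockA
                    System.out.println("Thread 2 acquired both locks");
                }
            }
        });

        thread1.start();
        thread2.start();
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

One common cure is to have every thread acquire the locks in the same fixed global order.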

The ability of a program or an application to execute in a timely manner is called liveness. If a program experiences a deadlock then it's not exhibiting liveness.

A livelock occurs when two threads continuously react in response to the actions of the other thread without making any real progress. The best analogy is to think of two persons trying to cross each other in a hallway.

An application thread can also experience starvation, when it never gets CPU time or access to shared resources.

Re-entrant locks allow for re-locking or re-entering of a synchronization lock.
Mutex vs Semaphore

Mutex, as the name hints, implies mutual exclusion. A mutex is used to guard shared data such as a linked-list, an array, or any primitive type.

A semaphore, on the other hand, is used for limiting access to a collection of resources. Think of a semaphore as having a limited number of permits to give out. If a semaphore has given out all the permits it has, then any new thread that comes along requesting a permit will be blocked till an earlier thread with a permit returns it to the semaphore. A typical example would be a pool of database connections that can be handed out to requesting threads.

Semaphores can also be used for signaling among threads.

A semaphore can potentially act as a mutex if the number of permits it can give out is set to 1. However, the most important difference between the two is that in the case of a mutex the same thread must call acquire and subsequent release on the mutex, whereas in the case of a binary semaphore, different threads can call acquire and release on the semaphore.

This leads us to the concept of ownership. A mutex is owned by the thread acquiring it till the point the owning thread releases it, whereas for a semaphore there's no notion of ownership.

A mutex, in contrast, only guards access to shared data among competing threads by forcing threads to serialize their access to critical sections and shared data-structures.

Think of a semaphore as analogous to a car rental service such as Hertz. Each outlet has a certain number of cars it can rent out to customers. It can rent several cars to several customers at the same time, but if all the cars are rented out then any new customers need to be put on a waitlist till one of the rented cars is returned. In contrast, think of a mutex like a lone runway at a remote airport. Only a single jet can land or take off from the runway at a given point in time. No other jet can use the runway simultaneously with the first aircraft. A sketch of the permit idea using Java's Semaphore class follows.
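
A minimal sketch of the database-connection-pool idea using java.util.concurrent.Semaphore (the pool size and the simulated "connection" work are illustrative):

import java.util.concurrent.Semaphore;

public class ConnectionPoolDemo {

    // Three permits model a pool of three database connections
    private static final Semaphore permits = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    permits.acquire(); // Blocks if all three connections are handed out
                    try {
                        System.out.println("Thread " + id + " got a connection");
                        Thread.sleep(500); // Simulate using the connection
                    } finally {
                        permits.release(); // Return the connection to the pool
                        System.out.println("Thread " + id + " returned it");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
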
Mutex vs Monitor

Monitors are advanced concurrency constructs, specific to language frameworks. Mutex and semaphore are lower-level or OS-provided constructs.

A mutex provides mutual exclusion; however, at times mutual exclusion is not enough. We want to test for a predicate with a mutually exclusive lock so that no other thread can change the predicate while we test for it, but if we find the predicate to be false, we'd want to wait on a condition variable till the predicate's value is changed. Condition variables are thus the solution to spin waiting.

As an example, say we have a consumer thread that checks the size of the buffer, finds it empty, and invokes wait() on a condition variable. The predicate in this example would be the size of the buffer.

The order of signaling the condition variable and releasing the mutex can be interchanged, but generally the preference is to signal first and then release the mutex.

For one, a different thread could get scheduled and change the predicate back to false before the signaled thread gets a chance to execute; therefore the signaled thread must check the predicate again once it acquires the monitor. The idiomatic and correct usage of a monitor dictates that the predicate always be tested for in a while loop.

We can now see that a monitor is made up of a mutex and one or more condition variables.

Theoretically, another way to think about a monitor is to consider it as an entity having two queues or sets where threads can be placed. One is the entry set and the other is the wait set.

Practically, in Java each object is a monitor: it implicitly has a lock and is a condition variable too. You can think of a monitor as a mutex with a wait set. Monitors allow threads to exercise mutual exclusion as well as cooperation by allowing them to wait and signal on conditions.

Java's Monitor & Hoare vs Mesa Monitors

In Java every object is a condition variable and has an associated lock that is hidden from the developer. Each Java object exposes wait() and notify() methods. Before we execute wait() on a Java object we need to lock its hidden mutex. That is done implicitly through the synchronized keyword. If you attempt to call wait() or notify() outside of a synchronized block, an IllegalMonitorStateException occurs. It's Java reminding the developer that the mutex wasn't acquired before wait on the condition variable was invoked. wait() and notify() can only be called on an object once the calling thread becomes the owner of the monitor.

The ownership of the monitor can be achieved in the following ways:

1. The method the thread is executing has synchronized in its signature.

2. The thread is executing a block that is synchronized on the object on which wait() or notify() will be called.

3. The thread is executing a static method which is synchronized, in which case the monitor belongs to the class object.

In Mesa monitors (Mesa being a language developed by Xerox researchers in the 1970s), it is possible that in the time gap between the instant thread B calls notify() and releases its mutex and the instant at which the sleeping thread A wakes up and reacquires the mutex, the predicate is changed back to false by another thread, different from both the signaler and the awoken threads! The woken-up thread competes with other threads to acquire the mutex once the signaling thread B empties the monitor. On signaling, thread B doesn't give up the monitor just yet; rather it continues to own the monitor until it exits the monitor section.

In contrast, in Hoare monitors (Hoare being one of the original inventors of monitors), the signaling thread B yields the monitor to the woken-up thread A, and thread A enters the monitor while thread B sits out. This guarantees that the predicate will not have changed, and instead of checking for the predicate in a while loop, an if-clause would suffice. The woken-up/released thread A immediately starts execution when the signaling thread B signals that the predicate has changed. No other thread gets a chance to change the predicate since no other thread gets to enter the monitor.

Java, in particular, subscribes to Mesa monitor semantics, and the developer is always expected to check for the condition/predicate in a while loop. Mesa monitors are more efficient than Hoare monitors.

In Java, thread synchronization is achieved through the use of monitors, which are intrinsic locks associated with every object. Java's monitor mechanism follows Mesa semantics, where threads waiting on a condition variable are notified but not immediately granted execution upon notification. Instead, they re-enter the entry queue and must reacquire the lock before proceeding. This design necessitates rechecking the condition within a while loop after being notified, to ensure the condition still holds, as other threads may have altered it in the meantime.

Key Differences Between Hoare and Mesa Monitors:

Hoare Monitors: In Hoare semantics, when a thread signals a condition, the waiting thread is immediately scheduled to run, and the signaling thread is suspended. This guarantees that the condition is true when the waiting thread resumes.

Mesa Monitors: In Mesa semantics, signaling a condition only moves the waiting thread to the ready queue. The signaling thread continues execution, and the waiting thread must reacquire the lock before proceeding. This means the condition may no longer hold when the waiting thread resumes, necessitating a recheck.

Why Use a while Loop to Check Conditions:

Due to the nature of Mesa semantics, a thread awakened by a notify() or notifyAll() call must recheck the condition upon reacquiring the lock. Using an if statement could lead to incorrect behavior if the condition has changed, whereas a while loop ensures the thread waits until the condition is truly satisfied.

Example: Producer-Consumer Problem Using Mesa Monitor Semantics

Below is a Java implementation of the producer-consumer problem, demonstrating the use of wait() and notifyAll() with a while loop to handle condition checks appropriately.

import java.util.LinkedList;
import java.util.Queue;

public class ProducerConsumer {

    private static final int CAPACITY = 5;
    private final Queue<Integer> queue = new LinkedList<>();

    public static void main(String[] args) {
        ProducerConsumer pc = new ProducerConsumer();
        Thread producerThread = new Thread(pc.new Producer());
        Thread consumerThread = new Thread(pc.new Consumer());

        producerThread.start();
        consumerThread.start();
    }

    class Producer implements Runnable {

        @Override
        public void run() {
            int value = 0;
            while (true) {
                synchronized (queue) {
                    // Wait while the buffer is full; recheck on wake-up (Mesa semantics)
                    while (queue.size() == CAPACITY) {
                        try {
                            queue.wait();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                    queue.add(value);
                    System.out.println("Produced " + value);
                    value++;
                    queue.notifyAll();
                }
            }
        }
    }

    class Consumer implements Runnable {

        @Override
        public void run() {
            while (true) {
                synchronized (queue) {
                    // Wait while the buffer is empty; recheck on wake-up (Mesa semantics)
                    while (queue.isEmpty()) {
                        try {
                            queue.wait();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                    int value = queue.poll();
                    System.out.println("Consumed " + value);
                    queue.notifyAll();
                }
            }
        }
    }
}

Shared Queue: A LinkedList is used as a shared buffer between the producer and consumer threads.

Producer Thread: Produces integers and adds them to the queue. If the queue is full (queue.size() == CAPACITY), it waits (queue.wait()) until notified. After adding an item, it calls queue.notifyAll() to wake up waiting threads.

Consumer Thread: Consumes integers from the queue. If the queue is empty, it waits until notified. After removing an item, it calls queue.notifyAll() to wake up waiting threads.

while Loop for Condition Check: Both producer and consumer use a while loop to check their respective conditions (queue.size() == CAPACITY for the producer and queue.isEmpty() for the consumer). This ensures that upon being notified, the thread rechecks the condition, accounting for any changes made by other threads before it reacquired the lock.

Java's adoption of Mesa monitor semantics requires developers to use a while loop when waiting for conditions, to prevent issues arising from condition changes by other threads. This approach ensures robust and correct synchronization in concurrent programming.

For a deeper understanding of Mesa monitor semantics and their application in Java, you can refer to the following resource:

Mesa-style monitors (CS 4410, Summer 2018) — CS@Cornell

Semaphore vs Monitor

A monitor is made up of a mutex and a condition variable. One can think of a mutex as a subset of a monitor.

A monitor and a semaphore are interchangeable and, theoretically, one can be constructed out of the other or one can be reduced to the other. However, monitors take care of atomically acquiring the necessary locks, whereas with semaphores the onus of appropriately acquiring and releasing locks is on the developer, which can be error-prone.

Semaphores are lightweight when compared to monitors, which are bloated. However, the tendency to misuse semaphores is far greater than monitors. When using a semaphore and mutex pair as an alternative to a monitor, it is easy to lock the wrong mutex or just forget to lock altogether. Even though both constructs can be used to solve the same problem, monitors provide a pre-packaged solution with less dependency on a developer's skill to get the locking right.

Java monitors enforce correct locking by throwing an IllegalMonitorStateException when methods on a condition variable are invoked without first acquiring the associated lock. The exception is in a way saying that either the object's lock/mutex was not acquired at all or that an incorrect lock was acquired.

A semaphore can allow several threads access to a given resource or critical section; however, only a single thread at any point in time can own the monitor and access the associated resource.

Semaphores can be used to address the issue of missed signals; however, with monitors, additional state, called the predicate, needs to be maintained apart from the condition variable and the mutex which make up the monitor, to solve the issue of missed signals.
Amdahl’s Law
Blindly adding threads to speed up program execution may not always be a
good idea. Find out what Amdahl’s Law says about parallelizing a program.

The law specifies the cap on the maximum speedup that can be achieved
when parallelizing the execution of a program.

If you have a poultry farm where a hundred hens lay eggs each day, then no
matter how many people you hire to process the laid eggs, you still need to
wait an entire day for the 100 eggs to be laid. Increasing the number of
workers on the farm can’t shorten the time it takes for a hen to lay an egg.
Similarly, software programs consist of parts which can’t be sped up even if
the number of processors is increased. These parts of the program must
execute serially and aren’t amenable to parallelism.
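
Formally, if P is the fraction of a program that can be parallelized and N is the number of processors, Amdahl's Law caps the speedup at:

S(N) = 1 / ((1 − P) + P / N)

As N grows without bound, S(N) approaches 1 / (1 − P). For a program that is 10% serial (P = 0.9), that limit is 1 / 0.1 = 10.
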
As the formula shows, the theoretical maximum speed-up for our program with 10% serial execution will be 10. We can't speed up our program execution more than 10 times compared to when we run the same program on a single CPU or thread. To achieve greater speed-ups than 10 we must optimize or parallelize the serially executed portion of the code.

Another important aspect to realize is that when we speed up our program execution by roughly 5 times, we do so by employing 10 processors. The utilization of these 10 processors, in turn, decreases to roughly 50%, because the 10 processors now remain idle for part of the time that a single processor would have been busy. Utilization is defined as the speedup divided by the number of processors.

There are other factors, such as the memory architecture, cache misses, and network and disk I/O, that can affect the execution time of a program, and the actual speed-up might be less than the calculated one. Amdahl's law works on a problem of fixed size. However, as computing resources are improved, algorithms run on larger and larger datasets. As the dataset size grows, the parallelizable portion of the program grows faster than the serial portion, and a more realistic assessment of performance is given by Gustafson's law.

Amdahl's Law suggests that the maximum speedup is limited by the serial portion of the program. As more processors are added, the impact of the non-parallelizable section becomes the bottleneck, leading to diminishing returns.

Gustafson's Law:

Gustafson's Law offers a different perspective by considering scenarios where the problem size scales with the number of processors. It posits that as computing resources increase, the parallelizable portion of the workload can grow, making parallel computing more efficient for larger datasets. The law is represented as:

S = N − (N − 1) × (1 − P)

Where:

S is the scaled speedup.

N is the number of processors.

P is the parallel fraction of the workload.
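
For example, with N = 10 processors and a workload that is P = 0.9 parallel, the scaled speedup is S = 10 − 9 × 0.1 = 9.1, close to linear.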

Gustafson’s Law indicates that with larger problem sizes, the parallel portion
dominates, allowing for near-linear speedup with the addition of more
processors. This perspective is particularly relevant in modern computing,
where increasing dataset sizes benefit significantly from parallel processing
capabilities.

Key Differences:

Problem Size Assumption: Amdahl's Law assumes a fixed problem size, while Gustafson's Law considers scalable problem sizes that grow with the number of processors.

Performance Implication: Amdahl's Law highlights the limitations of parallelism due to the serial portion of a task, leading to diminishing returns. In contrast, Gustafson's Law emphasizes that increasing the problem size can lead to linear or near-linear speedup, showcasing the potential of parallel computing for large-scale problems.

Practical Implications:

In real-world applications, especially with the advent of big data and complex simulations, workloads often scale with available computing resources. Gustafson's Law provides a more optimistic and realistic assessment of performance improvements in such scenarios, highlighting the importance of designing algorithms that can effectively leverage parallelism as datasets expand.

For a visual explanation of Gustafson's Law and its implications in parallel computing, you may find the following video resource helpful: Gustafson's Law Explained.

Moore's Law

Gordon Moore, co-founder of Intel, observed that the number of transistors that can be packed into a given unit of space doubles about every two years, and in turn the processing power of computers doubles while the cost halves. Moore's law is more of an observation than a law grounded in formal scientific research. It states that the number of transistors per square inch on a chip will double every two years. This exponential growth has been going on since the 70's and is only now starting to slow down.

The increase in clock speeds of processors has slowed down much faster than the increase in the number of transistors that can be placed on a microchip. If we plot clock speeds, we find that the exponential growth stopped after 2003 and the trend line flattened out. The clock speed (proportional to the difference between supply voltage and threshold voltage) cannot increase because the supply voltage is already down to an extent where it cannot be decreased further to get dramatic gains in clock speed. In the decade from 2000 to 2009, clock speed increased from just 1.3 GHz to 2.8 GHz, merely doubling rather than increasing 32 times as Moore's law would suggest.

Another analogy is to think of a bullock cart being pulled by an ox. We can breed the ox to be stronger and more powerful to pull more load, but eventually there's a limit to how strong the ox can get. To pull more load, an easier solution is to attach several oxen to the bullock cart. The computing industry is also going in the direction of this analogy: more cores rather than faster single cores.
Thread Safety & Synchronized

A class and its public APIs are labelled as thread-safe if multiple threads can consume the exposed APIs without causing race conditions or state corruption for the class. Note that a composition of two or more thread-safe classes doesn't guarantee the resulting type to be thread-safe.

Each object in Java has an entity associated with it called the "monitor lock" or just monitor. Think of it as an exclusive lock. Once a thread gets hold of the monitor of an object, it has exclusive access to all the methods marked as synchronized. No other thread will be allowed to invoke a method on the object that is marked as synchronized and will block till the first thread releases the monitor, which is the equivalent of the first thread exiting the synchronized method.

With the use of the synchronized keyword, Java forces you to implicitly acquire and release the monitor lock for the object within the same method! One can't explicitly acquire and release the monitor in different methods. This has an important ramification: the same thread must acquire and release the monitor! In contrast, if we used a semaphore, we could acquire/release it in different methods or from different threads.

In Java, synchronizing on an object and then reassigning that object reference can lead to unexpected behavior and exceptions, such as IllegalMonitorStateException. This occurs because synchronization in Java is tied to the specific object instance, not the reference variable. If the reference is changed after acquiring the lock, subsequent synchronization attempts on the new object may fail, as the thread no longer holds the monitor for the original object.

Understanding the Issue:

When a thread synchronizes on an object, it acquires the monitor lock associated with that specific object instance. If, during execution, the reference to that object is reassigned to a different instance, the original lock remains with the initial object. Any further synchronization attempts using the new reference will not recognize the original lock, leading to potential exceptions or race conditions.

Example Demonstrating the Problem:

public class SynchronizationIssueDemo {

    private static Boolean flag = Boolean.TRUE;

    public static void main(String[] args) {
        Thread thread1 = new Thread(() -> {
            synchronized (flag) {
                try {
                    System.out.println("Thread 1: Acquired lock on flag");
                    // Simulate some work
                    Thread.sleep(1000);
                    System.out.println("Thread 1: Waiting on flag");
                    flag.wait(); // May throw IllegalMonitorStateException
                    System.out.println("Thread 1: Resumed");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } catch (IllegalMonitorStateException e) {
                    System.out.println("Thread 1: Caught IllegalMonitorStateException");
                }
            }
        });

        Thread thread2 = new Thread(() -> {
            try {
                // Ensure thread1 acquires the lock first
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            synchronized (flag) {
                System.out.println("Thread 2: Acquired lock on flag");
                // Reassign the flag reference
                flag = Boolean.FALSE;
                System.out.println("Thread 2: flag reassigned");
                // Notify any waiting threads on the old flag
                flag.notifyAll(); // May throw IllegalMonitorStateException
            }
        });

        thread1.start();
        thread2.start();
    }
}

Thread 1:

Synchronizes on the flag object (initially Boolean.TRUE).

Sleeps for 1 second to simulate work.

Attempts to call flag.wait().

Thread 2:

Sleeps for 0.5 seconds to allow Thread 1 to acquire the lock first.

Synchronizes on the flag object.

Reassigns flag to Boolean.FALSE.

Attempts to call flag.notifyAll().

Potential Issues:

Reassignment of flag: When Thread 2 reassigns flag to a new Boolean object, the synchronization context changes. Thread 1 is still synchronized on the original Boolean.TRUE object, but subsequent operations reference the new Boolean.FALSE object.

IllegalMonitorStateException: If Thread 1 attempts to call flag.wait() after the reassignment, it may throw an IllegalMonitorStateException because it's invoking wait() on the new flag object without holding its monitor. Similarly, Thread 2's call to flag.notifyAll() may throw the same exception if it doesn't hold the monitor for the new flag object (Baeldung).

Best Practices to Avoid This Issue:

Avoid Reassigning Synchronized Objects: Once an object is used for synchronization, its reference should remain constant to ensure consistent locking behavior.

Use Final References: Declare synchronized objects as final to prevent reassignment:

private static final Object lock = new Object();

Consistent Locking: Ensure all threads synchronize on the same object instance throughout the program to maintain proper coordination.

Revised Example with Proper Synchronization:

public class SynchronizationProperDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) {
        Thread thread1 = new Thread(() -> {
            synchronized (lock) {
                try {
                    System.out.println("Thread 1: Acquired lock");
                    // Simulate some work
                    Thread.sleep(1000);
                    System.out.println("Thread 1: Waiting");
                    lock.wait();
                    System.out.println("Thread 1: Resumed");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread thread2 = new Thread(() -> {
            try {
                // Ensure thread1 acquires the lock first
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            synchronized (lock) {
                System.out.println("Thread 2: Acquired lock");
                lock.notifyAll();
                System.out.println("Thread 2: Notified all");
            }
        });

        thread1.start();
        thread2.start();
    }
}

Both threads synchronize on the same lock object, which is declared as final to prevent reassignment. Thread 1 waits on the lock object, and Thread 2 notifies waiting threads on the same lock object. This ensures proper synchronization without the risk of IllegalMonitorStateException.

Reassigning objects used for synchronization can lead to complex bugs and exceptions in multithreaded Java applications. By maintaining consistent references and following best practices, such issues can be avoided, leading to more robust and reliable code.

For more information on IllegalMonitorStateException and proper synchronization techniques, refer to the following resources:

IllegalMonitorStateException in Java | Baeldung

How to Handle the Illegal Monitor State Exception in Java — Rollbar

Marking all the methods of a class synchronized in order to make it thread-safe may reduce throughput. As a naive example, consider a class with two completely independent properties accessed by getter methods. Both getters synchronize on the same object, and while one is being invoked, the other would be blocked because of synchronization on the same object. The solution is to lock at a finer granularity, possibly using two different locks, one for each property, so that both can be accessed in parallel, as in the sketch below.
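
A minimal sketch of that lock-splitting idea (the two properties and lock names are illustrative):

public class FineGrainedLocking {

    private final Object nameLock = new Object();
    private final Object ageLock = new Object();

    private String name = "unset";
    private int age = 0;

    // Guarded by nameLock only, so it never blocks age accessors
    public String getName() {
        synchronized (nameLock) {
            return name;
        }
    }

    public void setName(String name) {
        synchronized (nameLock) {
            this.name = name;
        }
    }

    // Guarded by ageLock only, so it never blocks name accessors
    public int getAge() {
        synchronized (ageLock) {
            return age;
        }
    }

    public void setAge(int age) {
        synchronized (ageLock) {
            this.age = age;
        }
    }
}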

Wait & Notify

The wait method is exposed on each Java object. Each Java object can act as a condition variable. When a thread executes the wait method, it releases the monitor for the object and is placed in the wait queue. Note that the thread must be inside a synchronized block of code that synchronizes on the same object as the one on which wait() is being called; in other words, the thread must hold the monitor of the object on which it'll call wait. If not, an IllegalMonitorStateException is raised!

Like the wait method, notify() can only be called by the thread which owns the monitor for the object on which notify() is being called, else an IllegalMonitorStateException is thrown. The notify method will awaken one of the threads in the associated wait queue, i.e., waiting on the object's monitor. However, this thread will not be scheduled for execution immediately and will compete with other active threads that are trying to synchronize on the same object. The thread which executed notify will also need to give up the object's monitor before any one of the competing threads can acquire the monitor and proceed forward.

The notifyAll() method is the same as notify() except that it wakes up all the threads that are waiting on the object's monitor.

Interrupting Threads

You'll often come across InterruptedException being thrown from functions. When a thread wait()-s or sleep()-s, one way for it to give up waiting/sleeping is to be interrupted. If a thread is interrupted while waiting/sleeping, it'll wake up and immediately throw InterruptedException.

The Thread class exposes the interrupt() method, which can be used to interrupt a thread that is blocked in a sleep() or wait() call. Note that invoking the interrupt method only sets a flag; blocking methods such as sleep and wait check this flag to learn that the current thread has been interrupted, at which point an InterruptedException is thrown.

In Java, threads can be interrupted to signal that they should stop their current activity and handle the interruption appropriately. When a thread is in a sleeping state via Thread.sleep(), it can be interrupted by another thread, causing it to throw an InterruptedException.

Example: Interrupting a Sleeping Thread

The following example demonstrates a thread that is initially set to sleep for
one hour but is interrupted by the main thread shortly after starting.

public class ThreadInterruptionDemo {

    public static void main(String[] args) {
        // Create a new thread that will sleep for an extended period
        Thread sleepingThread = new Thread(() -> {
            try {
                System.out.println("Thread: Going to sleep for 1 hour.");
                // Sleep for 1 hour (3600000 milliseconds)
                Thread.sleep(3600000);
                System.out.println("Thread: Woke up naturally.");
            } catch (InterruptedException e) {
                System.out.println("Thread: Interrupted during sleep.");
                // Handle the interruption appropriately
                Thread.currentThread().interrupt(); // Preserve the interrupt status
            }
        });

        // Start the sleeping thread
        sleepingThread.start();

        // Main thread sleeps for 2 seconds before interrupting the sleeping thread
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            System.out.println("Main thread interrupted during sleep.");
            Thread.currentThread().interrupt();
        }

        // Interrupt the sleeping thread
        System.out.println("Main thread: Interrupting the sleeping thread.");
        sleepingThread.interrupt();
    }
}

Thread Creation:

A new thread (sleepingThread) is created using a lambda expression. Within the thread's run method, it attempts to sleep for one hour (3600000 milliseconds).

Starting the Thread:

The sleepingThread is started, beginning its execution.

Main Thread Actions:

The main thread sleeps for 2 seconds (2000 milliseconds) to allow the sleepingThread to enter its sleep state. After waking up, the main thread interrupts the sleepingThread by calling its interrupt() method.

Handling Interruption:

When sleepingThread is interrupted during its sleep, it catches the InterruptedException. Within the catch block, it prints a message indicating it was interrupted and calls Thread.currentThread().interrupt() to preserve its interrupted status.

Key Points:

Thread.sleep(): This method pauses the current thread's execution for the specified duration. If the thread is interrupted while sleeping, it throws an InterruptedException (GeeksforGeeks).

Thread.interrupt(): This method sets the interrupt status of the target thread. If the thread is in a blocking operation like Thread.sleep(), it will cause the blocking method to throw an InterruptedException.

InterruptedException: This exception is thrown when a thread is interrupted while it's in a blocking operation. It's essential to handle this exception to ensure the thread can respond appropriately to interruption requests.

Preserving Interrupt Status: After catching an InterruptedException, it's a good practice to restore the thread's interrupt status by calling Thread.currentThread().interrupt(). This ensures that higher-level interrupt handlers are aware of the interruption.

Practical Considerations:

Resource Management: Ensure that any resources held by the thread are properly released when an interruption occurs, to prevent resource leaks.

Graceful Termination: Interruption is a cooperative mechanism. Threads should periodically check their interrupt status, especially in long-running operations, to terminate gracefully when interrupted.

Avoiding Infinite Sleep: While the example uses a long sleep duration, in real-world applications it's advisable to use shorter sleep intervals and check for interruption more frequently to remain responsive to interruption requests.
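To make the graceful-termination point concrete, here is a minimal sketch (the CooperativeWorker class and its doUnitOfWork() helper are illustrative names, not from the article): the worker polls its interrupt status between units of work instead of sleeping for long stretches.

public class CooperativeWorker implements Runnable {

    @Override
    public void run() {
        // Keep working until this thread's interrupt status is set
        while (!Thread.currentThread().isInterrupted()) {
            doUnitOfWork();
        }
        // Release any held resources here before the thread exits
        System.out.println("Worker exiting gracefully after interruption.");
    }

    private void doUnitOfWork() {
        // Placeholder for a small, bounded chunk of work
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new CooperativeWorker());
        worker.start();
        Thread.sleep(500);   // Let the worker run briefly
        worker.interrupt();  // Request cooperative termination
        worker.join();
    }
}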
For a visual demonstration and further explanation of the thread sleep and interrupt methods in Java, the video "Java Programming — Sleep and Interrupt Method in Threads" may be helpful.
Volatile
If you have a variable, say a counter, that is being worked on by a thread, it is possible the thread keeps a copy of the counter variable in the CPU cache and manipulates it there rather than writing to the main memory. The JVM decides when to update the main memory with the value of the counter; in the meantime, other threads reading the counter from the main memory may end up reading a stale value.

If a variable is declared volatile, then whenever a thread writes or reads the volatile variable, the read and write always happen in main memory. As a further guarantee, all the variables that are visible to the writing thread also get written out to main memory alongside the volatile variable. Similarly, all the variables visible to the reading thread alongside the volatile variable will have their latest values visible to the reading thread.

Volatile comes into play because of the multiple levels of memory in hardware architecture required for performance enhancements. If there's a single thread that writes to the volatile variable and other threads only read the volatile variable, then just using volatile is enough; however, if there's a possibility of multiple threads writing to the volatile variable, then "synchronized" would be required to ensure atomic writes to the variable.
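A minimal sketch of that guidance (the VolatileUsageSketch class below is illustrative): a volatile flag with a single writer is safe on its own, while a counter incremented by multiple threads needs synchronized, because ++ is a read-modify-write.

public class VolatileUsageSketch {

    // Single writer, many readers: volatile alone is sufficient
    private volatile boolean shutdownRequested = false;

    // Multiple writers: volatile would not make ++ atomic
    private int counter = 0;

    public void requestShutdown() {
        shutdownRequested = true; // write is flushed to main memory
    }

    public boolean isShutdownRequested() {
        return shutdownRequested; // read comes from main memory
    }

    public synchronized void increment() {
        counter++; // read-modify-write must be made atomic
    }
}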
Reentrant Locks & Condition Variables
Java's answer to the traditional mutex is the reentrant lock, which comes with additional bells and whistles. It is similar to the implicit monitor lock accessed when using synchronized methods or blocks. With the reentrant lock, you are free to lock and unlock it in different methods, but not with different threads. If a thread attempts to unlock a reentrant lock object that it didn't lock initially, it'll get an IllegalMonitorStateException. This behavior is similar to when a thread attempts to unlock a pthread mutex it doesn't own.

A reentrant lock exposes an API to create new condition variables.

In Java's concurrency framework, the ReentrantLock class provides a more flexible and sophisticated mechanism for thread synchronization compared to the traditional synchronized keyword. One of its powerful features is the ability to create multiple Condition objects associated with a single lock. This allows for more granular control over thread coordination, enabling threads to wait for specific conditions to be met before proceeding.

Understanding ReentrantLock and Condition:

ReentrantLock: A reentrant mutual exclusion lock with the same basic behavior as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities. It allows the lock to be acquired multiple times by the same thread without causing a deadlock.

Condition: Associated with a Lock, a Condition provides a means for one thread to suspend execution (to "wait") until notified by another thread that some state condition may now be true. This is similar to the Object class's wait() and notify() methods but offers greater flexibility.

Creating and Using Condition Variables with ReentrantLock:

To utilize Condition variables with a ReentrantLock, follow these steps:

1. Instantiate a ReentrantLock :

Lock lock = new ReentrantLock();

2. Create Condition Variables from the Lock:

Condition condition1 = lock.newCondition();
Condition condition2 = lock.newCondition();

3. Use the Condition Variables in Critical Sections:

Awaiting a Condition:

lock.lock();
try {
while (!someCondition) {
condition1.await();
}
// Perform actions when the condition is met
} finally {
lock.unlock();
}
Signaling a Condition:

lock.lock();
try {
// Update the condition state
condition1.signal(); // or condition1.signalAll();
} finally {
lock.unlock();
}

Example: Producer-Consumer Problem Using ReentrantLock and Condition

The producer-consumer problem is a classic synchronization scenario where producers generate data items and place them in a shared buffer, and consumers retrieve and process these items. Proper synchronization ensures that producers don't add data to a full buffer and consumers don't remove data from an empty buffer.

Here's how you can implement this using ReentrantLock and Condition variables:

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ProducerConsumerWithReentrantLock {

    private static final int BUFFER_CAPACITY = 5;

    private final Queue<Integer> buffer = new LinkedList<>();
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public static void main(String[] args) {
        ProducerConsumerWithReentrantLock pc = new ProducerConsumerWithReentrantLock();
        Thread producer = new Thread(pc.new Producer());
        Thread consumer = new Thread(pc.new Consumer());

        producer.start();
        consumer.start();
    }

    class Producer implements Runnable {

        @Override
        public void run() {
            int value = 0;
            try {
                while (true) {
                    produce(value++);
                    Thread.sleep(100); // Simulate time taken to produce an item
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private void produce(int value) throws InterruptedException {
            lock.lock();
            try {
                while (buffer.size() == BUFFER_CAPACITY) {
                    System.out.println("Buffer is full. Producer is waiting.");
                    notFull.await();
                }
                buffer.add(value);
                System.out.println("Produced: " + value);
                notEmpty.signal();
            } finally {
                lock.unlock();
            }
        }
    }

    class Consumer implements Runnable {

        @Override
        public void run() {
            try {
                while (true) {
                    consume();
                    Thread.sleep(150); // Simulate time taken to consume an item
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private void consume() throws InterruptedException {
            lock.lock();
            try {
                while (buffer.isEmpty()) {
                    System.out.println("Buffer is empty. Consumer is waiting.");
                    notEmpty.await();
                }
                int value = buffer.poll();
                System.out.println("Consumed: " + value);
                notFull.signal();
            } finally {
                lock.unlock();
            }
        }
    }
}

Shared Buffer: A LinkedList is used as a finite-sized buffer with a capacity defined by BUFFER_CAPACITY.

Locks and Conditions:

A ReentrantLock (lock) manages access to the shared buffer.

Two Condition variables:

notFull: Indicates that the buffer is not full, allowing the producer to add items.

notEmpty: Indicates that the buffer is not empty, allowing the consumer to retrieve items.

Producer Thread:

Continuously produces integer values. Before adding an item to the buffer, it acquires the lock and checks if the buffer is full. If full, it waits on the notFull condition. Once space is available, it adds the item, prints a message, and signals the notEmpty condition to notify the consumer.

Consumer Thread:

Continuously consumes items from the buffer. Before retrieving an item, it acquires the lock and checks if the buffer is empty. If empty, it waits on the notEmpty condition. Once an item is available, it retrieves the item, prints a message, and signals the notFull condition to notify the producer.

Key Points:

Avoiding Spurious Wakeups: When using Condition variables, it's essential to account for spurious wakeups, that is, situations where a thread resumes from waiting without an explicit signal. To handle this, always use a loop to recheck the condition after await() returns. This ensures that the thread proceeds only when the desired condition is genuinely met.

lock.lock();
try {
while (!conditionMet) {
condition.await();
}
// Proceed when condition is met
} finally {
lock.unlock();
}

Signaling Strategies: The Condition interface provides two signaling methods:

signal(): Wakes up one waiting thread. Use this when only one thread needs to proceed.

signalAll(): Wakes up all waiting threads. This is useful when multiple threads might be waiting for the same condition, and any of them can proceed.

Choose the appropriate method based on your application's concurrency requirements.

Fairness Considerations: ReentrantLock can be instantiated with a fairness policy. A fair lock favors granting access to the longest-waiting thread, preventing thread starvation. However, fair locks may have reduced throughput compared to non-fair locks due to increased overhead. Assess your application's needs to decide whether to use a fair lock.

Lock fairLock = new ReentrantLock(true); // Fair lock

Performance Implications: While ReentrantLock offers advanced features like multiple Condition variables and interruptible lock acquisition, it may introduce more overhead compared to intrinsic locks (synchronized blocks). Use ReentrantLock when you need its specific capabilities; otherwise, the simpler synchronized mechanism might be more efficient.

Best Practices:

Always Release Locks: Ensure that locks are released in a finally block
to prevent deadlocks, even if exceptions occur.

lock.lock();
try {
// Critical section
} finally {
lock.unlock();
}

Minimize Lock Scope: Keep the code within the locked section as brief as
possible to reduce contention and improve performance.

Avoid Locking Unrelated Code: Only protect shared mutable state with
locks. Locking code that doesn’t access shared resources can lead to
unnecessary performance bottlenecks.
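In the same spirit, ReentrantLock also supports timed acquisition with tryLock(), which lets a thread back off rather than block indefinitely. A minimal sketch, with an illustrative TryLockSketch class:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {

    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateSharedState() throws InterruptedException {
        // Back off if the lock cannot be acquired within 500 ms
        if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            return false; // Caller can retry or take another path
        }
        try {
            // Critical section, kept as brief as possible
            return true;
        } finally {
            lock.unlock();
        }
    }
}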

Notice how we can now have multiple condition variables associated with the same lock. In the synchronized paradigm, we could only have one wait-set associated with each object.

Java's util.concurrent package provides several classes that can be used for solving everyday concurrency problems and should always be preferred over reinventing the wheel. Its offerings include thread-safe data structures such as ConcurrentHashMap.
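As a quick illustration of that advice (the ConcurrentMapSketch class is hypothetical): ConcurrentHashMap's merge() performs an atomic read-modify-write, so a simple counter map needs no external locking.

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapSketch {

    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();

        // merge() is atomic in ConcurrentHashMap, safe to call from many threads
        hits.merge("/home", 1, Integer::sum);
        hits.merge("/home", 1, Integer::sum);

        System.out.println(hits.get("/home")); // prints 2
    }
}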

Missed Signals
A missed signal happens when a signal is sent by a thread before the other thread starts waiting on a condition. Missed signals are caused by using the wrong concurrency constructs. In the example below, a condition variable is used to coordinate between the signaller and the waiter thread; if the condition is signaled at a time when no thread is waiting on it, the signal is missed.

In multithreaded Java applications, a missed signal occurs when a thread sends a notification (via notify() or signal()) before another thread begins waiting for that notification (via wait() or await()). This can lead to situations where the waiting thread misses the signal and remains blocked indefinitely, causing potential deadlocks or unresponsive behavior.

Example Demonstrating a Missed Signal:

Consider the following Java program where a Signaller thread sends a signal, and a Waiter thread waits for that signal. Without proper coordination, the Waiter might miss the signal if it starts waiting after the Signaller has already sent it.

public class MissedSignalExample {

    private static final Object lock = new Object();
    private static boolean isSignalled = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(new Waiter(), "Waiter");
        Thread signaller = new Thread(new Signaller(), "Signaller");

        // Start the Signaller thread first to induce a missed signal scenario
        signaller.start();
        Thread.sleep(100); // Ensure Signaller runs before Waiter
        waiter.start();
    }

    static class Waiter implements Runnable {

        @Override
        public void run() {
            synchronized (lock) {
                while (!isSignalled) {
                    try {
                        System.out.println(Thread.currentThread().getName() + " is waiting for the signal.");
                        lock.wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        System.out.println("Waiter interrupted.");
                    }
                }
                System.out.println(Thread.currentThread().getName() + " received the signal.");
            }
        }
    }

    static class Signaller implements Runnable {

        @Override
        public void run() {
            synchronized (lock) {
                System.out.println(Thread.currentThread().getName() + " is sending the signal.");
                isSignalled = true;
                lock.notify();
                System.out.println(Thread.currentThread().getName() + " has sent the signal.");
            }
        }
    }
}

Shared Lock Object: Both Waiter and Signaller threads synchronize on the
same lock object to coordinate their actions.

Signaller Thread:

Acquires the lock and sets the isSignalled flag to true .

Calls lock.notify() to wake up a waiting thread.

Releases the lock after sending the signal.

Waiter Thread:

Acquires the lock and enters a loop, checking the isSignalled flag.

If isSignalled is false , it calls lock.wait() to wait for a notification.

Once notified and the condition is met, it proceeds with its execution.

Issue:

In this setup, the isSignalled flag protects the Waiter: even if the Signaller runs first, the Waiter checks the flag before waiting and never blocks. Had the Waiter called lock.wait() unconditionally, without the flag check, it would have missed the signal and remained blocked indefinitely, because notify() only wakes threads that are already waiting; the signal is not stored or re-sent.

Solution:
To prevent missed signals, ensure that the waiting thread is ready to receive
the signal before the signal is sent. This can be achieved by starting the
Waiter thread before the Signaller thread. Additionally, using higher-level
concurrency constructs like java.util.concurrent classes can help manage
such scenarios more effectively.

Revised Example Using CountDownLatch:

Java's CountDownLatch is a synchronization aid that allows one or more threads to wait until a set of operations being performed by other threads completes. Because the latch stores its count, the Waiter proceeds even if the Signaller counts down before the Waiter calls await(), eliminating the missed-signal problem.

import java.util.concurrent.CountDownLatch;

public class CountDownLatchExample {

    private static final CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) {
        Thread waiter = new Thread(new Waiter(), "Waiter");
        Thread signaller = new Thread(new Signaller(), "Signaller");

        waiter.start();
        signaller.start();
    }

    static class Waiter implements Runnable {

        @Override
        public void run() {
            try {
                System.out.println(Thread.currentThread().getName() + " is waiting for the signal.");
                latch.await();
                System.out.println(Thread.currentThread().getName() + " received the signal.");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                System.out.println("Waiter interrupted.");
            }
        }
    }

    static class Signaller implements Runnable {

        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " is sending the signal.");
            latch.countDown();
            System.out.println(Thread.currentThread().getName() + " has sent the signal.");
        }
    }
}

CountDownLatch: Initialized with a count of 1, it ensures that the Waiter thread waits until the Signaller thread decrements the count.

Waiter Thread:

Calls latch.await() to wait until the count reaches zero. Once the count is decremented by the Signaller, it proceeds with its execution.

Signaller Thread:

Calls latch.countDown() to decrement the count, releasing the Waiter thread.

By using CountDownLatch, we ensure that the Waiter does not miss the signal, regardless of the order in which the threads are started. This approach provides a more robust solution to the missed signal problem.

Semaphore in Java
Java's semaphore can be release()-d or acquire()-d for signalling amongst threads. The important call-out when using semaphores is to make sure that the permits acquired equal the permits returned. In the following example, a runtime exception is thrown mid-task; without a finally block releasing the permit, this would cause a deadlock.

In Java, a Semaphore is a synchronization aid that controls access to a shared resource by maintaining a set of permits. Threads can acquire permits using the acquire() method and release them using the release() method. It's crucial to ensure that every acquired permit is eventually released; failing to do so can lead to issues such as deadlocks, where threads are unable to proceed because the necessary permits are unavailable.

Example Demonstrating Deadlock Due to Unreleased Permits:

Consider the following Java program where a thread acquires a permit from a semaphore but encounters a runtime exception while performing its task. If the permit were released only at the end of the try block, the exception would prevent it from ever being returned, blocking other threads indefinitely. The finally block in the code below guards against exactly this.

import java.util.concurrent.Semaphore;

public class SemaphoreDeadlockExample {

    private static final Semaphore semaphore = new Semaphore(1);

    public static void main(String[] args) {
        Thread thread1 = new Thread(new Task(), "Thread-1");
        Thread thread2 = new Thread(new Task(), "Thread-2");

        thread1.start();
        thread2.start();
    }

    static class Task implements Runnable {

        @Override
        public void run() {
            try {
                System.out.println(Thread.currentThread().getName() + " attempting to acquire a permit.");
                semaphore.acquire();
                System.out.println(Thread.currentThread().getName() + " acquired a permit.");

                // Simulate some work
                performTask();

            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                System.out.println(Thread.currentThread().getName() + " was interrupted.");
            } finally {
                // Ensure the permit is released even if an exception occurs
                System.out.println(Thread.currentThread().getName() + " releasing the permit.");
                semaphore.release();
            }
        }

        private void performTask() {
            System.out.println(Thread.currentThread().getName() + " performing task.");
            if (Math.random() > 0.5) {
                throw new RuntimeException(Thread.currentThread().getName() + " encountered an error during task execution.");
            }
            System.out.println(Thread.currentThread().getName() + " completed task.");
        }
    }
}

Semaphore Initialization: A Semaphore is initialized with one permit, allowing only one thread to access the critical section at a time.

Task Runnable:

Each thread attempts to acquire a permit before entering the critical section.

The performTask() method simulates work and randomly throws a RuntimeException to mimic an unexpected error.

The finally block ensures that the permit is released regardless of whether an exception occurs, preventing deadlocks.

Key Points:

Proper Use of finally Block: Always release the semaphore permit in a finally block to guarantee that it's returned even if an exception is thrown. This practice prevents situations where a permit is permanently lost, leading to deadlocks.

Handling Runtime Exceptions: Be mindful of code that can throw runtime exceptions between acquiring and releasing a permit. Ensure that such exceptions don't prevent the release of permits.

Semaphore Fairness: By default, Semaphore uses a non-fair ordering policy, which can lead to thread starvation in some cases. If a fair ordering is desired, you can initialize the semaphore with the fairness parameter set to true:

Semaphore semaphore = new Semaphore(1, true);

Keep in mind that fair semaphores may have lower throughput due to
increased overhead.

Whenever using locks or semaphores, remember to unlock or release the semaphore in a finally block.

When using semaphores for thread synchronization, it's essential to ensure that every acquired permit is eventually released. This is typically achieved by placing the release operation in a finally block. Failure to do so can result in deadlocks, where threads are unable to proceed because the necessary permits are unavailable. Proper exception handling and adherence to best practices in concurrency can help prevent such issues.

Spurious Wakeups
Spurious means fake or false. A spurious wakeup means a thread is woken up even though no signal has been received. Spurious wakeups are a reality and are one of the reasons why the pattern for waiting on a condition variable happens in a while loop, as discussed in earlier chapters. There are technical reasons beyond our current scope as to why spurious wakeups happen, but for the curious: on POSIX-based operating systems, when a process is signaled, all its waiting threads are woken up.

In multithreaded Java applications, a spurious wakeup occurs when a thread waiting on a condition variable (using methods like wait(), sleep(), or join()) resumes execution without an explicit notification (notify() or notifyAll()). This unexpected behavior can lead to threads proceeding under false assumptions, potentially causing inconsistent states or errors. To handle spurious wakeups effectively, it's essential to use a loop that rechecks the condition after each wakeup, ensuring that the thread only proceeds when the desired condition is truly met.

Example Demonstrating Spurious Wakeup Handling:

Consider the following Java program where a Producer thread adds items to
a shared queue, and a Consumer thread removes items from it. Both threads
use a condition variable to coordinate their actions. The Consumer thread
waits for items to be available in the queue before attempting to consume
them. To handle potential spurious wakeups, the consumer checks the
condition in a loop.

import java.util.LinkedList;
import java.util.Queue;

public class SpuriousWakeupExample {

    private static final int MAX_CAPACITY = 5;
    private static final Queue<Integer> queue = new LinkedList<>();

    public static void main(String[] args) {
        Thread producer = new Thread(new Producer(), "Producer");
        Thread consumer = new Thread(new Consumer(), "Consumer");

        producer.start();
        consumer.start();
    }

    static class Producer implements Runnable {

        @Override
        public void run() {
            int item = 0;
            while (true) {
                synchronized (queue) {
                    while (queue.size() == MAX_CAPACITY) {
                        try {
                            System.out.println("Queue is full. " + Thread.currentThread().getName() + " is waiting.");
                            queue.wait();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            System.out.println("Producer interrupted.");
                        }
                    }
                    System.out.println("Producing item: " + item);
                    queue.add(item++);
                    queue.notifyAll();
                }
                // Simulate time taken to produce an item
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    System.out.println("Producer sleep interrupted.");
                }
            }
        }
    }

    static class Consumer implements Runnable {

        @Override
        public void run() {
            while (true) {
                synchronized (queue) {
                    while (queue.isEmpty()) {
                        try {
                            System.out.println("Queue is empty. " + Thread.currentThread().getName() + " is waiting.");
                            queue.wait();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            System.out.println("Consumer interrupted.");
                        }
                    }
                    int item = queue.poll();
                    System.out.println("Consuming item: " + item);
                    queue.notifyAll();
                }
                // Simulate time taken to consume an item
                try {
                    Thread.sleep(150);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    System.out.println("Consumer sleep interrupted.");
                }
            }
        }
    }
}

Shared Queue: A LinkedList is used as a shared queue between the producer and consumer threads.

Producer Thread:

Acquires the lock on the queue and checks if the queue has reached its MAX_CAPACITY.

If the queue is full, it enters a loop and calls queue.wait(), releasing the lock and waiting until notified.

Upon being notified, it rechecks the condition to ensure the queue is not full before adding a new item.

After adding an item, it calls queue.notifyAll() to wake up any waiting threads.

Consumer Thread:

Acquires the lock on the queue and checks if the queue is empty.

If the queue is empty, it enters a loop and calls queue.wait(), releasing the lock and waiting until notified.

Upon being notified, it rechecks the condition to ensure the queue is not empty before consuming an item.

After removing an item, it calls queue.notifyAll() to wake up any waiting threads.

Key Points:

Using a Loop for Condition Checking: Both the producer and consumer threads use a while loop to check their respective conditions (queue.size() == MAX_CAPACITY for the producer and queue.isEmpty() for the consumer). This loop ensures that after waking up, the thread re-evaluates the condition before proceeding. This approach handles spurious wakeups gracefully, as the thread will go back to waiting if the condition is not met.

Avoiding if for Condition Checking: Using an if statement instead of a while loop can lead to issues if a spurious wakeup occurs. With an if statement, the thread would proceed without rechecking the condition, potentially leading to errors such as consuming from an empty queue or producing into a full queue.

Synchronization and Notification: The synchronized block ensures that only one thread accesses the shared queue at a time, maintaining thread safety. The queue.notifyAll() method wakes up all waiting threads, allowing them to re-evaluate their conditions.

Spurious wakeups are a reality in multithreaded programming and can occur without explicit notifications. To handle them effectively, always use a loop to check the condition when waiting on a condition variable. This practice ensures that your thread only proceeds when the desired condition is genuinely satisfied, leading to more robust and reliable concurrent applications.

For a more in-depth understanding of spurious wakeups and how to handle them in Java, the video "Spurious Wake Ups" may be helpful.

Lock Fairness
When locks get acquired by threads, there's no guarantee of the order in which threads are granted access to a lock. A thread requesting lock access more frequently may be able to acquire the lock an unfairly greater number of times than other threads. Java locks can be turned into fair locks by passing in the fair constructor parameter. However, fair locks exhibit lower throughput and are slower compared to their unfair counterparts.

In Java's concurrent programming, the ReentrantLock class provides a mechanism for thread synchronization, offering more flexibility than traditional synchronized blocks. One notable feature of ReentrantLock is its ability to operate in either fair or unfair mode, determined by a fairness policy specified during its construction.

Fair vs. Unfair Locks:

Unfair Lock (Default Behavior): In this mode, threads can acquire the lock in a non-deterministic order. A thread attempting to acquire the lock may "jump the queue," obtaining the lock even if other threads have been waiting longer. This approach can lead to higher throughput but may cause thread starvation, where some threads are perpetually delayed.

Fair Lock: When fairness is set to true, the lock favors granting access to the longest-waiting thread, ensuring a first-come, first-served order. While this prevents thread starvation and promotes equitable access, it can result in reduced performance due to the overhead of managing the queue of waiting threads.

Implementing Fair and Unfair Locks with ReentrantLock:

The ReentrantLock constructor accepts a boolean parameter to set the fairness policy:

Unfair Lock (Default):

ReentrantLock unfairLock = new ReentrantLock(); // Defaults to unfair

Fair Lock:

ReentrantLock fairLock = new ReentrantLock(true); // Fairness set to true

Example Demonstration:

The following example illustrates the behavior of fair and unfair locks.
Multiple threads attempt to acquire a shared lock and perform a simple task.
The program compares the order in which threads acquire the lock under
both fairness policies.

import java.util.concurrent.locks.ReentrantLock;

public class FairUnfairLockExample {

    private static final int THREAD_COUNT = 5;
    private static final ReentrantLock fairLock = new ReentrantLock(true); // Fair lock
    private static final ReentrantLock unfairLock = new ReentrantLock();   // Unfair lock

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Demonstrating Unfair Lock:");
        runTest(unfairLock);

        System.out.println("\nDemonstrating Fair Lock:");
        runTest(fairLock);
    }

    private static void runTest(ReentrantLock lock) throws InterruptedException {
        Thread[] threads = new Thread[THREAD_COUNT];

        for (int i = 0; i < THREAD_COUNT; i++) {
            threads[i] = new Thread(new Worker(lock), "Thread-" + (i + 1));
            threads[i].start();
            // Slight delay to increase contention
            Thread.sleep(10);
        }

        for (Thread thread : threads) {
            thread.join();
        }
    }

    static class Worker implements Runnable {

        private final ReentrantLock lock;

        Worker(ReentrantLock lock) {
            this.lock = lock;
        }

        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " attempting to acquire the lock.");
            lock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + " acquired the lock.");
                // Simulate work
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                System.out.println(Thread.currentThread().getName() + " releasing the lock.");
                lock.unlock();
            }
        }
    }
}

Lock Initialization:

Two ReentrantLock instances are created: fairLock with fairness set to true, and unfairLock with the default (unfair) setting.

Test Execution:

The runTest method initiates multiple threads that attempt to acquire the provided lock. A slight delay (Thread.sleep(10)) between thread starts increases contention, highlighting the difference between fair and unfair locking.

Worker Threads:

Each Worker thread tries to acquire the lock, simulates work by sleeping for 50 milliseconds, and then releases the lock. The console output shows the order in which threads acquire and release the lock.

Observations:

With the unfair lock, the output may show threads acquiring the lock out of the order they were started, demonstrating the lack of a strict ordering policy.

With the fair lock, threads are more likely to acquire the lock in the order they were started, adhering to the first-come, first-served principle.

Performance Considerations:

While fair locks prevent thread starvation by ensuring orderly access, they
introduce additional overhead due to the management of the waiting queue.
This can lead to lower throughput compared to unfair locks, especially under
high contention. Therefore, the choice between fair and unfair locks should
be guided by the specific requirements of your application, balancing the
need for fairness against performance implications.

Thread Pools
Imagine an application that creates threads to undertake short-lived tasks. The application would incur a performance penalty for first creating hundreds of threads and then tearing down the allocated resources for each thread at the end of its life. The general way programming frameworks solve this problem is by creating a pool of threads, which are handed out to execute each concurrent task; once a task completes, the thread is returned to the pool.

Java offers thread pools via its Executor Framework. The framework includes
classes such as the ThreadPoolExecutor for creating thread pools.

In Java, managing numerous short-lived tasks by creating and destroying threads can lead to significant performance overhead. To mitigate this, the Executor Framework provides thread pool management, allowing for efficient task execution. Below is a detailed example demonstrating how to use the ThreadPoolExecutor to manage a pool of threads for executing concurrent tasks.

Example: Using ThreadPoolExecutor to Execute Concurrent Tasks

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExample {

    public static void main(String[] args) {
        // Define the number of threads in the pool
        int corePoolSize = 5;
        int maximumPoolSize = 10;
        long keepAliveTime = 1;
        TimeUnit unit = TimeUnit.MINUTES;

        // Create a fixed-size pool; maximumPoolSize, keepAliveTime and unit
        // would apply when constructing a ThreadPoolExecutor directly
        ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(corePoolSize);

        // Submit tasks for execution
        for (int i = 1; i <= 20; i++) {
            Runnable task = new Task("Task " + i);
            System.out.println("Submitting: " + task);
            executor.execute(task);
        }

        // Shut down the executor gracefully
        executor.shutdown();
        try {
            if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}

class Task implements Runnable {

    private final String name;

    public Task(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        System.out.println(name + " is being executed by " + Thread.currentThread().getName());
        try {
            // Simulate a task taking time
            Thread.sleep((long) (Math.random() * 1000));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(name + " has completed execution.");
    }
}

ThreadPoolExecutor Configuration:

corePoolSize: The number of threads to keep in the pool, even if they are idle.

maximumPoolSize: The maximum number of threads allowed in the pool.

keepAliveTime and unit: The time that excess idle threads will wait for new tasks before terminating.

In this example, Executors.newFixedThreadPool(corePoolSize) is used to create a thread pool with a fixed number of threads. This is suitable for scenarios where the number of concurrent tasks is known and constant. For more complex scenarios, you can use ThreadPoolExecutor directly to have finer control over the thread pool parameters.
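A minimal sketch of that direct construction (the parameter values and the DirectThreadPoolSketch class are illustrative): the ThreadPoolExecutor constructor takes the pool sizes, keep-alive settings, and a work queue explicitly.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectThreadPoolSketch {

    public static void main(String[] args) {
        // Explicitly configured pool: 5 core threads, up to 10 under load,
        // idle non-core threads retired after 1 minute, bounded queue of 100 tasks
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                5,                    // corePoolSize
                10,                   // maximumPoolSize
                1, TimeUnit.MINUTES,  // keepAliveTime and unit
                new LinkedBlockingQueue<>(100));

        executor.execute(() -> System.out.println(
                "Running on " + Thread.currentThread().getName()));
        executor.shutdown();
    }
}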

Submitting Tasks:

A loop submits 20 tasks to the executor. Each task is an instance of the Task class, which implements Runnable.

Task Execution:

Each Task prints its name and the thread executing it, simulates work by sleeping for a random time, and then prints a completion message.

Shutting Down the Executor:

After submitting all tasks, executor.shutdown() initiates an orderly shutdown where previously submitted tasks are executed, but no new tasks will be accepted.

awaitTermination waits for existing tasks to terminate within the specified timeout. If tasks do not terminate in time, shutdownNow is called to stop all actively executing tasks.

Considerations:

Thread Pool Sizing: Choosing the appropriate size for the thread pool is crucial. A pool that is too large can lead to resource exhaustion, while a pool that is too small may cause underutilization of system resources. The optimal size depends on factors such as the nature of the tasks and the system's capabilities (a starting-point heuristic is sketched after this list).

Task Design: Ensure that tasks are designed to handle interruptions properly, especially if you plan to use methods like shutdownNow that interrupt running tasks.

Exception Handling: Implement appropriate exception handling within tasks to prevent unexpected termination of threads due to runtime exceptions.

By utilizing the ThreadPoolExecutor, you can manage a pool of threads to execute concurrent tasks efficiently, reducing the overhead associated with thread creation and destruction. This approach enhances the performance and scalability of applications that require concurrent task execution.
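As a starting point for sizing (the formula below is a common rule of thumb from the concurrency literature, not from this article), a pool for CPU-bound work is often sized near the core count, scaled up by the ratio of wait time to compute time for blocking workloads:

public class PoolSizingSketch {

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // Rule of thumb: threads ~ cores * (1 + waitTime/computeTime).
        // CPU-bound tasks have a ratio near 0, so pool size ~ core count.
        double waitToComputeRatio = 0.0;
        int poolSize = (int) (cores * (1 + waitToComputeRatio));

        System.out.println("Cores: " + cores + ", suggested pool size: " + poolSize);
    }
}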

Java Memory Model

A memory model is defined as the set of rules according to which the compiler, the processor, or the runtime is permitted to reorder memory operations. Reordering operations allows compilers and the like to apply optimizations that can result in better performance. However, this freedom can wreak havoc in a multithreaded program when the memory model is not well understood, producing unexpected program outcomes.

In Java's multithreaded environment, understanding the Java Memory Model (JMM) is crucial to ensure correct program behavior. The JMM defines how threads interact through memory and the rules governing visibility and ordering of variable accesses. Without proper synchronization, the compiler, processor, or runtime may reorder instructions for optimization, leading to unexpected outcomes in concurrent applications.

Example: Instruction Reordering Leading to Unexpected Results

Consider the following Java program demonstrating potential issues due to instruction reordering:

public class ReorderingDemo {

    private static int x = 0, y = 0;
    private static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        int iterations = 1_000_000;
        int reorderingCount = 0;

        for (int i = 0; i < iterations; i++) {
            x = 0; y = 0;
            a = 0; b = 0;

            Thread thread1 = new Thread(() -> {
                a = 1;
                x = b;
            });

            Thread thread2 = new Thread(() -> {
                b = 1;
                y = a;
            });

            thread1.start();
            thread2.start();
            thread1.join();
            thread2.join();

            if (x == 0 && y == 0) {
                reorderingCount++;
                System.out.println("Reordering occurred at iteration: " + i);
            }
        }
        System.out.println("Total reorderings observed: " + reorderingCount);
    }
}

Variables: x, y, a, and b are shared among threads without synchronization.

Threads:

thread1 sets a = 1 and then assigns x = b.

thread2 sets b = 1 and then assigns y = a.

Expected Behavior: In a sequentially consistent model, at least one of the assignments to x or y should observe the updated value (1). Therefore, the condition x == 0 && y == 0 should theoretically never be true.

Actual Behavior: Due to instruction reordering and lack of synchronization, it's possible for both x and y to be 0 in some iterations, indicating that both threads read the initial values before the writes occurred. This scenario demonstrates a violation of the expected happens-before relationships.

Preventing Reordering with Proper Synchronization:

To prevent such issues, synchronization mechanisms should be employed to establish proper happens-before relationships, ensuring memory visibility and ordering guarantees.

Solution 1: Using synchronized Blocks
public class SynchronizedReorderingDemo {

    private static int x = 0, y = 0;
    private static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        int iterations = 1_000_000;
        int reorderingCount = 0;

        for (int i = 0; i < iterations; i++) {
            x = 0; y = 0;
            a = 0; b = 0;

            Thread thread1 = new Thread(() -> {
                synchronized (SynchronizedReorderingDemo.class) {
                    a = 1;
                    x = b;
                }
            });

            Thread thread2 = new Thread(() -> {
                synchronized (SynchronizedReorderingDemo.class) {
                    b = 1;
                    y = a;
                }
            });

            thread1.start();
            thread2.start();
            thread1.join();
            thread2.join();

            if (x == 0 && y == 0) {
                reorderingCount++;
                System.out.println("Reordering occurred at iteration: " + i);
            }
        }
        System.out.println("Total reorderings observed: " + reorderingCount);
    }
}

Solution 2: Using volatile Variables

public class VolatileReorderingDemo {

    private static volatile int x = 0, y = 0;
    private static volatile int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        int iterations = 1_000_000;
        int reorderingCount = 0;

        for (int i = 0; i < iterations; i++) {
            x = 0; y = 0;
            a = 0; b = 0;

            Thread thread1 = new Thread(() -> {
                a = 1;
                x = b;
            });

            Thread thread2 = new Thread(() -> {
                b = 1;
                y = a;
            });

            thread1.start();
            thread2.start();
            thread1.join();
            thread2.join();

            if (x == 0 && y == 0) {
                reorderingCount++;
                System.out.println("Reordering occurred at iteration: " + i);
            }
        }
        System.out.println("Total reorderings observed: " + reorderingCount);
    }
}

synchronized Blocks: Using synchronized blocks ensures that only one thread executes the synchronized code at a time, establishing a happens-before relationship that prevents instruction reordering.

volatile Variables: Declaring variables as volatile ensures that reads and writes to these variables go directly to and from main memory, preventing the caching and reordering optimizations that could lead to inconsistent views of the variables across threads.

Key Takeaways:

Java Memory Model (JMM): The JMM defines the interaction between threads and memory, specifying how and when changes made by one thread become visible to others.

Instruction Reordering: Compilers and processors may reorder instructions to optimize performance, which can lead to unexpected behavior in multithreaded programs if not properly synchronized.

Happens-Before Relationship: Establishing a happens-before relationship through synchronization ensures that memory writes by one specific statement are visible to another specific statement, preventing reordering issues.

Synchronization Mechanisms: Using synchronized blocks or declaring variables as volatile are effective ways to prevent instruction reordering and ensure correct program behavior in a multithreaded environment.

Key Points to Remember:

Use of sophisticated multi-level memory caches or processor caches.

Reordering of statements by the compiler which may differ from the source code ordering.

Other optimizations that the hardware, runtime, or the compiler may apply.

Understanding and applying these concepts is essential for developing robust and predictable multithreaded applications in Java.

Different processor architectures have different policies as to when an individual processor's cache is reconciled with the main memory.

In Java, when multiple threads access shared variables without proper synchronization, cache coherence problems can arise. This occurs because each thread may have its own cached copy of a variable, leading to inconsistent views of the variable's value across threads. To address this, Java provides the volatile keyword, which ensures that a variable's value is always read from and written to the main memory, maintaining consistency across threads.

Example: Demonstrating a Cache Coherence Issue and Resolving It with volatile
public class CacheCoherenceDemo {

    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            int count = 0;
            while (running) {
                count++;
            }
            System.out.println("Worker thread terminated. Final count: " + count);
        });

        worker.start();

        // Allow the worker thread to run for a short time
        Thread.sleep(1000);

        // Stop the worker thread
        System.out.println("Main thread updating running to false.");
        running = false;

        // Wait for the worker thread to finish
        worker.join();
        System.out.println("Main thread terminated.");
    }
}

Shared Variable: The running variable is shared between the main thread and the worker thread.

Worker Thread: The worker thread increments a counter in a loop that continues as long as running is true.

Main Thread: After sleeping for 1 second, the main thread sets running to false, intending to stop the worker thread.

Issue:

Without proper synchronization, the worker thread may not see the updated value of running set by the main thread. This happens because the worker thread might be reading a cached copy of running that remains true, causing it to continue running indefinitely.

Solution: Using volatile to Ensure Visibility

To ensure that the worker thread sees the updated value of running, declare running as volatile:

public class CacheCoherenceDemo {

    private static volatile boolean running = true;
    // ... (rest of the code remains the same)
}

volatile Keyword: Declaring running as volatile ensures that any write to running by one thread is immediately visible to other threads. This prevents threads from caching the variable's value, thus maintaining consistency across threads.

Key Points:

Cache Coherence Problem: Occurs when multiple threads have inconsistent views of a shared variable due to caching, leading to incorrect program behavior.

volatile Keyword: Ensures that a variable's value is always read from and written to the main memory, providing visibility guarantees across threads.

When to Use volatile: Use volatile for variables that are accessed by multiple threads without other forms of synchronization, and when the variable's state is independent (i.e., not involved in compound actions like check-then-act).

Additional Considerations:

Atomicity: The volatile keyword does not guarantee atomicity. For compound actions, such as incrementing a counter, consider using atomic classes from the java.util.concurrent.atomic package.

Synchronization: For more complex scenarios involving multiple variables or compound actions, using synchronized blocks or locks may be more appropriate to ensure thread safety.

Understanding and applying the volatile keyword appropriately helps prevent cache coherence issues, ensuring that all threads have a consistent view of shared variables.
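A minimal sketch of the atomic-classes alternative mentioned above (the AtomicCounterSketch class is illustrative): AtomicInteger makes each increment a single atomic operation, so no updates are lost and no explicit lock is needed.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterSketch {

    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable incrementTask = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet(); // atomic read-modify-write
            }
        };

        Thread t1 = new Thread(incrementTask);
        Thread t2 = new Thread(incrementTask);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Always prints 20000; a plain int with counter++ could lose updates
        System.out.println("Final count: " + counter.get());
    }
}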

For a deeper understanding of CPU cache coherence in Java concurrency, the video "CPU Cache Coherence + Java Concurrency" may be helpful.

The Java language specification (JLS) mandates the JVM to maintain within-
thread as-if-serial semantics. What this means is that, as long as the result of
the program is exactly the same if it were to be executed in a strictly
sequential environment (think single thread, single processor) then the JVM
is free to undertake any optimizations it may deem necessary. Over the
years, much of the performance improvements have come from these clever
optimizations as clock rates for processors become harder to increase.
However, when data is shared between threads, these very optimizations can
result in concurrency errors and the developer needs to inform the JVM
through synchronization constructs of when data is being shared.

Java Multithreading for Senior Engineering Interviews (Part II) | by yugal-nandurkar | Jan, 2025 | Medium
