Section 5 Java Multithreading True Senior H1 H2
Java Multithreading
How would you detect and fix a deadlock in a production Java application?
Use thread dumps and tools like VisualVM or jstack to identify threads blocked on
monitors.
Analyze lock acquisition order and look for cyclic dependencies between thread stacks.
Refactor code to acquire locks in a consistent global order across threads.
Introduce lock timeouts or use java.util.concurrent locks with tryLock to avoid
indefinite blocking.
Monitor lock contention and usage metrics continuously in production to catch
regressions early.
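The tryLock approach above can be sketched as follows. This is a minimal illustration, not production code: the lock names, timeouts, and the `transferSafely` method are hypothetical. The key property is that a thread that cannot get the second lock releases the first, so no cyclic wait can persist.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockAvoidance {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Acquire both locks with a timeout; back off and retry instead of blocking forever.
    static boolean transferSafely() throws InterruptedException {
        while (true) {
            if (lockA.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (lockB.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            return true; // critical section would go here
                        } finally {
                            lockB.unlock();
                        }
                    }
                } finally {
                    lockA.unlock(); // release first lock if second was unavailable
                }
            }
            // At this point the thread holds no locks, so no deadlock cycle can form.
            Thread.sleep(10); // brief back-off before retrying
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(transferSafely() ? "OK" : "FAILED");
    }
}
```

A consistent global acquisition order (always A before B) avoids the retry loop entirely; tryLock with back-off is the fallback when a global order cannot be enforced.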
What are the benefits and trade-offs of using ReentrantLock over synchronized
blocks?
ReentrantLock offers finer-grained control, supports tryLock(), and timed lock
acquisition.
Supports fair locking policy and condition variables for advanced wait/notify scenarios.
More verbose than synchronized and easier to misuse (must manually unlock in all
paths).
Allows interruptible lock acquisition, which is useful in cancellation scenarios.
Preferred when advanced coordination or lock polling is required.
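A small sketch of the features that synchronized cannot express: fair construction, interruptible acquisition, timed condition waits, and the mandatory unlock-in-finally discipline. The class and method names here are illustrative.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair policy
    private final Condition notEmpty = lock.newCondition();
    private int items = 0;

    void put() {
        lock.lock();
        try {
            items++;
            notEmpty.signal();     // wake one waiting consumer
        } finally {
            lock.unlock();         // must unlock in finally on every path
        }
    }

    boolean takeWithin(long millis) throws InterruptedException {
        lock.lockInterruptibly(); // responds to interruption, unlike synchronized
        try {
            while (items == 0) {  // guard against spurious wakeups
                if (!notEmpty.await(millis, TimeUnit.MILLISECONDS)) return false;
            }
            items--;
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockDemo d = new LockDemo();
        d.put();
        System.out.println(d.takeWithin(100) ? "OK" : "timeout");
    }
}
```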
How would you debug a race condition that happens intermittently under load?
Use stress testing tools (e.g., jcstress) and randomized inputs to increase failure
likelihood.
Inject artificial delays or reorderings to simulate timing issues in multithreaded paths.
Capture shared state access and modification patterns using logging or trace probes.
Apply volatile or synchronized to enforce memory visibility when needed.
Reproduce issue in isolation and narrow down minimal failing scenario for root cause
analysis.
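A minimal reproduction of this class of bug, with the fix side by side: a plain `int` increment is a non-atomic read-modify-write and loses updates under contention, while `AtomicInteger` does not. The thread and iteration counts are arbitrary values chosen to make the race likely.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int plain = 0;                        // racy: ++ is read, add, write
    static final AtomicInteger safe = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        int threads = 8, perThread = 100_000;
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    plain++;                     // lost updates likely under load
                    safe.incrementAndGet();      // atomic CAS-based increment
                }
                done.countDown();
            }).start();
        }
        done.await();
        // plain usually ends up below the expected total; safe is always exact
        System.out.println(safe.get() == threads * perThread ? "OK" : "FAILED");
    }
}
```

Tools like jcstress automate exactly this kind of harness, running interleavings far more aggressively than hand-rolled threads.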
What is the role of the ForkJoinPool and how is it different from a fixed thread
pool?
ForkJoinPool is designed for work-stealing, ideal for recursive and parallel divide-and-
conquer tasks.
Threads in ForkJoinPool can 'steal' tasks from other queues, improving load balancing.
Fixed thread pools use a shared queue and are better for uniform task workloads.
ForkJoinPool supports fine-grained parallelism with reduced context switching
overhead.
Misuse (e.g., blocking calls inside tasks) can starve worker threads and cause deadlocks.
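The divide-and-conquer pattern above can be sketched with a `RecursiveTask` (the class name and threshold are illustrative): each task splits until the range is small, forks one half onto the worker's deque where idle workers can steal it, and computes the other half directly.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelSum extends RecursiveTask<Long> {
    private final long[] data;
    private final int lo, hi;
    private static final int THRESHOLD = 1_000;  // below this, compute sequentially

    ParallelSum(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ParallelSum left = new ParallelSum(data, lo, mid);
        left.fork();                              // pushed to this worker's deque; stealable
        long right = new ParallelSum(data, mid, hi).compute(); // work on the other half
        return right + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        long total = ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println("total=" + total);
    }
}
```

Note that the tasks never block; blocking inside `compute()` is the misuse mentioned above.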
How does the Java Memory Model impact the visibility of shared variables
between threads?
Without synchronization, threads may cache variables and see stale or inconsistent
values.
Volatile fields ensure visibility but not atomicity; writes are immediately visible to other
threads.
Synchronized blocks establish happens-before relationships: writes made before releasing a monitor are visible to any thread that subsequently acquires it.
Final fields in constructors are guaranteed to be visible if safely published.
Understanding JMM is essential for writing safe lock-free and concurrent code.
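The volatile visibility guarantee can be demonstrated with a classic publication flag (the field names are illustrative). Removing `volatile` from `ready` would make the reader's spin loop free to never terminate, because nothing would force it to observe the write.

```java
public class VisibilityDemo {
    // volatile guarantees the reader eventually sees the write to ready,
    // and establishes happens-before for writes made earlier by the writer.
    private static volatile boolean ready = false;
    private static int payload = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { }                 // spin until the volatile write is seen
            // happens-before: payload = 42 occurred before ready = true,
            // so 42 is guaranteed visible here
            System.out.println("payload=" + payload);
        });
        reader.start();
        payload = 42;   // ordinary write...
        ready = true;   // ...published by the subsequent volatile write
        reader.join();
    }
}
```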
How would you design a thread-safe cache for concurrent reads and writes?
Use ConcurrentHashMap for fine-grained concurrent access without locking the whole map (per-bin locking with CAS since Java 8; earlier versions used lock striping via segments).
Add expiration with Caffeine or Guava for bounded size and eviction policies.
Use computeIfAbsent for atomic loading behavior without double-checked locking.
Avoid holding locks during expensive computations or I/O operations.
Design read/write patterns to minimize contention and false sharing.
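A minimal sketch of the computeIfAbsent pattern (class and field names are hypothetical; a real cache would add eviction via Caffeine or Guava). The load counter exists only to demonstrate that concurrent callers for the same key trigger a single computation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class SimpleCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    final AtomicInteger loads = new AtomicInteger(); // counts actual computations

    SimpleCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        // Atomic load: the mapping function runs at most once per absent key.
        // It must be short and must not modify this map (per the API contract),
        // since the bin is locked while it runs.
        return map.computeIfAbsent(key, k -> {
            loads.incrementAndGet();
            return loader.apply(k);
        });
    }

    public static void main(String[] args) {
        SimpleCache<String, Integer> cache = new SimpleCache<>(String::length);
        cache.get("hello");
        cache.get("hello");               // second call is served from the map
        System.out.println("loads=" + cache.loads.get());
    }
}
```

For expensive loads, mapping keys to a `CompletableFuture` value keeps the bin lock short while still deduplicating concurrent loads.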
How does the ExecutorService handle task rejection and what strategies exist
to manage it?
ThreadPoolExecutor invokes its RejectedExecutionHandler when it cannot accept a
task, i.e., when the bounded queue is full and the pool is at maximum threads.
Default behavior (AbortPolicy) throws RejectedExecutionException if the queue is full.
Custom policies can log, retry, discard, or run tasks on the caller thread.
Tune core/max threads and queue size based on workload patterns and latency SLAs.
Backpressure and metrics should be monitored to prevent silent task loss or overload.
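The caller-runs strategy above can be sketched as follows (the pool sizing is deliberately tiny to force saturation). When pool and queue are full, the submitting thread executes the task itself, which throttles the submission rate instead of dropping work.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),                 // small bounded queue
                new ThreadPoolExecutor.CallerRunsPolicy());  // rejected tasks run on submitter

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            // Once the worker and queue are saturated, execute() runs the
            // task on this (the submitting) thread instead of rejecting it.
            pool.execute(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed=" + completed.get()); // no task was lost
    }
}
```

The other built-in handlers (AbortPolicy, DiscardPolicy, DiscardOldestPolicy) trade this backpressure for fail-fast or silent-drop semantics.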