Threads Best Practices: Mutex
Mutex:
Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the
simultaneous use of a common resource, such as a global variable, by pieces of code called critical
sections, e.g. a producer and a consumer accessing a shared queue.
A good practice when using a mutex is to acquire the lock on a different object than the object in the critical
section itself. For example, if a queue must be accessed mutually exclusively, one thread at a time, by the
producer and consumer threads, the Queue itself should not be locked; instead, define a separate object for
the purpose and apply the synchronized block to that object.
The basic advantage of this approach is that it allows multiple mutexes for a single monitor object: one can
define two mutexes, one for all the producer threads and another for the consumer threads, since the two
operations can be performed in parallel but no two consumers should work in parallel.
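As a sketch of this idea (the class and field names here are illustrative, not from the text), a buffer can keep one lock object for producers and another for consumers, so that a put and a take can overlap while producers still serialize among themselves, as do consumers:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: the queue itself is never used as the monitor.
// Two separate lock objects serialize producers and consumers independently.
public class TwoLockBuffer<T> {
    private final Object putLock = new Object();   // mutex for all producers
    private final Object takeLock = new Object();  // mutex for all consumers
    // thread-safe queue, so one put and one take may safely overlap
    private final Queue<T> queue = new ConcurrentLinkedQueue<T>();

    public void put(T item) {
        synchronized (putLock) {       // no two producers in parallel
            queue.offer(item);
        }
    }

    public T take() {
        synchronized (takeLock) {      // no two consumers in parallel
            return queue.poll();       // null when the queue is empty
        }
    }
}
```

This is essentially the two-lock scheme that java.util.concurrent.LinkedBlockingQueue uses internally, which is why its puts and takes can proceed concurrently.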
Starvation:
Starvation is a situation in which a low-priority thread never gets its chance at a CPU cycle, critical section, or
synchronized object.
The synchronized block handles atomicity without race conditions or data corruption, i.e. only one
thread at a time can execute code protected by a given monitor object (lock), allowing you to prevent multiple
threads from colliding with each other when updating shared state; but it has a few limitations:
synchronized and wait/notifyAll do not maintain a queue of waiting threads; all the threads get notified and race for the
critical section. When a large number of threads are waiting for the critical section, there is a chance that one
thread may never get hold of the synchronized block, causing starvation.
Starvation can be avoided by giving a fair chance to all the waiting threads, commonly known as fair locking.
• JDK 1.5 introduced a locking framework to overcome these shortcomings. Instead of using a
synchronized block, one uses "java.util.concurrent.locks.ReentrantLock" to guard the critical section for
atomic access; e.g.
Lock lock = new ReentrantLock();
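Since the goal here is fairness, note that ReentrantLock accepts a fairness flag in its constructor; a minimal sketch (the class is illustrative, the flag is real API):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: with the fairness flag set to true, waiting threads acquire the
// lock in approximately FIFO order, which avoids starvation.
public class FairCounter {
    private final Lock lock = new ReentrantLock(true); // true = fair ordering
    private long count = 0;

    public void increment() {
        lock.lock();           // queues behind earlier waiters (fair policy)
        try {
            count++;           // critical section
        } finally {
            lock.unlock();     // always release in a finally block
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

Fair locks trade some throughput for this ordering guarantee, so the flag should be used when starvation is an actual risk, not by default.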
Fair locking can also be implemented by hand. The FairLock below (reassembled from the fragment into a
complete class) keeps an explicit queue of waiting threads and wakes only the thread at the head of the
queue; QueueObject is a small helper whose doWait()/doNotify() wrap wait()/notify() on a per-thread object:
package ub.utils.locker;
import java.util.ArrayList;
import java.util.List;
public class FairLock {
    private boolean isLocked = false;
    private Thread lockingThread = null;
    private List waitingThreads = new ArrayList();
    public void lock() throws InterruptedException {
        QueueObject queueObject = new QueueObject();
        boolean isLockedForThisThread = true;
        synchronized(this) {
            waitingThreads.add(queueObject);
        }
        while(isLockedForThisThread) {
            synchronized(this) {
                isLockedForThisThread =
                    isLocked ||
                    !((QueueObject) waitingThreads.get(0)).equals(queueObject);
                if(!isLockedForThisThread) {
                    isLocked = true;
                    waitingThreads.remove(queueObject);
                    lockingThread = Thread.currentThread();
                    return;
                }
            }
            try {
                queueObject.doWait();
            } catch(InterruptedException e) {
                synchronized(this) { waitingThreads.remove(queueObject); }
                throw e;
            }
        }
    }
    public synchronized void unlock() {
        if(this.lockingThread != Thread.currentThread()) {
            throw new IllegalMonitorStateException(
                "Calling thread has not locked this lock");
        }
        isLocked = false;
        lockingThread = null;
        if(waitingThreads.size() > 0) {
            ((QueueObject) waitingThreads.get(0)).doNotify();
        }
    }
}
class QueueObject {
    private boolean isNotified = false;
    public synchronized void doWait() throws InterruptedException {
        while(!isNotified) {
            this.wait();            // guards against missed signals
        }
        this.isNotified = false;
    }
    public synchronized void doNotify() {
        this.isNotified = true;
        this.notify();
    }
}
Thread Pool / Thread Pool Executor:
Use the thread pool pattern, in which a number of threads are created to perform a number of tasks, usually
organized in a queue. As soon as a thread completes its task, it requests the next task from the queue, until
all tasks have been completed. The thread can then terminate, or sleep until new tasks become available.
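A minimal sketch of the pattern using the standard Executors factory (the class and method names here are illustrative): a fixed pool of four worker threads drains a queue of submitted tasks.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: n tasks are submitted to a 4-thread pool; each worker pulls the
// next task from the internal queue as soon as it finishes the current one.
public class PoolDemo {
    static int runTasks(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        final AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    completed.incrementAndGet();    // the "task"
                }
            });
        }
        pool.shutdown();                             // accept no new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS); // wait for queue to drain
        return completed.get();
    }
}
```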
The advantages here are:
• If the number of tasks is very large, creating a thread for each one may be impractical.
• By using a thread pool rather than creating a new thread for each task, thread creation and destruction
overhead is avoided.
• You have the flexibility to tune the "number of threads" parameter in order to achieve the best performance.
• Moreover, one can have the number of threads change dynamically based on the number of waiting tasks.
• With the help of beforeExecute and afterExecute, one can easily inject cross-cutting concerns, e.g. time
logging, initialization of thread parameters, etc.
e.g. a thread pool extended with logging and timing, using beforeExecute and afterExecute (the fragment is
completed here with the constructor and the two hook methods it refers to):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Logger;
public class TimingThreadPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> startTime = new ThreadLocal<Long>();
    private final Logger log = Logger.getLogger("TimingThreadPool");
    private final AtomicLong numTasks = new AtomicLong();
    private final AtomicLong totalTime = new AtomicLong();
    public TimingThreadPool(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                            TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startTime.set(System.nanoTime());           // record task start time
    }
    protected void afterExecute(Runnable r, Throwable t) {
        try {
            long taskTime = System.nanoTime() - startTime.get();
            numTasks.incrementAndGet();
            totalTime.addAndGet(taskTime);
            log.fine(String.format("%s: time=%dns", r, taskTime));
        } finally {
            super.afterExecute(r, t);
        }
    }
}
Avoid deadlocks:
1. ALL of the four conditions below MUST hold for deadlock to occur, so to avoid deadlock at least one of
the conditions must not hold at any given point in time.
Mutual exclusion: The resource cannot be shared; requests are delayed until the resource is released.
Hold-and-wait: A thread holds one resource while it waits for another.
No preemption: Resources are released only voluntarily, after completion.
Circular wait: Circular dependencies exist in the "waits-for" or "resource-allocation" graph.
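A common way to break the circular-wait condition is to impose one global lock-acquisition order. A minimal sketch (the class and method names are illustrative), ordering by identity hash code; note that a real implementation would also need a tie-breaker lock for the rare case where two hash codes collide:

```java
// Sketch: every thread acquires the two locks in the same global order,
// so a cycle in the waits-for graph cannot form.
public class OrderedLocking {
    static void withBothLocks(Object a, Object b, Runnable action) {
        Object first = a;
        Object second = b;
        if (System.identityHashCode(a) > System.identityHashCode(b)) {
            first = b;                 // swap so all threads lock in the
            second = a;                // same global order
        }
        synchronized (first) {
            synchronized (second) {
                action.run();          // both resources held here
            }
        }
    }
}
```

Two threads calling withBothLocks(a, b, ...) and withBothLocks(b, a, ...) concurrently will both lock the same object first, so neither can end up holding one lock while waiting for the other.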