Monitor Object: An Object Behavioral Pattern For Concurrent Programming
1 Intent

The Monitor Object pattern synchronizes concurrent method execution to ensure that only one method at a time runs within an object. It also allows an object's methods to cooperatively schedule their execution sequences.

2 Also Known As

Thread-safe Passive Object

3 Example

Let's reconsider the design of the communication Gateway described in the Active Object pattern [1] and shown in Figure 1. The Gateway process contains multiple supplier handlers and consumer handlers. Supplier handlers receive messages from remote suppliers and use a routing table to find the corresponding consumer handler, whose thread then delivers each message to its remote consumer.

When suppliers and consumers reside on separate hosts, the Gateway uses the connection-oriented TCP [2] protocol to provide reliable message delivery and end-to-end flow control. TCP's flow control algorithm blocks fast senders when they produce messages more rapidly than slower receivers can process them. The entire Gateway should not block, however, while waiting for flow control to abate on outgoing TCP connections. To minimize blocking, therefore, each consumer handler can contain a thread-safe message queue that buffers new routing messages it receives from its supplier handler threads.

One way to implement a thread-safe Message_Queue is to use the Active Object pattern [1], which decouples the thread used to invoke a method from the thread used to execute the method. As shown in Figure 2, each message queue active object contains a bounded buffer and its own thread of control that maintains a queue of pending messages.
Using the Active Object pattern to implement a thread-safe message queue decouples supplier handler threads in the Gateway process from consumer handler threads, so all threads can run concurrently and block independently when flow control occurs on various TCP connections.

[Figure 1: The communication Gateway. Supplier handlers receive incoming messages (1: recv (msg)), consult the routing table (2: find (msg)), and put each message into the appropriate consumer handler (3: put (msg)), which gets it from its queue (4: get (msg)) and sends it to the remote consumer (5: send (msg)). Figure 2: Each consumer handler contains a message queue that buffers messages passed between supplier handler threads and the consumer handler's own thread.]
Because this example builds on components introduced in the Active Object pattern, we recommend you read the Active Object pattern before reading the Monitor Object pattern.
Although the Active Object pattern can be used to implement a functional Gateway, it has the following drawbacks:

Performance overhead: The Active Object pattern provides a powerful concurrency model. It not only synchronizes concurrent method requests on an object, but also can perform sophisticated scheduling decisions to determine the order in which requests execute. These features incur non-trivial amounts of context switching, synchronization, dynamic memory management, and data movement overhead, however, when scheduling and executing method requests.

Programming overhead: The Active Object pattern requires programmers to implement up to six components: proxies, method requests, an activation queue, a scheduler, a servant, and futures for each proxy method. Although some components, such as activation queues and method requests, can be reused, programmers may have to reimplement or significantly customize these components each time they apply the pattern.
4 Context

Multiple threads of control accessing the same object concurrently.

5 Problem

Many applications contain objects that are accessed concurrently by multiple client threads. For concurrent applications to execute correctly, therefore, it is often necessary to synchronize and schedule access to these objects. In the presence of this problem, the following three requirements must be satisfied:

3. Objects should be able to schedule their methods cooperatively: If an object's methods must block during their execution, they should be able to voluntarily relinquish their thread of control so that methods called from other client threads can access the object. This property helps prevent deadlock and makes it possible to leverage the concurrency available on hardware/software platforms.

6 Solution

For each object accessed concurrently by client threads, define it as a monitor object. Clients can access the services defined by a monitor object only through its synchronized methods. To prevent race conditions involving monitor object state, only one synchronized method at a time can run within a monitor object. Each monitor object contains a monitor lock that synchronized methods use to serialize their access to the object's behavior and state. In addition, synchronized methods can determine the circumstances under which they suspend and resume their execution, based on one or more monitor conditions associated with the monitor object.

7 Structure

Monitor object: A monitor object exports one or more methods to clients. To protect the internal state of the monitor object from uncontrolled changes or race conditions, all clients must access the monitor object only through these methods. Each method executes in the thread of the client that invokes it because a monitor object does not have its own thread of control. (In contrast, an active object [1] does have its own thread of control.)

Synchronized methods: Synchronized methods implement the thread-safe services exported by a monitor object. To prevent race conditions, only one synchronized method can execute within a monitor at any point in time, regardless of the number of threads that invoke the object's synchronized methods concurrently or the number of synchronized methods in the object's class. For instance, the put and get operations on the consumer handler's message queue should be synchronized methods to ensure that routing messages can be inserted and removed simultaneously by multiple threads without corrupting a queue's internal state.

Monitor lock: Each monitor object contains its own monitor lock. Synchronized methods use this monitor lock to serialize method invocations on a per-object basis. Each synchronized method must acquire/release the monitor lock when the method enters/exits the object, respectively. This protocol ensures the monitor lock is held whenever a method performs operations that access or modify its object's state. For instance, a Thread_Mutex [3] could be used to implement the message queue's monitor lock.

Monitor condition: Multiple synchronized methods running in separate threads can cooperatively schedule their execution sequences by waiting for and notifying each other via monitor conditions associated with their monitor object. Synchronized methods use monitor conditions to determine the circumstances under which they should suspend or resume their processing.
[Figure: Structure of the Monitor Object pattern. A monitor object exports synchronized methods synchronized_method_1() ... synchronized_method_m() to clients and contains a monitor_lock_ and monitor conditions monitor_condition_1_ ... monitor_condition_n_.]

8 Dynamics

The collaborations between participants in the Monitor Object pattern divide into the following phases: (1) synchronized method invocation and serialization, (2) synchronized method thread suspension, (3) method condition notification, and (4) synchronized method thread resumption.

2. Synchronized method thread suspension: If a synchronized method must block or cannot otherwise make immediate progress, it can wait on one of its monitor conditions, which causes it to leave the monitor object temporarily [5]. When a synchronized method leaves the monitor object, the monitor lock is released automatically and the client's thread of control is suspended on the monitor condition.

3. Method condition notification: A synchronized method can notify a monitor condition in order to resume the thread of a synchronized method that had previously suspended itself on that monitor condition. In addition, a synchronized method can notify all other synchronized methods that previously suspended their threads on a monitor condition.

[Figure: Dynamics of the Monitor Object pattern. Client thread 1 invokes method_1(), which acquires the monitor lock, does work, then waits on a monitor condition, suspending thread 1 and releasing the lock. Client thread 2 then invokes method_2(), which acquires the lock, does work, notifies the monitor condition, and releases the lock on return, resuming thread 1; method_1() re-acquires the lock, completes its work, releases the lock, and returns.]
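The suspension and notification phases can be exercised with two client threads. The short driver below is again an illustrative sketch, reusing the hypothetical Message_Slot class from the previous example: thread 1 suspends inside get() until thread 2's put() notifies the monitor condition.

#include <iostream>
#include <thread>

int main () {
  Message_Slot slot;   // the illustrative monitor object defined above

  // Client thread 1: <get> suspends on the monitor condition
  // because the slot starts out empty.
  std::thread consumer ([&slot] {
    std::cout << "got: " << slot.get () << '\n';
  });

  // Client thread 2: <put> stores a message and notifies the
  // monitor condition, resuming thread 1.
  std::thread producer ([&slot] {
    slot.put ("routing message");
  });

  producer.join ();
  consumer.join ();
  return 0;
}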
9 Implementation
class Message_Queue
{
public:
  enum {
    MAX_MESSAGES = /* ... */
  };

  // ...

private:
  // = Private helper methods (non-synchronized
  //   and do not block).
  // ...

private:
  // ...
};
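For orientation, a fuller version of this interface can be sketched from the methods used later in this section; the declarations below are a reconstruction for readability, not a quotation of the paper's code:

class Message_Queue
{
public:
  enum { MAX_MESSAGES = /* ... */ };

  // = Message queue synchronized methods.
  void put (const Message &msg);   // Block while the queue is full.
  Message get (void);              // Block while the queue is empty.
  bool empty (void) const;
  bool full (void) const;

private:
  // = Private helper methods (non-synchronized and do not block).
  void put_i (const Message &msg);
  Message get_i (void);
  bool empty_i (void) const;
  bool full_i (void) const;

private:
  // = Internal state (see below).
  // ...
};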
A monitor lock can be implemented using a mutex. A mutex makes collaborating threads wait while the thread holding the mutex executes code in a critical section. Monitor conditions can be implemented using condition variables [4]. Unlike a mutex, a condition variable is used by
a thread to make itself wait until an arbitrarily complex condition expression involving shared data attains a particular
state.
A condition variable is always used in conjunction with
a mutex, which the client thread must acquire before evaluating the condition expression. If the condition expression
is false, the client atomically suspends itself on the condition variable and releases the mutex so that other threads can
change the shared data. When a cooperating thread changes
this data, it can notify the condition variable, which atomically resumes a thread that had previously suspended itself
on the condition variable and acquires its mutex again.
With its mutex held, the newly resumed thread then re-evaluates its condition expression. If the shared data has attained the desired state, the thread continues. Otherwise, it suspends itself on the condition variable again until it is resumed. This process can repeat until the condition expression becomes true.
In general, a condition variable is more appropriate than
a mutex for situations involving complex condition expressions or scheduling behaviors. For instance, condition variables can be used to implement thread-safe message queues.
In this use case, a pair of condition variables can cooperatively block supplier threads when a message queue is full
and block consumer threads when the queue is empty.
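To make this use case concrete, the following self-contained sketch implements a small bounded queue with one mutex and the pair of POSIX condition variables just described; it uses the Pthreads API directly rather than the wrapper classes used elsewhere in this paper, and the Bounded_Queue name and its members are illustrative:

#include <pthread.h>
#include <cstddef>
#include <deque>
#include <string>

class Bounded_Queue {
public:
  explicit Bounded_Queue (std::size_t max) : max_ (max) {
    pthread_mutex_init (&lock_, 0);
    pthread_cond_init (&not_full_, 0);
    pthread_cond_init (&not_empty_, 0);
  }
  ~Bounded_Queue () {
    pthread_cond_destroy (&not_empty_);
    pthread_cond_destroy (&not_full_);
    pthread_mutex_destroy (&lock_);
  }

  // Suppliers block here while the queue is full; <pthread_cond_wait>
  // releases <lock_> atomically and re-acquires it before returning.
  void put (const std::string &msg) {
    pthread_mutex_lock (&lock_);
    while (queue_.size () >= max_)
      pthread_cond_wait (&not_full_, &lock_);
    queue_.push_back (msg);
    pthread_cond_signal (&not_empty_);  // wake one blocked consumer
    pthread_mutex_unlock (&lock_);
  }

  // Consumers block here while the queue is empty.
  std::string get () {
    pthread_mutex_lock (&lock_);
    while (queue_.empty ())
      pthread_cond_wait (&not_empty_, &lock_);
    std::string msg = queue_.front ();
    queue_.pop_front ();
    pthread_cond_signal (&not_full_);   // wake one blocked supplier
    pthread_mutex_unlock (&lock_);
    return msg;
  }

private:
  pthread_mutex_t lock_;
  pthread_cond_t not_full_;
  pthread_cond_t not_empty_;
  std::deque<std::string> queue_;
  std::size_t max_;
};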
In our Gateway example, the Message_Queue defines its internal state as illustrated below:

class Message_Queue
{
public:
  // ... See above ...

private:
  // = Internal queue representation.
  // ...

  // = Monitor lock and monitor conditions used by the
  //   synchronized methods shown below.
  Thread_Mutex monitor_lock_;
  Thread_Condition not_empty_;
  Thread_Condition not_full_;
};

A Thread_Condition wraps a native condition variable, for example:

class Thread_Condition
{
public:
  // ...
private:
#if defined (_POSIX_PTHREAD_SEMANTICS)
  pthread_cond_t cond_;
#else
  // Condition variable emulations.
#endif /* _POSIX_PTHREAD_SEMANTICS */
};

The constructor initializes the condition variable and associates it with the Thread_Mutex passed as a parameter. The destructor destroys the condition variable, which releases any resources allocated by the constructor. Note that the mutex is not owned by the Thread_Condition, so it is not destroyed in the destructor.
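A wrapper with the ownership rule described above might look roughly like the following sketch; it uses a raw pthread_mutex_t reference instead of the paper's Thread_Mutex class, and the Condition name and its members are illustrative:

#include <pthread.h>

// Illustrative condition wrapper: it does not own the mutex it is
// associated with, mirroring the ownership rule described above.
class Condition
{
public:
  Condition (pthread_mutex_t &m) : mutex_ (m) {
    pthread_cond_init (&cond_, 0);
  }
  ~Condition () {
    // Destroy only the condition variable, never the associated mutex.
    pthread_cond_destroy (&cond_);
  }

  // Release the associated mutex and suspend the calling thread
  // until the condition is notified; re-acquire the mutex on return.
  void wait () { pthread_cond_wait (&cond_, &mutex_); }

  // Resume one thread, or all threads, suspended on this condition.
  void notify () { pthread_cond_signal (&cond_); }
  void notify_all () { pthread_cond_broadcast (&cond_); }

private:
  pthread_cond_t cond_;
  pthread_mutex_t &mutex_;
};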
Apply the Thread-safe Interface idiom: In this substep, the interface and implementation methods are implemented according to the Thread-safe Interface idiom. For instance, the following Message_Queue methods check whether a queue is empty, i.e., contains no Messages at all, or full, i.e., contains more than max_messages_ in it. We show the interface methods first:

bool
Message_Queue::empty (void) const
{
  Guard<Thread_Mutex> guard (monitor_lock_);
  return empty_i ();
}

bool
Message_Queue::full (void) const
{
  Guard<Thread_Mutex> guard (monitor_lock_);
  return full_i ();
}

These methods illustrate a simple example of the Thread-safe Interface idiom outlined above. They use the Scoped Locking idiom [6] to acquire/release the monitor lock and then immediately forward to the corresponding implementation method. As shown next, these methods assume the monitor lock is held and simply check for the boundary conditions in the queue:

bool
Message_Queue::empty_i (void) const
{
  return message_count_ <= 0;
}

bool
Message_Queue::full_i (void) const
{
  return message_count_ > max_messages_;
}

The put method inserts a new Message into the queue:

void
Message_Queue::put (const Message &msg)
{
  // Use the Scoped Locking idiom to acquire/release the
  // <monitor_lock_> upon entry/exit to the synchronized method.
  Guard<Thread_Mutex> guard (monitor_lock_);

  // Wait while the queue is full, then forward to <put_i> and
  // notify a thread waiting in <get>.
  while (full_i ())
    not_full_.wait ();
  put_i (msg);
  not_empty_.notify ();
}

Note how this synchronized method only performs the synchronization and scheduling logic needed to serialize access to the monitor object and to wait while the queue is full. Once there's room in the queue, it forwards to the put_i method, which inserts the message into the queue and updates the bookkeeping information. Moreover, put_i need not be synchronized because the put method never calls it without first acquiring the monitor_lock_. Likewise, the put_i method need not check whether the queue is full because it is never called as long as full_i returns true.

The get method removes the Message from the front of a queue and returns it to the caller:

Message
Message_Queue::get (void)
{
  // Use the Scoped Locking idiom to acquire/release the
  // <monitor_lock_> upon entry/exit to the synchronized method.
  Guard<Thread_Mutex> guard (monitor_lock_);

  // Wait while the queue is empty.
  while (empty_i ())
    not_empty_.wait ();

  // Dequeue the first <Message> in the queue
  // and update the <message_count_>.
  Message m = get_i ();

  // Notify a thread waiting in <put> that the queue
  // is no longer full.
  not_full_.notify ();

  return m;
  // Destructor of <guard> releases <monitor_lock_>.
}

10 Example Resolved

[Figure: The Gateway process with monitor-object Message_Queues. A supplier handler receives a message (1: recv (msg)), finds its destination in the routing table (2: find (msg)), and puts the message into the consumer handler's Message_Queue (3: put (msg)); the consumer handler's thread gets the message from the queue (4: get (msg)) and sends it to the remote consumer (5: send (msg)).]
Each Consumer_Handler contains a Message_Queue implemented as a monitor object and spawns a thread that dequeues messages and sends them to its remote consumer:

class Consumer_Handler
{
public:
  Consumer_Handler (void);

private:
  // Message queue implemented as a
  // monitor object.
  Message_Queue message_queue_;

  // Connection to the remote consumer.
  SOCK_Stream connection_;

  // Entry point for the consumer handler's thread.
  static void *svc_run (void *args);
};

Consumer_Handler::Consumer_Handler (void)
{
  // Spawn a separate thread to get messages
  // from the message queue and send them to
  // the remote consumer via TCP.
  Thread_Manager::instance ()->spawn (svc_run, this);
}

void *
Consumer_Handler::svc_run (void *args)
{
  Consumer_Handler *this_obj =
    reinterpret_cast<Consumer_Handler *> (args);

  for (;;) {
    // This thread blocks on <get> until the
    // next <Message> is available.
    Message msg =
      this_obj->message_queue_.get ();

    // Transmit message to the consumer.
    this_obj->connection_.send (msg,
                                msg.length ());
  }
}

The Message_Queue is implemented as a monitor object. Therefore, the send operation on the connection can block in a Consumer_Handler without affecting the quality of service of other Consumer_Handlers or Supplier_Handlers.
11 Variants
The following are variations of the Monitor Object pattern.
Timed synchronized method invocations: Many applications can benefit from timed synchronized method invocations. Timed invocations enable clients to bound the amount of time they are willing to wait for a synchronized method to enter its monitor object's critical section. The Message_Queue monitor object interface defined earlier can be modified to support timed synchronized method invocations, as follows:

void
Message_Queue::put (const Message &msg,
                    Time_Value *timeout)
  throw (Timedout)
{
  // ... Same as before ...
}

If timeout is 0, then both get and put will block indefinitely until a Message is either removed from or inserted into a Message_Queue monitor object, respectively. Otherwise, if the timeout period expires, the Timedout exception is thrown and the client must be prepared to handle this exception.

The Message_Queue can also be parameterized with a synchronization strategy, so the same class can be configured for concurrent or single-threaded use:

template <class SYNCH_STRATEGY>
class Message_Queue
{
public:
  // = Message queue synchronized methods.
  // ...

private:
  typename SYNCH_STRATEGY::MUTEX monitor_lock_;
  typename SYNCH_STRATEGY::CONDITION not_empty_;
  typename SYNCH_STRATEGY::CONDITION not_full_;
  // ...
};

The MT_SYNCH and NULL_SYNCH traits classes supply real or null synchronization mechanisms, respectively:

class MT_SYNCH {
public:
  // Synchronization traits.
  typedef Thread_Mutex MUTEX;
  typedef Thread_Condition CONDITION;
};

class NULL_SYNCH {
public:
  // Synchronization traits.
  typedef Null_Mutex MUTEX;
  typedef Null_Thread_Condition CONDITION;
};

Thus, to define a thread-safe Message_Queue, we just parameterize it with the MT_SYNCH strategy, as follows:

Message_Queue<MT_SYNCH> message_queue;
12 Known Uses

The following are some known uses of the Monitor Object pattern:

Dijkstra/Hoare monitors: Dijkstra [9] and Hoare [5] defined programming language features called monitors to encapsulate service functions and their internal variables into thread-safe modules. To prevent race conditions, a monitor contains a lock that allows only one function at a time to be active within the monitor. Functions that want to leave the monitor temporarily can block on a condition variable. It is the responsibility of the programming language compiler to generate run-time code that implements and manages the monitor lock and condition variables.

Java objects: In Java, an object whose methods are declared synchronized behaves as a monitor object: the object's built-in lock serves as the monitor lock, and wait, notify, and notifyAll operate on its built-in monitor condition. For example:

class Inner {
  protected boolean cond_ = false;

  public synchronized void awaitCondition () {
    while (!cond_)
      try { wait (); }
      catch (InterruptedException e) {}
    // Any other code.
  }

  public synchronized
  void notifyCondition (boolean c) {
    cond_ = c;
    notifyAll ();
  }
}

class Outer {
  protected Inner inner_ = new Inner ();

  public synchronized void process () {
    inner_.awaitCondition ();
  }

  public synchronized
  void set (boolean c) {
    inner_.notifyCondition (c);
  }
}

13 Consequences

The Monitor Object pattern provides the following benefits:

Simplify synchronization of methods invoked concurrently on an object: Clients need not be concerned with concurrency control when invoking methods on a monitor object. If a programming language doesn't support monitor objects as a language feature, developers can use idioms like Scoped Locking [6] to simplify and automate the acquisition and release of the monitor locks that serialize access to internal monitor object methods and state.
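Where a language or library does not provide such a guard, the Scoped Locking idiom mentioned above can be sketched in a few lines. This illustration uses std::mutex rather than the paper's Thread_Mutex, and mirrors the Guard template used in the Implementation section without claiming to reproduce it.

#include <mutex>

// Illustrative scoped-locking guard: the constructor acquires the
// lock and the destructor releases it, so every return path out of
// a synchronized method releases the monitor lock.
template <class LOCK>
class Guard
{
public:
  explicit Guard (LOCK &lock) : lock_ (lock) { lock_.lock (); }
  ~Guard () { lock_.unlock (); }

private:
  LOCK &lock_;

  // Copying a guard would release the lock twice, so forbid it.
  Guard (const Guard &);
  Guard &operator= (const Guard &);
};

// Usage inside a synchronized method:
//   Guard<std::mutex> guard (monitor_lock_);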
14 See Also

The Monitor Object pattern has several properties in common with the Active Object pattern [1]. For instance, both patterns can be used to synchronize and schedule methods invoked concurrently on an object. One difference is that an active object executes its methods in a different thread than its client(s), whereas a monitor object executes its methods in its clients' threads. As a result, active objects can perform more sophisticated, albeit more expensive, scheduling to determine the order in which their methods execute. Another difference is that monitor objects typically couple their synchronization logic more closely with their methods' functionality. In contrast, it is easier to decouple an active object's functionality from its synchronization policies because it has a separate scheduler.

For example, it is instructive to compare the Monitor Object solution in Section 10 with the solution presented in the Active Object pattern [1]. Both solutions have similar overall application architectures. In particular, the Supplier_Handler and Consumer_Handler implementations are almost identical. The primary difference is that the Message_Queue itself is easier to program and more efficient when it is implemented using the Monitor Object pattern rather than the Active Object pattern.

If a more sophisticated queueing strategy were necessary, however, the Active Object pattern might be more appropriate. Likewise, because active objects execute in different threads than their clients, there are use cases where active objects can improve overall application concurrency by executing multiple operations asynchronously. When these operations are complete, clients can obtain their results via futures.

Acknowledgements

References

[1] R. G. Lavender and D. C. Schmidt, "Active Object: An Object Behavioral Pattern for Concurrent Programming," in Pattern Languages of Program Design (J. O. Coplien, J. Vlissides, and N. Kerth, eds.), Reading, MA: Addison-Wesley, 1996.