
Object Interconnections

Comparing Alternative Programming Techniques for Multi-threaded CORBA Servers: Thread Pool
(Column 6)

Douglas C. Schmidt
[email protected]
Department of Computer Science
Washington University, St. Louis, MO 63130

Steve Vinoski
[email protected]
Hewlett-Packard Company
Chelmsford, MA 01824

This column will appear in the April 1996 issue of the SIGS C++ Report magazine.

1 Introduction

Modern OS platforms like Windows NT, OS/2, and many flavors of UNIX provide extensive library and system call support for multi-threaded applications. However, programming multi-threaded applications is hard and programming distributed multi-threaded applications is even harder. In particular, developers must address sources of accidental and inherent complexity:

• Accidental complexity of multi-threaded programming arises from limitations with programming tools and design techniques. For example, many debuggers can't handle threaded programs and can't step across host boundaries. Likewise, algorithmic design [1] makes it hard to reuse application components because it tightly couples the structure of a threaded application to the functions it performs.

• Inherent complexity of multi-threaded programming arises from challenges such as avoiding deadlock and livelock, eliminating race conditions for shared objects, and minimizing the overhead of context switching, synchronization, and data movement. An inherently complex aspect of programming multi-threaded distributed applications (particularly servers) involves selecting the appropriate concurrency model, which is the focus of this column.

Our previous column examined several ways to program multi-threaded stock quote servers using C, C++ wrappers, and two versions of CORBA (HP ORB Plus and MT Orbix). In that column, we focused on the thread-per-request concurrency model, where every incoming request causes a new thread to be spawned to process it. This column examines and evaluates another concurrency model: thread pool, which pre-spawns a fixed number of threads at start-up to service all incoming requests. We illustrate this model by developing new multi-threaded C, C++, and CORBA implementations of the stock quote server.

Figure 1: Thread Pool Architecture for the Stock Quote Server. (Diagram: clients send quote requests to the server (1: REQUEST QUOTE); the main thread receives each request (2: RECEIVE REQUEST) and enqueues it on the request queue (3: ENQUEUE REQUEST); a pool thread dequeues and processes the request (4: DEQUEUE & PROCESS REQUEST) and returns the result (5: RETURN QUOTE VALUE).)

2 The Thread Pool Concurrency Model

The thread pool concurrency model is a variation of the thread-per-request model we examined last column. The main advantage of thread-per-request is its simplicity, which is why it's used in many multi-threaded ORBs (such as Orbix and HP ORB Plus). However, dynamically spawning a thread to handle each new request causes excessive resource utilization if the number of requests becomes very large and the OS resources required to support threads don't scale up efficiently.

The thread pool model avoids this overhead by pre-spawning a fixed number of threads at start-up to service all incoming requests. This strategy amortizes the cost of thread creation and bounds the use of OS resources. Client requests can execute concurrently until the number of simultaneous requests exceeds the number of threads in the pool. At this point, additional requests must be queued (or rejected) until a thread becomes available.
Figure 1 illustrates the main components in this concurrency model. These components include a main thread, a request queue, and a set of pool threads. The main thread receives new requests and inserts them into the tail of the request queue, while the pool threads remove requests from the head of the queue and service them. We'll explore the implementation and use of these components in this column using C, C++ wrappers, and CORBA, respectively.

3 The Multi-threaded C Thread Pool Solution

3.1 C Code

The following example shows a solution written using C, sockets, and Solaris threads [2]. (Porting our implementation to POSIX pthreads or Win32 threads is straightforward.) As in previous columns, we use a set of C utility functions to receive stock quote requests from clients (recv_request), look up quote information (lookup_stock_price), and return the quote to the client (send_response).

/* WIN32 already defines this. */
#if defined (unix)
typedef int HANDLE;
#endif /* unix */

HANDLE create_server_endpoint (u_short port);
int recv_request (HANDLE, struct Quote_Request *);
int send_response (HANDLE, long stock_value);
int handle_quote (HANDLE);

These functions were first implemented in the October 1995 issue of the C++ Report and were revised to become thread-safe in the February 1996 issue.
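The create_server_endpoint factory is not repeated here. For readers who don't have the earlier columns handy, the following sketch shows what such a passive-mode endpoint factory typically does using BSD sockets; it is not the original code, and error handling is omitted for brevity:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

HANDLE create_server_endpoint (u_short port)
{
  struct sockaddr_in addr;
  HANDLE listener = socket (AF_INET, SOCK_STREAM, 0);

  memset (&addr, 0, sizeof (addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons (port);
  addr.sin_addr.s_addr = htonl (INADDR_ANY);

  /* Bind the well-known port and put the socket into
     passive mode. The backlog argument bounds the OS
     accept queue discussed later in Section 3.2. */
  bind (listener, (struct sockaddr *) &addr, sizeof (addr));
  listen (listener, 5);

  return listener;
}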
3.1.1 The main() Thread

Our server main is similar to the one we presented for the multi-threaded C solution in our last column. The key difference is that we don't dynamically spawn a thread for each new client request. Instead, we create a thread-safe message queue, a pool of threads, and start an event loop in the main thread, as shown below:

const int DEFAULT_PORT = 12345;
const int DEFAULT_POOL_SIZE = 4;

int main (int argc, char *argv[])
{
  u_short port = /* Port to listen for connections. */
    argc > 1 ? atoi (argv[1]) : DEFAULT_PORT;
  int pool_size = /* Size of the thread pool. */
    argc > 2 ? atoi (argv[2]) : DEFAULT_POOL_SIZE;

  /* Create a passive-mode listener endpoint. */
  HANDLE listener = create_server_endpoint (port);
  Handle_Queue handle_queue;

  /* Initialize the thread-safe message queue. */
  handle_queue_init (&handle_queue, MAX_HANDLES);

  /* Initialize the thread pool. */
  thread_pool_init (&handle_queue, pool_size);

  /* The event loop for the main thread. */
  svc_run (&handle_queue, listener);
  /* NOTREACHED */
}

The svc_run function runs the main thread's event loop, as follows:

void svc_run (Handle_Queue *handle_queue,
              HANDLE listener)
{
  /* Main event loop. */
  for (;;) {
    /* Wait to accept a new connection. */
    HANDLE handle = accept (listener, 0, 0);

    /* Enqueue the request for processing
       by a thread in the pool. */
    handle_queue_insert (handle_queue, handle);
  }
  /* NOTREACHED */
}

The main thread runs an event loop that continuously accepts new connections from clients and enqueues each connection in a Handle_Queue, which is a thread-safe queue of HANDLEs. Subsequently, a thread in the thread pool will remove the HANDLE from the queue, extract the client's stock quote request, look up the result, and return the result to the client.

The Handle_Queue plays several roles in this design. First, it decouples the main thread from the pool threads. This allows multiple pool threads to be active simultaneously and offloads the responsibility for maintaining the queue from kernel-space to user-space. Second, it enforces flow control between clients and the server. When there's no more room in the queue, the main thread blocks, which will back-propagate to the clients, preventing them from establishing new connections. New connection requests will not be accepted until pool threads have a chance to catch up and can unblock the main thread.

Each thread in the thread pool is spawned by the thread_pool_init function:

void
thread_pool_init (Handle_Queue *handle_queue,
                  int pool_size)
{
  int i;

  for (i = 0; i < pool_size; i++)
    /* Spawn off the thread pool. */
    thr_create
      (0,                     /* Use default thread stack. */
       0,                     /* Use default thread stack size. */
       &pool_thread,          /* Entry point. */
       (void *) handle_queue, /* Entry point arg. */
       THR_DETACHED | THR_NEW_LWP, /* Flags. */
       0);                    /* Don't bother returning thread id. */
}
3.1.2 The pool_thread() Function

Each newly created thread executes the following event loop in the pool_thread function:

void *pool_thread (void *arg)
{
  Handle_Queue *handle_queue =
    (Handle_Queue *) arg;

  /* The event loop for each
     thread in the thread pool. */
  for (;;) {
    HANDLE handle;

    /* Get next available HANDLE. */
    handle_queue_remove (handle_queue, &handle);

    /* Return stock quote to client. */
    handle_quote (handle);

    /* Close handle to prevent leaks. */
    close (handle);
  }
  /* NOTREACHED */
  return 0;
}

When a pool thread becomes available, it will dequeue the next handle (corresponding to a client request), use it to look up the value of the stock quote, and return the quote to the client.

3.1.3 The Thread-Safe Handle_Queue

Most of the complexity in the thread pool implementation resides in the thread-safe Handle_Queue. The main event loop thread uses this queue to exchange HANDLEs with the pool threads. We implement the queue as a C struct containing an array of HANDLEs, bookkeeping information, and synchronization variables:

#define MAX_HANDLES 100

/* Defines the message queue data. */
typedef struct Handle_Queue
{
  /* Buffer containing HANDLEs -- managed
     as a circular queue. */
  HANDLE queue_[MAX_HANDLES];

  /* Keep track of beginning and end of queue. */
  u_int head_, tail_;

  /* Upper bound on number of queued messages. */
  u_int max_count_;

  /* Count of messages currently queued. */
  u_int count_;

  /* Protect queue state from concurrent access. */
  mutex_t lock_;

  /* Block consumer threads until queue not empty. */
  cond_t notempty_;

  /* Block the producer thread until queue not full. */
  cond_t notfull_;
} Handle_Queue;

The Handle_Queue data structure is managed by the following C functions. The handle_queue_init function initializes the internal queue state:

void handle_queue_init (Handle_Queue *handle_queue,
                        u_int max)
{
  handle_queue->max_count_ = max;
  handle_queue->count_ = 0;
  handle_queue->head_ = handle_queue->tail_ = 0;

  /* Initialize synchronization variables that
     are local to a single process. */
  mutex_init (&handle_queue->lock_,
              USYNC_THREAD, 0);
  cond_init (&handle_queue->notempty_,
             USYNC_THREAD, 0);
  cond_init (&handle_queue->notfull_,
             USYNC_THREAD, 0);
}

Three synchronization variables are used to implement the thread-safe Handle_Queue: two condition variables (cond_t notempty_ and notfull_) and one mutex (mutex_t lock_). The condition variables enable threads to insert and remove HANDLEs to and from the queue concurrently. The mutex lock_ is used by the condition variables to serialize access to the internal state of the queue, as shown in the handle_queue_insert function below:

void
handle_queue_insert (Handle_Queue *handle_queue,
                     HANDLE handle)
{
  /* Ensure mutual exclusion for queue state. */
  mutex_lock (&handle_queue->lock_);

  /* Wait until there's room in the queue. */
  while (handle_queue->count_
         == handle_queue->max_count_)
    cond_wait (&handle_queue->notfull_,
               &handle_queue->lock_);

  /* Code to insert handle into queue omitted... */

  /* Inform waiting threads that queue has a msg. */
  cond_signal (&handle_queue->notempty_);

  /* Release lock so other threads can proceed. */
  mutex_unlock (&handle_queue->lock_);
}

The handle_queue_insert function is called by the thread running the main event loop when it accepts a new request from a client. The client's HANDLE is inserted into the queue if there's room. Otherwise, the main event loop thread blocks until the notfull_ condition is signaled. This condition is signaled when a pool thread dequeues a HANDLE from the queue via the following handle_queue_remove function:

void
handle_queue_remove (Handle_Queue *handle_queue,
                     HANDLE *first_handle)
{
  mutex_lock (&handle_queue->lock_);

  /* Wait while the queue is empty. */
  while (handle_queue->count_ == 0)
    cond_wait (&handle_queue->notempty_,
               &handle_queue->lock_);

  /* Code to remove first_handle from
     queue omitted... */

  /* Inform waiting threads that queue isn't full. */
  cond_signal (&handle_queue->notfull_);
  mutex_unlock (&handle_queue->lock_);
}
The handle_queue_remove function is called by all the pool threads. This function removes the next available HANDLE from the queue, blocking if necessary until the queue is no longer empty. After it removes the next HANDLE it signals the notfull_ condition to inform the main event loop thread that there's more room in the queue. (There are techniques for minimizing the number of calls to cond_signal, which can improve performance significantly by reducing context switching overhead. These techniques are beyond the scope of this column and are discussed in [2, 3].)
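For completeness, here is one way the queue-manipulation code elided above might look. This sketch is not from the original column: the helper names are ours, and both helpers assume the caller already holds lock_, as handle_queue_insert and handle_queue_remove do.

/* Insert a HANDLE at the tail of the circular buffer.
   Assumes lock_ is held by the caller. */
void handle_queue_put (Handle_Queue *handle_queue,
                       HANDLE handle)
{
  handle_queue->queue_[handle_queue->tail_] = handle;
  handle_queue->tail_ =
    (handle_queue->tail_ + 1) % MAX_HANDLES;
  handle_queue->count_++;
}

/* Remove the oldest HANDLE from the head of the buffer.
   Assumes lock_ is held by the caller. */
void handle_queue_get (Handle_Queue *handle_queue,
                       HANDLE *first_handle)
{
  *first_handle = handle_queue->queue_[handle_queue->head_];
  handle_queue->head_ =
    (handle_queue->head_ + 1) % MAX_HANDLES;
  handle_queue->count_--;
}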
3.2 Evaluating the C Thread Pool Solution

Depending on the degree of host parallelism and client application behavior, the new thread pool solution can improve the performance of the original thread-per-request approach. In particular, it will bound the amount of thread resources used by the server. There are still a number of drawbacks, however:

• Too much infrastructure upheaval: The implementation of the thread pool concurrency model shown above is an extension of the thread-per-request server from our previous column. We were able to reuse the core stock quote routines (such as recv_request, send_response, and handle_quote). However, the surrounding software architecture required many changes. Some changes were relatively minor (such as pre-spawning a thread pool rather than a thread-per-request). Other changes required significant work (such as implementing the thread-safe Handle_Queue).

• Lack of flexibility and reuse: Despite all the effort spent on our thread-safe Handle_Queue, the current implementation is tightly coupled to the queueing of HANDLEs. Closer examination reveals that the synchronization patterns used in handle_queue_insert and handle_queue_remove can be factored out and reused for other types of thread-safe queue management. Unfortunately, it is hard to do this flexibly, efficiently, and robustly with the current solution because C lacks features like parameterized types and method inlining.

• High queueing overhead: Another problem with the thread pool solution shown above is that it may incur a non-trivial amount of context switching and synchronization overhead to implement the thread-safe message queue. One way to eliminate this overhead is to remove the explicit message queue and have each of the threads in the pool block in an accept call, as follows:

void *pool_thread (void *arg)
{
  HANDLE listener = (HANDLE) arg;
  HANDLE handle;

  /* Each thread accepts connections
     and performs the client's request. */
  while ((handle = accept (listener, 0, 0)) != -1) {
    /* Return stock quote to client. */
    handle_quote (handle);

    /* Close handle to prevent leaks. */
    close (handle);
  }
  /* NOTREACHED */
  return 0;
}

The main program is similar to the one shown in Section 3.1.1, as shown below:

int main (int argc, char *argv[])
{
  /* ... */

  /* Create a passive-mode listener endpoint. */
  listener = create_server_endpoint (port);

  /* Initialize the thread pool. */
  for (i = 0; i < pool_size; i++)
    /* Spawn off the thread pool. */
    thr_create
      (0,                 /* Use default thread stack. */
       0,                 /* Use default thread stack size. */
       &pool_thread,      /* Entry point. */
       (void *) listener, /* Entry point arg. */
       THR_DETACHED | THR_NEW_LWP, /* Flags. */
       0);                /* Don't bother returning thread id. */

  /* Block waiting for a notification to
     close down the server. */
  /* ... */

  /* Unblock the threads by closing
     down the listener. */
  close (listener);
}

The main difference between this main and the previous one is that we no longer need to use the thread-safe message queue since each thread in the pool blocks directly on the accept call.

There are factors that may make this new approach less desirable in some use cases, however:

• Reprioritize request processing: It may be desirable to handle incoming requests in a different order than they arrive. By separating request processing from passive connection establishment, the thread-safe queueing mechanism makes it possible to reorder the requests relative to some priority scheme.

• Limits on OS socket accept queue: Many implementations of sockets limit the number of connections that can be queued by the operating system. Typically, this number is fairly low (e.g., 8 to 10). On highly active servers (such as many WWW sites), this low limit will prevent clients from accessing the server, even though there may be available resources to process the client requests. By queueing the requests in user-space, our original approach may be more scalable in many situations.

• Lack of atomicity for accept: Some operating systems (e.g., kernels based on BSD UNIX) implement accept as a system call, so that calls to accept are atomic. Other operating systems (e.g., many kernels based on System V UNIX) implement it as a library call, so that calls to accept are not atomic. If accept is not atomic then it's possible for threads to receive EPROTO ("protocol error") errors from accept [4]. One solution to this problem is to explicitly add mutexes around the accept call (a sketch appears after this list), but this locking can itself become a bottleneck.
• Caching open connections: Our alternative thread pool solution forces each thread to allocate a new connection since threads are always blocked in accept. As shown below, this may be inefficient in some situations.
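The mutex-based workaround mentioned in the atomicity bullet above might look like the following sketch. It is not from the original column: accept_lock_ is a name we introduce here, and it would be initialized with mutex_init during server start-up.

/* Serializes calls to accept across the pool threads. */
static mutex_t accept_lock_;

void *pool_thread (void *arg)
{
  HANDLE listener = (HANDLE) arg;
  HANDLE handle;

  for (;;) {
    /* Only one pool thread calls accept at a time, which
       avoids EPROTO failures on platforms where accept is
       a non-atomic library call. */
    mutex_lock (&accept_lock_);
    handle = accept (listener, 0, 0);
    mutex_unlock (&accept_lock_);

    if (handle == -1)
      break;

    /* Return stock quote to client. */
    handle_quote (handle);

    /* Close handle to prevent leaks. */
    close (handle);
  }
  return 0;
}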
Therefore, we'll continue to use the thread-safe message queue example throughout the remainder of this paper. Be aware, however, that there are other ways to implement the thread pool concurrency model. Some of these approaches may be better suited for your requirements in certain circumstances.

• High connection management overhead: All the thread pool and thread-per-request server implementations we've examined thus far have set up and torn down a connection for each client request. This approach works fine if clients only request a single stock quote at a time from any given server. When clients make a series of requests to the same server, however, the connection management overhead can become a bottleneck.

One way to fix this problem is to keep each connection open until the client explicitly closes it down. However, extending the C solution to implement this connection caching strategy is subtle and error-prone. Several obvious solutions will cause race conditions between the main thread and the pool threads. For example, the select event demultiplexing call can be added to the original svc_run event loop, as follows:

/* Global variable shared by the svc_run()
   and pool_thread() functions. */
static fd_set read_hs;

void svc_run (Handle_Queue *handle_queue,
              HANDLE listener)
{
  HANDLE maxhp1 = listener + 1;
  fd_set temp_hs;

  /* fd_sets maintain a set of HANDLEs that
     select () uses to wait for events. */
  FD_ZERO (&read_hs);
  FD_SET (listener, &read_hs);
  temp_hs = read_hs;

  /* Main event loop. */
  for (;;) {
    HANDLE handle;

    /* Demultiplex connection and data events. */
    select (maxhp1, &temp_hs, 0, 0, 0);

    /* Check for stock quote requests and
       insert the handle in the queue. */
    for (handle = listener + 1;
         handle < maxhp1;
         handle++)
      if (FD_ISSET (handle, &temp_hs))
        handle_queue_insert (handle_queue, handle);

    /* Check for new connections. */
    if (FD_ISSET (listener, &temp_hs)) {
      handle = accept (listener, 0, 0);
      FD_SET (handle, &read_hs);
      if (maxhp1 <= handle)
        maxhp1 = handle + 1;
    }
    temp_hs = read_hs;
  }
  /* NOTREACHED */
}

In addition, the pool_thread function would have to change (to emphasize the differences, we've prefixed the changes with /* !!!):

void *pool_thread (void *arg)
{
  Handle_Queue *handle_queue =
    (Handle_Queue *) arg;

  /* The event loop for each
     thread in the thread pool. */
  for (;;) {
    HANDLE handle;

    /* Get next available HANDLE. */
    handle_queue_remove (handle_queue, &handle);

    /* !!! Return stock quote to client. A
       return of 0 means the client shut down. */
    if (handle_quote (handle) == 0) {
      /* !!! Clear the bit in read_hs (i.e., the
         fd_set) so the main event loop will ignore
         this handle until it's reconnected. */
      FD_CLR (handle, &read_hs);

      /* Close handle to prevent leaks. */
      close (handle);
    }
  }
  /* NOTREACHED */
  return 0;
}

Unfortunately, this code contains several subtle race conditions. For instance, more than one thread can access the fd_set global variable read_hs concurrently, which can confuse the svc_run function's demultiplexing strategy. Likewise, the main thread can insert the same HANDLE into the Handle_Queue multiple times. Therefore, multiple pool threads can read from the same HANDLE simultaneously, potentially causing inconsistent results.

Alleviating these problems will force us to rewrite portions of the server by adding new locks and modifying the existing handle_quote code. Rather than spending any more effort revising the C version, we'll incorporate these changes into the C++ solution in the next section.

4 The Multi-threaded C++ Wrappers Thread Pool Solution

4.1 C++ Wrapper Code

This section illustrates a C++ thread pool implementation based on ACE [5]. The C++ solution is structured using the following four classes (shown in Figure 2):

• Quote_Handler: This class interacts with clients by receiving quote requests, looking up quotes in the database, and returning responses.
• Quote_Acceptor: A factory that implements the strategy for accepting connections from clients, followed by creating and activating Quote_Handlers.

• Reactor: Encapsulates the select and poll event demultiplexing system calls with an extensible and portable callback-driven object-oriented interface. The Reactor dispatches the handle_input methods of Quote_Acceptor and Quote_Handler when connection events and quote requests arrive from clients, respectively.

• Request_Queue: This thread-safe queue passes client requests from the main thread to the pool threads.

Figure 2: ACE C++ Architecture for the Thread Pool Stock Quote Server. (Diagram: clients send quote requests (1: REQUEST QUOTE); the Reactor dispatches handle_input on the Quote_Acceptor and Quote_Handlers (2: HANDLE INPUT), which enqueue requests on the Request Queue (3: ENQUEUE REQUEST); pool threads dequeue and process the requests (4: DEQUEUE & PROCESS REQUEST) and return the results (5: RETURN QUOTE VALUE).)

The C++ implementation of the thread pool model is considerably easier to develop than the C solution because we don't need to rewrite all the infrastructure code from scratch. For instance, variations of the Quote_Handler, Quote_Acceptor, and Reactor have been used in previous implementations of the quote server in the October 1995 and February 1996 C++ Report. Likewise, the Request_Queue can be implemented by using components available with C++ libraries like ACE and STL [6]. Below, we illustrate how these components are used to construct a multi-threaded quote server based on the C++ thread pool concurrency model.

4.1.1 The Thread-Safe C++ Request Queue

We'll start off by using several ACE and STL classes to create a thread-safe C++ queue that holds a tuple containing information necessary to process a client request. Since there is only one of these, we'll define it using the Singleton pattern [7]. Doing this is easy using the following components provided by STL and ACE:

// Forward declaration.
template <class PEER_STREAM>
class Quote_Handler;

// Use the STL pair component to create a
// tuple of objects to represent a client request.
typedef pair<Quote_Handler<SOCK_Stream> *,
             Quote_Request *>
        Quote_Tuple;

// An ACE thread-safe queue of Quote_Tuples.
typedef Message_Queue<Quote_Tuple> Quote_Queue;

// An ACE Singleton that accesses the Quote_Queue.
typedef Singleton<Quote_Queue, Mutex> Request_Queue;

The STL pair class is a template that stores two values. We use pair to create a tuple containing pointers to a Quote_Handler and a Quote_Request. This tuple contains the information necessary to process client requests efficiently and correctly in the thread pool model.

The ACE Message_Queue is a flexible, type-safe C++ wrapper that uses templates to generalize the type of data that can be stored in the C Handle_Queue implementation from Section 3:

template <class TYPE, size_t MAX_SIZE = 100U>
class Message_Queue
{
public:
  int insert (const TYPE &);
  int remove (TYPE &);
  // ...

private:
  // Buffer of TYPE, managed as a queue.
  TYPE queue_[MAX_SIZE];
  // ...
};

The ACE Singleton class is an adapter that turns ordinary classes into Singletons [7], as follows:

template <class TYPE, class LOCK = Mutex>
class Singleton
{
public:
  static TYPE *instance (void) {
    // Perform the Double-Checked Locking
    // pattern to ensure proper initialization.
    if (instance_ == 0) {
      Guard<LOCK> lock (lock_);
      if (instance_ == 0)
        instance_ = new TYPE;
    }
    return instance_;
  }

protected:
  // Singleton instance of TYPE.
  static TYPE *instance_;

  // Lock to ensure serialization.
  static LOCK lock_;
};

The ACE Singleton adapter avoids subtle race conditions by using the Double-Checked Locking pattern [8]. This pattern allows atomic initialization, regardless of thread initialization order, and eliminates subsequent locking overhead.
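One detail the column elides: the static members of the Singleton adapter must be defined exactly once in some translation unit. A minimal sketch of those definitions, assuming the template is used exactly as declared above, is:

// Out-of-class definitions for the Singleton statics.
template <class TYPE, class LOCK>
TYPE *Singleton<TYPE, LOCK>::instance_ = 0;

template <class TYPE, class LOCK>
LOCK Singleton<TYPE, LOCK>::lock_;

With these definitions in place, the first call to instance on any thread creates the underlying object and later calls simply return the cached pointer.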
Using the ACE Singleton wrapper in conjunction with the ACE Message_Queue and STL pair, the thread pool server can insert and remove Quote_Handler objects as follows:

Quote_Tuple qt (quote_handler, quote_request);
// ...
Request_Queue::instance ()->insert (qt);
// ...
Request_Queue::instance ()->remove (qt);

The first time that insert or remove is called, the Singleton::instance method dynamically allocates and initializes the thread-safe Request_Queue. The Singleton pattern also minimizes the need for global objects, which is important in C++ since the order of initialization of global objects in C++ programs is not well-defined. Therefore, we'll use the same approach for the Quote_Database and the Reactor:

// Singleton for looking up quote values.
typedef Singleton<Quote_Database> QUOTE_DB;

// Singleton event demultiplexing and dispatching.
typedef Singleton<Reactor> REACTOR;

4.1.2 The Quote_Handler Class

The Quote_Handler class is responsible for processing client quote requests. Its implementation differs considerably from the one used for the thread-per-request concurrency model in the February C++ Report.

template <class STREAM> // IPC interface
class Quote_Handler
  : public Svc_Handler<STREAM>
  // This ACE base class defines "STREAM peer_;"
{
public:
  // !!! This method is called by the Quote_Acceptor
  // to initialize a newly connected Quote_Handler,
  // which registers with the Reactor Singleton.
  virtual int open (void) {
    REACTOR::instance ()->register_handler
      (this, READ_MASK);
    return 0;
  }

  // !!! This method is called by the Reactor when
  // a quote request arrives. It inserts the request
  // and the Quote_Handler into the thread-safe queue.
  virtual int handle_input (void) {
    Quote_Request *request = new Quote_Request;

    if (recv_request (*request) <= 0)
      return -1; // Destroy handler...
    else {
      Quote_Tuple qt (this, request);

      // Insert tuple into queue, blocking if full.
      Request_Queue::instance ()->insert (qt);
      return 0;
    }
  }

  // !!! Static method that runs in the thread,
  // dequeueing next available Quote_Request.
  static void *pool_thread (void *) {
    for (;;) {
      Quote_Tuple qt;

      // Get next request from queue. This
      // call blocks if queue is empty.
      Request_Queue::instance ()->remove (qt);

      // typeid (qt.first) == Quote_Handler *
      // typeid (qt.second) == Quote_Request *
      if (qt.first->handle_quote (qt.second) == 0)
        // Client shut down, so close down too.
        qt.first->close ();

      delete qt.second;
    }
    /* NOTREACHED */
    return 0;
  }

  // !!! Complete the processing of a request.
  int handle_quote (Quote_Request *req) {
    int value;
    {
      // Constructor of m acquires lock.
      Read_Guard<RW_Mutex> m (lock_);

      // Look up stock price via Singleton.
      value = QUOTE_DB::instance ()->
        lookup_stock_price (*req);

      // Destructor of m releases lock.
    }
    return send_response (value);
  }

  // Close down the handler and release resources.
  void close (void) {
    // Close down the connection.
    this->peer_.close ();

    // Reference counting omitted...

    // Commit suicide to avoid memory leaks...
    delete this;
  }

private:
  // Ensure mutual exclusion to QUOTE_DB.
  RW_Mutex lock_;
};

Each thread in the pool executes the static pool_thread function. This function runs an event loop that continuously removes Quote_Tuples from the queue. The first field in this tuple is the Quote_Handler associated with the client and the second field is a client Quote_Request. The pool thread uses the first field to invoke the handle_quote method, which looks up the value of the desired stock and returns it to the client.

When the client closes down, the Quote_Handler cleans up the connection. Even though the client has already closed the connection, note that the close function must perform reference counting on its target Quote_Handler object (to save space, we've omitted this code). If this reference counting were not performed, the close function could prematurely delete the Quote_Handler. This could cause the pool_thread function to invoke handle_quote on a dangling first pointer, which in turn would probably cause the server to crash.
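The omitted reference counting might be structured along the following lines. This is a sketch rather than the authors' code: the class and member names are ours, and it reuses the ACE Mutex and Guard wrappers shown earlier. Quote_Handler would claim a reference each time it enqueues a Quote_Tuple that refers to it, and close and pool_thread would release references instead of calling delete this directly:

// Hypothetical mix-in that Quote_Handler could inherit from.
class Ref_Counted
{
public:
  Ref_Counted (void) : ref_count_ (1) {}
  virtual ~Ref_Counted (void) {}

  // Claim an additional reference, e.g. before a
  // Quote_Tuple naming this object is enqueued.
  void add_ref (void) {
    Guard<Mutex> g (ref_lock_);
    ++ref_count_;
  }

  // Release a reference; delete the object only when the
  // last reference (the Reactor's or a pool thread's) goes.
  void remove_ref (void) {
    int remaining;
    {
      Guard<Mutex> g (ref_lock_);
      remaining = --ref_count_;
    }
    if (remaining == 0)
      delete this;
  }

private:
  int ref_count_;  // Outstanding references.
  Mutex ref_lock_; // Serializes access to ref_count_.
};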
Note that both handle_input and pool_thread can block since each manipulates the global thread-safe queue. The handle_input method will block if the queue is full, whereas the pool_thread function will block if the queue is empty.

4.1.3 The Quote_Acceptor Class

The Quote_Acceptor class is an implementation of the Acceptor pattern [9] that creates Quote_Handlers to process quote requests from clients. Its implementation is similar to the one shown in our previous column:
typedef Acceptor <Quote_Handler <SOCK_Stream>, // Quote service.
                  SOCK_Acceptor>               // Passive conn. mech.
        Quote_Acceptor;

The Quote_Acceptor's strategy for initializing a Quote_Handler is driven by upcalls from the Reactor. Whenever a new client connects with the server, the Quote_Acceptor's handle_input method dynamically creates a Quote_Handler, accepts the connection into the handler, and automatically calls the Quote_Handler::open method. In the thread pool implementation, this open method registers itself with the Reactor, as we showed in Section 4.1.2 above.
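To make that division of labor concrete, the following sketch shows roughly what the Acceptor factory's handle_input upcall does. It is not the actual ACE implementation; the member names peer_acceptor_ and peer () are illustrative assumptions:

// Simplified sketch of the Acceptor strategy described above.
virtual int handle_input (void)
{
  // Create a handler for the new client.
  Quote_Handler<SOCK_Stream> *handler =
    new Quote_Handler<SOCK_Stream>;

  // Accept the connection into the handler's SOCK_Stream.
  if (peer_acceptor_.accept (handler->peer ()) == -1)
    return -1;

  // Let the handler register itself with the Reactor.
  return handler->open ();
}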
4.1.4 The main() Server Function

The server main is responsible for creating a thread pool and the Quote_Acceptor, as follows:

// !!! Default constants.
const int DEFAULT_PORT = 12345;
const int DEFAULT_POOL_SIZE = 4;

int main (int argc, char *argv[])
{
  u_short port =
    argc > 1 ? atoi (argv[1]) : DEFAULT_PORT;
  int pool_size = // !!! Size of the thread pool.
    argc > 2 ? atoi (argv[2]) : DEFAULT_POOL_SIZE;

  // !!! Create a pool of threads to
  // handle quote requests from clients.
  Thread::spawn_n
    (pool_size,
     Quote_Handler<SOCK_Stream>::pool_thread,
     (void *) 0,
     THR_DETACHED | THR_NEW_LWP);

  // !!! Factory that produces Quote_Handlers.
  Quote_Acceptor acceptor (port);

  svc_run (acceptor);
  /* NOTREACHED */
  return 0;
}

First, the ACE method spawn_n [3] is called to create a pool of n threads. Each thread executes the Quote_Handler::pool_thread function. Next, a Quote_Acceptor object is created. This object is used to accept connections from clients and create Quote_Handler objects to service them. Finally, the following svc_run function is called to run the main thread's event loop:

void svc_run (Quote_Acceptor &acceptor)
{
  // !!! Install Quote_Acceptor with Reactor.
  REACTOR::instance ()->register_handler (&acceptor);

  // !!! Event loop that dispatches all events as
  // callbacks to appropriate Event_Handler subclass
  // (such as the Quote_Acceptor or Quote_Handlers).
  for (;;)
    REACTOR::instance ()->handle_events ();
  /* NOTREACHED */
}

The main thread's event loop runs continuously, handling events like client connections and quote requests. The server's event handling is driven by callbacks from the REACTOR Singleton to the Quote_Acceptor and Quote_Handler objects. Since this server uses the thread pool model, requests can be handled concurrently by any available thread.

4.2 Evaluating the C++ Thread Pool Solution

The C++ implementation solves the drawbacks with the C version shown in Section 3.2 as follows.

• Less infrastructure upheaval: Compared to the changes between our C program in our last column and the C program shown in this column, the changes between the respective C++ programs are much fewer and more localized. In addition to creating a thread-safe Request_Queue Singleton, the primary changes to our C++ thread pool implementation are in the Quote_Handler class and in our server main routine.

In our last column, our Quote_Handler::open function spawned a thread to handle each incoming request. Here, open has been changed to register the new Quote_Handler with the Reactor. Then, when client requests arrive, the Quote_Handler's handle_input method will queue both the request and the handler until a thread from the pool becomes available to service it. The only other change required was to make main create the thread-safe queue, the thread pool, and the Reactor before entering into its event loop.

• Greater flexibility and reuse: Fewer changes were required in the C++ version than in the C version due to the encapsulation of connection handling, queueing, and request servicing within C++ classes.

• Minimal connection management overhead: The C++ solution keeps each client connection open until the client closes it down. In addition, by using the thread-safe Request_Queue and the Quote_Tuple, we can avoid the subtle race conditions that plagued the earlier C version.

Obviously, the C++ solution is not without its drawbacks. For instance, we've omitted the code that performs reference counting to ensure that a Quote_Handler is not deleted until all of the Quote_Requests stored in the Request_Queue are removed. In addition, the programmer must either be able to buy or build a thread-safe queue class. Developing such a class is not trivial, especially when portability among different threads packages, OS platforms, and C++ compilers is required. The Standard Template Library (STL) is of no help here since the draft C++ standard does not require its queue class to be thread-safe. Fortunately, we are able to leverage the ACE components to simplify our implementation. ACE has been ported to most versions of UNIX, as well as the Microsoft Win32 platform.
5 The Multi-threaded CORBA Thread Pool Solution

This section illustrates how to implement the thread pool concurrency model with MT-Orbix. The solution we describe below uses the same general design as our C++ implementation above. It also uses many of the same components (such as the ACE Singleton and Message_Queue classes).

5.1 Implementing Thread Pools in MT-Orbix

The My_Quoter implementation class shown below is almost identical to the one we used in our previous column to implement the thread-per-request model. The main difference is the use of object composition to associate the My_Quoter implementation class with the Quoter IDL interface. We'll discuss this below, but first, here's the complete implementation:

class My_Quoter // Note the absence of inheritance!
{
public:
  // Constructor.
  My_Quoter (const char *name);

  // Returns the current stock value.
  virtual CORBA::Long get_quote
    (const char *stock_name,
     CORBA::Environment &env)
  {
    CORBA::Long value;
    {
      // Constructor of m acquires lock.
      Read_Guard<RW_Mutex> m (lock_);

      value = QUOTE_DB::instance ()->
        lookup_stock_price (stock_name);
      // Destructor of m releases lock.
    }
    if (value == -1)
      // Raise exception.
      env.exception (new Stock::Invalid_Stock);
    return value;
  }

protected:
  // Serialize access to database.
  RW_Mutex lock_;
};

As before, it's necessary to protect access to the quote database with a readers/writer lock since multiple requests can be processed simultaneously by threads in the pool.

5.1.1 Associating the IDL Interface with an Implementation

If you've been following our columns carefully, you'll notice that the Orbix implementation of the My_Quoter class in the May 1995 C++ Report inherited from a skeleton called QuoterBOAImpl. This class was automatically generated by the Orbix IDL compiler, i.e.:

class My_Quoter
  // Inherits from an automatically-generated
  // CORBA skeleton class.
  : virtual public Stock::QuoterBOAImpl
{
  // ...
};

In contrast, our current implementation of My_Quoter does not inherit from any generated skeleton. Instead, it uses an alternative provided by Orbix called the TIE approach, which is based on object composition rather than inheritance:

class My_Quoter // Note lack of inheritance!
{
  // ...
};

We use the Orbix TIE approach to associate the CORBA interfaces with our implementation as follows:

DEF_TIE_Quoter (My_Quoter)

The TIE approach is an example of the object form of the Adapter pattern [7], whereas the inheritance approach we used last column uses the class form of the pattern. The object form of the Adapter uses delegation to tie the interface of the My_Quoter object implementation class to the interface expected by the Quoter skeleton generated by MT-Orbix. When a request is received, the Orbix Object Adapter upcalls the TIE object. In turn, this object dispatches the call to the My_Quoter object that is associated with the TIE object.
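Conceptually, the TIE class produced by DEF_TIE_Quoter behaves like the following simplified sketch. This is not the Orbix-generated code; the class and member names here are illustrative, but they show how delegation (the object form of the Adapter) connects the skeleton interface to My_Quoter:

// Illustrative sketch only -- not the macro-generated class.
template <class Impl>
class TIE_Quoter_Sketch : public virtual Stock::QuoterBOAImpl
{
public:
  TIE_Quoter_Sketch (Impl *impl) : impl_ (impl) {}

  // Each IDL operation is forwarded to the tied
  // implementation object.
  virtual CORBA::Long get_quote (const char *stock_name,
                                 CORBA::Environment &env)
  {
    return impl_->get_quote (stock_name, env);
  }

private:
  Impl *impl_; // The implementation object being adapted.
};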
The TIE approach is mentioned in the C++ Language Mapping chapters of the CORBA 2.0 specification [10]. Not surprisingly, the idea for putting it there originally came from IONA Technologies, the makers of Orbix. Conforming ORB implementations are not required to support either the TIE approach or the inheritance approach, however. (The lack of a clear specification of whether CORBA C++ server skeletons use inheritance or delegation is another indication of the CORBA server-side portability problems we have described in previous columns.)

5.1.2 The C++ Thread-Safe Request Queue

The Request_Queue used by the CORBA implementation is reused almost wholesale from the C++ implementation shown in Section 4.1.1:

// An ACE Singleton that accesses an ACE
// thread-safe queue of CORBA Request pointers.
typedef Singleton<Message_Queue<CORBA::Request *>,
                  Mutex>
        Request_Queue;

The primary difference is that we parameterize it with a CORBA::Request pointer, rather than a Quote_Tuple. The reason for this is that MT-Orbix performs the low-level demultiplexing, so we don't have to do it ourselves.

5.1.3 Thread Filters

Orbix implements a non-standard CORBA extension called thread filters. Each incoming CORBA request is passed through a chain of filters before being dispatched to its target object implementation. To dispatch an incoming CORBA request to a waiting thread, a subclass of ThreadFilter must be defined to override the inRequestPreMarshal method. By using a ThreadFilter, the MT-Orbix ORB and Object Adapter are unaffected by the choice of concurrency model selected by a CORBA server.
The following class defines a server-specific thread filter that handles incoming requests in accordance with the Thread Pool concurrency model:

class TP_Thread_Filter : public CORBA::ThreadFilter
{
public:
  // Intercept the request and insert it at the end of msg_que.
  virtual int inRequestPreMarshal (CORBA::Request &,
                                   CORBA::Environment &);

  // A pool thread uses this as its entry point,
  // so this must be a static method.
  static void *pool_thread (void *);
};

Orbix calls the inRequestPreMarshal method before the incoming request is processed. In the Thread Pool model, requests are inserted in FIFO order at the end of a thread-safe Message_Queue as they arrive, as follows:

int
TP_Thread_Filter::inRequestPreMarshal
  (CORBA::Request &req,
   CORBA::Environment &)
{
  // Will block if queue is full...
  Request_Queue::instance ()->insert (&req);

  // We'll dispatch the request later.
  return -1;
}

Note that this method must return the magic number -1 to indicate to the Orbix Object Adapter that it has dealt with the request. This value informs the Object Adapter that it need not perform the operation dispatch itself, nor should it return the result to the client. These operations will be performed by one of the threads in the thread pool, as shown in Figure 3.

Figure 3: MT-Orbix Architecture for the Thread Pool Stock Quote Server. (Diagram: clients send quote requests (1: REQUEST QUOTE); the server receives each request (2: RECEIVE) and passes it through the filter chain (3: INVOKE FILTER(S)); the TP_Thread_Filter enqueues the request on the Request Queue (4: ENQUEUE REQUEST); a pool thread dequeues it (5: DEQUEUE REQUEST), the Object Adapter upcalls the My_Quoter objects created by the MY_QUOTER FACTORY (6: UPCALLS), and the quote value is returned to the client (7: RETURN QUOTE VALUE).)

Figure 3 illustrates the role of the TP_Thread_Filter in the MT-Orbix architecture for the Thread Pool stock quote server. Our quote server must explicitly create an instance of TP_Thread_Filter to get it installed into the Orbix filter chain:

TP_Thread_Filter tp_filter;

The constructor of this object automatically inserts the thread pool thread filter at the end of the filter chain.

The pool_thread static method serves as the entry point for each thread in the thread pool, as shown below:

void *TP_Thread_Filter::pool_thread (void *)
{
  // Loop forever, dequeueing new Requests,
  // and dispatching them....
  for (;;) {
    CORBA::Request *req;

    // Called by pool threads to dequeue
    // the next available message. Will block
    // if queue is empty.
    Request_Queue::instance ()->remove (req);

    // This call will perform the upcall,
    // send the reply (if any) and
    // delete the Request for us...
    CORBA::Orbix.continueThreadDispatch (*req);
  }
  return 0;
}

All threads wait for requests to arrive on the head of the message queue stored in our TP_Thread_Filter. The MT-Orbix method continueThreadDispatch will continue processing the request until it sends a reply to the client. At this point, the thread will loop back to retrieve the next CORBA request. If there is no request available the thread will block until a new request arrives on the message queue. Likewise, if all the threads are busy, the queue will continue growing until it reaches its high-water mark, at which point the thread running the inRequestPreMarshal method will block. This relatively crude form of flow control was also used in the C and C++ implementations shown earlier. Naturally, robust servers should be programmed more carefully to detect and handle queue overflow conditions.

The main server program implements the Thread Pool concurrency model by spawning off pool_size threads, as follows:

const int DEFAULT_POOL_SIZE = 4;

int main (int argc, char *argv[])
{
  // Initialize the factory implementation.
  My_Quoter_var my_quoter =
    new TIE_My_Quoter (My_Quoter) (new My_Quoter);

  int pool_size = argc == 1 ? DEFAULT_POOL_SIZE
                            : atoi (argv[1]);

  // Create a pool of threads to handle
  // quote requests from clients.
  Thread::spawn_n (pool_size,
                   TP_Thread_Filter::pool_thread,
                   (void *) 0,
                   THR_DETACHED | THR_NEW_LWP);

  // Wait for work to do in the main thread
  // (which is also the thread that shepherds
  // CORBA requests through TP_Thread_Filter).
  TRY {
    CORBA::Orbix.impl_is_ready ("Quoter", IT_X);
  } CATCHANY {
    cerr << IT_X << endl;
  } ENDTRY

  return 0;
}

When the Quote server first starts up, it creates a My_Quoter object to service client quote requests. It then creates a pool of threads to service incoming requests using the ACE spawn_n method. Finally, the main server thread calls Orbix.impl_is_ready to notify Orbix that the Quoter implementation is ready to service requests. The main thread is responsible for shepherding CORBA requests through the filter chain to the TP_Thread_Filter.

Finally, the object we initially created is implicitly destroyed by the destructor of the My_Quoter_var. The OMG C++ Mapping provides for each IDL interface a _var class that can manage object references (_ptr types) of that interface type. If we didn't use a My_Quoter_var type here, our code would have to manually duplicate and release the object as required. By using a My_Quoter_var, we let the smart pointer perform the resource management.
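To see what the _var buys us, the following sketch (not from the column) shows the manual alternative using the corresponding _ptr type, following the column's My_Quoter naming for the mapped types; every exit path from main would then need the explicit release:

// Sketch of manual reference management without a _var.
My_Quoter_ptr my_quoter =
  new TIE_My_Quoter (My_Quoter) (new My_Quoter);

// ... run the server exactly as above ...

// Without a _var, we must remember to drop our reference
// ourselves before main returns (and on every error path).
CORBA::release (my_quoter);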
5.2 Evaluating the MT-Orbix Thread Pool Solution

The following benefits arise from using MT-Orbix to implement the thread pool concurrency model:

• Almost no infrastructure upheaval: The implementation of the MT-Orbix thread pool concurrency model shown above is almost identical to the thread-per-request server from our previous column. The primary changes we added were cosmetic (such as using Singletons rather than global variables and using object composition to tie the Quoter skeleton to the My_Quoter implementation rather than using inheritance). The ability to quickly and easily modify applications in this manner allows them to be rapidly tuned and redeployed when necessary.

• Increased flexibility and reuse: The flexibility and reuse of the MT-Orbix solution is similar to the ACE C++ solution. The main difference is that MT-Orbix is responsible for most of the low-level demultiplexing and concurrency control that we had to implement by hand in our C++ solution. In particular, MT-Orbix hides all its internal synchronization mechanisms from the server programmer. Thus, we are only responsible for locking server-level objects (such as the Request_Queue).

• Optimized connection management overhead: MT-Orbix can perform certain optimizations (such as caching connections in a thread-safe manner) without requiring any programmer intervention. It also separates the concerns of application development from those involving the choice of suitable transports and protocols for the application. In other words, using an ORB allows an application to be developed independently of the underlying communication transports and protocols.

The primary drawback, of course, is that the mechanisms used by MT-Orbix are not standardized across the industry. In general, all the multi-threading techniques we discuss in this column aren't standardized yet, and in particular the TP_Thread_Filter approach shown above is proprietary to Orbix. The fact that the CORBA solution shown here is not portable is yet another indication of the server-side portability problems with CORBA that we've discussed in previous columns.

Despite these issues, it is important to note that the concurrency models, patterns, and techniques we discussed in this article are reusable. Our goal is to help you navigate through the space of design alternatives. We hope that you'll be able to apply them to your projects, regardless of whether you program in CORBA, DCE, Network OLE, ACE, or any other distributed computing toolkit.

6 Concluding Remarks

In this column, we examined the thread pool concurrency model and illustrated how to use it to develop multi-threaded servers for a distributed stock quote application. This example illustrated how object-oriented techniques, C++, CORBA, and higher-level abstractions like the Singleton pattern help to simplify programming and improve extensibility.

Our next column will explore yet another concurrency model: thread-per-session. This model is supported by a number of CORBA implementations including MT-Orbix and ORBeline. Having a choice of concurrency models can help developers meet the performance, functionality, and maintenance requirements of their applications. The key to success, of course, lies in thoroughly understanding the tradeoffs between different models. As always, if there are any topics that you'd like us to cover, please send us email at [email protected].

Thanks to Prashant Jain, Tim Harrison, Ron Resnick, and Esmond Pitt for comments on this column.

References

[1] G. Booch, Object Oriented Analysis and Design with Applications (2nd Edition). Redwood City, California: Benjamin/Cummings, 1993.
[2] J. Eykholt, S. Kleiman, S. Barton, R. Faulkner, A. Shivalingiah, M. Smith, D. Stein, J. Voll, M. Weeks, and D. Williams, "Beyond Multiprocessing... Multithreading the SunOS Kernel," in Proceedings of the Summer USENIX Conference, (San Antonio, Texas), June 1992.

[3] D. C. Schmidt, "An OO Encapsulation of Lightweight OS Concurrency Mechanisms in the ACE Toolkit," Tech. Rep. WUCS-95-31, Washington University, St. Louis, September 1995.

[4] W. R. Stevens, UNIX Network Programming, Second Edition. Englewood Cliffs, NJ: Prentice Hall, 1997.

[5] D. C. Schmidt, "ACE: an Object-Oriented Framework for Developing Distributed Applications," in Proceedings of the 6th USENIX C++ Technical Conference, (Cambridge, Massachusetts), USENIX Association, April 1994.

[6] A. Stepanov and M. Lee, "The Standard Template Library," Tech. Rep. HPL-94-34, Hewlett-Packard Laboratories, April 1994.

[7] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley, 1995.

[8] D. C. Schmidt and T. Harrison, "Double-Checked Locking: An Object Behavioral Pattern for Initializing and Accessing Thread-safe Objects Efficiently," in Pattern Languages of Program Design (R. Martin, F. Buschmann, and D. Riehle, eds.), Reading, MA: Addison-Wesley, 1997.

[9] D. C. Schmidt, "Design Patterns for Initializing Network Services: Introducing the Acceptor and Connector Patterns," C++ Report, vol. 7, November/December 1995.

[10] Object Management Group, The Common Object Request Broker: Architecture and Specification, 2.0 ed., July 1995.
