Pthreads Programming - Synchronizing Pthreads
Thread Pools
We designed our ATM server according to the boss/worker model for multithreaded
programs. The boss creates worker threads on demand. When it receives a request, the
boss creates a new worker thread to service that request and that request alone. When the
worker completes this request, it exits. This might be ideal if we got a nickel for each thread
we created, but it can slow our server in a couple of different ways:
∙ We don't reuse idle threads to handle new requests. Rather, we create, and then destroy, a
thread for each request we receive. Consequently, our server spends a lot of time in the
Pthreads library.
∙ We've added to each request's processing time (a request's latency, to use a term from
an engineering design spec) the time it takes to create a thread. No wonder our ATM
customers keep tapping the Enter button and scowling at the camera!
We'll address these performance snags by redesigning our server to use a thread pool, a
very common and very important design technique. In a server that uses a thread pool, the
boss thread creates a fixed number of worker threads up front. Like their boss, these
worker threads survive for the duration of the program. When the boss receives a new
request, it places it on a queue. Workers remove requests from the queue and process
them. When a worker completes a request, it simply removes another one from the queue.
Figure 3-4 shows the components of a thread pool.
The focal point of a thread pool is the request queue. Each request describes a unit of
work. (This description might be the name of a routine; it might be just a flag.) Worker
threads continually monitor the queue for new work requests; the boss thread places new
requests on the queue.
A thread pool has some basic characteristics:
∙ Number of worker threads. This limits the number of requests that can be in progress at
the same time.
∙ Request queue size. This limits the number of requests that can be waiting for service.
∙ Behavior when all workers are occupied and the request queue is full. Some requesters
may want to block until their requests can be queued and only then resume execution.
Others may prefer immediate notification that the pool is full. (For instance, network-
based applications typically depend on a status value to avoid "dropping requests on the
floor" when the server is overloaded.)
Adding work
In Example 3-25, the tpool_add_work routine adds work requests to the queue. Before our
ATM server can add any work, it must create the pool itself. Example 3-27 shows
atm_server_init calling tpool_init with the pool's dimensions; the final argument (here 0)
selects the pool's behavior when the queue is full, as described above.
Example 3-27: Using the Thread Pool from the atm_server_init Routine (atm_svr_tpool.c)
#define ATM_MAX_THREADS 10
#define ATM_MAX_QUEUE   10

tpool_t atm_thread_pool;

void atm_server_init(int argc, char **argv)
{
   /* Process input arguments */
   .
   .
   .

   tpool_init(&atm_thread_pool, ATM_MAX_THREADS, ATM_MAX_QUEUE, 0);

   /* Initialize database and communications */
   .
   .
   .
}
Now, we simply need to change the main routine of our ATM server so that it:
∙ Calls tpool_add_work for each new request instead of calling pthread_create directly to
create a new thread.
∙ Calls tpool_destroy to synchronize shutdown of the threads and to release resources.
There's no need for the thread exit notification we used in the previous examples.
Example 3-28 implements these changes.
Example 3-28: Using the Thread Pool from the main Routine (atm_svr_tpool.c)
extern int
main(int argc, char **argv)
{
   workorder_t *workorderp;
   int trans_id;

   atm_server_init(argc, argv);

   for (;;) {
      /*** Wait for a request ***/
      workorderp = (workorder_t *)malloc(sizeof(workorder_t));
      server_comm_get_request(&workorderp->conn, workorderp->req_buf);

      /*** Is it a shutdown request? ***/
      sscanf(workorderp->req_buf, "%d", &trans_id);
      if (trans_id == SHUTDOWN) {
         char resp_buf[COMM_BUF_SIZE];

         /* Wait for in-progress and queued requests to complete */
         tpool_destroy(atm_thread_pool, 1);

         /* Process the shutdown request here in the main() thread */
         if (shutdown_req(workorderp->req_buf, resp_buf))
            server_comm_send_response(workorderp->conn, resp_buf);

         free(workorderp);
         break;
      }

      /*** Use a thread to process this request ***/
      tpool_add_work(atm_thread_pool, process_request, workorderp);
   }

   server_comm_shutdown();
   return 0;
}